Distributed non-convex optimization


Behrouz Touri, Assistant Professor
Department of Electrical and Computer Engineering
University of California San Diego / University of Colorado Boulder
AFOSR Computational Math Program Review, 2017

Outline
- What is distributed optimization?
- A motivating example
- Main results
- Sketch of the proof
- Discussions

Distributed optimization problem
Standard (unconstrained) optimization problem:

    minimize_{x ∈ R^d} f(x)

where f : R^d → R is a nice (convex, differentiable, etc.) function.

Distributed optimization
(Figure: a three-agent network; agent i holds local cost f_i(x).)
- A network of n agents (processors, robots, etc.)
- Agent i has a local cost function f_i : R^d → R
- Agents are not willing to communicate the f_i's
- Global objective: each agent aims to

    minimize_{x ∈ R^d} Σ_{i=1}^n f_i(x)

Why distributed optimization? Application 1: distributed estimation
Standard distributed optimization problem:

    minimize_{x ∈ R^d} Σ_{i=1}^n f_i(x)

- A network of n sensors, [n] := {1, ..., n}
- Sensor i measures y_i = θ + η_i, with η_i ~ N(0, σ²)
- ML decoder for θ:

    θ̂ ∈ argmin_{x ∈ R} Σ_{i=1}^n (x − y_i)²    (1)

- What if there is no central computing unit, but the sensors can communicate?
- Further applications: source localization, distributed machine learning, control of swarm robots, distributed resource allocation, etc.
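Since each local cost (x − y_i)² is quadratic, the ML problem (1) has the closed-form solution θ̂ = (1/n) Σ_i y_i: the sensors only need to agree on the average of their measurements. A minimal numerical sketch (our own illustration, not part of the talk) comparing the centralized estimate with plain consensus averaging on a ring network:

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta, sigma = 20, 3.0, 0.5
y = theta + sigma * rng.standard_normal(n)   # y_i = theta + eta_i

# Centralized ML estimate: the minimizer of sum_i (x - y_i)^2 is the sample mean.
theta_ml = y.mean()

# Decentralized alternative: average consensus on a ring.
# Each sensor repeatedly mixes its value with its two neighbors
# (doubly stochastic weights 1/4, 1/2, 1/4).
x = y.copy()
for _ in range(1000):
    x = 0.25 * np.roll(x, 1) + 0.5 * x + 0.25 * np.roll(x, -1)

# Every sensor's value converges to the ML estimate.
assert np.allclose(x, theta_ml, atol=1e-8)
```

With no central unit, 1000 rounds of purely local mixing reproduce the ML estimate at every sensor.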

Distributed optimization: setup
Unconstrained distributed optimization problem:

    minimize_{x ∈ R^d} Σ_{i=1}^n f_i(x)    (2)

i. f_i: agent i's local cost
ii. Assumption: agents do not share their local cost functions
iii. At time t ≥ 0, agent i can send information to a set of out-neighbors N_i^+(t) ⊆ [n] and receive information from in-neighbors N_i^−(t) ⊆ [n]
iv. Denote the resulting directed graph by G(t) = ([n], E(t))

Main questions: How can problem (2) be solved subject to the above information constraint? How much communication is needed?

Distributed optimization: past works
- Nedić, A. and Ozdaglar, A. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control, 54(1).
- Nedić, A., Ozdaglar, A. and Parrilo, P.A. Constrained consensus and optimization in multi-agent networks. IEEE Transactions on Automatic Control, 55(4).
- Wang, J. and Elia, N., 2010. Control approach to distributed optimization. Allerton Conference on Communication, Control, and Computing. IEEE.
- Lobel, I. and Ozdaglar, A. Distributed subgradient methods for convex optimization over random networks. IEEE Transactions on Automatic Control, 56(6).
- Chen, J. and Sayed, A.H. Diffusion adaptation strategies for distributed optimization and learning over networks. IEEE Transactions on Signal Processing, 60(8).
- Tsianos, K.I., Lawlor, S. and Rabbat, M.G., 2012. Push-sum distributed dual averaging for convex optimization. IEEE 51st Conference on Decision and Control (CDC).
- Li, N. and Marden, J.R. Designing games for distributed optimization. IEEE Journal of Selected Topics in Signal Processing, 7(2).
- Bianchi, P. and Jakubowicz, J. Convergence of a multi-agent projected stochastic gradient algorithm for non-convex optimization. IEEE Transactions on Automatic Control, 58(2).
- Gharesifard, B. and Cortés, J. Distributed continuous-time convex optimization on weight-balanced digraphs. IEEE Transactions on Automatic Control, 59(3).
- Kia, S.S., Cortés, J. and Martinez, S. Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication. Automatica, 55.
- Nedić, A. and Olshevsky, A. Distributed optimization over time-varying directed graphs. IEEE Transactions on Automatic Control, 60(3).

Notations, definitions, and assumptions
- Graph G(t): represents who communicates with whom at time t; directed; for this talk: unweighted
(Figure: an example directed graph G on nodes 1, 2, 3, together with its adjacency matrix A.)
- Adjacency matrix A(t): A_ij(t) = 1 if i → j in G(t), and A_ij(t) = 0 otherwise

Notations, definitions, and assumptions
- G = ([n], E) is strongly connected if there exists a directed path from any node i to any node j
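Strong connectivity can be checked with two breadth-first searches: node 0 must reach every node along forward edges, and every node must reach node 0 (equivalently, node 0 reaches all nodes along reversed edges). A small sketch (function and variable names are ours):

```python
from collections import deque

def is_strongly_connected(adj):
    """adj[i] = set of out-neighbors of node i; returns True iff the digraph
    has a directed path between every ordered pair of nodes."""
    n = len(adj)

    def reaches_all(start, nbrs):
        # Standard BFS from `start` using the neighbor function `nbrs`.
        seen, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in nbrs(u):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return len(seen) == n

    fwd = reaches_all(0, lambda u: adj[u])                                # 0 reaches all
    rev = reaches_all(0, lambda u: [v for v in range(n) if u in adj[v]])  # all reach 0
    return fwd and rev

ring = {0: {1}, 1: {2}, 2: {0}}      # directed 3-cycle: strongly connected
chain = {0: {1}, 1: {2}, 2: set()}   # no path back to node 0: not strongly connected
assert is_strongly_connected(ring) and not is_strongly_connected(chain)
```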

Notations, definitions, and assumptions
- Reminder: the objective is to minimize_{x ∈ R^d} Σ_{i=1}^n f_i(x)
- Previously assumed: the local cost functions f_i : R^d → R are convex

An existing method
Push-sum based method for

    minimize_{x ∈ R^d} Σ_{i=1}^n f_i(x)

- Agent i maintains an estimate x_i(t) ∈ R^d of an optimizer and auxiliary variables w_i(t), z_i(t)
- Agents' update rule:

    x_i(t+1) = Σ_{j ∈ N_i^−(t)} x_j(t) / |N_j^+(t)| − α(t) ∇f_i(z_i(t))
    w_i(t+1) = Σ_{j ∈ N_i^−(t)} w_j(t) / |N_j^+(t)|
    z_i(t) = x_i(t) / w_i(t)

where x, z, ∇f(z(t)) ∈ (R^d)^n, w ∈ R^n, and w(0) = 1.
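The update rule above can be sketched in a few lines: each agent pushes equal shares of x_j and w_j to its out-neighbors and takes a gradient step at z_j = x_j / w_j. A toy sketch (our own, with scalar quadratic local costs and a 1/(t+1) step size) on a directed ring, where the network optimum is the mean of the y_i:

```python
import numpy as np

def push_sum_gradient(grads, out_nbrs, x0, steps=3000):
    """Push-sum distributed gradient method (scalar x per agent for simplicity).
    grads[i]: gradient of agent i's local cost f_i.
    out_nbrs[i]: out-neighbors of agent i, including i itself (self-loops)."""
    n = len(grads)
    x = np.array(x0, dtype=float)
    w = np.ones(n)
    for t in range(steps):
        alpha = 1.0 / (t + 1)            # diminishing step size
        z = x / w
        x_new, w_new = np.zeros(n), np.zeros(n)
        for j in range(n):               # agent j pushes equal shares out
            share = 1.0 / len(out_nbrs[j])
            for i in out_nbrs[j]:
                x_new[i] += share * x[j]
                w_new[i] += share * w[j]
        x = x_new - alpha * np.array([grads[i](z[i]) for i in range(n)])
        w = w_new
    return x / w

# Example: f_i(x) = (x - y_i)^2; the global optimum is mean(y).
y = [1.0, 2.0, 6.0]
grads = [lambda x, yi=yi: 2 * (x - yi) for yi in y]
out_nbrs = {0: [0, 1], 1: [1, 2], 2: [2, 0]}   # directed ring with self-loops
z = push_sum_gradient(grads, out_nbrs, x0=[0.0, 0.0, 0.0])
assert np.allclose(z, np.mean(y), atol=1e-2)   # all agents near the optimum
```

Note the directed ring is not weight-balanced; the w-variables are precisely what corrects the resulting bias.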

Existing methods: push-sum main result

Proposition. Suppose that
1. {G(t)} is a graph sequence such that, for some B > 0, G(t : t+B) = ([n], E(t) ∪ ... ∪ E(t+B−1)) is strongly connected for all t ≥ 0,
2. the step-size α(t) > 0 is non-increasing with Σ_{t=0}^∞ α(t) = ∞ and Σ_{t=0}^∞ α²(t) < ∞, and
3. the f_i's are convex with bounded gradients and the underlying distributed optimization problem has a non-empty solution set X*.
Then, for any initial condition x(0) ∈ (R^d)^n, lim_{t→∞} z_i(t) = x* for some x* ∈ X* and all i ∈ [n].

Existing methods: why does it work?
Agents' update rule in matrix form:

    x(t+1) = A^T(t) x(t) − α(t) ∇f(z(t))
    w(t+1) = A^T(t) w(t)
    z_i(t) = x_i(t) / w_i(t)

Why it works:
- Average observer: under the connectivity assumptions, in the long run z_i(t) ≈ x̄(t) for all i
- Flow tracking: we have

    x̄(t+1) = x̄(t) − (α(t)/n) Σ_{i=1}^n ∇f_i(z_i(t)) ≈ x̄(t) − (α(t)/n) Σ_{i=1}^n ∇f_i(x̄(t))
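The flow-tracking identity is pure bookkeeping: A^T(t) is column-stochastic (each agent splits its mass equally among its out-neighbors), so the network-wide sum of the x_i is changed only by the gradient term. A quick numerical check of this identity with random data (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Random column-stochastic matrix: column j holds the shares agent j pushes out.
A_T = rng.random((n, n))
A_T /= A_T.sum(axis=0, keepdims=True)      # each column sums to 1

x = rng.standard_normal(n)
alpha = 0.1                                # stand-in for alpha(t)
grad = rng.standard_normal(n)              # stand-in for grad f(z(t))

x_next = A_T @ x - alpha * grad

# The average evolves exactly like one step of (1/n)-scaled gradient descent:
lhs = x_next.mean()
rhs = x.mean() - alpha * grad.mean()
assert np.isclose(lhs, rhs)
```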

Non-convex distributed optimization
- Convexity allows one to quantify a decrease of ||x̄(t+1) − x*||² for some x* ∈ X*
- But regardless, one can think of the dynamics of x̄(t) as a perturbed gradient descent

Main result
Still, one can show that under suitable conditions:

    lim_{t→∞} ||z_i(t) − x̄(t)|| = 0 for all i (push-sum protocol)

Theorem (Convergence to critical points). Suppose that
1. {G(t)} is a graph sequence such that, for some B > 0, G(t : t+B) = ([n], E(t) ∪ ... ∪ E(t+B−1)) is strongly connected for all t ≥ 0,
2. the step-size α(t) > 0 is non-increasing with Σ_{t=0}^∞ α(t) = ∞ and Σ_{t=0}^∞ α²(t) < ∞, and
3. F(x) = Σ_{i=1}^n f_i(x) is bounded from below (a global minimizer exists), F is coercive, and ∇f_i(x) is Lipschitz continuous for all i.
Then lim_{t→∞} x̄(t) = x* exists and belongs to the set of critical points of F. Furthermore, lim_{t→∞} z_i(t) = x* for all i.

Sketch of the proof
Idea: stochastic approximation approach¹ to analyze x̄(t)
- Define

    LV(t, x) = E[V(t+1, x̄(t+1)) | x̄(t) = x] − V(t, x),   with V(t, x) = V(x) = F(x) + C

- V(x̄(t)) is a nonnegative martingale
- Hence V(x̄(t)) has a finite limit a.s. as t → ∞
- Since F is coercive, x̄(t) is bounded a.s.
- Σ_t α(t) = ∞ together with the assumptions implies x̄(t) → a critical point of F (boundary of a connected component)

¹ M.B. Nevelson and R.Z. Khasminskii, Stochastic Approximation and Recursive Estimation, 1973.
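The conclusion can be mimicked for the averaged iterate in isolation: gradient steps with Σ α(t) = ∞, Σ α²(t) < ∞ and a vanishing perturbation (standing in for the consensus error z_i(t) − x̄(t)) reach a critical point of a coercive non-convex F. A toy sketch on F(x) = (x² − 1)², which is our example, not the talk's:

```python
import math

F = lambda x: (x * x - 1.0) ** 2        # coercive, non-convex; critical points -1, 0, 1
dF = lambda x: 4.0 * x * (x * x - 1.0)

x = 2.0                                  # arbitrary starting point
for t in range(200000):
    alpha = 1.0 / (t + 10)               # sum alpha = inf, sum alpha^2 < inf
    e = math.sin(t) / (t + 10)           # vanishing stand-in for the consensus error
    x -= alpha * (dF(x) + e)             # perturbed gradient step on F

assert abs(dF(x)) < 1e-3                 # gradient vanishes: a critical point of F
assert min(abs(x - c) for c in (-1.0, 0.0, 1.0)) < 1e-3
```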

Non-convex distributed optimization
How do the agents get out of local maxima? We consider the perturbed dynamics:

    x(t+1) = A^T(t) x(t) − α(t) ∇f(z(t)) + α(t) K(t)
    w(t+1) = A^T(t) w(t)
    z_i(t) = x_i(t) / w_i(t)

where K(t) is a noise process and each K_i(t) is an i.i.d. random vector with zero-mean, unit-variance coordinates.
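The effect of the extra α(t)K(t) term is easiest to see in one dimension: started exactly at a local maximum, the unperturbed iteration is stuck (the gradient is zero there), while the noisy one escapes. A toy illustration (the double-well F is our example, not the talk's):

```python
import numpy as np

rng = np.random.default_rng(3)
dF = lambda x: 4.0 * x * (x * x - 1.0)   # F(x) = (x^2 - 1)^2; x = 0 is a local max

def run(noisy):
    x = 0.0                               # start exactly at the local maximum
    for t in range(5000):
        alpha = 1.0 / (t + 10)
        k = rng.standard_normal() if noisy else 0.0
        x -= alpha * dF(x) - alpha * k    # x <- x - alpha*grad + alpha*K(t)
    return x

assert run(noisy=False) == 0.0            # plain gradient descent never leaves x = 0
assert abs(abs(run(noisy=True)) - 1.0) < 0.2   # noisy run escapes toward a minimum
```

Once the noise nudges x off 0, the gradient drift itself repels it from the maximum, and the annealed noise α(t)K(t) is too weak (Σ α² < ∞) to knock it back out of the well.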

Main result 2
With the new term, we have

    x̄(t+1) = x̄(t) − (α(t)/n) Σ_{i=1}^n ∇f_i(z_i(t)) + α(t) K̄(t),

and lim_{t→∞} ||z_i(t) − x̄(t)|| = 0 almost surely for all i (push-sum protocol).

Theorem (Non-convergence to local maxima). Suppose that
1. {G(t)} is a graph sequence such that, for some B > 0, G(t : t+B) = ([n], E(t) ∪ ... ∪ E(t+B−1)) is strongly connected for all t ≥ 0,
2. the step-size α(t) > 0 is non-increasing with Σ_{t=0}^∞ α(t) = ∞ and Σ_{t=0}^∞ α²(t) < ∞, and
3. F(x) = Σ_{i=1}^n f_i(x) is bounded from below (a global minimizer exists), F is coercive, and ∇f_i(x) is Lipschitz continuous for all i.
Then the event that lim_{t→∞} x̄(t) = x* is a local maximum of F has probability zero.

Sketch of the proof

Theorem. Let x* be a local maximum of F and let D be a neighborhood of x*. Assume there exists a function V(t, x), bounded below on D for all t, such that:
1. LV(t, x) ≥ γ(t) for t ≥ 0 and x ∈ D, where Σ_{t=1}^∞ γ(t) < ∞, and
2. lim_{t→∞} V(t, x̄(t)) = ∞ whenever lim_{t→∞} x̄(t) = x*.
Then Pr{lim_{t→∞} x̄(t) = x*} = 0, independently of x̄(0).

V(t, x) is chosen intelligently to reflect the infinite push needed to climb to the local maximum.

Further considerations
- Getting rid of saddle points?
- Other distributed optimization methods?
- Rate of convergence results?
- Payoff-based methods?

The results are reported in: T. Tatarenko and B. Touri, Non-convex distributed optimization. IEEE Transactions on Automatic Control (2017).

Publications
Journal:
- T. Tatarenko and BT, Non-convex distributed optimization. IEEE Transactions on Automatic Control (2017).
- BT and B. Gharesifard, Structural approach to distributed optimization, submitted.
- BT and B. Gharesifard, Saddle-point dynamics for distributed convex optimization on general directed graphs, submitted.
- BT and B. Gharesifard, Average consensus on random chains, submitted.
Conference:
- T. Tatarenko and BT, On local analysis of distributed optimization. American Control Conference (ACC), Boston, July.
- BT and B. Gharesifard, Continuous-time distributed convex optimization on time-varying directed networks. IEEE 54th Conference on Decision and Control (CDC), Japan, Dec.
- BT and B. Gharesifard, A distributed observer for distributed control systems. Allerton Conference on Communication, Control, and Computing.


More information

Sparse Optimization Lecture: Proximal Operator/Algorithm and Lagrange Dual

Sparse Optimization Lecture: Proximal Operator/Algorithm and Lagrange Dual Sparse Optimization Lecture: Proximal Operator/Algorithm and Lagrange Dual Instructor: Wotao Yin July 2013 online discussions on piazza.com Those who complete this lecture will know learn the proximal

More information

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 2. Convex Optimization

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 2. Convex Optimization Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 2 Convex Optimization Shiqian Ma, MAT-258A: Numerical Optimization 2 2.1. Convex Optimization General optimization problem: min f 0 (x) s.t., f i

More information

Computing over Multiple-Access Channels with Connections to Wireless Network Coding

Computing over Multiple-Access Channels with Connections to Wireless Network Coding ISIT 06: Computing over MACS 1 / 20 Computing over Multiple-Access Channels with Connections to Wireless Network Coding Bobak Nazer and Michael Gastpar Wireless Foundations Center Department of Electrical

More information

Global Minimization via Piecewise-Linear Underestimation

Global Minimization via Piecewise-Linear Underestimation Journal of Global Optimization,, 1 9 (2004) c 2004 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. Global Minimization via Piecewise-Linear Underestimation O. L. MANGASARIAN olvi@cs.wisc.edu

More information

Greedy Gossip with Eavesdropping

Greedy Gossip with Eavesdropping Greedy Gossip with Eavesdropping Deniz Üstebay, Mark Coates, and Michael Rabbat Department of Electrical and Computer Engineering McGill University, Montréal, Québec, Canada Email: deniz.ustebay@mail.mcgill.ca,

More information

. Tutorial Class V 3-10/10/2012 First Order Partial Derivatives;...

. Tutorial Class V 3-10/10/2012 First Order Partial Derivatives;... Tutorial Class V 3-10/10/2012 1 First Order Partial Derivatives; Tutorial Class V 3-10/10/2012 1 First Order Partial Derivatives; 2 Application of Gradient; Tutorial Class V 3-10/10/2012 1 First Order

More information

A duality-based approach for distributed min-max optimization with application to demand side management

A duality-based approach for distributed min-max optimization with application to demand side management 1 A duality-based approach for distributed min- optimization with application to demand side management Ivano Notarnicola 1, Mauro Franceschelli 2, Giuseppe Notarstefano 1 arxiv:1703.08376v1 [cs.dc] 24

More information

Distributed Detection in Sensor Networks: Connectivity Graph and Small World Networks

Distributed Detection in Sensor Networks: Connectivity Graph and Small World Networks Distributed Detection in Sensor Networks: Connectivity Graph and Small World Networks SaeedA.AldosariandJoséM.F.Moura Electrical and Computer Engineering Department Carnegie Mellon University 5000 Forbes

More information

Machine Learning. Topic 5: Linear Discriminants. Bryan Pardo, EECS 349 Machine Learning, 2013

Machine Learning. Topic 5: Linear Discriminants. Bryan Pardo, EECS 349 Machine Learning, 2013 Machine Learning Topic 5: Linear Discriminants Bryan Pardo, EECS 349 Machine Learning, 2013 Thanks to Mark Cartwright for his extensive contributions to these slides Thanks to Alpaydin, Bishop, and Duda/Hart/Stork

More information

A Signaling Game Approach to Databases Querying

A Signaling Game Approach to Databases Querying A Signaling Game Approach to Databases Querying Ben McCamish 1, Arash Termehchy 1, Behrouz Touri 2, and Eduardo Cotilla-Sanchez 1 1 School of Electrical Engineering and Computer Science, Oregon State University

More information

GAMES Webinar: Rendering Tutorial 2. Monte Carlo Methods. Shuang Zhao

GAMES Webinar: Rendering Tutorial 2. Monte Carlo Methods. Shuang Zhao GAMES Webinar: Rendering Tutorial 2 Monte Carlo Methods Shuang Zhao Assistant Professor Computer Science Department University of California, Irvine GAMES Webinar Shuang Zhao 1 Outline 1. Monte Carlo integration

More information

arxiv: v1 [cond-mat.dis-nn] 30 Dec 2018

arxiv: v1 [cond-mat.dis-nn] 30 Dec 2018 A General Deep Learning Framework for Structure and Dynamics Reconstruction from Time Series Data arxiv:1812.11482v1 [cond-mat.dis-nn] 30 Dec 2018 Zhang Zhang, Jing Liu, Shuo Wang, Ruyue Xin, Jiang Zhang

More information

Optimal Path Finding for Direction, Location and Time Dependent Costs, with Application to Vessel Routing

Optimal Path Finding for Direction, Location and Time Dependent Costs, with Application to Vessel Routing 1 Optimal Path Finding for Direction, Location and Time Dependent Costs, with Application to Vessel Routing Irina S. Dolinskaya Department of Industrial Engineering and Management Sciences Northwestern

More information

Robotic Motion Planning: Review C-Space and Start Potential Functions

Robotic Motion Planning: Review C-Space and Start Potential Functions Robotic Motion Planning: Review C-Space and Start Potential Functions Robotics Institute 16-735 http://www.cs.cmu.edu/~motionplanning Howie Choset http://www.cs.cmu.edu/~choset What if the robot is not

More information

Convexity: an introduction

Convexity: an introduction Convexity: an introduction Geir Dahl CMA, Dept. of Mathematics and Dept. of Informatics University of Oslo 1 / 74 1. Introduction 1. Introduction what is convexity where does it arise main concepts and

More information

Tangent & normal vectors The arc-length parameterization of a 2D curve The curvature of a 2D curve

Tangent & normal vectors The arc-length parameterization of a 2D curve The curvature of a 2D curve Local Analysis of 2D Curve Patches Topic 4.2: Local analysis of 2D curve patches Representing 2D image curves Estimating differential properties of 2D curves Tangent & normal vectors The arc-length parameterization

More information

Chapter 3 Numerical Methods

Chapter 3 Numerical Methods Chapter 3 Numerical Methods Part 1 3.1 Linearization and Optimization of Functions of Vectors 1 Problem Notation 2 Outline 3.1.1 Linearization 3.1.2 Optimization of Objective Functions 3.1.3 Constrained

More information

Lecture 25 Nonlinear Programming. November 9, 2009

Lecture 25 Nonlinear Programming. November 9, 2009 Nonlinear Programming November 9, 2009 Outline Nonlinear Programming Another example of NLP problem What makes these problems complex Scalar Function Unconstrained Problem Local and global optima: definition,

More information

DISTRIBUTED NETWORK RESOURCE ALLOCATION WITH INTEGER CONSTRAINTS. Yujiao Cheng, Houfeng Huang, Gang Wu, Qing Ling

DISTRIBUTED NETWORK RESOURCE ALLOCATION WITH INTEGER CONSTRAINTS. Yujiao Cheng, Houfeng Huang, Gang Wu, Qing Ling DISTRIBUTED NETWORK RESOURCE ALLOCATION WITH INTEGER CONSTRAINTS Yuao Cheng, Houfeng Huang, Gang Wu, Qing Ling Department of Automation, University of Science and Technology of China, Hefei, China ABSTRACT

More information

DS Machine Learning and Data Mining I. Alina Oprea Associate Professor, CCIS Northeastern University

DS Machine Learning and Data Mining I. Alina Oprea Associate Professor, CCIS Northeastern University DS 4400 Machine Learning and Data Mining I Alina Oprea Associate Professor, CCIS Northeastern University September 20 2018 Review Solution for multiple linear regression can be computed in closed form

More information

Mathematical Programming and Research Methods (Part II)

Mathematical Programming and Research Methods (Part II) Mathematical Programming and Research Methods (Part II) 4. Convexity and Optimization Massimiliano Pontil (based on previous lecture by Andreas Argyriou) 1 Today s Plan Convex sets and functions Types

More information

CME323 Report: Distributed Multi-Armed Bandits

CME323 Report: Distributed Multi-Armed Bandits CME323 Report: Distributed Multi-Armed Bandits Milind Rao milind@stanford.edu 1 Introduction Consider the multi-armed bandit (MAB) problem. In this sequential optimization problem, a player gets to pull

More information

15. Cutting plane and ellipsoid methods

15. Cutting plane and ellipsoid methods EE 546, Univ of Washington, Spring 2012 15. Cutting plane and ellipsoid methods localization methods cutting-plane oracle examples of cutting plane methods ellipsoid method convergence proof inequality

More information

10703 Deep Reinforcement Learning and Control

10703 Deep Reinforcement Learning and Control 10703 Deep Reinforcement Learning and Control Russ Salakhutdinov Machine Learning Department rsalakhu@cs.cmu.edu Policy Gradient I Used Materials Disclaimer: Much of the material and slides for this lecture

More information

Reaching consensus about gossip: convergence times and costs

Reaching consensus about gossip: convergence times and costs Reaching consensus about gossip: convergence times and costs Florence Bénézit, Patrick Denantes, Alexandros G. Dimakis, Patrick Thiran, Martin Vetterli School of IC, Station 14, EPFL, CH-1015 Lausanne,

More information

Lecture 2: August 29, 2018

Lecture 2: August 29, 2018 10-725/36-725: Convex Optimization Fall 2018 Lecturer: Ryan Tibshirani Lecture 2: August 29, 2018 Scribes: Adam Harley Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer: These notes have

More information

The PageRank Computation in Google, Randomized Algorithms and Consensus of Multi-Agent Systems

The PageRank Computation in Google, Randomized Algorithms and Consensus of Multi-Agent Systems The PageRank Computation in Google, Randomized Algorithms and Consensus of Multi-Agent Systems Roberto Tempo IEIIT-CNR Politecnico di Torino tempo@polito.it This talk The objective of this talk is to discuss

More information

4 Integer Linear Programming (ILP)

4 Integer Linear Programming (ILP) TDA6/DIT37 DISCRETE OPTIMIZATION 17 PERIOD 3 WEEK III 4 Integer Linear Programg (ILP) 14 An integer linear program, ILP for short, has the same form as a linear program (LP). The only difference is that

More information

Dual Interpolants for Finite Element Methods

Dual Interpolants for Finite Element Methods Dual Interpolants for Finite Element Methods Andrew Gillette joint work with Chandrajit Bajaj and Alexander Rand Department of Mathematics Institute of Computational Engineering and Sciences University

More information

Convex Optimization / Homework 2, due Oct 3

Convex Optimization / Homework 2, due Oct 3 Convex Optimization 0-725/36-725 Homework 2, due Oct 3 Instructions: You must complete Problems 3 and either Problem 4 or Problem 5 (your choice between the two) When you submit the homework, upload a

More information

Online Learning. Lorenzo Rosasco MIT, L. Rosasco Online Learning

Online Learning. Lorenzo Rosasco MIT, L. Rosasco Online Learning Online Learning Lorenzo Rosasco MIT, 9.520 About this class Goal To introduce theory and algorithms for online learning. Plan Different views on online learning From batch to online least squares Other

More information

Outline. Robust MPC and multiparametric convex programming. A. Bemporad C. Filippi. Motivation: Robust MPC. Multiparametric convex programming

Outline. Robust MPC and multiparametric convex programming. A. Bemporad C. Filippi. Motivation: Robust MPC. Multiparametric convex programming Robust MPC and multiparametric convex programming A. Bemporad C. Filippi D. Muñoz de la Peña CC Meeting Siena /4 September 003 Outline Motivation: Robust MPC Multiparametric convex programming Kothares

More information

Lecture 2 Convex Sets

Lecture 2 Convex Sets Optimization Theory and Applications Lecture 2 Convex Sets Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Fall 2016 2016/9/29 Lecture 2: Convex Sets 1 Outline

More information

Projection-Based Methods in Optimization

Projection-Based Methods in Optimization Projection-Based Methods in Optimization Charles Byrne (Charles Byrne@uml.edu) http://faculty.uml.edu/cbyrne/cbyrne.html Department of Mathematical Sciences University of Massachusetts Lowell Lowell, MA

More information

Lec13p1, ORF363/COS323

Lec13p1, ORF363/COS323 Lec13 Page 1 Lec13p1, ORF363/COS323 This lecture: Semidefinite programming (SDP) Definition and basic properties Review of positive semidefinite matrices SDP duality SDP relaxations for nonconvex optimization

More information

Convex Optimization - Chapter 1-2. Xiangru Lian August 28, 2015

Convex Optimization - Chapter 1-2. Xiangru Lian August 28, 2015 Convex Optimization - Chapter 1-2 Xiangru Lian August 28, 2015 1 Mathematical optimization minimize f 0 (x) s.t. f j (x) 0, j=1,,m, (1) x S x. (x 1,,x n ). optimization variable. f 0. R n R. objective

More information

arxiv: v1 [math.co] 27 Feb 2015

arxiv: v1 [math.co] 27 Feb 2015 Mode Poset Probability Polytopes Guido Montúfar 1 and Johannes Rauh 2 arxiv:1503.00572v1 [math.co] 27 Feb 2015 1 Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany,

More information

Dynamic Control and Optimization of Buffer Size for Short Message Transfer in GPRS/UMTS Networks *

Dynamic Control and Optimization of Buffer Size for Short Message Transfer in GPRS/UMTS Networks * Dynamic Control and Optimization of for Short Message Transfer in GPRS/UMTS Networks * Michael M. Markou and Christos G. Panayiotou Dept. of Electrical and Computer Engineering, University of Cyprus Email:

More information

Direct Surface Reconstruction using Perspective Shape from Shading via Photometric Stereo

Direct Surface Reconstruction using Perspective Shape from Shading via Photometric Stereo Direct Surface Reconstruction using Perspective Shape from Shading via Roberto Mecca joint work with Ariel Tankus and Alfred M. Bruckstein Technion - Israel Institute of Technology Department of Computer

More information

Case Study 1: Estimating Click Probabilities

Case Study 1: Estimating Click Probabilities Case Study 1: Estimating Click Probabilities SGD cont d AdaGrad Machine Learning for Big Data CSE547/STAT548, University of Washington Sham Kakade March 31, 2015 1 Support/Resources Office Hours Yao Lu:

More information

Value function and optimal trajectories for a control problem with supremum cost function and state constraints

Value function and optimal trajectories for a control problem with supremum cost function and state constraints Value function and optimal trajectories for a control problem with supremum cost function and state constraints Hasnaa Zidani ENSTA ParisTech, University of Paris-Saclay joint work with: A. Assellaou,

More information

Learning with minimal supervision

Learning with minimal supervision Learning with minimal supervision Sanjoy Dasgupta University of California, San Diego Learning with minimal supervision There are many sources of almost unlimited unlabeled data: Images from the web Speech

More information

On the Relationships between Zero Forcing Numbers and Certain Graph Coverings

On the Relationships between Zero Forcing Numbers and Certain Graph Coverings On the Relationships between Zero Forcing Numbers and Certain Graph Coverings Fatemeh Alinaghipour Taklimi, Shaun Fallat 1,, Karen Meagher 2 Department of Mathematics and Statistics, University of Regina,

More information

Contents. I Basics 1. Copyright by SIAM. Unauthorized reproduction of this article is prohibited.

Contents. I Basics 1. Copyright by SIAM. Unauthorized reproduction of this article is prohibited. page v Preface xiii I Basics 1 1 Optimization Models 3 1.1 Introduction... 3 1.2 Optimization: An Informal Introduction... 4 1.3 Linear Equations... 7 1.4 Linear Optimization... 10 Exercises... 12 1.5

More information

Design and Performance Analysis of a DRAM-based Statistics Counter Array Architecture

Design and Performance Analysis of a DRAM-based Statistics Counter Array Architecture Design and Performance Analysis of a DRAM-based Statistics Counter Array Architecture Chuck Zhao 1 Hao Wang 2 Bill Lin 2 Jim Xu 1 1 Georgia Institute of Technology 2 University of California, San Diego

More information

Lecture 0: Reivew of some basic material

Lecture 0: Reivew of some basic material Lecture 0: Reivew of some basic material September 12, 2018 1 Background material on the homotopy category We begin with the topological category TOP, whose objects are topological spaces and whose morphisms

More information

1KOd17RMoURxjn2 CSE 20 DISCRETE MATH Fall

1KOd17RMoURxjn2 CSE 20 DISCRETE MATH Fall CSE 20 https://goo.gl/forms/1o 1KOd17RMoURxjn2 DISCRETE MATH Fall 2017 http://cseweb.ucsd.edu/classes/fa17/cse20-ab/ Today's learning goals Explain the steps in a proof by mathematical and/or structural

More information

CS281 Section 3: Practical Optimization

CS281 Section 3: Practical Optimization CS281 Section 3: Practical Optimization David Duvenaud and Dougal Maclaurin Most parameter estimation problems in machine learning cannot be solved in closed form, so we often have to resort to numerical

More information

A Brief Look at Optimization

A Brief Look at Optimization A Brief Look at Optimization CSC 412/2506 Tutorial David Madras January 18, 2018 Slides adapted from last year s version Overview Introduction Classes of optimization problems Linear programming Steepest

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 1 A. d Aspremont. Convex Optimization M2. 1/49 Today Convex optimization: introduction Course organization and other gory details... Convex sets, basic definitions. A. d

More information

Cofinite Induced Subgraph Nim

Cofinite Induced Subgraph Nim University of California, Los Angeles October 4, 2012 Nim Nim: Game played with n heaps of beans Players alternate removing any positive number of beans from any one heap When all heaps are empty the next

More information

Distributed connectivity of mobile robotic networks

Distributed connectivity of mobile robotic networks Distributed connectivity of mobile robotic networks Dissertation Talk March 19, 2009 Mike Schuresko advised by Jorge Cortés Applied Mathematics and Statistics University of California, Santa Cruz Mechanical

More information

Distributed connectivity of mobile robotic networks

Distributed connectivity of mobile robotic networks Distributed connectivity of mobile robotic networks Job Talk July 24, 2009 Mike Schuresko advised by Jorge Cortés Applied Mathematics and Statistics University of California, Santa Cruz Mechanical and

More information

Nonsmooth Optimization and Related Topics

Nonsmooth Optimization and Related Topics Nonsmooth Optimization and Related Topics Edited by F. H. Clarke University of Montreal Montreal, Quebec, Canada V. F. Dem'yanov Leningrad State University Leningrad, USSR I and F. Giannessi University

More information

Remote State Estimation in the Presence of an Eavesdropper

Remote State Estimation in the Presence of an Eavesdropper Remote State Estimation in the Presence of an Eavesdropper Alex Leong Paderborn University Workshop on Information and Communication Theory in Control Systems, TUM, May 2017 Alex Leong (Paderborn University)

More information

PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING. 1. Introduction

PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING. 1. Introduction PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING KELLER VANDEBOGERT AND CHARLES LANNING 1. Introduction Interior point methods are, put simply, a technique of optimization where, given a problem

More information