On the Complexity of Explicit MPC Laws


On the Complexity of Explicit MPC Laws
Francesco Borrelli, Mato Baotić, Jaroslav Pekar and Greg Stewart

Abstract. Finite-time optimal control problems with a quadratic performance index for linear systems with linear constraints can be transformed into Quadratic Programs (QPs). Model Predictive Control requires the on-line solution of such QPs. This can be obtained either by using a QP solver or by evaluating the associated explicit solution. The objective of this note is to shed some light on the complexity of the two approaches.

I. INTRODUCTION

In [3], [2] the authors have shown how to compute the solution to the constrained finite-time optimal control (CFTOC) problem for discrete-time linear systems as a piecewise affine (PWA) state-feedback law. Such a law is computed off-line by using a multi-parametric programming solver [3], [4], [9], which divides the state space into polyhedral regions and, for each region, determines the linear gain and offset which produce the optimal control action. This method reveals its effectiveness when a Model Predictive Control (MPC) strategy is used [10]. At each sampling time the MPC requires the solution of an open-loop CFTOC problem which, for a quadratic performance index and known (measured) system state, corresponds to solving a Quadratic Program (QP). Having a precomputed solution as an explicit piecewise affine function of the state vector reduces the on-line computation of the MPC control law to a function evaluation, thus avoiding the on-line solution of a quadratic program. The objective of this note is to shed some light on the complexity of the on-line solution of a quadratic program (by means of an active set QP solver) versus the on-line evaluation of the explicit solution (by means of an "explicit solver").
We will focus on the three main components of an active set QP algorithm and of an explicit solver: (1) the amount of stored data, (2) the optimality certificate and (3) the selection of the next active set if the validation of the optimality fails. In order to simplify our exposition and our comparison, we will start with a classical active set QP solver [7, p. 229] and a standard explicit solver (as proposed in [3], [2]).

Corresponding author: Francesco Borrelli. F. Borrelli is with the Department of Mechanical Engineering, University of California, Berkeley, USA, fborrelli@me.berkeley.edu. M. Baotić is with the Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, 10000 Zagreb, Croatia, mato.baotic@fer.hr. J. Pekar is with Honeywell Prague Laboratory, Prague, Czech Republic, jaroslav.pekar@honeywell.com. G. Stewart is with Honeywell Automation and Control Solutions, North Vancouver, Canada, greg.stewart@honeywell.com.

It is not the intent of this paper to compare the proposed algorithms with other very efficient explicit solvers that have appeared in the literature [8], [14], [11], or with fast QP solvers [1], [16], [12], [15] tailored to the special structure of the underlying optimal control problem or to suboptimal explicit solutions. Such a comparison would be problem dependent and would require the simultaneous analysis of several issues such as speed of computation, storage demand and real-time code verifiability. This is an involved study and as such is outside the scope of this paper.

II. NOTATION

Throughout this paper (lower and upper case) italic letters denote scalars, vectors and matrices (e.g., $A$, $a$, ...), while upper case calligraphic letters denote sets (e.g., $\mathcal{A}$, $\mathcal{B}$, ...). $\mathbb{R}$ is the set of real numbers, $\mathbb{N}$ is the set of positive integers. For a matrix (vector) $A$, $A'$ denotes its transpose, while $A_i$ denotes its $i$-th row (element). For a set $\mathcal{A}$, $\mathcal{A}(k)$ denotes its $k$-th element.
Given a matrix $G \in \mathbb{R}^{m \times n}$ and a set $\mathcal{A} \subseteq \{1, \dots, m\}$, $G_{\mathcal{A}}$ denotes the submatrix of $G$ consisting of the rows indexed by $\mathcal{A}$. $Q \succ 0$ (resp., $Q \succeq 0$) denotes positive definiteness (resp., positive semidefiniteness) of a square matrix $Q$, while $|\mathcal{A}|$ denotes the cardinality (number of elements) of a set $\mathcal{A}$.

III. CFTOC AND ITS STATE-FEEDBACK PWA SOLUTION

Consider the discrete-time linear time-invariant system

$x(t+1) = A x(t) + B u(t)$   (1)

subject to the constraints

$E^x x(t) + E^u u(t) \le E$   (2)

at all time instants $t \ge 0$. In (1)-(2), $n_x \in \mathbb{N}$, $n_u \in \mathbb{N}$ and $n_E \in \mathbb{N}$ are the number of states, inputs and constraints, respectively, $x(t) \in \mathbb{R}^{n_x}$ is the state vector, $u(t) \in \mathbb{R}^{n_u}$ is the input vector, $A \in \mathbb{R}^{n_x \times n_x}$, $B \in \mathbb{R}^{n_x \times n_u}$, $E^x \in \mathbb{R}^{n_E \times n_x}$, $E^u \in \mathbb{R}^{n_E \times n_u}$, $E \in \mathbb{R}^{n_E}$, and the vector inequality (2) is considered elementwise. Let $x_0 = x(0)$ be the initial state and consider the constrained finite-time optimal control problem

$J^*(x_0) := \min_U J(x_0, U)$
subject to $x_{k+1} = A x_k + B u_k$, $E^x x_k + E^u u_k \le E$, $k = 0, \dots, N-1$,   (3)

where $N \in \mathbb{N}$ is the horizon length, $U := [u_0', \dots, u_{N-1}']' \in \mathbb{R}^{n_u N}$ is the optimization vector, $x_i$ denotes the state at time $i$ if the initial state is $x_0$ and the control sequence $\{u_0, \dots, u_{i-1}\}$ is applied to the system (1), $J^* : \mathbb{R}^{n_x} \to \mathbb{R}$ is

the value function, and the cost function $J : \mathbb{R}^{n_x} \times \mathbb{R}^{n_u N} \to \mathbb{R}$ is given as the quadratic function

$J(x_0, U) = x_N' Q^x_N x_N + \sum_{k=0}^{N-1} \left( x_k' Q^x x_k + u_k' Q^u u_k \right)$   (4)

where $Q^x = (Q^x)' \succeq 0$, $Q^u = (Q^u)' \succ 0$, $Q^x_N \succeq 0$. Consider the problem of regulating the discrete-time linear time-invariant system (1) to the origin while fulfilling the constraints (2). The solution of the CFTOC problem (3)-(4) is an open-loop optimal control trajectory over a finite horizon. A Model Predictive Control (MPC) [10] strategy employs it to obtain a feedback control law in the following way: assume that a full measurement of the state $x(t)$ is available at the current time $t \ge 0$. Then, the CFTOC problem (3)-(4) is solved at each time $t$ for $x_0 = x(t)$, and $u(t) = u_0^*$ is applied as an input to system (1).

A. Solution of the CFTOC problem

Consider the CFTOC problem (3)-(4). By substituting $x_k = A^k x_0 + \sum_{j=0}^{k-1} A^j B u_{k-1-j}$ in (3)-(4), it can be rewritten as the quadratic program [3]

$J^*(x) = \min_U \; \tfrac{1}{2} U' H U + x' F U + \tfrac{1}{2} x' Y x$
subject to $G U \le b_r + B_x x$,   (5)

where $x = x_0$, the column vector $U := [u_0', \dots, u_{N-1}']' \in \mathbb{R}^n$, $n := n_u N$, is the optimization vector, $H = H' \succ 0$, and $H$, $F$, $Y$, $G$, $B_x$, $b_r$ are easily obtained from $Q^x$, $Q^u$, $Q^x_N$ and (3)-(4) (see [3] for details). Because the problem depends on $x$, the implementation of MPC can be performed either by solving the QP (5) on-line or, as shown in [3], [4], by solving problem (5) off-line for all $x$ within a given range of values, i.e., by considering (5) as a multi-parametric Quadratic Program (mp-QP). In [3] the authors give a self-contained proof of the following properties of the mp-QP solution.

Theorem 1: Consider the multi-parametric quadratic program (5) and let $H \succ 0$. Then the set of feasible parameters $X_f$ is convex, the optimizer $U^* : X_f \to \mathbb{R}^n$ is continuous and piecewise affine (PWA), and the value function $J^* : X_f \to \mathbb{R}$ is continuous, convex and piecewise quadratic.
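To make the condensing step above concrete, the following sketch builds the matrices $H$ and $F$ of (5) by eliminating the states. The double-integrator system, the weights and the horizon $N = 3$ are invented for illustration, not data from this note:

```python
import numpy as np

# Hypothetical double integrator, identity weights, horizon N = 3.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
N = 3
Qx, Qu, QxN = np.eye(2), np.eye(1), np.eye(2)
nx, nu = B.shape

# Prediction matrices: [x_1; ...; x_N] = Sx x0 + Su [u_0; ...; u_{N-1}],
# i.e. the substitution x_k = A^k x0 + sum_j A^j B u_{k-1-j} in stacked form.
Sx = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
Su = np.zeros((N * nx, N * nu))
for k in range(1, N + 1):
    for j in range(k):
        Su[(k - 1) * nx:k * nx, j * nu:(j + 1) * nu] = \
            np.linalg.matrix_power(A, k - 1 - j) @ B

# Stacked stage costs; the terminal weight sits in the last block.
Qbar = np.kron(np.eye(N), Qx)
Qbar[-nx:, -nx:] = QxN
Rbar = np.kron(np.eye(N), Qu)

# J(x0, U) = 1/2 U'HU + x0'FU + const, matching the QP form (5).
H = 2.0 * (Su.T @ Qbar @ Su + Rbar)
F = 2.0 * (Sx.T @ Qbar @ Su)
```

Since `Rbar` is positive definite, $H$ is positive definite and Theorem 1 applies; the cost of any input sequence can be evaluated either by simulating (1) or through $H$ and $F$, and the two must agree.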
Once the multi-parametric problem (5) is solved off-line, i.e., the solution $U^*(x) = f_{PWA}(x)$ of the CFTOC problem (5) is found, the state-feedback PWA MPC law is obtained by extracting the first $n_u$ elements of $f_{PWA}(x)$:

$u^*(t) = [I_{n_u} \; 0_{n_u} \; \cdots \; 0_{n_u}] \, f_{PWA}(x(t))$.   (6)

Therefore, by using a multi-parametric solver the computation of an MPC law becomes a simple piecewise affine function evaluation.

IV. ACTIVE SET ALGORITHMS VS EXPLICIT SOLVERS

The objective of this section is to compare the computational time and storage demand associated with (i) active set QP solvers applied to (5) and (ii) the evaluation of the explicit solution (6). Three main components are shared by active set QP algorithms and by explicit solvers: (i) the amount of off-line stored data, (ii) the validation of the optimality certificate and (iii) the selection of the next active set if the validation of the certificate fails. The three steps will be detailed later in this manuscript. For the sake of readability we rewrite the quadratic program (5) compactly as

$\min_U \; \tfrac{1}{2} U' H U + g(x)' U$, subject to $G U \le b(x)$,   (7)

where $G \in \mathbb{R}^{m \times n}$, $b(x) \in \mathbb{R}^m$, $g(x) \in \mathbb{R}^n$, $b(x) = b_r + B_x x$ and $g(x) = F' x$. Let $I := \{1, \dots, m\}$ be the set of constraint indices. For a fixed $\bar{x}$, $A^*(\bar{x})$ denotes the set of active constraints at $U^*(\bar{x})$:

$A^*(\bar{x}) := \{ j \in I : G_j U^*(\bar{x}) = b_j(\bar{x}) \}$.   (8)

In the optimization field the variety of QP algorithms is very rich and their performance depends on the type of problem. In this note we prefer to highlight the main differences between the two approaches (on-line vs. explicit), and the corresponding changes in computational time and storage demand, rather than selecting a specific QP implementation and carrying out an exact computation. For this reason, in Section IV-A we present the main steps of a simple active set QP algorithm and in Section IV-B the main steps of an explicit solver. For the same reason, we will not cover the variety of pivoting rules for the degenerate case.
A. Active Set QP solver

Before presenting an active set method for solving the QP (7), we first consider a subset $A \subseteq I$ of the constraint indices and the following equality-constrained QP:

$\min_U \; \tfrac{1}{2} U' H U + g(x)' U$, subject to $G_A U = b_A(x)$.   (9)

A Lagrangian method [7, p. 229] solves the equality-constrained QP (9) by computing a solution to the Karush-Kuhn-Tucker (KKT) conditions:

$H U + G_A' \lambda + g(x) = 0$   (10a)
$G_A U = b_A(x)$.   (10b)

Equations (10a) and (10b) can be compactly written as

$\begin{bmatrix} H & G_A' \\ G_A & 0 \end{bmatrix} \begin{bmatrix} U \\ \lambda \end{bmatrix} = \begin{bmatrix} -g(x) \\ b_A(x) \end{bmatrix}$.   (11)

The matrix in (11) is referred to as the Lagrangian matrix and is symmetric. If its inverse exists and is expressed as

$\begin{bmatrix} H & G_A' \\ G_A & 0 \end{bmatrix}^{-1} = \begin{bmatrix} L & T \\ T' & S \end{bmatrix}$,   (12)

then the solution to (11) can be written as

$U^* = -L g(x) + T b_A(x)$, $\lambda^* = -T' g(x) + S b_A(x)$,   (13)

or equivalently, using $g(x) = F'x$ and $b(x) = b_r + B_x x$, as

$U^* = (T (B_x)_A - L F') x + T (b_r)_A =: F_A x + c_A$,
$\lambda^* = (S (B_x)_A - T' F') x + S (b_r)_A =: F^d_A x + c^d_A$.   (14)
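As a concrete instance of (11), the short sketch below forms the Lagrangian matrix for a small two-variable QP and solves for $(U^*, \lambda^*)$; the data $H$, $g$, $G_A$, $b_A$ are invented for illustration:

```python
import numpy as np

def solve_eq_qp(H, g, GA, bA):
    """Solve the equality-constrained QP (9) through the KKT system (11)."""
    n, m = H.shape[0], GA.shape[0]
    K = np.zeros((n + m, n + m))      # symmetric Lagrangian matrix of (11)
    K[:n, :n] = H
    K[:n, n:] = GA.T
    K[n:, :n] = GA
    rhs = np.concatenate([-g, bA])    # right-hand side [-g(x); b_A(x)]
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]           # (U*, lambda*)

# Invented data: min (U1-1)^2 + (U2-2)^2 subject to U1 + U2 = 1,
# i.e. H = 2I and g = (-2, -4) up to a constant in the cost.
H = 2.0 * np.eye(2)
g = np.array([-2.0, -4.0])
GA = np.array([[1.0, 1.0]])
bA = np.array([1.0])
U, lam = solve_eq_qp(H, g, GA, bA)
```

At the returned pair both KKT residuals (10a)-(10b) vanish; in the inequality-constrained setting, a negative multiplier at this point is the signal to drop the corresponding constraint from the active set.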

The explicit expressions for $L$, $T$ and $S$ when $H^{-1}$ exists are

$L = H^{-1} - H^{-1} G_A' (G_A H^{-1} G_A')^{-1} G_A H^{-1}$,
$T = H^{-1} G_A' (G_A H^{-1} G_A')^{-1}$,
$S = -(G_A H^{-1} G_A')^{-1}$.   (15)

In practice, computing (15) might not be numerically robust; several alternatives can be found in [7, p. 236]. The matrix $H$ is always invertible (since $H \succ 0$), but the Lagrangian matrix in (11) might not be. This happens when $G_A$ is not full row rank. In this case any feasible point can be written as $U = Y b_A(x) + Z y$, where $y \in \mathbb{R}^{n - \bar{m}}$ and $\bar{m}$ is the rank of $G_A$. One possibility for computing $U^*$ and $\lambda^*$ is to use a QR factorization of the matrix $G_A'$ to compute $Y$ and $Z$ as follows:

$G_A' = Q \begin{bmatrix} R \\ 0 \end{bmatrix} = [Q_1 \; Q_2] \begin{bmatrix} R \\ 0 \end{bmatrix} = Q_1 R$,   (16)

where $Q \in \mathbb{R}^{n \times n}$ is orthogonal, $R \in \mathbb{R}^{\bar{m} \times \bar{m}}$ is upper triangular, $Q_1 \in \mathbb{R}^{n \times \bar{m}}$ and $Q_2 \in \mathbb{R}^{n \times (n - \bar{m})}$. If we use $Y = Q_1 R^{-T}$ and $Z = Q_2$, then [7, Eqs. 10.1.14, 10.1.15]

$U^* = Y b_A(x) - Z (Z' H Z)^{-1} Z' (g(x) + H Y b_A(x))$   (17a)
$\lambda^* = -Y' (H U^* + g(x))$,   (17b)

which, combined with (14), gives:

$L = Z (Z' H Z)^{-1} Z'$, $T = Y - L H Y$, $S = -Y' H T$.   (18)

Alternative approaches for computing $U^*$ and $\lambda^*$ can be found in [7]. We remark that they all consist in manipulating (11) and, after a certain number of operations, obtaining $U^*(x)$ and $\lambda^*(x)$.

We are now ready to introduce a primal feasible QP solver. Consider the QP (7) for a fixed $x$, the set $A = A^*(x)$ of active constraints at $U^*(x)$ and the associated KKT conditions:

$H U + G_A' \lambda + g(x) = 0$   (19a)
$G_A U = b_A(x)$   (19b)
$\lambda \ge 0$   (19c)
$G_{I \setminus A} U < b_{I \setminus A}(x)$.   (19d)

Next, the main ingredients of a primal active set method are briefly recalled [7]. At each iteration $k$, a feasible point $U^{(k)}$ for the QP (7) is known, with associated active constraint set $A^{(k)} := \{ j \in I : G_j U^{(k)} = b_j(x) \}$. Step $k$ consists of computing the solution to the problem

$\min_\delta \; \tfrac{1}{2} \delta' H \delta + g^{(k)}(x)' \delta$, subject to $G_{A^{(k)}} \delta = 0$,   (20)

where

$g^{(k)}(x) = g(x) + H U^{(k)}$   (21)

and $\delta$ represents a correction to $U^{(k)}$ in the direction where the constraints $A^{(k)}$ remain active. Problem (20) is an equality-constrained QP which can be solved as shown above. Let $\delta^*$ be the optimizer of problem (20). Three cases are possible:

1) $\delta^* \ne 0$ and $U^{(k)} + \delta^*$ is feasible for (7).
Then, set $U^{(k+1)} = U^{(k)} + \delta^*$ and iterate the procedure with $U^{(k+1)}$ and $A^{(k+1)} = A^{(k)}$.

2) $\delta^* = 0$. Then $U^{(k)}$ might be the optimal solution or might violate the dual conditions. The Lagrange multipliers $\lambda^{(k)}$ associated with $U^{(k)}$ and its active set $A^{(k)}$ are computed by using (17b)-(18), together with the smallest multiplier $d = \min_{i \in \{1, \dots, |A^{(k)}|\}} \lambda^{(k)}_i$ and its index $p = \arg\min_{i \in \{1, \dots, |A^{(k)}|\}} \lambda^{(k)}_i$. If $d > 0$, then all Lagrange multipliers are positive and the optimum is found; otherwise set $A^{(k+1)} = A^{(k)} \setminus A^{(k)}(p)$ and iterate the procedure.

3) $U^{(k)} + \delta^*$ is not feasible for (7). Then find the best feasible point $U^{(k+1)} = U^{(k)} + \delta^{(k)}$ with a line search in the direction of $\delta^*$, i.e., $\delta^{(k)} = \alpha^{(k)} \delta^*$, where $\alpha^{(k)}$ is computed so that no constraint is violated:

$\alpha^{(k)} = \min_{i \in I \setminus A^{(k)}, \; G_i \delta^* > 0} \frac{b_i(x) - G_i U^{(k)}}{G_i \delta^*}$.   (22)

At $U^{(k+1)}$ a new constraint becomes active; it is defined by the index ($p$, say) which achieves the minimum in (22). Set $U^{(k+1)} = U^{(k)} + \alpha^{(k)} \delta^*$ and $A^{(k+1)} = A^{(k)} \cup p$ and iterate the procedure.

The algorithm can be summarized as follows.

Algorithm 1:
1. Given $U^{(1)}$ and $A^{(1)}$, set $k = 1$
2. IF $\delta = 0$ is not a solution of (20)-(21) THEN GOTO Step 6
3. Compute $\lambda^{(k)}$ by using equations (17b), (18) and solve
   $d = \min_{i \in \{1, \dots, |A^{(k)}|\}} \lambda^{(k)}_i$   (23)
4. IF $d > 0$ THEN set $U^* = U^{(k)}$ and terminate
5. ELSE $A^{(k)} = A^{(k)} \setminus A^{(k)}(i_d)$, where $i_d$ is the arg min of problem (23)
6. DO the QR decomposition of $G_{A^{(k)}}'$ as in (16) and set $Z^{(k)} = Q_2$
7. Solve
   $\min_y \; \tfrac{1}{2} y' Z^{(k)\prime} H Z^{(k)} y + g^{(k)}(x)' Z^{(k)} y$   (24)
   with $g^{(k)}(x) = g(x) + H U^{(k)}$, which yields
   $\delta^* = Z^{(k)} y^* = -Z^{(k)} (Z^{(k)\prime} H Z^{(k)})^{-1} Z^{(k)\prime} g^{(k)}(x)$   (25)
8. Compute
   $\alpha^{(k)} = \min\!\left(1, \; \min_{i \in I \setminus A^{(k)}, \; G_i \delta^* > 0} \frac{b_i(x) - G_i U^{(k)}}{G_i \delta^*}\right)$   (26)
   and set $U^{(k+1)} = U^{(k)} + \alpha^{(k)} \delta^*$
9. IF $\alpha^{(k)} < 1$ THEN $A^{(k+1)} = A^{(k)} \cup i_p$, where $i_p$ is the arg min of problem (26)
10. Set $k = k+1$ and GOTO Step 2.

We can characterize Algorithm 1 by the three following elements:

Off-line Storage. Matrices $H$, $F$, $G$, $B_x$ and $b_r$.
Optimality Certificate. $U^{(k)}$ is optimal if primal feasibility and dual feasibility are satisfied.

Active Set Selection. Algorithm 1 proceeds by checking dual feasibility first. If it is not satisfied, then the constraint with the most negative multiplier in (23) is removed from the active set and the procedure is repeated. If dual feasibility is satisfied, then primal feasibility is verified. If primal feasibility is not satisfied, then (26) is solved and the blocking constraint, the one which limits the step the most, is added to the active set.

Alternative Simple Algorithm. The algorithm presented in this section is not typically used to solve QPs. It is introduced to better understand the issues involved with the explicit algorithms presented later in this paper. Compared to Algorithm 1, the algorithm proposed in this section does not require a feasible initial point and does not provide primal feasible solutions at intermediate steps. At each iteration $k$, an active constraint set $A^{(k)}$ is known and the equations (10) are solved to obtain $U^{(k)}(x)$ and $\lambda^{(k)}(x)$. The algorithm proceeds as follows:

1) If $U^{(k)}$ is not feasible for the QP (7), then compute the constraint that is violated the most,

$i_p = \arg\min_{i \in I \setminus A^{(k)}} \left( b_i(x) - G_i U^{(k)} \right)$.   (27)

Set $A^{(k+1)} = A^{(k)} \cup i_p$ and iterate the procedure.

2) Otherwise, compute the smallest multiplier $d = \min_{i \in \{1, \dots, |A^{(k)}|\}} \lambda^{(k)}_i$ and its index $i_d = \arg\min_{i \in \{1, \dots, |A^{(k)}|\}} \lambda^{(k)}_i$. If $d > 0$, then all Lagrange multipliers are positive and the optimum is found; otherwise set $A^{(k+1)} = A^{(k)} \setminus A^{(k)}(i_d)$ and iterate the procedure.

The two steps above are the main components of the algorithm summarized below.

Algorithm 2:
1. Given $A^{(1)}$, set $k = 1$
2. Compute $U^{(k)}$ by using equations (17a), (18)
3. Compute
   $f = \min_{i \in I \setminus A^{(k)}} \left( b_i(x) - G_i U^{(k)} \right)$   (28)
4. IF $f < 0$ THEN $A^{(k+1)} = A^{(k)} \cup i_p$, where $i_p$ is the arg min of problem (28), i.e. (27); set $k = k+1$ and GOTO Step 2
5. Compute $\lambda^{(k)}$ by using equations (17b), (18) and solve
   $d = \min_{i \in \{1, \dots, |A^{(k)}|\}} \lambda^{(k)}_i$   (29)
6. IF $d > 0$ THEN set $U^* = U^{(k)}$ and terminate
7. ELSE $A^{(k)} = A^{(k)} \setminus A^{(k)}(i_d)$, where $i_d$ is the arg min of problem (29)
8. Set $k = k+1$ and GOTO Step 2.
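A minimal code transcription of Algorithm 2 may clarify the bookkeeping: the solver below iterates on the active set alone, re-solving the KKT system (10) at each guess. The tolerances, iteration cap and the small test problem are invented for illustration:

```python
import numpy as np

def active_set_qp(H, g, G, b, A, itmax=50, tol=1e-9):
    """Sketch of Algorithm 2: no feasible start is needed and intermediate
    iterates are not primal feasible; only the active set is tracked."""
    n, m = H.shape[0], G.shape[0]
    A = list(A)
    for _ in range(itmax):
        na = len(A)
        GA, bA = G[A, :], b[A]
        K = np.zeros((n + na, n + na))       # Lagrangian matrix of (11)
        K[:n, :n], K[:n, n:], K[n:, :n] = H, GA.T, GA
        sol = np.linalg.solve(K, np.concatenate([-g, bA]))
        U, lam = sol[:n], sol[n:]
        inact = [i for i in range(m) if i not in A]
        if inact:
            slack = b[inact] - G[inact, :] @ U
            if slack.min() < -tol:           # Steps 3-4: primal violation,
                A.append(inact[int(slack.argmin())])   # add most violated
                continue
        if na and lam.min() < -tol:          # Steps 5-7: dual violation,
            A.pop(int(lam.argmin()))         # drop most negative multiplier
            continue
        return U, lam, A                     # both certificates hold
    raise RuntimeError("no convergence (degenerate case not handled)")

# Invented problem: min (U1-1)^2 + (U2-2)^2, U1 + U2 <= 1, -U1 <= 0.
H = 2.0 * np.eye(2)
g = np.array([-2.0, -4.0])
G = np.array([[1.0, 1.0], [-1.0, 0.0]])
b = np.array([1.0, 0.0])
U, lam, A = active_set_qp(H, g, G, b, A=[])
```

Starting from the empty active set, the unconstrained minimizer violates the first constraint, which is added, and the next iterate passes both the primal and the dual check, mirroring one pass through Steps 2-7 of Algorithm 2.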
Remark 4.1: We remark that the Off-line Storage, Optimality Certificate and Active Set Selection of Algorithm 1 are identical to those of Algorithm 2. The only difference between Algorithm 1 and Algorithm 2 is that the former guarantees primal feasibility of the intermediate solution at each step while the latter does not. This is obtained at the price of an initial feasible solution required to initialize Algorithm 1 and the additional steps in (26) required to compute a feasible solution.

B. Explicit Solution: Off-line and On-line Computation

The basic idea behind the computation of the explicit solution to the QP (7) for a set of parameters $x \in K \subseteq \mathbb{R}^{n_x}$ can be described as follows. Consider an active set $A_k$ and the primal and dual solutions $U^*$, $\lambda^*$ in (13)-(15) if the Lagrangian matrix is invertible, or (13), (16)-(18) if it is not. $U^*$ and $\lambda^*$ are affine functions of $x$, since $b(x)$ and $g(x)$ are affine functions of $x$. Their expression is valid for all $x$ for which the set $A_k$ is active at the optimum (note that in this section the lower index $k$ in $A_k$ simply denotes a counter, while in the previous section the upper index $(k)$ specified variables at the $k$-th step of the algorithm). The set $CR_{A_k}$ of all parameters $x$ for which the constraints indexed by $A_k$ are active at the optimizer of problem (7) is called the critical region. The critical region $CR_{A_k}$ is computed by imposing the primal feasibility conditions (19d) on $U^*$,

$P_p := \{ x : G_{I \setminus A_k} U^*(x) < b_{I \setminus A_k}(x) \}$,   (30)

and the dual feasibility conditions (19c) on $\lambda^*$,

$P_d := \{ x : \lambda^*(x) \ge 0 \}$.   (31)

In conclusion, the critical region $CR_{A_k}$ is the intersection of $P_p$ and $P_d$:

$CR_{A_k} = \{ x : x \in P_p, \; x \in P_d \}$.   (32)

Obviously, the closure of $CR_{A_k}$ is a polyhedron in the $x$-space. Based on (14) we can rewrite the closures of the primal and dual polyhedra (30)-(31) as

$P_p = \{ x : G_{I \setminus A_k} (F_{A_k} x + c_{A_k}) \le b_{I \setminus A_k}(x) \} = \{ x : P^{i,p} x \le Q^{i,p}, \; i = 1, \dots, |I \setminus A_k| \}$   (33)

and

$P_d = \{ x : F^d_{A_k} x \ge -c^d_{A_k} \} = \{ x : P^{i,d} x \le Q^{i,d}, \; i = 1, \dots, |A_k| \}$.   (34)

An mp-QP algorithm determines the partition of $K$ into critical regions $CR_{A_i}$ and finds the expression of the function $U^*(\cdot)$ for each critical region. Let $N_P$ denote the total number of critical regions. The explicit solution (14) of the QP (7) is

$U^*(x) = F_{A_i} x + c_{A_i}, \quad \forall x \in CR_{A_i}, \; i = 1, \dots, N_P$,   (35)

where $F_{A_i} \in \mathbb{R}^{n \times n_x}$, $c_{A_i} \in \mathbb{R}^n$, and $\{CR_{A_i}\}_{i=1}^{N_P}$ is a polyhedral partition of $K^*$ (i.e., $\bigcup_i CR_{A_i} = K^*$, and $CR_{A_i}$ and $CR_{A_j}$ have disjoint interiors for $i \ne j$), with $CR_{A_i} = \{ x \in \mathbb{R}^{n_x} : P^i x \le Q^i \}$, $P^i \in \mathbb{R}^{p_i \times n_x}$, $Q^i \in \mathbb{R}^{p_i}$, and $p_i$ the number of halfspaces defining the polyhedron $CR_{A_i}$, $i = 1, \dots, N_P$. Note that in general $\bigcup_{i=1,\dots,N_P} CR_{A_i} = K^* \subseteq K$, since for some $x \in K$ the QP (7) could be infeasible.

In principle, one could simply generate all the possible combinations of active sets and compute the corresponding

critical region $CR_{A_i}$ and optimizer expression $U^*(\cdot)$. However, in many problems only a few active constraint sets generate full-dimensional critical regions inside the region of interest $K$. Therefore, the goal of an mp-QP solver is to generate only the active sets $A_i$ with associated full-dimensional critical regions covering the feasible set $K^* \subseteq K$.

The evaluation of the explicit solution in its simplest form requires: (i) the storage of the list of polyhedral regions and of the corresponding affine control laws, and (ii) a sequential search through the list of polyhedra for the $i$-th polyhedron that contains the current state, in order to implement the $i$-th control law. Since verifying whether a point $x$ belongs to a critical region means verifying primal and dual conditions, the on-line search for the polyhedron containing $x$ can be compared to the main steps of a QP solver. In the following, $P^i_j$ denotes the $j$-th row of the matrix $P^i$. The simplest implementation consists of searching for the polyhedral region that contains the state $x(t)$, as in the following algorithm:

Algorithm 3:
1. $i = 1$, notfound = TRUE
2. WHILE $i \le N_P$ AND notfound
2.1.   $j = 1$, feasible = TRUE
2.2.   WHILE $j \le p_i$ AND feasible
2.2.1.     IF $P^i_j x(t) > Q^i_j$ THEN feasible = FALSE ELSE $j = j+1$
2.3.   END
2.4.   IF feasible THEN notfound = FALSE ELSE $i = i+1$
3. END

We can characterize Algorithm 3 by the three following elements: (i) Off-line Storage: matrices $F_{A_i}$, $c_{A_i}$, $P^i$, $Q^i$; (ii) Optimality Certificate: primal feasibility and dual feasibility; (iii) Active Set Selection: pick the next active set in the list.

Algorithm 3 differs from Algorithm 2 in the way the next active set (or region) is chosen. One can easily modify Algorithm 3 to use the same active set strategy as Algorithm 2, as follows. In the $j$-th critical region $CR_{A_j}$ we separate the constraints deriving from primal feasibility, $P^{j,p} x \le Q^{j,p}$, from those deriving from dual feasibility, $P^{j,d} x \le Q^{j,d}$.
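The sequential scan of Algorithm 3 is, in code, just a loop over stored halfspace descriptions. In the sketch below, the regions and gains encode an invented one-dimensional saturated law, $u = \max(-1, \min(1, -x))$, a typical shape for an explicit MPC solution; all names and data are placeholders:

```python
import numpy as np

def explicit_mpc_eval(x, regions, gains, tol=1e-9):
    """Algorithm 3 in its simplest form.
    regions: list of (P, Q) describing the polyhedron {x : P x <= Q};
    gains:   list of (Fa, ca) with U*(x) = Fa x + ca on that region."""
    for (P, Q), (Fa, ca) in zip(regions, gains):
        if np.all(P @ x <= Q + tol):       # row-by-row feasibility test
            return Fa @ x + ca
    return None                            # x lies outside the feasible set K*

# Invented 1-D partition for u = -x saturated at +/-1.
regions = [(np.array([[1.0], [-1.0]]), np.array([1.0, 1.0])),  # |x| <= 1
           (np.array([[-1.0]]), np.array([-1.0])),             # x >= 1
           (np.array([[1.0]]), np.array([-1.0]))]              # x <= -1
gains = [(np.array([[-1.0]]), np.array([0.0])),                # u = -x
         (np.array([[0.0]]), np.array([-1.0])),                # u = -1
         (np.array([[0.0]]), np.array([1.0]))]                 # u = +1
```

Each region test is a handful of inequality checks on precomputed data, which is exactly the trade announced above: no on-line factorizations, at the cost of storing the whole list of regions and gains.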
In the next algorithm, $A^{(k)}(p)$ denotes the $p$-th element of the set $A^{(k)}$, and the symbol $j \leftarrow A^{(i)} \cup p$ denotes the following operation: given the active set $A^{(i)}$ at step $i$, select the index $j$ so that $A_j = A^{(i)} \cup p$, and set $A^{(i+1)} = A_j$.

Algorithm 4:
1. notfound = TRUE, $k = 1$, $j = 1$, $A^{(1)} = A_1$
2. WHILE $k \le N_P$ AND notfound
2.1.   Compute $f = \min_i \left( Q^{j,p}_i - P^{j,p}_i x(t) \right)$ and let $i_p$ be the arg min
2.2.   IF $f < 0$ THEN $j \leftarrow A^{(k)} \cup i_p$, $k = k+1$, break
2.3.   Compute $d = \min_i \left( Q^{j,d}_i - P^{j,d}_i x(t) \right)$ and let $i_d$ be the arg min
2.4.   IF $d < 0$ THEN $j \leftarrow A^{(k)} \setminus A^{(k)}(i_d)$, $k = k+1$, break
2.5.   notfound = FALSE
3. END

Algorithm 4 can be interpreted as the off-line version of Algorithm 2. The off-line version of Algorithm 1 can be found in [5].

V. COMPARISON BETWEEN ON-LINE AND EXPLICIT ALGORITHMS

By comparing Algorithm 2 and Algorithm 4 we can draw the following conclusions.

(I) The active set strategy and the certificate of optimality are the same. In particular, Step 5 of Algorithm 2 corresponds to Step 2.3 of Algorithm 4. The difference is clear from equation (34): in the explicit Algorithm 4 the matrices $P^{j,d}$ and $Q^{j,d}$ are precomputed and stored, while in the on-line QP Algorithm 2 they are computed on-line from (13) and (18). Similarly, Step 3 of Algorithm 2 requires the computation of $b_i(x) - G_i U^{(k)}$, which corresponds to Step 2.1 of Algorithm 4. The difference is clear from equation (33): in the explicit Algorithm 4 the matrices $P^{j,p}$ and $Q^{j,p}$ are precomputed and stored, while in the on-line QP Algorithm 2 they are computed on-line from (13) and (18). Although equations (14) and (18) can be computed efficiently, an on-line QR decomposition and several matrix multiplications are required at each time step. This represents the main computational saving of explicit solvers over on-line active set QPs.

(II) The on-line QP Algorithm 2 could be improved. Any modification which helps compute $L$, $T$ and $S$ in (18) will reduce the computational gap between Algorithm 2 and Algorithm 4.
However, a computational gap will always exist unless those matrices are precomputed, as in Algorithm 4. Any modification of the active set strategy or of the optimality certificate can be applied to the explicit solution as well, and therefore will not affect the comparison.

(III) Algorithm 2 and Algorithm 4 can be properly compared only if they are initialized with the same set $A^{(1)}$. Note that there might be active constraint sets for which it is possible to initialize Algorithm 2 but not Algorithm 4. In fact, Algorithm 4 stores only full-dimensional critical regions, for which the corresponding set $A^{(k)}$ is active at $U^*(x)$ for some $x$, i.e., $A^*(x)$ defined in (8).

Definition 1 (Non-degenerate QP): We say that the QP (7) is non-degenerate if for each $x \in K$ the rows of $G_{A^*(x)}$ are linearly independent, where $A^*(x)$ is defined in (8).

The following proposition shows that if Algorithm 2 and Algorithm 4 are initialized with the same set $A^{(1)}$ then they explore the same active sets.

Proposition 1: Assume the QP (7) is non-degenerate, let $x$ be a feasible state and consider Algorithm 4. If $A^{(1)}$ corresponds to a full-dimensional critical region stored off-line by Algorithm 4, then, at any step $k$, either $j \leftarrow A \cup p$ or $j \leftarrow A \setminus A(p)$ corresponds to a full-dimensional critical region which has been stored off-line by Algorithm 4.

Proof: At the generic step $k$ of Algorithm 4, assume that constraint $j$ of the current critical region $CR_{A^{(k)}}$ (say $a_j x \le b_j$) is violated. Let $A^{(k+1)} = A^{(k)} \cup p$ (resp. $A^{(k+1)} = A^{(k)} \setminus$

$A^{(k)}(p)$) be the new set of active constraints if $j$ is a primal (dual) constraint. Then, three cases can occur: (1) $CR_{A^{(k+1)}}$ has been stored off-line by Algorithm 4, (2) $CR_{A^{(k+1)}}$ is not a full-dimensional critical region, and (3) $CR_{A^{(k+1)}}$ is empty. Case (2) can be excluded because we assumed that the QP (7) is non-degenerate (cf. [14], [13]). Case (3) can also be excluded, by contradiction: assume there is no critical region in the half-space $a_j x > b_j$. Then $a_j x \le b_j$ is a facet of the region of feasible states and therefore (since it is violated) $x$ is not feasible, which contradicts the assumption. Therefore, the only admissible option is case (1), which proves the proposition.

(IV) When computing the critical regions (32), redundant constraints are removed. While, in general, this can improve the efficiency of Algorithm 4 versus Algorithm 2, by definition these constraints play no role in selecting the next set of active constraints.

(V) From the above observations it is clear that Algorithm 2 requires more operations at each iteration than Algorithm 4. This saving is obtained at the price of an increased memory requirement: in Algorithm 4 the polyhedral partition and the gains have to be stored, which, in general, largely surpasses the memory required for Algorithm 2 (simply the matrices of the QP (7)).

Remark 5.1: If the QP (7) is degenerate, then the QP Algorithm 2 has to be modified in order to avoid (possible) cycling. There are standard, well-known pivoting approaches which solve this issue. Consequently, the selection of the next region in the explicit Algorithm 4 has to be modified accordingly.

Remark 5.2: The operation $j \leftarrow A^{(k+1)}$ is immediate in Algorithm 2, since it is a simple selection of different matrix rows. In Algorithm 4, if the list of neighboring regions is available, then $j \leftarrow A^{(k+1)}$ is not time consuming (since it corresponds to switching to the neighboring region across a given facet).
However, the construction of the neighboring-region list can be a numerically sensitive issue for mp-QP solvers, especially in the case of degeneracies. If the list of neighboring regions is not available, then the operation $j \leftarrow A^{(k+1)}$ requires a search through the list of active constraint sets associated with all the stored regions, and it might be time consuming.

Remark 5.3: The comparison of Algorithm 3 with Algorithm 2 is similar to the comparison between Algorithm 4 and Algorithm 2, with two main differences. First, Algorithm 3 corresponds to an active set QP where the next active set is chosen randomly from the stored list. Although on average it might perform worse than Algorithm 4, it does not require the computation of all the dual variables as in Step 2.3 of Algorithm 4, nor of all the primal feasibility conditions as in Step 2.1 of Algorithm 4: as soon as one condition is violated, the algorithm moves to the next region in the list. This reduces the computational time of each step. Secondly, Algorithm 3 works well even in the presence of non-full-dimensional critical regions (in fact, it does not require the list of neighboring regions). For these reasons, Algorithm 3 is very simple and practical, even if it might perform poorly on average.

VI. CONCLUSIONS

We have shed some light on the complexity of the on-line solution of active set quadratic programs versus the on-line evaluation of explicit solutions. Three elements can be used to compare the different algorithms: (1) the amount of stored data, (2) the validation of the optimality certificate and (3) the selection of the next active set if the validation of the certificate fails. If the algorithms are properly initialized, the main difference between the two approaches lies in the choice between the on-line solution (via QR decomposition) of a set of linear equations and their off-line solution. In the latter case computational time is gained at the price of memory storage.
This simple observation also leads to the design of alternative solvers which trade off memory and computational time differently than active set QP solvers and explicit solvers [5].

REFERENCES

[1] M. Baotić, F. Borrelli, A. Bemporad, and M. Morari, Efficient on-line computation of constrained optimal control, SIAM Journal on Control and Optimization, 47 (2008).
[2] A. Bemporad, F. Borrelli, and M. Morari, Model predictive control based on linear programming: the explicit solution, IEEE Trans. on Automatic Control, 47 (2002).
[3] A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos, The explicit linear quadratic regulator for constrained systems, Automatica, 38 (2002).
[4] F. Borrelli, A. Bemporad, and M. Morari, A geometric algorithm for multi-parametric linear programming, Journal of Optimization Theory and Applications (2003).
[5] F. Borrelli, J. Pekar, M. Baotić, and G. Stewart, On the computation of linear model predictive control laws, Technical Report #5, UC Berkeley, frborrel/pub.php.
[6] H. J. Ferreau, H. G. Bock, and M. Diehl, An online active set strategy to overcome the limitations of explicit MPC, International Journal of Robust and Nonlinear Control, 18 (2008).
[7] R. Fletcher, Practical Methods of Optimization, 2nd ed., Wiley-Interscience.
[8] C. Jones, P. Grieder, and S. Raković, A logarithmic-time solution to the point location problem for closed-form linear MPC, in IFAC World Congress, Prague, Czech Republic, July 2005.
[9] M. Kvasnica, P. Grieder, M. Baotić, and M. Morari, Multi-Parametric Toolbox (MPT), in Hybrid Systems: Computation and Control, Lecture Notes in Computer Science, vol. 2993, Philadelphia, Pennsylvania, USA, Mar. 2004, Springer-Verlag.
[10] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert, Constrained model predictive control: stability and optimality, Automatica, 36 (2000).
[11] J. A. Mendez, B. Kouvaritakis, and J. A. Rossiter, State space approach to interpolation in MPC, Int.
Journal of Robust and Nonlinear Control, 10 (2000).
[12] R. Milman and E. J. Davison, A fast MPC algorithm using nonfeasible active set methods, Journal of Optimization Theory and Applications, published online.
[13] J. Spjøtvold, E. C. Kerrigan, C. N. Jones, P. Tøndel, and T. A. Johansen, On the facet-to-facet property of solutions to convex parametric quadratic programs, Automatica, 42 (2006).
[14] P. Tøndel, T. A. Johansen, and A. Bemporad, An algorithm for multiparametric quadratic programming and explicit MPC solutions, Automatica, 39 (2003).
[15] Y. Wang and S. Boyd, Fast model predictive control using online optimization, in Proceedings of the IFAC World Congress, Seoul, July 2008.
[16] M. Zeilinger, C. N. Jones, and M. Morari, Real-time suboptimal model predictive control using a combination of explicit MPC and online optimization, in Conference on Decision and Control (CDC), Cancun, Mexico, December 2008.


RELATIVELY OPTIMAL CONTROL: THE STATIC SOLUTION RELATIVELY OPTIMAL CONTROL: THE STATIC SOLUTION Franco Blanchini,1 Felice Andrea Pellegrino Dipartimento di Matematica e Informatica Università di Udine via delle Scienze, 208 33100, Udine, Italy blanchini@uniud.it,

More information

Outline. Robust MPC and multiparametric convex programming. A. Bemporad C. Filippi. Motivation: Robust MPC. Multiparametric convex programming

Outline. Robust MPC and multiparametric convex programming. A. Bemporad C. Filippi. Motivation: Robust MPC. Multiparametric convex programming Robust MPC and multiparametric convex programming A. Bemporad C. Filippi D. Muñoz de la Peña CC Meeting Siena /4 September 003 Outline Motivation: Robust MPC Multiparametric convex programming Kothares

More information

Real-time MPC Stability through Robust MPC design

Real-time MPC Stability through Robust MPC design Real-time MPC Stability through Robust MPC design Melanie N. Zeilinger, Colin N. Jones, Davide M. Raimondo and Manfred Morari Automatic Control Laboratory, ETH Zurich, Physikstrasse 3, ETL I 28, CH 8092

More information

Math 5593 Linear Programming Lecture Notes

Math 5593 Linear Programming Lecture Notes Math 5593 Linear Programming Lecture Notes Unit II: Theory & Foundations (Convex Analysis) University of Colorado Denver, Fall 2013 Topics 1 Convex Sets 1 1.1 Basic Properties (Luenberger-Ye Appendix B.1).........................

More information

Efficient implementation of Constrained Min-Max Model Predictive Control with Bounded Uncertainties

Efficient implementation of Constrained Min-Max Model Predictive Control with Bounded Uncertainties Efficient implementation of Constrained Min-Max Model Predictive Control with Bounded Uncertainties D.R. Ramírez 1, T. Álamo and E.F. Camacho2 Departamento de Ingeniería de Sistemas y Automática, Universidad

More information

Explicit Model Predictive Control by A. Bemporad Controllo di Processo e dei Sistemi di Produzione A.a. 2008/09 1/54

Explicit Model Predictive Control by A. Bemporad Controllo di Processo e dei Sistemi di Produzione A.a. 2008/09 1/54 Explicit Model Predictive Control 1/54 KKT Conditions for Optimality When f, g i are convex functions and h j are linear, the condition is also sufficient 2/54 KKT Geometric Interpretation rg 1 (U 1 )

More information

qpoases - Online Active Set Strategy for Fast Linear MPC

qpoases - Online Active Set Strategy for Fast Linear MPC qpoases - Online Active Set Strategy for Fast Linear MPC Moritz Diehl, Hans Joachim Ferreau, Lieboud Vanden Broeck, Jan Swevers Dept. ESAT and Center of Excellence for Optimization in Engineering OPTEC

More information

Max Min Control Problems for Constrained Discrete Time Systems

Max Min Control Problems for Constrained Discrete Time Systems Proceedings of the 47th IEEE Conference on Decision and Control Cancun, Mexico, Dec. 9-11, 2008 Max Min Control Problems for Constrained Discrete Time Systems Saša V. Raković, Miroslav Barić and Manfred

More information

Improving Reliability of Partition Computation in Explicit MPC with MPT Toolbox

Improving Reliability of Partition Computation in Explicit MPC with MPT Toolbox Improving Reliability of Partition Computation in Explicit MPC with MPT Toolbox Samo Gerkšič Dept. of Systems and Control, Jozef Stefan Institute, Jamova 39, Ljubljana, Slovenia (e-mail: samo.gerksic@ijs.si).

More information

Chapter II. Linear Programming

Chapter II. Linear Programming 1 Chapter II Linear Programming 1. Introduction 2. Simplex Method 3. Duality Theory 4. Optimality Conditions 5. Applications (QP & SLP) 6. Sensitivity Analysis 7. Interior Point Methods 1 INTRODUCTION

More information

Programming, numerics and optimization

Programming, numerics and optimization Programming, numerics and optimization Lecture C-4: Constrained optimization Łukasz Jankowski ljank@ippt.pan.pl Institute of Fundamental Technological Research Room 4.32, Phone +22.8261281 ext. 428 June

More information

Applied Lagrange Duality for Constrained Optimization

Applied Lagrange Duality for Constrained Optimization Applied Lagrange Duality for Constrained Optimization Robert M. Freund February 10, 2004 c 2004 Massachusetts Institute of Technology. 1 1 Overview The Practical Importance of Duality Review of Convexity

More information

Efficient On-Line Computation of Constrained Optimal Control

Efficient On-Line Computation of Constrained Optimal Control Efficient On-Line Computation of Constrained Optimal Control Francesco Borrelli, Mato Baotic, Alberto Bemporad, Manfred Morari Automatic Control Laboratory, ETH Zentrum - ETL, CH-89 Zürich, Switzerland

More information

Graphs that have the feasible bases of a given linear

Graphs that have the feasible bases of a given linear Algorithmic Operations Research Vol.1 (2006) 46 51 Simplex Adjacency Graphs in Linear Optimization Gerard Sierksma and Gert A. Tijssen University of Groningen, Faculty of Economics, P.O. Box 800, 9700

More information

A Short SVM (Support Vector Machine) Tutorial

A Short SVM (Support Vector Machine) Tutorial A Short SVM (Support Vector Machine) Tutorial j.p.lewis CGIT Lab / IMSC U. Southern California version 0.zz dec 004 This tutorial assumes you are familiar with linear algebra and equality-constrained optimization/lagrange

More information

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 2. Convex Optimization

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 2. Convex Optimization Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 2 Convex Optimization Shiqian Ma, MAT-258A: Numerical Optimization 2 2.1. Convex Optimization General optimization problem: min f 0 (x) s.t., f i

More information

DM545 Linear and Integer Programming. Lecture 2. The Simplex Method. Marco Chiarandini

DM545 Linear and Integer Programming. Lecture 2. The Simplex Method. Marco Chiarandini DM545 Linear and Integer Programming Lecture 2 The Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. 2. 3. 4. Standard Form Basic Feasible Solutions

More information

Conic Duality. yyye

Conic Duality.  yyye Conic Linear Optimization and Appl. MS&E314 Lecture Note #02 1 Conic Duality Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/

More information

Advanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs

Advanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs Advanced Operations Research Techniques IE316 Quiz 1 Review Dr. Ted Ralphs IE316 Quiz 1 Review 1 Reading for The Quiz Material covered in detail in lecture. 1.1, 1.4, 2.1-2.6, 3.1-3.3, 3.5 Background material

More information

Section Notes 5. Review of Linear Programming. Applied Math / Engineering Sciences 121. Week of October 15, 2017

Section Notes 5. Review of Linear Programming. Applied Math / Engineering Sciences 121. Week of October 15, 2017 Section Notes 5 Review of Linear Programming Applied Math / Engineering Sciences 121 Week of October 15, 2017 The following list of topics is an overview of the material that was covered in the lectures

More information

Lecture 2 - Introduction to Polytopes

Lecture 2 - Introduction to Polytopes Lecture 2 - Introduction to Polytopes Optimization and Approximation - ENS M1 Nicolas Bousquet 1 Reminder of Linear Algebra definitions Let x 1,..., x m be points in R n and λ 1,..., λ m be real numbers.

More information

3 No-Wait Job Shops with Variable Processing Times

3 No-Wait Job Shops with Variable Processing Times 3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select

More information

Mathematical and Algorithmic Foundations Linear Programming and Matchings

Mathematical and Algorithmic Foundations Linear Programming and Matchings Adavnced Algorithms Lectures Mathematical and Algorithmic Foundations Linear Programming and Matchings Paul G. Spirakis Department of Computer Science University of Patras and Liverpool Paul G. Spirakis

More information

Computation of Voronoi Diagrams and Delaunay Triangulation via Parametric Linear Programming

Computation of Voronoi Diagrams and Delaunay Triangulation via Parametric Linear Programming Computation of Voronoi Diagrams and Delaunay Triangulation via Parametric Linear Programming Saša V. Raković, Pascal Grieder and Colin Jones September 14, 2004 Abstract This note illustrates how Voronoi

More information

Research Topics (Baotic, Bemporad, Borrelli, Ferrari-Trecate, Geyer, Grieder, Mignone, Torrisi, Morari)

Research Topics (Baotic, Bemporad, Borrelli, Ferrari-Trecate, Geyer, Grieder, Mignone, Torrisi, Morari) Research Topics (Baotic, Bemporad, Borrelli, Ferrari-Trecate, Geyer, Grieder, Mignone, Torrisi, Morari) Analysis Reachability/Verification Stability Observability Synthesis Control (MPC) Explicit PWA MPC

More information

Contents. I Basics 1. Copyright by SIAM. Unauthorized reproduction of this article is prohibited.

Contents. I Basics 1. Copyright by SIAM. Unauthorized reproduction of this article is prohibited. page v Preface xiii I Basics 1 1 Optimization Models 3 1.1 Introduction... 3 1.2 Optimization: An Informal Introduction... 4 1.3 Linear Equations... 7 1.4 Linear Optimization... 10 Exercises... 12 1.5

More information

POLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if

POLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if POLYHEDRAL GEOMETRY Mathematical Programming Niels Lauritzen 7.9.2007 Convex functions and sets Recall that a subset C R n is convex if {λx + (1 λ)y 0 λ 1} C for every x, y C and 0 λ 1. A function f :

More information

maximize c, x subject to Ax b,

maximize c, x subject to Ax b, Lecture 8 Linear programming is about problems of the form maximize c, x subject to Ax b, where A R m n, x R n, c R n, and b R m, and the inequality sign means inequality in each row. The feasible set

More information

Integer Programming Theory

Integer Programming Theory Integer Programming Theory Laura Galli October 24, 2016 In the following we assume all functions are linear, hence we often drop the term linear. In discrete optimization, we seek to find a solution x

More information

Complexity Reduction of Explicit Model Predictive Control via Combining Separator Function and Binary Search Trees

Complexity Reduction of Explicit Model Predictive Control via Combining Separator Function and Binary Search Trees American Journal of Computer Science and Technology 2018; 1(1): 19-23 http://www.sciencepublishinggroup.com/j/ajcst doi: 10.11648/j.ajcst.20180101.13 Complexity Reduction of Explicit Model Predictive Control

More information

LECTURE 13: SOLUTION METHODS FOR CONSTRAINED OPTIMIZATION. 1. Primal approach 2. Penalty and barrier methods 3. Dual approach 4. Primal-dual approach

LECTURE 13: SOLUTION METHODS FOR CONSTRAINED OPTIMIZATION. 1. Primal approach 2. Penalty and barrier methods 3. Dual approach 4. Primal-dual approach LECTURE 13: SOLUTION METHODS FOR CONSTRAINED OPTIMIZATION 1. Primal approach 2. Penalty and barrier methods 3. Dual approach 4. Primal-dual approach Basic approaches I. Primal Approach - Feasible Direction

More information

Piecewise Quadratic Optimal Control

Piecewise Quadratic Optimal Control EECE 571M/491M, Spring 2007 Lecture 15 Piecewise Quadratic Optimal Control Meeko Oishi, Ph.D. Electrical and Computer Engineering University of British Columbia, BC http://www.ece.ubc.ca/~elec571m.html

More information

Unconstrained Optimization Principles of Unconstrained Optimization Search Methods

Unconstrained Optimization Principles of Unconstrained Optimization Search Methods 1 Nonlinear Programming Types of Nonlinear Programs (NLP) Convexity and Convex Programs NLP Solutions Unconstrained Optimization Principles of Unconstrained Optimization Search Methods Constrained Optimization

More information

Linear programming and duality theory

Linear programming and duality theory Linear programming and duality theory Complements of Operations Research Giovanni Righini Linear Programming (LP) A linear program is defined by linear constraints, a linear objective function. Its variables

More information

Optimality certificates for convex minimization and Helly numbers

Optimality certificates for convex minimization and Helly numbers Optimality certificates for convex minimization and Helly numbers Amitabh Basu Michele Conforti Gérard Cornuéjols Robert Weismantel Stefan Weltge October 20, 2016 Abstract We consider the problem of minimizing

More information

Optimality certificates for convex minimization and Helly numbers

Optimality certificates for convex minimization and Helly numbers Optimality certificates for convex minimization and Helly numbers Amitabh Basu Michele Conforti Gérard Cornuéjols Robert Weismantel Stefan Weltge May 10, 2017 Abstract We consider the problem of minimizing

More information

Introduction to Modern Control Systems

Introduction to Modern Control Systems Introduction to Modern Control Systems Convex Optimization, Duality and Linear Matrix Inequalities Kostas Margellos University of Oxford AIMS CDT 2016-17 Introduction to Modern Control Systems November

More information

Lecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize.

Lecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize. Cornell University, Fall 2017 CS 6820: Algorithms Lecture notes on the simplex method September 2017 1 The Simplex Method We will present an algorithm to solve linear programs of the form maximize subject

More information

Nonlinear Programming

Nonlinear Programming Nonlinear Programming SECOND EDITION Dimitri P. Bertsekas Massachusetts Institute of Technology WWW site for book Information and Orders http://world.std.com/~athenasc/index.html Athena Scientific, Belmont,

More information

Sequential Coordinate-wise Algorithm for Non-negative Least Squares Problem

Sequential Coordinate-wise Algorithm for Non-negative Least Squares Problem CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Sequential Coordinate-wise Algorithm for Non-negative Least Squares Problem Woring document of the EU project COSPAL IST-004176 Vojtěch Franc, Miro

More information

Introduction to Constrained Optimization

Introduction to Constrained Optimization Introduction to Constrained Optimization Duality and KKT Conditions Pratik Shah {pratik.shah [at] lnmiit.ac.in} The LNM Institute of Information Technology www.lnmiit.ac.in February 13, 2013 LNMIIT MLPR

More information

Part 4. Decomposition Algorithms Dantzig-Wolf Decomposition Algorithm

Part 4. Decomposition Algorithms Dantzig-Wolf Decomposition Algorithm In the name of God Part 4. 4.1. Dantzig-Wolf Decomposition Algorithm Spring 2010 Instructor: Dr. Masoud Yaghini Introduction Introduction Real world linear programs having thousands of rows and columns.

More information

Mathematical Programming and Research Methods (Part II)

Mathematical Programming and Research Methods (Part II) Mathematical Programming and Research Methods (Part II) 4. Convexity and Optimization Massimiliano Pontil (based on previous lecture by Andreas Argyriou) 1 Today s Plan Convex sets and functions Types

More information

Lecture 7: Support Vector Machine

Lecture 7: Support Vector Machine Lecture 7: Support Vector Machine Hien Van Nguyen University of Houston 9/28/2017 Separating hyperplane Red and green dots can be separated by a separating hyperplane Two classes are separable, i.e., each

More information

Approximate nonlinear explicit MPC based on reachability analysis

Approximate nonlinear explicit MPC based on reachability analysis Approximate nonlinear explicit MPC based on reachability analysis D.M. Raimondo 1, M. Schulze Darup 2, M. Mönnigmann 2 Università degli studi di Pavia Ruhr-Universität Bochum The system Introduction: Nonlinear

More information

Some Advanced Topics in Linear Programming

Some Advanced Topics in Linear Programming Some Advanced Topics in Linear Programming Matthew J. Saltzman July 2, 995 Connections with Algebra and Geometry In this section, we will explore how some of the ideas in linear programming, duality theory,

More information

Simplification of Explicit MPC Feedback Laws via Separation Functions

Simplification of Explicit MPC Feedback Laws via Separation Functions Simplification of Explicit MPC Feedback Laws via Separation Functions Michal Kvasnica,1, Ivana Rauová, and Miroslav Fikar Institute of Automation, Information Engineering and Mathematics, Slovak University

More information

16.410/413 Principles of Autonomy and Decision Making

16.410/413 Principles of Autonomy and Decision Making 16.410/413 Principles of Autonomy and Decision Making Lecture 17: The Simplex Method Emilio Frazzoli Aeronautics and Astronautics Massachusetts Institute of Technology November 10, 2010 Frazzoli (MIT)

More information

Lecture 2 September 3

Lecture 2 September 3 EE 381V: Large Scale Optimization Fall 2012 Lecture 2 September 3 Lecturer: Caramanis & Sanghavi Scribe: Hongbo Si, Qiaoyang Ye 2.1 Overview of the last Lecture The focus of the last lecture was to give

More information

Outline. CS38 Introduction to Algorithms. Linear programming 5/21/2014. Linear programming. Lecture 15 May 20, 2014

Outline. CS38 Introduction to Algorithms. Linear programming 5/21/2014. Linear programming. Lecture 15 May 20, 2014 5/2/24 Outline CS38 Introduction to Algorithms Lecture 5 May 2, 24 Linear programming simplex algorithm LP duality ellipsoid algorithm * slides from Kevin Wayne May 2, 24 CS38 Lecture 5 May 2, 24 CS38

More information

Polytopes Course Notes

Polytopes Course Notes Polytopes Course Notes Carl W. Lee Department of Mathematics University of Kentucky Lexington, KY 40506 lee@ms.uky.edu Fall 2013 i Contents 1 Polytopes 1 1.1 Convex Combinations and V-Polytopes.....................

More information

Advanced Operations Research Techniques IE316. Quiz 2 Review. Dr. Ted Ralphs

Advanced Operations Research Techniques IE316. Quiz 2 Review. Dr. Ted Ralphs Advanced Operations Research Techniques IE316 Quiz 2 Review Dr. Ted Ralphs IE316 Quiz 2 Review 1 Reading for The Quiz Material covered in detail in lecture Bertsimas 4.1-4.5, 4.8, 5.1-5.5, 6.1-6.3 Material

More information

IN the last decade embedded Model Predictive Control

IN the last decade embedded Model Predictive Control 1 Exact Complexity Certification of Active-Set Methods for Quadratic Programming Gionata Cimini, Student Member, IEEE Alberto Bemporad, Fellow, IEEE Abstract Active-set methods are recognized to often

More information

Linear Programming Problems

Linear Programming Problems Linear Programming Problems Two common formulations of linear programming (LP) problems are: min Subject to: 1,,, 1,2,,;, max Subject to: 1,,, 1,2,,;, Linear Programming Problems The standard LP problem

More information

Introduction to Machine Learning

Introduction to Machine Learning Introduction to Machine Learning Maximum Margin Methods Varun Chandola Computer Science & Engineering State University of New York at Buffalo Buffalo, NY, USA chandola@buffalo.edu Chandola@UB CSE 474/574

More information

3. The Simplex algorithmn The Simplex algorithmn 3.1 Forms of linear programs

3. The Simplex algorithmn The Simplex algorithmn 3.1 Forms of linear programs 11 3.1 Forms of linear programs... 12 3.2 Basic feasible solutions... 13 3.3 The geometry of linear programs... 14 3.4 Local search among basic feasible solutions... 15 3.5 Organization in tableaus...

More information

College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007

College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007 College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007 CS G399: Algorithmic Power Tools I Scribe: Eric Robinson Lecture Outline: Linear Programming: Vertex Definitions

More information

Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity

Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity Marc Uetz University of Twente m.uetz@utwente.nl Lecture 5: sheet 1 / 26 Marc Uetz Discrete Optimization Outline 1 Min-Cost Flows

More information

CS675: Convex and Combinatorial Optimization Spring 2018 The Simplex Algorithm. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Spring 2018 The Simplex Algorithm. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Spring 2018 The Simplex Algorithm Instructor: Shaddin Dughmi Algorithms for Convex Optimization We will look at 2 algorithms in detail: Simplex and Ellipsoid.

More information

Discrete Optimization. Lecture Notes 2

Discrete Optimization. Lecture Notes 2 Discrete Optimization. Lecture Notes 2 Disjunctive Constraints Defining variables and formulating linear constraints can be straightforward or more sophisticated, depending on the problem structure. The

More information

Lec13p1, ORF363/COS323

Lec13p1, ORF363/COS323 Lec13 Page 1 Lec13p1, ORF363/COS323 This lecture: Semidefinite programming (SDP) Definition and basic properties Review of positive semidefinite matrices SDP duality SDP relaxations for nonconvex optimization

More information

Lecture Notes 2: The Simplex Algorithm

Lecture Notes 2: The Simplex Algorithm Algorithmic Methods 25/10/2010 Lecture Notes 2: The Simplex Algorithm Professor: Yossi Azar Scribe:Kiril Solovey 1 Introduction In this lecture we will present the Simplex algorithm, finish some unresolved

More information

Kernel Methods & Support Vector Machines

Kernel Methods & Support Vector Machines & Support Vector Machines & Support Vector Machines Arvind Visvanathan CSCE 970 Pattern Recognition 1 & Support Vector Machines Question? Draw a single line to separate two classes? 2 & Support Vector

More information

This lecture: Convex optimization Convex sets Convex functions Convex optimization problems Why convex optimization? Why so early in the course?

This lecture: Convex optimization Convex sets Convex functions Convex optimization problems Why convex optimization? Why so early in the course? Lec4 Page 1 Lec4p1, ORF363/COS323 This lecture: Convex optimization Convex sets Convex functions Convex optimization problems Why convex optimization? Why so early in the course? Instructor: Amir Ali Ahmadi

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 1 A. d Aspremont. Convex Optimization M2. 1/49 Today Convex optimization: introduction Course organization and other gory details... Convex sets, basic definitions. A. d

More information

Evaluation of Piecewise Affine Control via Binary Search Tree

Evaluation of Piecewise Affine Control via Binary Search Tree Evaluation of Piecewise Affine Control via Binary Search Tree P. Tøndel a T. A. Johansen a A. Bemporad b a Department of Engineering Cybernetics, Norwegian University of Science and Technology, N-7491

More information

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

More information

MA4254: Discrete Optimization. Defeng Sun. Department of Mathematics National University of Singapore Office: S Telephone:

MA4254: Discrete Optimization. Defeng Sun. Department of Mathematics National University of Singapore Office: S Telephone: MA4254: Discrete Optimization Defeng Sun Department of Mathematics National University of Singapore Office: S14-04-25 Telephone: 6516 3343 Aims/Objectives: Discrete optimization deals with problems of

More information

Linear methods for supervised learning

Linear methods for supervised learning Linear methods for supervised learning LDA Logistic regression Naïve Bayes PLA Maximum margin hyperplanes Soft-margin hyperplanes Least squares resgression Ridge regression Nonlinear feature maps Sometimes

More information

Lecture 6: Faces, Facets

Lecture 6: Faces, Facets IE 511: Integer Programming, Spring 2019 31 Jan, 2019 Lecturer: Karthik Chandrasekaran Lecture 6: Faces, Facets Scribe: Setareh Taki Disclaimer: These notes have not been subjected to the usual scrutiny

More information

60 2 Convex sets. {x a T x b} {x ã T x b}

60 2 Convex sets. {x a T x b} {x ã T x b} 60 2 Convex sets Exercises Definition of convexity 21 Let C R n be a convex set, with x 1,, x k C, and let θ 1,, θ k R satisfy θ i 0, θ 1 + + θ k = 1 Show that θ 1x 1 + + θ k x k C (The definition of convexity

More information

Distance-to-Solution Estimates for Optimization Problems with Constraints in Standard Form

Distance-to-Solution Estimates for Optimization Problems with Constraints in Standard Form Distance-to-Solution Estimates for Optimization Problems with Constraints in Standard Form Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report

More information

4 Integer Linear Programming (ILP)

4 Integer Linear Programming (ILP) TDA6/DIT37 DISCRETE OPTIMIZATION 17 PERIOD 3 WEEK III 4 Integer Linear Programg (ILP) 14 An integer linear program, ILP for short, has the same form as a linear program (LP). The only difference is that

More information

2. Convex sets. x 1. x 2. affine set: contains the line through any two distinct points in the set

2. Convex sets. x 1. x 2. affine set: contains the line through any two distinct points in the set 2. Convex sets Convex Optimization Boyd & Vandenberghe affine and convex sets some important examples operations that preserve convexity generalized inequalities separating and supporting hyperplanes dual

More information

Division of the Humanities and Social Sciences. Convex Analysis and Economic Theory Winter Separation theorems

Division of the Humanities and Social Sciences. Convex Analysis and Economic Theory Winter Separation theorems Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory Winter 2018 Topic 8: Separation theorems 8.1 Hyperplanes and half spaces Recall that a hyperplane in

More information

Polyhedral Computation Today s Topic: The Double Description Algorithm. Komei Fukuda Swiss Federal Institute of Technology Zurich October 29, 2010

Polyhedral Computation Today s Topic: The Double Description Algorithm. Komei Fukuda Swiss Federal Institute of Technology Zurich October 29, 2010 Polyhedral Computation Today s Topic: The Double Description Algorithm Komei Fukuda Swiss Federal Institute of Technology Zurich October 29, 2010 1 Convexity Review: Farkas-Type Alternative Theorems Gale

More information

THE MORTAR FINITE ELEMENT METHOD IN 2D: IMPLEMENTATION IN MATLAB

THE MORTAR FINITE ELEMENT METHOD IN 2D: IMPLEMENTATION IN MATLAB THE MORTAR FINITE ELEMENT METHOD IN D: IMPLEMENTATION IN MATLAB J. Daněk, H. Kutáková Department of Mathematics, University of West Bohemia, Pilsen MECAS ESI s.r.o., Pilsen Abstract The paper is focused

More information

11 Linear Programming

11 Linear Programming 11 Linear Programming 11.1 Definition and Importance The final topic in this course is Linear Programming. We say that a problem is an instance of linear programming when it can be effectively expressed

More information

Obtaining a Feasible Geometric Programming Primal Solution, Given a Near-Optimal Dual Solution
