Computational study of the step size parameter of the subgradient optimization method


Mengjie Han [1]

Abstract

The subgradient optimization method is a simple and flexible iterative algorithm for linear programming. It is much simpler than Newton's method, can be applied to a wider variety of problems, and converges even when the objective function is nondifferentiable. Since an efficient algorithm should not only produce a good solution but also take little computing time, a simple algorithm that yields high-quality solutions is always preferred. In this study a series of step size parameters in the subgradient equation is examined. Their performance is compared on a general piecewise function and on specific p-median problems. We examine how the quality of the solution changes under five forms of the step size parameter α.

Keywords: subgradient method; optimization; convex function; p-median

1 Introduction

The subgradient optimization method was suggested by Kiwiel (1985) and Shor (1985) for minimizing nondifferentiable functions, such as those arising in constrained linear programming. It extends the ordinary gradient method to nondifferentiable functions. Applying the subgradient method is more straightforward than other iterative methods, for example the interior point method and Newton's method, and its memory requirement is much lower because of its simplicity. This property reduces the computational burden when large data sets are handled. However, the efficiency, or convergence speed, of the subgradient method is affected by pre-defined parameter settings, and one would like to apply the most efficient empirical parameter settings for a specific data set. In particular, the efficiency depends on the step size (a scalar multiplying the subgradient direction) used in each iteration.

In this paper, the impact of the step size parameter in the subgradient equation on a convex function is studied. Specifically, the subgradient method is applied to the p-median problem using Lagrangian relaxation, and we study the impact of the step size parameter on the quality of the solutions. Methods for solving the p-median problem are widely studied (see Reese, 2006; Mladenović et al., 2007; Daskin, 1995). Reese (2006) summarized the literature on solution methods by surveying eight types of methods and listing 132 papers or books; linear programming (LP) relaxation accounts for 17.4% of them. Mladenović et al. (2007) examined the metaheuristics framework for solving the p-median problem; metaheuristics have led to substantial improvements in solution quality when the problem scale is large. The Lagrangian heuristic is a specific representation of LP relaxation combined with heuristics. Daskin (1995) also showed that the Lagrangian method consistently gives good solutions compared to constructive methods.

[1] PhD student, School of Technology and Business Studies, Dalarna University, Sweden. mea@du.se

Solving p-median problems by Lagrangian heuristics is often suggested (Beasley, 1993; Daskin, 1995; Beltran et al., 2004; Avella et al., 2012; Carrizosa et al., 2012), and the corresponding subgradient optimization algorithm has also been suggested. A good solution can always be found by narrowing the gap between the lower bound (LB) and the best upper bound (BUB). This property provides an understanding of how good the solution is: the solution is improved by increasing the best lower bound (BLB) and decreasing the BUB. The procedure can stop either when a critical percentage difference between LB and BUB is reached or when the parameter controlling the LB's increment becomes negligible. However, previous studies did not examine how the LB's increment affects the quality of the solution. The LB's increment is determined by the step size parameter of the subgradient substitution. Given this open question, the aim of this paper is to examine how the step size parameter in the subgradient equation affects the performance of a convex function, through a general piecewise example and several specific p-median problems. The remainder of the paper is organized into sections on the subgradient method and the impact of the step size, the p-median problem, computational results, and conclusions.

2 Subgradient method and the impact of step size

The subgradient method provides a framework for minimizing a convex function f : R^n → R by the iterative equation

    x^(k+1) = x^(k) − α(k) g^(k).    (2.1)

In (2.1), x^(k) is the kth iterate of the argument x, g^(k) is an arbitrary subgradient of f at x^(k), and α(k) is the step size. The convergence of (2.1) was proved by Shor (1985).

2.1 Step size forms

Five typical rules for the step size are listed in Stephen and Almir (2008). They can be summarized as:

constant step size: α(k) = ξ
constant step length: α(k) = ξ/‖g^(k)‖_2
square summable but not summable: α(k) = ξ/(b + k)
nonsummable diminishing: α(k) = ξ/√k
nonsummable diminishing step length: α(k) = ξ(k)/‖g^(k)‖_2

The form of the step size is pre-set and does not change. The top two forms, α(k) = ξ and α(k) = ξ/‖g^(k)‖_2, are not examined, since a constant size or step length lacks variation for the p-median problem. The bottom three forms are studied. We restrict ξ(k) such that ξ(k)/‖g^(k)‖_2 can be represented by an exponentially decreasing function α(k). Thus, we study α(k) = ξ/k, α(k) = ξ/√k, α(k) = ξ/1.05^k, α(k) = ξ/2^k and α(k) = ξ/exp(k) in this paper; a small sketch of these rules is given below. We first examine the step size impact on a general piecewise convex function and then on the p-median problem.
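As a concrete reference, the five rules above can be written as simple functions of the iteration counter k. The sketch below (in Python, with ξ = 1) is only illustrative; the dictionary keys are informal labels and are not taken from the text.

```python
import numpy as np

# Illustrative sketch of the five step size rules compared in this paper,
# as functions of the iteration counter k (1-indexed), with xi = 1.
xi = 1.0

step_rules = {
    "xi/k":       lambda k: xi / k,            # square summable but not summable (b = 0)
    "xi/sqrt(k)": lambda k: xi / np.sqrt(k),   # nonsummable diminishing
    "xi/1.05^k":  lambda k: xi / 1.05**k,      # exponentially decreasing
    "xi/2^k":     lambda k: xi / 2.0**k,       # exponentially decreasing
    "xi/exp(k)":  lambda k: xi / np.exp(k),    # exponentially decreasing
}

def subgradient_step(x, g, alpha):
    """One iteration of (2.1): x^(k+1) = x^(k) - alpha(k) * g^(k)."""
    return x - alpha * g
```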

Figure 1: Objective values of the piecewise convex function when the five forms of step size are compared.

2.2 General impact on a convex function

We consider minimizing the function

    f(x) = max_{i=1,...,m} (a_i^T x + b_i),

where x ∈ R^n, and a subgradient can be taken as g = a_j, where j is an index for which a_j^T x + b_j attains the maximum of a_i^T x + b_i, i = 1,...,m. In our experiment, we take m = 100 and the dimension of x to be 10. The a_i are drawn from MVN(0, I) and the b_i from N(0, 1). The constant ξ is set to 1, the initial value of x is 0, and we run the subgradient iteration 1000 times. Figure 1 shows the non-increasing objective values of the function against the number of iterations: the objective value is recorded when the objective function improves and is otherwise taken as the minimum value over the previous iterations. In Figure 1, α(k) = 1/2^k and α(k) = 1/exp(k) have similar convergence patterns and quickly approach the optimal value. The convergence of α(k) = 1/k is a bit slower, but it also has a steep slope before 100 iterations. α(k) = 1/√k does not improve quickly after 200 iterations, while α(k) = 1/1.05^k converges at an approximately uniform speed. However, α(k) = 1/√k remains far from the optimal value. In short, α(k) = 1/2^k and α(k) = 1/exp(k) provide uniformly good solutions, which is more efficient when dealing with big data. A sketch of this experiment is given below.
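A minimal sketch of this experiment, under the setup stated above (m = 100, n = 10, ξ = 1, x^(0) = 0, 1000 iterations); the random seed and the helper names are illustrative assumptions.

```python
import numpy as np

# Piecewise-linear experiment of Section 2.2: f(x) = max_i (a_i^T x + b_i),
# with a_i ~ MVN(0, I) and b_i ~ N(0, 1).
rng = np.random.default_rng(0)
m, n = 100, 10
A = rng.standard_normal((m, n))   # rows a_i
b = rng.standard_normal(m)

def f_and_subgrad(x):
    vals = A @ x + b
    j = np.argmax(vals)           # index attaining the maximum
    return vals[j], A[j]          # f(x) and a subgradient g = a_j

def run(step_rule, iters=1000):
    x = np.zeros(n)
    best, history = np.inf, []
    for k in range(1, iters + 1):
        fx, g = f_and_subgrad(x)
        best = min(best, fx)      # record the best (non-increasing) value
        history.append(best)
        x = x - step_rule(k) * g  # subgradient update (2.1)
    return history

# Example: compare two of the rules from Section 2.1.
hist_exp = run(lambda k: 1.0 / np.exp(k))
hist_sqrt = run(lambda k: 1.0 / np.sqrt(k))
```

Plotting the recorded histories against the iteration index reproduces the qualitative picture of Figure 1.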

3 The p-median problem

An important application of the subgradient method is solving p-median problems. Here, the p-median problem is formulated as an integer linear program, defined as follows.

Minimize:
    Σ_i Σ_j h_i d_ij Y_ij    (3.1)

subject to:
    Σ_j Y_ij = 1    for all i    (3.2)
    Σ_j X_j = P    (3.3)
    Y_ij − X_j ≤ 0    for all i, j    (3.4)
    X_j ∈ {0, 1}    for all j    (3.5)
    Y_ij ∈ {0, 1}    for all i, j    (3.6)

In (3.1), h_i is the weight of demand point i and d_ij is the cost of the edge between i and j. Y_ij is the decision variable indicating whether a trip between node i and j is made. Constraint (3.2) ensures that every demand point is assigned to exactly one facility. In (3.3), X_j is the location decision variable, and the constraint ensures that the number of facilities to be located is P. Constraint (3.4) states that no demand point i is assigned to j unless a facility is located at j. In constraints (3.5) and (3.6) the value 1 means that the locating (X) or travelling (Y) decision is made, and 0 means that it is not.

To solve this problem with the subgradient method, a Lagrangian relaxation must be made. Since the number of facilities, P, is fixed, we cannot relax the location decision variable X_j; the relaxation must therefore be placed on the travelling decision variable Y_ij. It could be made either on (3.2) or on (3.4). In this paper we only consider the case of (3.2), because the same procedure would apply to (3.4), and we do not repeat it. What we need to do is to relax the problem for fixed values of the Lagrange multipliers, find primal feasible solutions from the relaxed solution, and improve the Lagrange multipliers (Daskin, 1995). Relaxing constraint (3.2), we obtain

Minimize:
    Σ_i Σ_j h_i d_ij Y_ij + Σ_i λ_i (1 − Σ_j Y_ij)
    = Σ_i Σ_j (h_i d_ij − λ_i) Y_ij + Σ_i λ_i    (3.7)

with constraints (3.3)–(3.6) unchanged. In order to minimize the objective function for fixed values of λ_i, we set Y_ij = 1 when h_i d_ij − λ_i < 0 and Y_ij = 0 otherwise, and X_j is set to 1 for the facilities that are opened. Initial values of the λ_i are given by the mean weighted distance between each node and the demand points. A sketch of this subproblem is given below.
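A sketch of how the relaxed problem (3.7) can be solved for fixed λ_i. Ranking candidate facilities by V_j = Σ_i min(0, h_i d_ij − λ_i) and opening the P most negative follows the standard Lagrangian heuristic of Daskin (1995); it is one way to realize the rule Y_ij = 1 when h_i d_ij − λ_i < 0 while still opening exactly P facilities, and is not spelled out in the text above. Function and variable names are illustrative.

```python
import numpy as np

def solve_relaxed(d, h, lam, P):
    """Solve the Lagrangian subproblem (3.7) for fixed multipliers lam.

    d: (demand x candidate) cost matrix, h: demand weights, P: facilities to open.
    """
    reduced = h[:, None] * d - lam[:, None]       # (h_i d_ij - lambda_i)
    V = np.minimum(reduced, 0.0).sum(axis=0)      # contribution of opening candidate j
    open_j = np.argsort(V)[:P]                    # the P candidates with smallest V_j
    Y = np.zeros_like(d)
    Y[:, open_j] = (reduced[:, open_j] < 0)       # Y_ij = 1 when h_i d_ij - lambda_i < 0
    lower_bound = V[open_j].sum() + lam.sum()     # value of (3.7)
    return Y, open_j, lower_bound
```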

The Lagrange multipliers are updated in each iteration. The step size value in the kth iteration for the multipliers, T^(k), is

    T^(k) = α(k) (BUB − L^(k)) / Σ_i (Σ_j Y_ij^(k) − 1)²,    (3.8)

where T^(k) is the kth step size value, BUB is the minimum upper bound of the objective function found up to the kth iteration, L^(k) is the value of (3.7) at the kth iteration, and Y_ij^(k) is the current optimal value of the decision variable. The Lagrange multipliers λ_i are updated by

    λ_i^(k+1) = max{0, λ_i^(k) − T^(k) (Σ_j Y_ij^(k) − 1)}.    (3.9)

A general working scheme is:

step 1  Plug the initial or updated values of λ_i into (3.7) and identify the P medians according to h_i d_ij − λ_i;
step 2  Given the P medians from step 1, evaluate the subgradient g^(k) = 1 − Σ_j Y_ij, BUB and L^(k). If the stopping criterion is met, stop; otherwise go to step 3;
step 3  Update T^(k) using (3.8);
step 4  Update the Lagrange multipliers λ_i^(k) using (3.9), then return to step 1 with the new λ_i.

A sketch of one pass of these steps is given after this section's settings are introduced. The lower bound (LB) in each iteration is determined by the values of the λ_i, and the step size T^(k) controls how fast the λ_i are updated. T^(k) goes to 0 as the number of iterations tends to infinity. When it decreases slowly, the LB increases quickly but unstably, which leads to inaccurate estimates of the LB. On the other hand, when T^(k) decreases too fast, the LB is updated slowly; it may easily stop being updated, so that the difference between BUB and BLB remains even as more iterations are made. Problems therefore arise if an inappropriate step size is computed, and a good choice of the parameter controlling the update speed makes the algorithm more efficient.

4 Computational results

In this section, we study the parameter α controlling the step size. Daskin (1995) suggested an initial value of 2, halved after 5 consecutive failures to improve; Avella et al. (2012) suggested an initial value of 1.5, divided by 1.01 after a single failure. Other initial values could also be considered, but this is a minor issue unrelated to the step size function, so we skip the analysis of initial values. The problem complexity in our study differs from that in Daskin (1995) and Avella et al. (2012). We take medium-sized problems from the OR-library (Beasley, 1990), which consists of 40 test p-median problems with known optimal solutions; N varies from 100 to 900 and P from 5 to 80. We pick a subset of eight cases, two for each of N = 100, 200, 400, 800. The parameter α takes the five forms introduced in Section 2.1.
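Putting steps 1–4 and equations (3.8)–(3.9) together, one pass of the update can be sketched as below, assuming solve_relaxed() from the earlier sketch. The feasible upper bound, obtained by assigning each demand point to its nearest open facility, is one simple way to recover a primal solution; it is an assumption, not a detail given in the text.

```python
import numpy as np

def update_multipliers(d, h, lam, P, alpha_k, BUB):
    Y, open_j, L_k = solve_relaxed(d, h, lam, P)               # step 1: relaxed subproblem
    g = 1.0 - Y.sum(axis=1)                                    # step 2: subgradient g_i = 1 - sum_j Y_ij
    UB = (h * d[:, open_j].min(axis=1)).sum()                  # feasible cost: nearest open facility
    BUB = min(BUB, UB)
    denom = (g ** 2).sum()
    if denom == 0:                                             # relaxed solution is already feasible
        return lam, L_k, BUB
    T_k = alpha_k * (BUB - L_k) / denom                        # step 3: step size (3.8)
    lam = np.maximum(0.0, lam - T_k * (Y.sum(axis=1) - 1.0))   # step 4: multiplier update (3.9)
    return lam, L_k, BUB
```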

Table 1: Lagrangian settings for testing a subset of the OR-library

    α(k): form 1: ξ/k; form 2: ξ/√k; form 3: ξ/1.05^k; form 4: ξ/2^k; form 5: ξ/exp(k)
    n (number of failures before changing α): 5
    restart the counter when α is changed: Yes
    critical difference: 0.01
    initial values of λ_i: mean weighted distance between node i and the demand points
    maximum iterations after no improvement on BUB: m = 1000 and m = 100

Following Stephen and Almir (2008), we take the forms α(k) = ξ/k, α(k) = ξ/√k, α(k) = ξ/1.05^k, α(k) = ξ/2^k and α(k) = ξ/exp(k). The procedure settings are shown in Table 1. In Table 1, α(k) is the step size as a function of the iteration counter k. We take ξ = 1, as for the piecewise function f(x). n is a counter of consecutive failures; α is changed after 5 of them. As suggested by Daskin (1995), we do not further elaborate on the impact of the counter. The critical difference takes the value of 1% of the optimal solution. This is a criterion that requires a known optimal value and can be largely affected by the type of the problem; considering that, the algorithm also stops if no improvement of the BUB is found after a preset number of iterations, and here we compare m = 100 and m = 1000. A sketch of this control logic is given just before Table 2.

Given these settings, the results are shown in Table 2 and Table 3, which give the solution values for the two stopping criteria and for problems of varying complexity. For each form of α(k) we compare the BLB (best lower bound), the BUB (best upper bound), the deviation ((BUB − Optimal)/Optimal × 100%), the ratio U/L (BUB/BLB) and the number of iterations. The best BUB and U/L are marked in bold. Table 2 shows the solutions for m = 100. For pmed 1 and pmed 6 of the OR-library, the exact optimal solutions are obtained, and for pmed 35 an almost exact solution is obtained. On the other hand, for pmed 4, pmed 9, pmed 18 and pmed 37, it is the BLB that comes much closer to the optimum. For most of the cases, the step size function with the minimum U/L ratio gives the lowest BUB, which indicates the good quality of the algorithm, even though 1/1.05^k performs very badly on pmed 18 and pmed 37. It is no surprise that more exact solutions appear when the number of iterations is increased, for example 1/1.05^k on pmed 1 and pmed 6, and 1/k on pmed 4 and pmed 35 in Table 3. Similarly, the quality of the BLBs also improves, and the worst deviation for m = 1000 is smaller than for m = 100.

There are several overall tendencies we can draw from Table 2 and Table 3. Firstly, 1/2^k and 1/exp(k) are relatively stable, which is in accordance with the piecewise function studied before; this can be seen not only for the less complicated problems but also for the complicated cases. However, there is no obvious tendency as to which of the two dominates.
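The stopping and α-control logic of Table 1 can be sketched as follows, under one plausible reading of the settings: α is advanced along its chosen form only after five consecutive failures to improve the BUB, the failure counter restarts whenever α changes, and the run stops either when the gap to the known optimum falls below the 1% critical difference or when the BUB has not improved for m iterations. This interpretation and the function names (update_multipliers() from the previous sketch) are assumptions, not a literal transcription of the procedure used for the tables.

```python
import numpy as np

def run_lagrangian(d, h, P, step_rule, optimal, m=1000, max_failures=5):
    lam = (h[:, None] * d).mean(axis=1)        # initial lambda_i: mean weighted distance
    BUB, BLB = np.inf, -np.inf
    k, failures, since_improve = 1, 0, 0
    while since_improve < m:
        lam, L_k, new_BUB = update_multipliers(d, h, lam, P, step_rule(k), BUB)
        BLB = max(BLB, L_k)
        if new_BUB < BUB:
            BUB, since_improve = new_BUB, 0
        else:
            failures += 1
            since_improve += 1
        if failures >= max_failures:           # change alpha and restart the counter
            k, failures = k + 1, 0
        if (BUB - optimal) / optimal <= 0.01:  # 1% critical difference reached
            break
    return BLB, BUB
```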

Table 2: Comparison of optimal solutions for different step size decreasing speeds (m = 100). For each test problem, pmed 1 (N = 100), pmed 4 (N = 100, P = 20), pmed 6 (N = 200), pmed 9 (N = 200, P = 40), pmed 16 (N = 400), pmed 18 (N = 400, P = 40), pmed 35 (N = 800) and pmed 37 (N = 800, P = 80), and each step size form 1/k, 1/√k, 1/1.05^k, 1/2^k and 1/exp(k), the table reports the BLB, the BUB, the known optimal value, the deviation (%), U/L and the number of iterations.

Table 3: Comparison of optimal solutions for different step size decreasing speeds (m = 1000). The layout is the same as in Table 2: for each of the eight test problems and each step size form α(k), the table reports the BLB, the BUB, the known optimal value, the deviation (%), U/L and the number of iterations.

Figure 2: Changes in BLB and BUB for file No. 9 and No. 35.

Secondly, it is difficult for 1/√k to perform better than the other four forms in obtaining an optimal BUB, which is in accordance with the piecewise function. One reason is that when the number of iterations is large, a slightly shorter step size is required; steps that are too large can produce infeasible solutions, which to some extent enlarges the gap between BLBs and BUBs. Thirdly, 1/k and 1/1.05^k are too sensitive to the stopping criterion, which is not seen for the general piecewise function, so the decision to stop the algorithm should be made very carefully. One suggested approach is to visualize the convergence curve and terminate the iteration when the curve becomes flat. Generally speaking, the BLB and the BUB tend to complement each other: when there is a gap between them, one can usually infer that either the BLB or the BUB serves as the benchmark. In Figure 2, for example, two extreme cases are shown. The grey line represents the optimal value. The left panel shows the first 800 objective values for the five forms of step size function in problem 9 (pmed 9), and the right panel shows the values for problem 35 (pmed 35). For pmed 9, the BLBs quickly converge to the optimal value, but only sub-optimal BUBs are obtained. On the contrary, pmed 35 has good BUBs and bad BLBs. Thus, either the BLB or the BUB is likely to reach only a sub-optimal value; when this happens, a complementary algorithm could be used to improve the solution.

5 Conclusion

In this paper, we studied how the decreasing speed of the step size in the subgradient optimization method affects the convergence. The subgradient optimization method is a simple approach for solving linear programs, but the choice of the step size function in the subgradient equation can bring uncertainty to the solution.

Thus, we examined how the step size parameter α affects the performance, using both a general piecewise function and specific p-median problems. The p-median problem is formulated as a linear program and the corresponding Lagrangian relaxation is applied. We examined five forms of the step size parameter α: the square summable but not summable form α(k) = ξ/(b + k) (here ξ/k), the nonsummable diminishing form α(k) = ξ/√k, and three nonsummable diminishing step length forms α(k) = ξ/1.05^k, α(k) = ξ/2^k and α(k) = ξ/exp(k). We evaluated the best upper bound, the best lower bound, and the number of iterations required to reach the stopping criteria, and we draw the following conclusions.

Firstly, the nonsummable diminishing step size function α(k) = ξ/√k has limitations when the number of iterations is large. For both the general piecewise function and the p-median problems it performs badly and easily ends at a suboptimal solution. The two step length forms α(k) = ξ/2^k and α(k) = ξ/exp(k) behave similarly and give stable solutions; as long as the problem is not prone to suboptimal solutions, they give fast convergence for both the BLB and the BUB. This is found both for the general piecewise function and for the p-median problems. The square summable but not summable form α(k) = ξ/(b + k) and the form α(k) = ξ/1.05^k are unstable and also sensitive to the number of iterations.

Secondly, our empirical results show that the quality of the solution is largely affected by the specific type of problem. The problem characteristics influence how difficult it is to avoid suboptimal solutions. If suboptimal solutions are easy to avoid for a specific step size function α, one can draw reliable conclusions; if, on the other hand, the subgradient method keeps producing suboptimal solutions, a complementary algorithm can be considered to escape from them.

Thirdly, the problem complexity has little impact: we cannot assert that the subgradient method finds good solutions for less complex problems and bad solutions for complex ones.

References

[1] Avella, P., Boccia, M., Salerno, S. and Vasilyev, I., 2012, An aggregation heuristic for large p-median problems, Computers and Operations Research, 32.

[2] Beasley, J.E., 1990, OR-Library: distributing test problems by electronic mail, Journal of the Operational Research Society, 41(11).

[3] Beasley, J.E., 1993, Lagrangian heuristics for location problems, European Journal of Operational Research, 65.

[4] Beltran, C., Tadonki, C. and Vial, J.-Ph., 2004, Solving the p-median problem with a semi-Lagrangian relaxation, Logilab Report, HEC, University of Geneva, Switzerland.

[5] Carrizosa, E., Ushakov, A. and Vasilyev, I., 2012, A computational study of a nonlinear minsum facility location problem, Computers and Operations Research, 32.

[6] Daskin, M., 1995, Network and Discrete Location, Wiley, New York.

[7] Kiwiel, K.C., 1985, Methods of Descent for Nondifferentiable Optimization, Springer-Verlag, Berlin.

[8] Mladenović, N., Brimberg, J., Hansen, P. and Moreno-Pérez, J.A., 2007, The p-median problem: a survey of metaheuristic approaches, European Journal of Operational Research, 179(3).

[9] Reese, J., 2006, Solution methods for the p-median problem: an annotated bibliography, Networks, 48(3).

[10] Shor, N.Z., 1985, Minimization Methods for Non-differentiable Functions, Springer-Verlag, New York.

[11] Stephen, B. and Almir, M., 2008, Notes for EE364b, Stanford University, Winter.
