A Simplex Based Parametric Programming Method for the Large Linear Programming Problem


A Simplex Based Parametric Programming Method for the Large Linear Programming Problem

Huang, Rujun; Lou, Xinyuan

Abstract — We present a methodology of parametric objective function coefficient programming for large linear programming (LP) problems. Most current parametric programming methodologies assume that an optimal solution is available. Real-world large linear programs may have millions of variables and constraints, which makes it difficult for an LP solver to return an optimal solution. This paper extends the traditional parametric programming methodology by taking the features of large LP problems into account. By allowing a tolerance on infeasibility and introducing a step size to deal with degeneracy, parametric objective function coefficient programming can be conducted on large LP problems even when optimal solutions are not available. Experimental results on LP problems of different scales are provided.

Index Terms — linear programming, large problem, simplex-based, parametric programming

I. INTRODUCTION

Linear programming (LP) is widely used in industrial systems. Analyzing the impact of parameter settings on the values of variables of interest helps to improve the model and to explain its solution. Obtaining a desired solution by trial-and-error changes of the parameters is time consuming; sensitivity analysis [1] and parametric programming can help here. The simplex-based parametric programming methodology was first provided by Gass and Saaty [2] in 1955. By taking advantage of the iterations of the simplex algorithm, this methodology finds the parameter interval in which the current optimal basis stays optimal. Degeneracy cannot be handled by this methodology. New approaches, such as interior point based [3], [4] and support set based [5] methodologies, have been proposed in recent decades.
The concept of the optimal partition was introduced in interior point based parametric programming. Most of these methods assume that a strictly complementary solution is available. However, it is hard for an LP solver to obtain the optimal solution of a large LP problem because of time and precision limitations, so those parametric programming methodologies are no longer applicable.

Manuscript received January 7, 2012; revised January 28, 2012. This work was supported in part by Intel. Huang, Rujun is with Southwest Jiaotong University, Chengdu, Sichuan, China. E-mail: rjhuang1201@gmail.com. Lou, Xinyuan is with Southwest Jiaotong University, Chengdu, Sichuan, China. E-mail: louxyis@foxmail.com.

This paper extends the traditional simplex based parametric programming methodology to adjusting the objective function coefficients of large LP problems. The main idea of the parametric programming is to preserve the current optimal basis, optimal partition or support set [5] while the parameters are changing. We provide a new definition of the optimality of the current optimal basis in the parametric programming iterations.

II. PROBLEM DEFINITION

The linear programming problem can be defined as:

Min  C_B X_B + C_N X_N
Subject to  A_B X_B + A_N X_N = B                          (1)
            LB <= X <= UB

where
C_B / C_N: the objective function coefficients of the decision variables in/out of the basis
A_B / A_N: the constraint coefficient matrices of the decision variables in/out of the basis
B: the right hand side vector
LB / UB: the lower/upper bound vectors of the decision variables
X_B / X_N: the decision variable vectors in/out of the basis
B / N: the sets containing the indices of the variables in/out of the basis

Given the parameter θ, the objective function coefficient vector can be denoted by C = C_0 + fun(θ), where C_0 is the current objective function coefficient vector and fun(θ) is the perturbation vector.
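As a toy illustration of this perturbed objective, assuming a linear perturbation fun(θ) = θ·ΔC and using scipy.optimize.linprog as a stand-in for an industrial LP solver (all data below is hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical small instance of (1):  min C(theta)^T x  s.t.  A x <= B, x >= 0
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
B = np.array([4.0, 6.0])
C0 = np.array([-1.0, -2.0])   # current objective coefficients C_0
dC = np.array([1.0, 0.0])     # perturbation direction: fun(theta) = theta * dC

def solve_at(theta):
    """Re-solve the LP with the perturbed objective C_0 + fun(theta)."""
    res = linprog(C0 + theta * dC, A_ub=A, b_ub=B,
                  bounds=[(0, None)] * 2, method="highs")
    return res.x

# The optimal vertex moves as theta changes:
x0 = solve_at(0.0)   # -> roughly [3, 1]
x1 = solve_at(1.0)   # -> roughly [0, 2]
```

Parametric programming asks how such solutions evolve over a whole range of θ without re-solving at every value, which is what the following sections develop.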
The parametric programming on the objective function coefficients observes the behavior of the solution with respect to changes of θ.

III. SIMPLEX BASED PARAMETRIC PROGRAMMING

In this section, we introduce simplex based parametric programming. The reduced costs of the non-basic variables in the optimal solution of problem (1) are defined as:

RC_j = C_j − C_B A_B^{-1} A_j,  j ∈ N                      (2)

Introducing θ into the reduced cost function yields:

RC_j(θ) = (C_j + fun_j(θ)) − (C_B + fun_B(θ)) A_B^{-1} A_j
        = (C_j − C_B A_B^{-1} A_j) + (fun_j(θ) − fun_B(θ) A_B^{-1} A_j),  j ∈ N   (3)

We restrict the objective function coefficients to be linear functions of θ, denoted as C_j(θ) = C_j + ΔC_j θ, j ∈ B ∪ N. Then we have

RC_j(θ) = (C_j − C_B A_B^{-1} A_j) + (ΔC_j − ΔC_B A_B^{-1} A_j) θ,  j ∈ N        (4)

RC_j(θ) can be simplified as:

RC_j(θ) = a_j + b_j θ,  j ∈ N                              (5)

where

a_j = C_j − C_B A_B^{-1} A_j,  j ∈ N                        (6)
b_j = ΔC_j − ΔC_B A_B^{-1} A_j,  j ∈ N                      (7)

a_j: the reduced cost of non-basic variable j in the current optimal solution.
b_j: the variation of the reduced cost when θ increases by one unit, based on the current basis.

The interval of θ in which the current optimal basis remains optimal is determined by keeping the reduced costs of the non-basic variables non-negative: RC_j(θ) ≥ 0. The lower and upper bounds of this interval are denoted by LBP_1 and UBP_1, respectively:

LBP_1 = max { −a_j / b_j : b_j > 0, j ∈ N },  or −Inf when no b_j > 0
UBP_1 = min { −a_j / b_j : b_j < 0, j ∈ N },  or +Inf when no b_j < 0            (8)

This methodology works well only if the LP model is non-degenerate and the optimal solution is available. If the LP model is non-degenerate, we have LBP_1 < 0 and UBP_1 > 0, since a_j > 0 for all j ∈ N in a non-degenerate minimization problem. Hence LBP_1 < UBP_1, and a new interval of the parameter in which the current basis stays optimal is obtained. But if the problem is dual degenerate [6], the reduced costs of some non-basic variables may be zero, so we may have LBP_1 = UBP_1 = 0, which means the current basis remains optimal only if the parameter keeps its current value.

Large LP problems have several features: 1) It is hard to drive both the primal infeasibility and the dual infeasibility to zero in the final basis because of time and precision limits, which means that the optimal solution is usually not available. 2) Consequently, a solution within a certain tolerance is acceptable, and some loss of accuracy is allowed. In large-scale LP problems there may be invalid reduced costs or bound violations for some variables when the LP solver reports that it has reached the optimum. There is actually a gap between the optimal solution and the final solution returned by the LP solver, so it is hard to find a parameter interval in which the solution stays optimal with this methodology.

IV. PARAMETRIC PROGRAMMING ALGORITHM FOR THE LARGE LP MODEL

Considering the problems above, we extend the methodology above to conduct parametric programming on large LP problems. In this methodology of parametric objective function coefficient programming, selective pre-processing is conducted to remove the fixed variables and redundant constraints of the LP model before the parametric programming. Primal and dual infeasibility are used as the criteria of optimality of the LP model in the calculation of the parameter intervals. In order to move forward in the parametric programming calculation, a step size variable is introduced: if stalling is encountered, we change the current parameter value by the step size in order to skip the stalling point.

A. Pre-processing

Pre-processing is used to simplify the problem. The size of the problem is reduced by aggregating variables and constraints, eliminating redundancy, and so on. Pre-processing seems a good way to reduce the number of degenerate variables and the size of the LP problem, since some of the degeneracy of the model may be caused by redundant constraints. However, the constraints and variables that pre-processing picks out are redundant in the original LP model; they may not be redundant in the LP model with changed parameters. Changing the objective coefficients has no impact on the fixed variables or on the redundancy of the constraints, so fixed variables and redundant constraints can safely be removed when only the objective coefficients change.

B. Parametric Objective Function Coefficient Programming

In traditional simplex based parametric objective coefficient programming, the parameter interval is determined by ensuring that the reduced costs still satisfy the optimality condition, so that the current basis stays optimal over the whole interval.
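The classical interval computation of Section III can be sketched as follows (a minimal numpy sketch with hypothetical data; the optimal basis {x1, x2} is assumed known here, whereas in practice it comes from the solver's final basis):

```python
import numpy as np

# Standard-form toy LP:  min c^T x  s.t.  A x = b, x >= 0  (slacks included)
A  = np.array([[1.0, 1.0, 1.0, 0.0],
               [1.0, 3.0, 0.0, 1.0]])
c  = np.array([-1.0, -2.0, 0.0, 0.0])
dC = np.array([ 1.0,  0.0, 0.0, 0.0])  # C(theta) = c + theta * dC

Bset, Nset = [0, 1], [2, 3]            # basic / non-basic indices at the optimum
AB_inv = np.linalg.inv(A[:, Bset])

# Equations (6)-(7): a_j = C_j - C_B A_B^{-1} A_j,  b_j = dC_j - dC_B A_B^{-1} A_j
a  = c[Nset]  - (c[Bset]  @ AB_inv) @ A[:, Nset]
bb = dC[Nset] - (dC[Bset] @ AB_inv) @ A[:, Nset]

# Equation (8): keep RC_j(theta) = a_j + b_j*theta >= 0 for all non-basic j
ratios = np.divide(-a, bb, out=np.full_like(a, np.nan), where=bb != 0)
LBP1 = ratios[bb > 0].max() if np.any(bb > 0) else -np.inf
UBP1 = ratios[bb < 0].min() if np.any(bb < 0) else np.inf
# For this data the basis {x1, x2} stays optimal for theta in [-1, 1/3]
```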
However, there may be invalid reduced costs in the final solution returned by the LP solver. Here we redefine the optimality of the solution. It is reasonable to take the maximum reduced cost infeasibility (MRCI, the dual infeasibility) as the validation of optimality of the optimal basis of the last iteration in parametric objective function coefficient programming, for the three reasons below:

1) The primal infeasibility of the current basis stays the same no matter how we vary the parameters in the objective function coefficients.
2) By keeping the MRCI no greater than that of the current solution, both the primal infeasibility and the dual infeasibility stay no greater than the values returned by the LP solver in the last parametric programming iteration.
3) At least one parameter value (the current one) meets the infeasibility tolerance.

We therefore take the MRCI as the criterion for accepting the current solution as optimal in the parametric objective function coefficient programming. Here ΔC_B and ΔC_N are the objective coefficient increments of the basic and non-basic variables when θ increases by one unit. For simplicity, we assume that the objective coefficients are linear functions of θ and that θ starts from zero. The objective coefficients can be expressed as:

C_j(θ) = C_j + ΔC_j θ,  j ∈ B ∪ N

Define the remaining amount of parameter increment as UPW and the step size for skipping stalling as PS. A similar algorithm can be employed while the parameter decreases.

Step 1: Do selective pre-processing to remove the fixed variables and redundant constraints, and standardize the model.

Primal problem:
Min  C X
Subject to  A X = B,  X ≥ 0                                (9)

Dual problem:
Max  B Y
Subject to  A^T Y ≤ C                                      (10)

Step 2: Optimize the problem and calculate a_j and b_j as in (6) and (7). Go to Step 4.

Step 3: Solve the problem with the LP solver, starting from the current basis and the new objective coefficients; obtain the new optimal solution and the a_j and b_j of C_j(θ).

Step 4: If the reduced costs meet the optimality condition, calculate the upper bound UBP_1 as in (8); otherwise go to Step 5. A new interval is obtained:

UBP = min(UBP_1, UPW)  when UBP_1 > 0
UBP = PS               when UBP_1 = 0                      (11)

If UBP is equal to UPW, the parameter has reached the upper bound we provided; stop. Otherwise go to Step 6.

Step 5: Calculate the maximum reduced cost infeasibility MRCI over all dual constraints for the current optimal basis:

MRCI = max_j { A_j^T Y − C_j : A_j^T Y − C_j > 0 },  or 0 when no such j         (12)

If we keep the current basis, the maximum reduced cost infeasibility changes with the parameter. By (5), the dual constraint violations can be written as:

A_j^T Y(θ) − C_j(θ) = −(a_j + b_j θ),  j ∈ N               (13)

The dual infeasibility of some constraints may increase while that of others decreases as the parameter increases. What we have to ensure is that the maximum dual infeasibility of the current basis stays within the tolerance Tol, or no greater than the current value Cur_MRCI, over the parameter interval.
The upper and lower bounds of θ can then be fixed as follows:

MRCI = max(Tol, Cur_MRCI)                                  (14)

and requiring −(a_j + b_j θ) ≤ MRCI for all j ∈ N yields

LBP_2 = max { −(a_j + MRCI) / b_j : b_j > 0, j ∈ N },  or −Inf when no b_j > 0   (15)
UBP_2 = min { −(a_j + MRCI) / b_j : b_j < 0, j ∈ N },  or +Inf when no b_j < 0   (16)

UBP = min(UBP_2, UPW)  when UBP_2 > 0
UBP = PS               when UBP_2 = 0                      (17)

If UBP is equal to UPW, the parameter has reached the upper bound we provided; stop. Otherwise go to Step 6.

Step 6: Increase the objective coefficients and decrease the remaining increment:

C ← C + UBP · ΔC                                           (18)
UPW ← UPW − UBP                                            (19)

Go to Step 3.

V. NUMERICAL EXAMPLES

We take CPLEX as the LP solver and conduct parametric programming on supply chain linear programming problems of different sizes by changing the same weight. The experiments record two decision variable values of interest while the weight, which affects the objective function coefficients of multiple variables, increases from its current value to the target value. The parameters of the three models are provided in Table I.

TABLE I
EXPERIMENT PARAMETERS

Parameter                 Model 1     Model 2     Model 3
Variable number           33848       935905      1469151
Constraint number         24866       651597      1234558
Current weight            1.00E+09    1.80E+09    0.90E+08
Target weight             1.40E+09    2.00E+09    1.50E+09
Max dual infeasibility    5.96E-07    1.22E-04    3.46E-05
Max primal infeasibility  9.09E-12    9.09E-12    3.53E-05
Step size                 100         100         100

Fig. 1 to Fig. 3 show the solutions of the two decision variables and the corresponding infeasibilities. As the solutions of the three models show, the dual and primal infeasibility of the final solution returned by the LP solver grow as the size of the model grows. The parameter intervals are determined by making sure that the primal and dual infeasibility are no greater than those of the last solution returned by the LP solver.
Hence, in each interval in which the step size is not used, the dual and primal infeasibility stay no greater than the infeasibilities shown for the corresponding parameter interval in the second part of the figures. The step size is set to 100, which is much less than the total interval length (the gap between the target weight and the current weight). In this algorithm, the decision variable values and infeasibilities are undefined within the parameter intervals in which the step size is used.
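Abstracting from the CPLEX setup above, the parameter-advancing loop of Section IV can be sketched on a toy instance (hypothetical data; scipy.optimize.linprog stands in for the industrial solver, the basis is read off the positive entries of the vertex solution assuming non-degeneracy, and the MRCI tolerance machinery of (14)-(16) is omitted for brevity):

```python
import numpy as np
from scipy.optimize import linprog

A  = np.array([[1.0, 1.0, 1.0, 0.0],
               [1.0, 3.0, 0.0, 1.0]])   # equality rows with slacks
b  = np.array([4.0, 6.0])
c0 = np.array([-1.0, -2.0, 0.0, 0.0])
dC = np.array([ 1.0,  0.0, 0.0, 0.0])

theta, target, PS = 0.0, 2.0, 0.25      # PS is the stalling step size of Section IV
while theta < target:
    c = c0 + theta * dC
    res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 4, method="highs")
    Bset = list(np.flatnonzero(res.x > 1e-9))   # basic indices (non-degenerate case)
    Nset = [j for j in range(A.shape[1]) if j not in Bset]
    AB_inv = np.linalg.inv(A[:, Bset])
    a  = c[Nset]  - (c[Bset]  @ AB_inv) @ A[:, Nset]   # eq. (6)
    bb = dC[Nset] - (dC[Bset] @ AB_inv) @ A[:, Nset]   # eq. (7)
    UBP1 = min((-a[k] / bb[k] for k in range(len(Nset)) if bb[k] < 0),
               default=np.inf)
    if UBP1 > 1e-12:
        theta = min(theta + UBP1, target)   # advance to the end of the interval
    else:
        theta += PS                         # stalling: skip ahead by the step size
```

Unlike the paper's algorithm, this sketch re-solves from scratch at each breakpoint; the paper instead warm-starts the solver from the current basis (Step 3) and widens the interval with the MRCI tolerance when the strict optimality condition fails.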

Fig. 1(a) Solutions of the two decision variables of Model 1
Fig. 1(b) Primal and dual infeasibility of Model 1
Fig. 2(a) Solutions of the two decision variables of Model 2
Fig. 2(b) Primal and dual infeasibility of Model 2
Fig. 3(a) Solutions of the two decision variables of Model 3
Fig. 3(b) Primal and dual infeasibility of Model 3

VI. CONCLUSION

Parametric programming reveals the behavior of the decision variable values of interest with respect to variations of a parameter. For large linear programming problems, the traditional parametric programming theory cannot be applied directly, since the optimal solution is not always available. By allowing a tolerance on infeasibility, we apply parametric objective function coefficient programming to large linear programming problems. A step size is introduced to deal with degeneracy: if stalling is encountered because of degeneracy, some of the parameter intervals become discrete.

REFERENCES

[1] J. E. Ward and R. E. Wendell, "Approaches to sensitivity analysis in linear programming," Annals of Operations Research, vol. 27, pp. 3-38, 1990.
[2] S. I. Gass and T. L. Saaty, "The computational algorithm for the parametric objective function," Naval Research Logistics Quarterly, vol. 2, pp. 39-45, 1955.
[3] B. Jansen, C. Roos, and T. Terlaky, "An interior point approach to post optimal and parametric analysis in linear programming," The Faculty of Technical Mathematics and Informatics.
[4] I. Adler and R. D. C. Monteiro, "A geometric view of parametric linear programming," Algorithmica, vol. 8, pp. 161-176, 1992.
[5] M. Hladik, "Multi-parametric linear programming: Support set and optimal partition invariance," European Journal of Operational Research, vol. 202, pp. 25-31, 2010.
[6] T. Gal, "Degeneracy graphs: theory and application, an updated survey," Annals of Operations Research, vol. 46, pp. 81-105, 1993.