A Feasible Region Contraction Algorithm (FRCA) for Solving Linear Programming Problems


E. O. Effanga
Department of Mathematics/Statistics and Computer Science, University of Calabar, P.M.B. 1115, Calabar, Cross River State, Nigeria
E-mail: effanga2005@yahoo.com

I. O. Isaac
Department of Mathematics/Statistics and Computer Science, University of Calabar, P.M.B. 1115, Calabar, Cross River State, Nigeria
E-mail: idonggrace@yahoo.com

Received: March 10, 2011   Accepted: April 8, 2011   doi:10.5539/jmr.v3n3p159

Abstract

This paper presents an alternative technique for solving linear programming problems. It is centered on finding a feasible interior point each time the feasible region is contracted, thus generating a sequence of feasible interior points that converges to the optimal solution of the linear programming problem. The convergence of the algorithm is proved, and it is shown that the algorithm performs faster in a smaller feasible region.

Keywords: Linear programming, Feasible region, Contraction, Interior point

1. Introduction

Linear programming is without doubt the most popular tool used in operations research studies. A number of solution techniques are available, including the simplex method developed by Dantzig et al. (1956) and the interior point methods initiated by Karmarkar (1984).

The simplex method and its variants search for the candidate optimal solution in an intelligent fashion by moving from one corner point of the feasible region to another. Although the simplex method and its variants have enjoyed widespread acceptance and usage in solving linear programming problems, they are by no means the only good technique. The time and number of iterations required by the simplex method to reach an optimal solution may be exponential even in small-scale linear programming problems, making the method less efficient (Eiselt et al., 1987).

The attempt to solve linear programming problems in polynomial time led to the advent of the interior point methods. Since the dramatic development by Karmarkar (1984), there has been considerable research interest in interior point methods (Gondzio, 1995; Marsten et al., 1990; Lustig et al., 1994; Todd and Ye, 1990; Todd and Ye, 1998). Conventionally, an interior point method starts from an initial feasible interior point and follows a central path through the feasible region until an optimal solution is found. This is achieved by converting the given linear programming problem to a linear complementarity problem, which is then solved by Newton's method. Although the interior point methods are considered highly efficient in practice, they have some drawbacks: extensive calculations, many iterations, and a large amount of computer storage are required. It is therefore worthwhile to develop a new technique that overcomes these drawbacks.

In order to find a feasible interior point of a system of linear inequalities, Effanga (2010) modified the existing simplex splitting method developed by Levin and Yamnitsky (1982). The modified simplex splitting algorithm is embedded in the feasible region contraction algorithm presented in this paper.

2. The Idea Behind the Development of FRCA

Consider the following primal and dual linear programming problems:

Primal: maximize $Z = c^T x$ subject to $Ax \le b$, $x \ge 0$.

Dual: minimize $Y = b^T y$ subject to $A^T y \ge c$, $y \ge 0$.

The weak duality property: if $x^0$ is a feasible solution of the primal linear programming problem with objective function value $Z^0$, and $y^0$ is a feasible solution of the corresponding dual problem with objective function value $Y^0$, then $Z^0 \le Y^0$. Indeed, $Z^0 = c^T x^0 \le (A^T y^0)^T x^0 = (y^0)^T A x^0 \le (y^0)^T b = Y^0$, using $A^T y^0 \ge c$ with $x^0 \ge 0$, and $Ax^0 \le b$ with $y^0 \ge 0$.

The strong duality property: if $x^*$ is an optimal solution of the primal linear programming problem with objective function value $Z^*$, and $y^*$ is an optimal solution of the corresponding dual problem with objective function value $Y^*$, then $Z^* = Y^*$.

In view of the weak and strong duality properties, it follows that $Z^0 \le Z^* \le Y^0$.

3. The Outline of the Feasible Region Contraction Algorithm (FRCA)

Step 0. Find a feasible interior point $x^0$ of the primal problem and a feasible interior point $y^0$ of the dual problem using the modified simplex splitting algorithm. Set $k = 0$.

Step 1. Determine the lower bound $Z^k$ and the upper bound $Y^k$ on the optimal objective function value $Z^*$, where $Z^k = c^T x^k$ and $Y^k = b^T y^k$.

Step 2. For a small positive number $\varepsilon$, is $Y^k - Z^k < \varepsilon$? Yes: stop; an $\varepsilon$-optimal solution is found. Set $x^* = x^k$ and $Z^* = \frac{1}{2}(Z^k + Y^k)$. No: go to Step 3.

Step 3. Contract the feasible region by introducing the new constraint $c^T x \ge \frac{1}{2}(Z^k + Y^k)$.

Step 4. Is there a feasible point $x_f$ of the problem updated in Step 3? Yes: set $x^{k+1} = x_f$ and $Z^{k+1} = \frac{1}{2}(Z^k + Y^k)$. No: set $x^{k+1} = x^k$ and $Y^{k+1} = \frac{1}{2}(Z^k + Y^k)$. Set $k = k + 1$ and return to Step 2.
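Viewed this way, the FRCA is a bisection on the optimal objective value, with Step 4 acting as a feasibility oracle on the contracted region. The following Python sketch makes the Step 0 to Step 4 loop concrete; it is illustrative only. The feasibility test is delegated to scipy.optimize.linprog as a stand-in oracle (the paper uses the modified simplex splitting algorithm, which is not reproduced here), and the interior points x0 and y0 are assumed to be given.

```python
import numpy as np
from scipy.optimize import linprog

def frca(c, A, b, x0, y0, eps=0.01):
    """Sketch of the FRCA for max c^T x s.t. Ax <= b, x >= 0.

    x0, y0: feasible interior points of the primal and the dual
    (Step 0; the paper obtains them via modified simplex splitting).
    """
    x = np.asarray(x0, dtype=float)
    Z = float(c @ x)                 # lower bound Z^k = c^T x^k (Step 1)
    Y = float(b @ np.asarray(y0))    # upper bound Y^k = b^T y^k (Step 1)
    while Y - Z >= eps:              # Step 2: stop once the gap < eps
        mid = 0.5 * (Z + Y)
        # Step 3: contraction cut c^T x >= mid, written in <=-form;
        # the new cut replaces the previous one each iteration.
        A_cut = np.vstack([A, -c])
        b_cut = np.append(b, -mid)
        # Step 4: feasibility test via a zero-objective LP
        # (stand-in for the modified simplex splitting algorithm).
        res = linprog(np.zeros_like(c), A_ub=A_cut, b_ub=b_cut,
                      bounds=[(0, None)] * len(c))
        if res.success:              # feasible: raise the lower bound
            x, Z = res.x, mid
        else:                        # infeasible: lower the upper bound
            Y = mid
    return x, 0.5 * (Z + Y)          # eps-optimal point and value (Step 2)

# The Section 6 example: max 2x1 + 3x2, x1 + x2 <= 8, x1 + 2x2 <= 4.
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0], [1.0, 2.0]])
b = np.array([8.0, 4.0])
x_opt, z_opt = frca(c, A, b, x0=[1.8395, 0.937], y0=[2.0, 2.0])
print(x_opt, z_opt)   # approaches x* = (4, 0), Z* = 8 as eps shrinks
```

Replacing the previous cut rather than accumulating cuts mirrors the solver report in Section 6, where constraint 3) is overwritten at every iteration.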

4. The Convergence of FRCA

Let $I^0 := [Z^0, Y^0]$ be the initial duality gap as defined in the FRCA, and assume that $I^0, I^1, \dots, I^k$ have been generated, where $I^j := [Z^j, Y^j]$, $j = 0, 1, 2, \dots, k$. Writing $M^k := \frac{1}{2}(Z^k + Y^k)$ for the midpoint of $I^k$, we generate $I^{k+1}$ as follows: if the contracted region contains a feasible point with objective function value at least $M^k$, then $I^{k+1} := [M^k, Y^k]$; otherwise $I^{k+1} := [Z^k, M^k]$.

Lemma.
(i) The sequence $\{I^k\}_{k=0}^{\infty}$ is a decreasing sequence of duality gaps, i.e., $I^k \supseteq I^{k+1}$ for $0 \le k < \infty$.
(ii) The length of $I^k$ is $(Y^0 - Z^0)/2^k$.
(iii) $\bigcap_{k=0}^{\infty} I^k = \{Z^*\}$, where $Z^*$ is the optimal objective function value of the primal linear programming problem.

Proof. Points (i) and (ii) are immediate from the construction.

(iii) For $k = 0$, $Z^* \in I^0$ by the weak duality property. The sequence $\{Z^k\}_{k=0}^{\infty}$ is a monotonically increasing sequence of primal objective function values and is bounded above by $Y^0$; it therefore has a finite limit $L_1$. Likewise, $\{Y^k\}_{k=0}^{\infty}$ is a monotonically decreasing sequence of dual objective function values and is bounded below by $Z^0$; it therefore has a finite limit $L_2$. Now,
$$L_2 - L_1 = \lim_{k \to \infty} \left( Y^k - Z^k \right) = \lim_{k \to \infty} \frac{Y^0 - Z^0}{2^k} = 0,$$
so $L_2 = L_1 = Z^*$ by the strong duality property.

Theorem 4.1. Let $\{X_i\}_{i=0}^{\infty}$ be the sequence of interior feasible points generated by FRCA. Then there exists a subsequence $\{X_{i_j}\}_{j=0}^{\infty}$ that converges to the optimal point $x^*$ with objective function value $Z^*$.

Proof. Since the feasible region is compact, a convergent subsequence $\{X_{i_j}\}_{j=0}^{\infty}$ can be selected from the generated sequence $\{X_i\}_{i=0}^{\infty}$. By the continuity of the objective function, $X_{i_j} \to X^*$ with $\mathrm{val}(X_{i_j}) \in I^{i_j}$, and hence $\mathrm{val}(X_{i_j}) \to \mathrm{val}(X^*) = Z^*$.

Theorem 4.2. The FRCA terminates after at most $k = \lceil \log_2\left((Y^0 - Z^0)/\varepsilon\right) \rceil$ iterations.

Proof. By Lemma (ii), the duality gap after $k$ iterations is $(Y^0 - Z^0)/2^k$, so the stopping rule $Y^k - Z^k < \varepsilon$ is met as soon as $(Y^0 - Z^0)/2^k < \varepsilon$, i.e., $2^k > (Y^0 - Z^0)/\varepsilon$, which holds once $k > \log_2\left((Y^0 - Z^0)/\varepsilon\right)$. In the example of Section 6, for instance, $Y^0 - Z^0 = 17.5099$ and $\varepsilon = 0.01$ give $\lceil \log_2(1750.99) \rceil = 11$ halvings of the gap, consistent with the 12 logged iterations (the first logged iteration reports the unhalved initial gap).

5. The Effect of Scaling Down the Right-Hand Side Values of the Constraints on the Performance of FRCA

The modified simplex splitting algorithm is a nonlinear algorithm that performs faster in a smaller feasible region. It is invoked in Step 0 and Step 4 of the FRCA, so if the feasible region is reduced in size, the decision about the existence of a feasible solution is reached faster; independently of the main iteration loop, each feasibility test becomes faster by itself. Since the choice of the enclosing simplex depends on the right-hand side values $b_i$ of the constraints, it pays to scale the $b_i$'s down by a factor $10^{-n}$, for a positive integer $n$, prior to applying the algorithm; scaling down the $b_i$'s is equivalent to reducing the size of the feasible region. The optimal solution of the original linear programming problem is then recovered by multiplying the optimal solution of the scaled problem by $10^n$.
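A minimal sketch of this rescaling, assuming the frca function from the sketch in Section 3 is in scope: the right-hand sides are multiplied by $10^{-n}$, which shrinks the whole region (and any primal interior point) by the same factor, while dual feasibility of $y^0$ is unaffected because the dual constraints do not involve $b$. Keeping $\varepsilon$ fixed, as the paper does, amounts to a looser relative tolerance on the shrunken region, which is one reason the second run in Section 6 below terminates in fewer iterations.

```python
import numpy as np

def frca_scaled(c, A, b, x0, y0, n=1, eps=0.01):
    """Run FRCA on Ax <= b * 10^-n and scale the answer back.

    Assumes frca() from the earlier sketch is in scope.
    Since A(s x) <= s b iff Ax <= b for s > 0, shrinking b
    shrinks every feasible point by the same factor s.
    """
    s = 10.0 ** (-n)
    x_s, z_s = frca(np.asarray(c, float),
                    np.asarray(A, float),
                    np.asarray(b, float) * s,      # scaled-down RHS values
                    x0=np.asarray(x0, float) * s,  # interior point shrinks too
                    y0=y0,                         # dual point unchanged
                    eps=eps)                       # same eps as the paper
    return x_s / s, z_s / s   # recover the original solution (factor 10^n)

# Section 6, second run: b scaled by 10^-1, optimum recovered by 10.
x_opt, z_opt = frca_scaled([2, 3], [[1, 1], [1, 2]], [8, 4],
                           x0=[1.8395, 0.937], y0=[2, 2], n=1)
print(x_opt, z_opt)   # close to x* = (4, 0), Z* = 8
```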

6. Example

Consider the following linear programming problem:

maximize Z = 2x1 + 3x2
subject to
x1 + x2 <= 8
x1 + 2x2 <= 4
x1 >= 0, x2 >= 0.

Below is the detailed FRCA solver report for this linear programming problem.

F. R. C. Algorithm Solver Report
Solution of LP Problem using the Feasible Region Contraction Algorithm

Primal Problem
Maximize Z = 2x1 + 3x2
1) x1 + x2 <= 8
2) x1 + 2x2 <= 4
x(i) >= 0, i = 1, 2.

Dual Problem
Minimize Y = 8y1 + 4y2
1) y1 + y2 >= 2
2) y1 + 2y2 >= 3
y(i) >= 0, i = 1, 2.

Vertices
Vertex 1 = [16, 0]
Vertex 2 = [0, 16]
Vertex 3 = [0, 0]

Epsilon
ε = 0.01

Results
x(0) = [1.8395, 0.937]
y(0) = [2, 2]

Iteration 1
Upper Z (UZ) = 24, Lower Z (LZ) = 6.4901
(UZ - LZ) = 17.5099, is not < ε
3) 2x1 + 3x2 >= 15.245

Iteration 2
Upper Z (UZ) = 15.245, Lower Z (LZ) = 6.4901
(UZ - LZ) = 8.755, is not < ε
3) 2x1 + 3x2 >= 10.8676

Iteration 3
Upper Z (UZ) = 10.8676, Lower Z (LZ) = 6.4901

(UZ - LZ) = 4.3775, is not < ε
3) 2x1 + 3x2 >= 8.6788

Iteration 4
Upper Z (UZ) = 8.6788, Lower Z (LZ) = 6.4901
(UZ - LZ) = 2.1887, is not < ε
3) 2x1 + 3x2 >= 7.5845
Feasible Solution: x(1) = [3.3146, 0.3227]

Iteration 5
Upper Z (UZ) = 8.6788, Lower Z (LZ) = 7.5845
(UZ - LZ) = 1.0944, is not < ε
3) 2x1 + 3x2 >= 8.1317

Iteration 6
Upper Z (UZ) = 8.1317, Lower Z (LZ) = 7.5845
(UZ - LZ) = 0.5472, is not < ε
3) 2x1 + 3x2 >= 7.8581
Feasible Solution: x(2) = [3.7409, 0.1292]

Iteration 7
Upper Z (UZ) = 8.1317, Lower Z (LZ) = 7.8581
(UZ - LZ) = 0.2736, is not < ε
3) 2x1 + 3x2 >= 7.9949
Feasible Solution: x(3) = [3.9903, 0.0048]

Iteration 8
Upper Z (UZ) = 8.1317, Lower Z (LZ) = 7.9949
(UZ - LZ) = 0.1368, is not < ε
3) 2x1 + 3x2 >= 8.0633

Iteration 9
Upper Z (UZ) = 8.0633, Lower Z (LZ) = 7.9949
(UZ - LZ) = 0.0684, is not < ε
3) 2x1 + 3x2 >= 8.0291

Iteration 10
Upper Z (UZ) = 8.0291, Lower Z (LZ) = 7.9949

(UZ - LZ) = 0.0342, is not < ε
3) 2x1 + 3x2 >= 8.012

Iteration 11
Upper Z (UZ) = 8.012, Lower Z (LZ) = 7.9949
(UZ - LZ) = 0.0171, is not < ε
3) 2x1 + 3x2 >= 8.0034

Iteration 12
Upper Z (UZ) = 8.0034, Lower Z (LZ) = 7.9949
(UZ - LZ) = 0.0085, is < ε

Optimal Solution Found
Z = 7.9991
x = x(3) = [3.9903, 0.0048]

Now let the right-hand side value of each constraint be multiplied by 10^-1. The detailed report of the new problem is given below.

F. R. C. Algorithm Solver Report
Solution of LP Problem using the Feasible Region Contraction Algorithm

Primal Problem
Maximize Z = 2x1 + 3x2
1) x1 + x2 <= 0.8
2) x1 + 2x2 <= 0.4
x(i) >= 0, i = 1, 2.

Dual Problem
Minimize Y = 0.8y1 + 0.4y2
1) y1 + y2 >= 2
2) y1 + 2y2 >= 3
y(i) >= 0, i = 1, 2.

Vertices
Vertex 1 = [1.6, 0]
Vertex 2 = [0, 1.6]
Vertex 3 = [0, 0]

Epsilon
ε = 0.01

Results
x(0) = [0.184, 0.0937]
y(0) = [2, 2]

Iteration 1
Upper Z (UZ) = 2.4, Lower Z (LZ) = 0.649

(UZ - LZ) = 1.751, is not < ε
3) 2x1 + 3x2 >= 1.5245

Iteration 2
Upper Z (UZ) = 1.5245, Lower Z (LZ) = 0.649
(UZ - LZ) = 0.8755, is not < ε
3) 2x1 + 3x2 >= 1.0868

Iteration 3
Upper Z (UZ) = 1.0868, Lower Z (LZ) = 0.649
(UZ - LZ) = 0.4377, is not < ε
3) 2x1 + 3x2 >= 0.8679

Iteration 4
Upper Z (UZ) = 0.8679, Lower Z (LZ) = 0.649
(UZ - LZ) = 0.2189, is not < ε
3) 2x1 + 3x2 >= 0.7584
Feasible Solution: x(1) = [0.3315, 0.0323]

Iteration 5
Upper Z (UZ) = 0.8679, Lower Z (LZ) = 0.7584
(UZ - LZ) = 0.1094, is not < ε
3) 2x1 + 3x2 >= 0.8132

Iteration 6
Upper Z (UZ) = 0.8132, Lower Z (LZ) = 0.7584
(UZ - LZ) = 0.0547, is not < ε
3) 2x1 + 3x2 >= 0.7858
Feasible Solution: x(2) = [0.3741, 0.0129]

Iteration 7
Upper Z (UZ) = 0.8132, Lower Z (LZ) = 0.7858
(UZ - LZ) = 0.0274, is not < ε
3) 2x1 + 3x2 >= 0.7995
Feasible solution: x(3) = [0.399, 0.0005]

Iteration 8
Upper Z (UZ) = 0.8132, Lower Z (LZ) = 0.7995
(UZ - LZ) = 0.0137, is not < ε
3) 2x1 + 3x2 >= 0.8063

Iteration 9
Upper Z (UZ) = 0.8063, Lower Z (LZ) = 0.7995
(UZ - LZ) = 0.0068, is < ε

Optimal Solution Found
Z = 0.8029
x = x(3) = [0.399, 0.0005]

Multiplying this optimal solution by 10 gives the optimal solution of the original problem:
Z = 8.029
x = x(3) = [3.99, 0.005]
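As a cross-check outside the paper's method, the same example can be handed to an off-the-shelf LP solver. The snippet below uses scipy.optimize.linprog, rewriting the maximization as minimization of the negated objective, and reproduces the simplex optimum quoted in the concluding remarks.

```python
from scipy.optimize import linprog

# Verify the Section 6 example: maximize 2x1 + 3x2
# subject to x1 + x2 <= 8, x1 + 2x2 <= 4, x1, x2 >= 0,
# posed as minimizing the negated objective.
res = linprog(c=[-2.0, -3.0],
              A_ub=[[1.0, 1.0], [1.0, 2.0]],
              b_ub=[8.0, 4.0],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # -> [4. 0.] 8.0, the simplex optimum Z* = 8
```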

7. Concluding Remarks

The solution obtained by FRCA compares favorably with that obtained by the popular simplex algorithm, whose optimal solution is Z = 8 at x = (4, 0). Furthermore, the FRCA performs faster when the size of the feasible region is reduced: in the above illustration, the number of iterations drops from 12 to 9.

References

Dantzig, G. B., Ford, L. R., Jr., & Fulkerson, D. R. (1956). A primal-dual algorithm. In Linear Inequalities and Related Systems, Annals of Mathematics Studies 38. Princeton, NJ: Princeton University Press.

Effanga, E. O. (2009). Alternative technique for solving linear programming problems. Ph.D. thesis, Department of Mathematics/Statistics and Computer Science, University of Calabar, Nigeria (unpublished).

Eiselt, G., Pederzolli, C., & Sandblom, L. (1987). Operations Research. Walter de Gruyter, New York.

Gondzio, J. (1995). Multiple centrality corrections in a primal-dual method for linear programming. TR No. 20, Logilab, HEC Geneva, Section of Management Studies, University of Geneva, Bd. Carl-Vogt 102, 1211 Geneva, Switzerland.

Karmarkar, N. (1984). A new polynomial time algorithm for linear programming. Combinatorica, 4, 373-395.

Levin, L. A., & Yamnitsky, B. (1982). An old linear programming algorithm runs in polynomial time. Proceedings of the 23rd Annual Symposium on Foundations of Computer Science, IEEE Computer Society, Long Beach, California, 327-328.

Lustig, I. J., Marsten, R. E., & Shanno, D. F. (1994). Interior point methods for linear programming: Computational state of the art. ORSA Journal on Computing, 6, 1-14.

Marsten, R. E., Subramanian, R., Saltzman, M., Lustig, I. J., & Shanno, D. (1990). Interior point methods for linear programming. Interfaces, 20, 105-116.

Todd, M. J., & Ye, Y. (1990). A centered projective algorithm for linear programming. Mathematics of Operations Research, 15, 508-529.