Introduction to Linear Programming. Algorithmic and Geometric Foundations of Optimization



Optimization and Linear Programming

Mathematical programming is a class of methods for solving problems which ask you to optimize: to maximize, to minimize, to find the best, the worst, the most, the least, the fastest, the shortest, etc. Linear programming is the simplest form, suitable only for problems with linear objective functions and linear constraints.

Two of the basic linear programming problems:
- The Product Mix Problem: What quantities of products should be produced to maximize profit? In economics, this is the optimal allocation of scarce resources.
- The Blending Problem: What combination of ingredients minimizes cost? For example, creating industrial chemical compounds.

Some Optimization Basics

Unconstrained:
Easy: max_x f(x) = x
Still easy: max_x f(x) = x(1 − x)
  df(x)/dx = 1 − 2x = 0  →  x* = 1/2

In general, what are we doing?
- Set the derivative to zero and solve.
- Why?
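The first-order and second-order conditions can be verified numerically. A minimal sketch for f(x) = x(1 − x) (the finite-difference step sizes are arbitrary choices):

```python
# Numerically verify the optimality conditions for f(x) = x(1 - x) at x* = 1/2.

def f(x):
    return x * (1 - x)

def deriv(g, x, h=1e-6):
    # Central-difference approximation of g'(x).
    return (g(x + h) - g(x - h)) / (2 * h)

x_star = 0.5

# Necessary condition: f'(x*) = 0.
print(deriv(f, x_star))        # approximately 0

# Sufficient condition for a maximum: f''(x*) < 0.
h = 1e-4
second = (f(x_star + h) - 2 * f(x_star) + f(x_star - h)) / h**2
print(second)                  # approximately -2
```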

Necessary and Sufficient Conditions for an optimum over single-peaked functions:

              Maximum        Minimum
Necessity     f'(x) = 0      f'(x) = 0
Sufficiency   f''(x) < 0     f''(x) > 0

Some Complications

What about more complicated functions?

max_{x,y,z} f(x, y, z) = x(1 − y) + y(1 − x) + z(1 − x − y)

Calculus still works, but is cumbersome to do by hand. We turn to computers and automate the process instead.

What about constraints?

max_x f(x) = x(1 − x), subject to x ≤ 1/3

Again, calculus works (use the Lagrangian) for small problems, but needs automation for larger problems.
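Before automating properly, the constrained example can be sanity-checked with a brute-force grid search over the feasible interval. A sketch (the grid resolution is an arbitrary choice); since the unconstrained maximizer x = 1/2 is infeasible, the optimum lands on the boundary x = 1/3:

```python
# Maximize f(x) = x(1 - x) subject to x <= 1/3 by brute-force grid search.

def f(x):
    return x * (1 - x)

# Enumerate only feasible points: 0 <= x <= 1/3.
grid = [i / 100000 for i in range(33334)]
best_x = max(grid, key=f)

print(best_x)      # ~0.33333: the constraint binds, so the optimum is on the boundary
print(f(best_x))   # ~0.22222 (= 2/9), the constrained maximum
```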

More Complications

[Figures: four plots of f(x) against x illustrating multiple optima, discontinuous functions, uncertainty, and integer restrictions.]

The Need for Algorithmic Methods

How can we automate searching for optima? Linear programming (LP) is one such method. LP is easy and fast, but it has some important limitations/requirements, each with its alternatives:

(1) Decisions are made under certainty. Alternative: sometimes expectations can be used, but generally stochastic programming is required.
(2) The objective function and constraints must be linear. Alternative: NLP (tough!). No guaranteed global optimizer other than enumeration! GA, GP, SA, heuristic search, tabu search, adaptive grid refinement, active sets.
(3) Additivity. Alternative: NLP.
(4) Complete independence. Alternative: NLP.
(5) Infinite divisibility. Alternative: integer programming, branch & bound.

LP Structure

Assuming the requirements for LP have been met, what does an LP look like?

Decision Variables
- Quantities to be allocated
- Number of units to be produced
- Time intervals, unit quantities, proportions...

Objective Function
- What is being maximized or minimized?
- Thought: How do we reduce the world to a set of linear equations?

Constraints
- (In)equalities specifying limits on the otherwise unbounded optimization of a function.
- If Π = P × Q, why not just set Q to ∞?

Setting up an LP Problem: Product Mix

Fischbeck Electronics makes two models of television sets, A and B. Fischbeck Electronics' profit on A, Π_A, is $300 per set, and its profit on B, Π_B, is $250 per set. Thus, the problem facing Fischbeck Electronics is:

max_{x_A, x_B} Π = Π_A x_A + Π_B x_B

Notice that unconstrained optimization would have them produce without limit! But Fischbeck Electronics faces several constraints:

(1) A Labor Constraint: They can't use more than 40 hours of labor per day (5 employees working regular shifts).
(2) A Manufacturing Constraint: They only have 45 hours of machine time available (9 machines available for 5 hours each).
(3) A Marketing Constraint: They can't sell more than 12 units of set A per day.

Decision Variables

Identifying the Problem Variables
- What can we vary? x_A and x_B are the only variables we have direct control over.

The Objective Function
- What are we maximizing (minimizing)? Profit! (Cost!) Thought: Max Profit = Min Cost = Max Cost = ...
- How is it determined? Π = Π_A x_A + Π_B x_B, but we know that Π_A = 300 and Π_B = 250. So, the objective function is Π = 300 x_A + 250 x_B.

Identifying the Constraints

Three primary constraints plus feasibility (x_i ≥ 0 ∀ i):

(1) Labor: A needs 2 hours of labor. B needs only 1 hour. Total labor equals the number of units produced times the labor required per unit.
    2 x_A + x_B ≤ 40
(2) Manufacturing: A needs 1 hour of machine time. B needs 3 hours.
    x_A + 3 x_B ≤ 45
(3) Marketing: Can't sell more than 12 units of A per day.
    x_A ≤ 12

The Entire LP and Possible Implications

max_{x_A, x_B} Π = 300 x_A + 250 x_B

such that:
2 x_A + x_B ≤ 40
x_A + 3 x_B ≤ 45
x_A ≤ 12
x_A ≥ 0
x_B ≥ 0

What if x_A or x_B = 7.4235? What does it mean to produce 0.4235 TV sets? Can 0.6689 people be employed?
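Stated this way, the LP is just data: objective coefficients, a constraint matrix, and right-hand sides. A minimal sketch in Python (the names c, A, b, value, and satisfies are my own), which also shows that fractional production plans are perfectly feasible in the LP even if they are hard to interpret:

```python
# The Fischbeck Electronics LP as data: max c.x subject to A x <= b, x >= 0.
c = [300, 250]          # profit per set of A and B
A = [[2, 1],            # labor hours per set
     [1, 3],            # machine hours per set
     [1, 0]]            # marketing limit applies to A only
b = [40, 45, 12]        # labor, machine time, marketing right-hand sides

def value(x):
    # Objective: total profit of the production plan x = [x_A, x_B].
    return sum(ci * xi for ci, xi in zip(c, x))

def satisfies(x):
    # True if x meets every constraint, including non-negativity.
    return (all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
                for row, bi in zip(A, b))
            and all(xj >= 0 for xj in x))

print(value([12, 11]), satisfies([12, 11]))    # 6350 True
print(satisfies([7.4235, 7.4235]))             # True: fractional, yet feasible
```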

Solving Linear Programming Problems Often, setting up LPs is the hardest part. This is because the actual solving is usually done by computer. How do computers do it? -We will use a graphical technique to illustrate the intuition behind the algorithmic methods used in optimization software packages. For any LP, there exists an infinite number of possible solutions. Problem spaces may be vast. Many models have tens of thousands of variables and constraints. How can we search? How can we limit the number of possibilities we have to try?

Algorithmic Solutions

The Simplex Algorithm (and variants)
- Created by George Dantzig in 1947 for the Air Force Office of Scientific Research.
- If the LP problem has an optimal solution, it will be found in a finite number of iterations. (Note: this isn't always very comforting.)
- The simplex algorithm finds the solution by making iterated improvements from an initial position (solution).

Interior-Point and Ellipsoid Algorithms
- The best-known example is Karmarkar's interior-point algorithm (created at AT&T Bell Labs).
- Runs in polynomial time.
- Faster than simplex for very large problems.
- We won't worry about Karmarkar...

What about the Solver?

Do not use Excel or Solver as a black box! Know what's going on underneath the surface. Remember: you will always get an answer back from Solver. However, knowing whether or not that answer is relevant requires understanding what Solver is doing. Know its limitations!

- Linear programs: standard simplex.
- Quadratic programs: modified simplex. (QP is technically NLP, but results specific to quadratic forms can be exploited to allow for significant simplification.)
- Nonlinear programs: Generalized Reduced Gradient method. NOTE: Unless you check "Assume Linear Model," Solver will automatically use GRG2. GRG2 may have difficulty with some LP or QP problems that could have been solved more easily by another method. Beware the black box!
- Integer programs: branch and bound.

Solution Method Intuition

All of the solution methods just mentioned tend to have very intuitive explanations. We will talk about how the simplex method works by graphically solving a small (two-variable) linear program. Again, our problem:

max_{x_A, x_B} Π = 300 x_A + 250 x_B

such that:
2 x_A + x_B ≤ 40
x_A + 3 x_B ≤ 45
x_A ≤ 12
x_A ≥ 0
x_B ≥ 0

Graphical Solutions

Limited to problems with either 2 constraints or 2 variables. The objective function:

max_{x_A, x_B} Π = 300 x_A + 250 x_B

The solution is contained somewhere in real-valued space. That's a big area. Fortunately, we can narrow it down immediately to the first quadrant because of the feasibility constraints: x_A ≥ 0 and x_B ≥ 0.

[Figure: the first quadrant of the (x_A, x_B) plane, shaded.]

Step One: Graphing the Constraints

Consider Constraint #1: 2 x_A + x_B ≤ 40. When x_A = 0, x_B ≤ 40. To anchor the line, note that when x_B = 0, x_A ≤ 20.

[Figure: the line 2 x_A + x_B = 40 through (0, 40) and (20, 0); the shaded region below it satisfies 2 x_A + x_B < 40.]

Constraint #2: x_A + 3 x_B ≤ 45. When x_A = 0, then x_B ≤ 15. Anchoring the line, when x_B = 0, then x_A ≤ 45.

[Figure: the line x_A + 3 x_B = 45 through (0, 15) and (45, 0); the shaded region below it satisfies x_A + 3 x_B < 45.]

Constraint #3: x_A ≤ 12. This constraint requires no solving for intercepts: it is a vertical line at x_A = 12.

[Figure: the vertical line x_A = 12; the shaded region to its left satisfies x_A ≤ 12.]

Combining the Constraints: The Feasible Set

The intersection of all constraint-feasible sets represents the set of feasible solutions (if a solution exists). Keep in mind that this is the feasible set; it says nothing about the optimal set!

[Figure: all three constraint lines drawn in the first quadrant, with intercepts at 40, 15, 12, 20, and 45; their intersection is the shaded feasible set.]

Identifying the Optimal Points Within the Feasible Set

Method #1: Corner-Point Enumeration (Close-Up)

[Figure: close-up of the feasible set with corners at (0, 0), (12, 0), (0, 15), and a fourth, unknown corner (?, ?).]

The mystery point is the intersection of the two lines. Two equations, two unknowns: x_A = 12 and x_A + 3 x_B = 45. Therefore 12 + 3 x_B = 45 → x_B = 11. Thus, the coordinates of the fourth corner are (12, 11).

Choosing a Corner-Point

Each of the corner-points represents a possible optimal solution. To identify the optimal one, substitute them into the objective function and see which one maximizes the function.

Trial   Objective Function     Profit
1       300(0) + 250(0)        $0
2       300(0) + 250(15)       $3,750
3       300(12) + 250(0)       $3,600
4       300(12) + 250(11)      $6,350

But why limit ourselves to corner-points? Why shouldn't we choose a point in the interior of the feasible set?
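Corner-point enumeration can be automated directly: intersect the constraint boundary lines pairwise, discard infeasible intersections, and evaluate the objective at the rest. A sketch for the LP above (variable names are my own; requires Python 3.8+ for the := operator):

```python
from itertools import combinations

# Constraints a*x_A + b*x_B <= c, including the non-negativity bounds.
constraints = [(2, 1, 40), (1, 3, 45), (1, 0, 12), (-1, 0, 0), (0, -1, 0)]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

def intersect(con1, con2):
    # Solve the 2x2 system where both constraints hold with equality.
    (a1, b1, c1), (a2, b2, c2) = con1, con2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None   # parallel boundary lines never meet
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Corner points = feasible pairwise intersections of constraint boundaries.
corners = [p for con1, con2 in combinations(constraints, 2)
           if (p := intersect(con1, con2)) is not None and feasible(*p)]

best = max(corners, key=lambda p: 300 * p[0] + 250 * p[1])
print(best, 300 * best[0] + 250 * best[1])   # (12.0, 11.0) 6350.0
```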

Isoprofit (Isocost) Curves

x_A and x_B can take many values. To identify the optimal values, substitute arbitrary values in for the objective function, then change them to push the frontier out towards the boundary of the feasible set. To start, pick a value for the objective function, say $1,500. This becomes an isoprofit curve: 300 x_A + 250 x_B = 1500. When x_A = 0, x_B = 6, and when x_B = 0, x_A = 5.

[Figure: the isoprofit line 300 x_A + 250 x_B = 1500 through (0, 6) and (5, 0).]
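The intercepts used to draw an isoprofit line fall out of a one-line calculation; a quick sketch (the function name is my own):

```python
# Intercepts of the isoprofit line 300*x_A + 250*x_B = P.

def intercepts(P):
    # Returns (x_A-intercept where x_B = 0, x_B-intercept where x_A = 0).
    return P / 300, P / 250

print(intercepts(1500))    # (5.0, 6.0): the $1,500 isoprofit line in the figure
print(intercepts(6350))    # the optimal isoprofit line, which touches (12, 11)
```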

Adding Constraints to the Isoprofit Curves

[Figure: the feasible set with the optimal isoprofit line 300(12) + 250(11) = 6,350 touching it at the corner (12, 11).]

Any further advancement of the isoprofit curve would cause it to leave the feasible set. Thus, (1) optimal solutions exist only on boundaries, and (2) if only one exists, the optimal solution must always occur at a vertex.

A Quick Review

LPs are good for optimization problems involving maximizing profits and minimizing costs.

LP = Decision Variables, Objective Function, and Constraints
- Decision Variables are the quantities of resource being allocated.
- The Objective Function is what's being optimized.
- Constraints are resource limitations or requirements.

Advantages of Linear Programming:
- Relatively quick
- Guaranteed to find the optimal solution (if one exists)
- Provides natural sensitivity analysis (shadow prices)

Disadvantages of Linear Programming
- Absence of risk (does expectation adequately capture risk?)
- Restriction to linear objective functions:
  - No correlation among variables
  - No positive or negative synergies
  - Nonlinear programming is very difficult
- Fractional solutions often have no meaning
- Reducing the world to a set of linear equations is usually very difficult

Solution Methods: The Simplex Method
(1) Pick a trial solution point.
(2) Move out from that point along the edges, looking for vertices.
(3) Keep moving along the positive gradient until no further improvements can be made.

The Graphical Analogue to the Simplex:
(1) Draw the constraints.
(2) Push the isoprofit line as far in the optimal direction as possible.
(3) Pick the last vertex it touches before exiting the feasible set.

For more graphing help, see The Geometry of Linear Programming at http://equilibrium.hss.cmu.edu/dadss/
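The three steps above can be mimicked on this small LP by walking from corner to adjacent corner of the feasible set until no neighbor improves the profit. A sketch with the vertex adjacency hard-coded from the graph (a real simplex implementation derives these moves algebraically from a tableau instead):

```python
# Greedy vertex-to-vertex walk: the geometric intuition behind the simplex method.

profit = lambda p: 300 * p[0] + 250 * p[1]

# Corners of the feasible set and their edge-adjacency, read off the graph.
neighbors = {
    (0, 0):   [(12, 0), (0, 15)],
    (12, 0):  [(0, 0), (12, 11)],
    (12, 11): [(12, 0), (0, 15)],
    (0, 15):  [(0, 0), (12, 11)],
}

current = (0, 0)                                      # (1) pick a trial solution point
while True:
    best_next = max(neighbors[current], key=profit)   # (2) look along the edges
    if profit(best_next) <= profit(current):          # (3) stop when nothing improves
        break
    current = best_next

print(current, profit(current))   # (12, 11) 6350
```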