LECTURE 6: INTERIOR POINT METHOD

1. Motivation
2. Basic concepts
3. Primal affine scaling algorithm
4. Dual affine scaling algorithm


Motivation

The simplex method works well in practice, but its worst-case complexity is exponential: the Klee-Minty example shows the simplex method may have to visit every vertex before reaching the optimal one.

Total complexity of an iterative algorithm = (# of iterations) x (# of operations per iteration)

Simplex method:
- Simple operations: only checks adjacent extreme points
- May take many iterations: Klee-Minty example

Question: any fix?

Complexity of the simplex method

Worst-case performance of the simplex method

Klee-Minty example: Victor Klee and George J. Minty, "How good is the simplex algorithm?", in O. Shisha (ed.), Inequalities, Vol. III (1972), pp. 159-175.

Klee-Minty Example

Klee-Minty Example

Karmarkar's (interior point) approach

Basic idea: approach an optimal solution from the interior of the feasible domain.
- Take more complicated operations in each iteration to find a better moving direction.
- Require far fewer iterations.

General scheme of an interior point method An iterative method that moves in the interior of the feasible domain

Interior movement (iteration)

Given a current interior feasible solution x^k, we have x^k > 0. An interior movement has the general format

    x^{k+1} = x^k + alpha_k d^k,

where d^k is a moving direction and alpha_k > 0 is a step-length.

Key knowledge 1. Who is in the interior? - Initial solution 2. How do we know a current solution is optimal? - Optimality condition 3. How to move to a new solution? - Which direction to move? (good feasible direction) - How far to go? (step-length)

Q1 - Who is in the interior?

Standard form LP: minimize c^T x subject to Ax = b, x >= 0.

Who is at a vertex? Who is on an edge? Who is on the boundary? Who is in the interior?

What we have learned before

Who is in the interior?

Two criteria for a point x to be an interior feasible solution:
1. Ax = b (every linear constraint is satisfied)
2. x > 0 (every component is positive)

Comments:
1. On a hyperplane H = {x : a^T x = b}, every point of H is interior relative to H.
2. For the first (non-negative) orthant K = {x : x >= 0}, only those x > 0 are interior relative to K.
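The two criteria above can be checked mechanically. A minimal numpy sketch; the function name `is_interior_feasible`, the tolerance, and the toy data are illustrative, not from the lecture:

```python
import numpy as np

def is_interior_feasible(A, b, x, tol=1e-9):
    """Check the two criteria: Ax = b and every component of x strictly positive."""
    return bool(np.allclose(A @ x, b, atol=tol) and np.all(x > tol))

# Hyperplane x1 + x2 + x3 = 1 intersected with the non-negative orthant.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
x_interior = np.array([0.2, 0.3, 0.5])   # on the plane, all components > 0
x_boundary = np.array([0.5, 0.5, 0.0])   # on the plane, but x3 = 0
```

Here `x_boundary` satisfies criterion 1 but fails criterion 2, so it is a boundary point, not an interior one.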

Example

How to find an initial interior solution?

Like the simplex method, we have
- Big-M method
- Two-phase method
(to be discussed later!)

Key knowledge 1. Who is in the interior? - Initial solution 2. How do we know a current solution is optimal? - Optimality condition 3. How to move to a new solution? - Which direction to move? (good feasible direction) - How far to go? (step-length)

Q2 - How do we know a current solution is optimal?

Basic concept of optimality: a current feasible solution is optimal if and only if no feasible direction at this point is a good direction. In other words, every feasible direction fails to improve the objective.

Feasible direction

In an interior-point method, a feasible direction at a current solution is a direction that allows a small movement while remaining interior feasible.

Observations:
- Staying interior is no problem if the step-length is small enough (since x^k > 0).
- To maintain Ax = b, we need Ad = 0, i.e., d must lie in the null space of A.

Good direction

In an interior-point method, a good direction at a current solution is a direction that leads to a new solution with a lower objective value.

Observation: since the objective is c^T x, a direction d is good exactly when c^T d < 0.

Optimality check

Principle: no feasible direction at this point is a good direction, i.e., c^T d >= 0 for every d with Ad = 0.

Key knowledge 1. Who is in the interior? - Initial solution 2. How do we know a current solution is optimal? - Optimality condition 3. How to move to a new solution? - Which direction to move? (good feasible direction) - How far to go? (step-length)

Q3 - How to move to a new solution?

1. Which direction to move? - a good, feasible direction
   - Good requires c^T d < 0.
   - Feasible requires Ad = 0.

Question: any suggestion?

A good feasible direction

- Reduce the objective value: c^T d < 0.
- Maintain feasibility: Ad = 0.

Projection mapping

A projection mapping projects the negative gradient vector -c onto the null space of the matrix A:

    P = I - A^T (A A^T)^{-1} A,    d = -Pc.

Then Ad = 0 and c^T d = -c^T P c <= 0, so d is a good feasible direction (unless Pc = 0).
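The projection can be sketched in a few lines. `null_space_projection` is a hypothetical helper name; the formula P = I - A^T (A A^T)^{-1} A assumes A has full row rank:

```python
import numpy as np

def null_space_projection(A, c):
    """Project -c onto the null space of A: d = -(I - A^T (A A^T)^{-1} A) c."""
    m, n = A.shape
    P = np.eye(n) - A.T @ np.linalg.inv(A @ A.T) @ A   # projection matrix
    return -P @ c

A = np.array([[1.0, 1.0, 1.0]])
c = np.array([2.0, 1.0, 3.0])
d = null_space_projection(A, c)   # a good feasible direction
```

By construction A d = 0 (feasible) and c^T d = -||Pc||^2 < 0 (good) whenever c is not in the row space of A.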

Q3 - How to move to a new solution?

2. How far to go?
   - Every linear constraint stays satisfied automatically: since Ad = 0, A(x^k + alpha d) = b for any real step-length alpha.
   - To stay an interior solution, we need x^k + alpha d > 0.

How to choose the step-length?

One easy approach: in order to keep x^{k+1} > 0, we may use the minimum ratio test to determine the step-length.

Observation: when x^k is close to the boundary, the step-length may be very small.

Question: then what?
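The minimum ratio test picks the largest alpha keeping x + alpha*d > 0, damped by a factor gamma in (0, 1) so the new point stays strictly interior. A small sketch with made-up numbers:

```python
import numpy as np

def step_length(x, d, gamma=0.9):
    """Minimum ratio test: alpha = gamma * min{ -x_i / d_i : d_i < 0 }."""
    neg = d < 0
    if not np.any(neg):
        return np.inf          # no component blocks: direction is unbounded
    return gamma * np.min(-x[neg] / d[neg])

x = np.array([1.0, 2.0, 4.0])
d = np.array([0.5, -1.0, -2.0])
alpha = step_length(x, d)      # ratios are 2.0 and 2.0, so alpha = 0.9 * 2.0
```

With gamma = 1 the iterate would land exactly on the boundary; gamma < 1 keeps it interior.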

Observations

If a current solution is near the center of the feasible domain (a polyhedral set), on average we can make a decently long move. If a current solution is not near the center, we rescale its coordinates to bring it "near the center".

Question: but how?

Where is the center?

We need to know where the center of the non-negative (first) orthant is.
- Concept: equal distance to the boundary. The vector e = (1, 1, ..., 1) has equal distance to every boundary hyperplane x_i = 0.

Question: if the current solution is not at the center, what do we do?

Concept of scaling

Given the current interior point x^k > 0, define a diagonal scaling matrix

    X_k = diag(x_1^k, ..., x_n^k).

Transformation - affine scaling

Affine scaling transformation: y = X_k^{-1} x (so x = X_k y); the current solution x^k maps to e = (1, ..., 1).

The transformation is
1. one-to-one
2. onto
3. invertible
4. boundary to boundary
5. interior to interior
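The transformation is just a componentwise rescaling, as this small sketch shows (the data are illustrative):

```python
import numpy as np

x_k = np.array([0.5, 2.0, 4.0])        # current interior point, all components > 0
X_k = np.diag(x_k)                      # diagonal scaling matrix X_k
y = np.linalg.inv(X_k) @ x_k            # affine scaling y = X_k^{-1} x
# x_k maps to e = (1, 1, 1): the scaled point sits at equal distance
# from every boundary hyperplane y_i = 0.
```

The inverse map x = X_k y recovers the original coordinates, which is why the transformation is invertible and carries interior points to interior points.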

Transformed LP

In the y-space the problem becomes: minimize (X_k c)^T y subject to (A X_k) y = b, y >= 0, with current solution y^k = e.

Step-length in the transformed space Minimum ratio test in the y-space

Property 1 Iteration in the x-space

Property 2 Feasible direction in x-space

Property 3 Good direction in x-space

Property 4 Optimality check (Lemma 7.2)

Property 5 Well-defined iteration sequence (Lemma 7.3)

From Properties 3 and 4, if the standard form LP is bounded below and its objective is not constant over the feasible region, then the sequence {x^k} is well-defined and {c^T x^k} is strictly decreasing.

Property 6 Dual estimate, reduced cost and stopping rule

We may define the dual estimate w^k = (A X_k^2 A^T)^{-1} A X_k^2 c and the reduced cost r^k = c - A^T w^k; the algorithm may stop when r^k >= 0 and the duality-gap estimate e^T X_k r^k is sufficiently small.

Property 7 Moving direction and reduced cost

Primal affine scaling algorithm
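Putting the pieces together, here is a minimal numpy sketch of the primal affine scaling iteration (scaling, dual estimate, projected direction in y-space, minimum ratio test, map back). The function name, the damping factor gamma, and the tiny LP at the bottom are illustrative assumptions, not the lecture's exact pseudocode; A is assumed to have full row rank:

```python
import numpy as np

def primal_affine_scaling(A, b, c, x0, gamma=0.9, tol=1e-8, max_iter=200):
    """Sketch of primal affine scaling for: min c^T x s.t. Ax = b, x > 0."""
    x = x0.astype(float)
    for _ in range(max_iter):
        X = np.diag(x)                                   # scaling matrix X_k
        AX = A @ X
        w = np.linalg.solve(AX @ AX.T, AX @ (X @ c))     # dual estimate w^k
        r = c - A.T @ w                                  # reduced cost r^k
        dy = -X @ r                                      # direction in y-space
        if np.linalg.norm(dy) < tol:
            break                                        # (near-)optimal
        if np.all(dy >= 0):
            raise ValueError("LP is unbounded below")
        alpha = gamma * np.min(-1.0 / dy[dy < 0])        # ratio test at y = e
        x = x * (1.0 + alpha * dy)                       # map back: x = X(e + alpha dy)
    return x

# Made-up example: min -x1 - 2*x2  s.t.  x1 + x2 + x3 = 1, x >= 0
# (optimal vertex (0, 1, 0), objective value -2).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -2.0, 0.0])
x_star = primal_affine_scaling(A, b, c, np.array([0.2, 0.3, 0.5]))
```

Each update preserves Ax = b because A X dy = 0 by the choice of w, and preserves x > 0 because of the damped ratio test.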

Example

Example

How to find an initial interior feasible solution? Big-M method

Idea: add an artificial variable x_{n+1} with a big penalty M:

    minimize c^T x + M x_{n+1}
    subject to Ax + (b - Ae) x_{n+1} = b, x >= 0, x_{n+1} >= 0.

Properties of the (big-M) problem

(1) It is a standard form LP with n+1 variables and m constraints.
(2) e (the all-ones vector in R^{n+1}) is an interior feasible solution of (big-M).
(3) For M large enough, an optimal solution of (big-M) with x_{n+1} = 0 yields an optimal solution of the original LP.

Two-phase method

Properties of (Phase-I) problem

Facts of the primal affine scaling algorithm

More facts

Improving performance - potential push

Idea (potential push method): stay away from the boundary by adding a potential-push term to the objective.

Improving performance - logarithmic barrier

Idea (logarithmic barrier function method): augment the objective with a barrier term -mu * sum_j log(x_j), which blows up as any x_j approaches the boundary x_j = 0.

Dual affine scaling algorithm

Idea: apply the affine scaling method to the dual LP: maximize b^T y subject to A^T y + s = c, s >= 0.

Key knowledge
- Dual scaling (centering)
- Dual feasible direction
- Dual good direction: increases the dual objective value
- Dual step-length
- Primal estimate for stopping rule

Observation 1 Dual scaling (centering)

Observation 2 Dual feasibility (feasible direction)

Observation 3 Increase dual objective function (good direction)

Observation 4 Dual step-length

Observation 5 Primal estimate

We may define the primal estimate x^k = S_k^{-2} A^T (A S_k^{-2} A^T)^{-1} b, which satisfies A x^k = b and can be used in the stopping rule.

Dual affine scaling algorithm
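A minimal numpy sketch of the dual iteration, assuming the standard dual affine scaling direction dy = (A S^{-2} A^T)^{-1} b with slack scaling S_k = diag(s^k); the function name, stopping test, and tiny LP are illustrative assumptions:

```python
import numpy as np

def dual_affine_scaling(A, b, c, y0, gamma=0.9, tol=1e-10, max_iter=500):
    """Sketch of dual affine scaling for: max b^T y s.t. A^T y + s = c, s >= 0."""
    y = y0.astype(float)
    for _ in range(max_iter):
        s = c - A.T @ y                              # dual slack, must stay > 0
        S_inv2 = np.diag(1.0 / s**2)                 # scaling by S_k^{-2}
        dy = np.linalg.solve(A @ S_inv2 @ A.T, b)    # moving direction (b^T dy > 0)
        ds = -A.T @ dy                               # induced slack direction
        if np.all(ds >= 0):
            break                                    # no slack blocks: dual unbounded
        alpha = gamma * np.min(-s[ds < 0] / ds[ds < 0])   # ratio test on s
        if alpha * abs(float(b @ dy)) < tol:
            break                                    # objective gain negligible
        y = y + alpha * dy
    return y

# Dual of the made-up LP min -x1 - 2*x2 s.t. x1 + x2 + x3 = 1, x >= 0:
# max y s.t. y <= -1, y <= -2, y <= 0, so y* = -2.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -2.0, 0.0])
y_star = dual_affine_scaling(A, b, c, np.array([-3.0]))
```

Note that b^T dy = b^T (A S^{-2} A^T)^{-1} b > 0, so every step increases the dual objective, matching Observation 3.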

Find an initial interior feasible solution

Properties of the (big-M) problem

Performance of dual affine scaling

- No polynomial-time proof.
- Computational bottleneck: forming and solving an m x m linear system at each iteration.
- Less sensitive to primal degeneracy and numerical errors, but sensitive to dual degeneracy.
- Improves the dual objective value very fast, but attains primal feasibility slowly.

Improving performance

1. Logarithmic barrier function method - admits a polynomial-time proof.

Improving performance

2. Power series method - basic idea: follow a higher-order trajectory.

Power series expansion

Karmarkar's projective scaling algorithm

Karmarkar's standard form: minimize c^T x subject to Ax = 0, e^T x = 1, x >= 0. The feasible domain is the intersection of the null space of A with the simplex {x : e^T x = 1, x >= 0}.

Terminologies Feasible solution Interior feasible solution

Basic assumptions
(A1) The center e/n of the simplex is feasible, i.e., Ae = 0.
(A2) The optimal objective value is zero.

Example

Where is the center?

In the 3-dimensional case, the center of the simplex is (1/3, 1/3, 1/3); in the n-dimensional case it is e/n.

How far away from center to boundaries?

How to rescale an interior point to the center?
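Karmarkar's projective transformation T(x) = X_k^{-1} x / (e^T X_k^{-1} x) maps the simplex onto itself and sends the current interior point x^k to the center e/n. A small sketch (the helper name and data are illustrative):

```python
import numpy as np

def projective_transform(x_current, x):
    """Karmarkar's projective scaling: T(x) = X_k^{-1} x / (e^T X_k^{-1} x),
    where X_k = diag(x_current). Sends x_current to the center e/n."""
    z = x / x_current          # componentwise X_k^{-1} x
    return z / z.sum()         # renormalize onto the simplex e^T y = 1

n = 4
x_k = np.array([0.1, 0.2, 0.3, 0.4])          # current interior point on the simplex
center = projective_transform(x_k, x_k)        # x_k maps to e/n
x_other = np.array([0.4, 0.3, 0.2, 0.1])
y_other = projective_transform(x_k, x_other)   # stays on the simplex, stays interior
```

Unlike the affine scaling map, this transformation is projective (nonlinear because of the normalization), which is what keeps the image inside the simplex.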

Example

Properties of projective scaling

Karmarkar s algorithm

Karmarkar s algorithm

Karmarkar's algorithm

Choice of step length

Example

Example

Performance analysis

Performance analysis