Linear Programming in Small Dimensions

Linear Programming in Small Dimensions
Lecture 7
sergio.cabello@fmf.uni-lj.si
FMF, Univerza v Ljubljani
Edited from slides by Antoine Vigneron

Outline
- linear programming: motivation and definition
- the one-dimensional case
- a randomized algorithm in two dimensions
- generalization to higher dimensions
- there are also deterministic algorithms, not covered here (presentation topic?)

Example
- you can build two kinds of houses: X and Y
- a house of type X requires 10,000 bricks, 4 doors and 5 windows
- a house of type Y requires 8,000 bricks, 2 doors and 10 windows
- a house of type X sells for $200,000 and a house of type Y sells for $250,000
- you have 168,000 bricks, 60 doors and 150 windows
- how many houses of each type should you build so as to maximize their total price?

Formulation
x (resp. y) denotes the number of houses of type X (resp. Y).
maximize f(x, y) = 200,000x + 250,000y
subject to the constraints
  x ≥ 0
  y ≥ 0
  10,000x + 8,000y ≤ 168,000
  4x + 2y ≤ 60
  5x + 10y ≤ 150
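
As a quick sanity check, the program above can be fed to an off-the-shelf solver. This is a minimal sketch using SciPy's linprog (an assumption of this writeup, not part of the lecture); linprog minimizes, so we negate the objective.

```python
# Sanity check of the house example with SciPy (assumes scipy is installed).
from scipy.optimize import linprog

c = [-200_000, -250_000]        # linprog minimizes, so negate to maximize
A_ub = [[10_000, 8_000],        # bricks
        [4, 2],                 # doors
        [5, 10]]                # windows
b_ub = [168_000, 60, 150]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # default bounds give x >= 0, y >= 0
print(res.x, -res.fun)                  # -> [ 8. 11.] 4350000.0
```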

Geometric interpretation
[Figure: the feasible region bounded by the lines 10,000x + 8,000y = 168,000, 4x + 2y = 60, and 5x + 10y = 150, together with a level line f(x, y) = constant.]

Geometric interpretation
[Figure: the same picture, with the optimal (x, y) at the vertex of the feasible region where the level line f(x, y) = constant is furthest in the direction of increasing f.]

Solution
- from the previous frame, at the optimum x = 8 and y = 11
- luckily these are integers, so this is also the solution to our problem
- if we add the constraint that all variables are integers, we are doing integer programming; we do not deal with it in CS4235
- we consider only linear inequalities, no other constraints
- our example was a special case where the linear program has an integer solution, hence it is also a solution to the integer program

Problem statement
maximize the objective function
  f(x_1, x_2, ..., x_d) = c_1 x_1 + c_2 x_2 + ... + c_d x_d = c · x
subject to the constraints
  a_{1,1} x_1 + ... + a_{1,d} x_d ≤ b_1
  a_{2,1} x_1 + ... + a_{2,d} x_d ≤ b_2
  ...
  a_{n,1} x_1 + ... + a_{n,d} x_d ≤ b_n
this is linear programming in dimension d

Geometric interpretation
- each constraint represents a half-space in R^d
- the intersection of these half-spaces forms the feasible region
- the feasible region is a convex polyhedron in R^d
[Figure: a convex feasible region with one bounding constraint highlighted.]

Convex polyhedra
- definition: a convex polyhedron is an intersection of half-spaces in R^d
- it is not necessarily bounded
- a bounded convex polyhedron is a polytope
- special case: a polytope in R^2 is a convex polygon

Convex polyhedra in R^3
[Figure: a tetrahedron, a cube, a cone.]
- the faces of a convex polyhedron in R^3 are its vertices, edges and facets
- example: a cube has 8 vertices, 12 edges and 6 facets

Geometric interpretation
- let c = (c_1, c_2, ..., c_d)
- we want to find a point v_opt of the feasible region such that c is the outer normal at v_opt, if there is one
- v_opt is the extreme point of the feasible region in the direction c
[Figure: the feasible region with c as outer normal at the vertex v_opt.]

Infeasible linear programs
- the feasible region can be empty
- in this case there is no solution to the linear program, and the program is said to be infeasible
- we would like to detect when this is the case

Unbounded linear programs
- the feasible region may be unbounded in the direction of c
- in this case the linear program is called unbounded
- we then want to return a ray ρ in the feasible region such that f takes arbitrarily large values along ρ
[Figure: an unbounded feasible region with a ray ρ pointing in the direction of c.]

Degenerate cases
- a linear program may have an infinite number of solutions, for instance when an edge of the feasible region is orthogonal to c
- in this case, we report only one solution
[Figure: a level line f(x, y) = opt lying along an edge of the feasible region.]

Background
- linear programming is one of the most important problems in operations research
- many optimization problems in engineering and in economics are linear programs
- a practical algorithm: the simplex algorithm
  - people used it even without computers
  - exponential time in the worst case
- there are polynomial-time algorithms: the ellipsoid method, interior point methods
- integer programming is NP-hard, so we do not expect a polynomial-time algorithm for it

Background
- computational geometry techniques give good algorithms in low dimension
- the running time is O(n) when d is constant, but with a large dependence on d: the best running time is roughly O(d²n) + d^{O(√d)}
- this is an example of a fixed-parameter algorithm
- this lecture: Seidel's algorithm
  - simple, randomized
  - expected running time O(d! n), which is O(n) when d = O(1)
  - good in practice for low dimension

One-dimensional case
maximize the objective function
  f(x) = cx
subject to the constraints
  a_1 x ≤ b_1
  a_2 x ≤ b_2
  ...
  a_n x ≤ b_n

Interpretation
- if a_i > 0 then constraint i corresponds to the interval (−∞, b_i/a_i]
- if a_i < 0 then constraint i corresponds to the interval [b_i/a_i, +∞)
- the feasible region is an intersection of intervals, so the feasible region is an interval

Interpretation
[Figure: on the real line R, a constraint with a_1 > 0 gives the interval (−∞, b_1/a_1], a constraint with a_2 < 0 gives [b_2/a_2, +∞), and the feasible region is the interval [L, R].]

Algorithm
- assume there are indices i_1, i_2 such that a_{i_1} < 0 < a_{i_2}
- compute R = min_{a_i > 0} b_i/a_i and L = max_{a_i < 0} b_i/a_i; this takes O(n) time
- if L > R then the program is infeasible
- otherwise, if c > 0 the solution is x = R, and if c < 0 the solution is x = L

Algorithm
- assume a_i > 0 for all i, and compute R = min b_i/a_i
  - if c > 0 then the solution is x = R
  - if c < 0 then the program is unbounded and the ray (−∞, R] is a solution
- assume a_i < 0 for all i, and compute L = max b_i/a_i
  - if c < 0 then the solution is x = L
  - if c > 0 then the program is unbounded and the ray [L, +∞) is a solution
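
The whole case analysis of the last two slides fits in a few lines. Here is a minimal Python sketch; the function name, the (a_i, b_i) pair representation and the handling of a_i = 0 are my additions, not from the lecture.

```python
import math

def solve_1d_lp(c, constraints):
    """Maximize c*x subject to a*x <= b for each (a, b) in constraints.

    Returns ('optimal', x), ('unbounded', ray) or ('infeasible', None).
    """
    L, R = -math.inf, math.inf               # feasible interval, initially all of R
    for a, b in constraints:
        if a > 0:
            R = min(R, b / a)                # interval (-inf, b/a]
        elif a < 0:
            L = max(L, b / a)                # interval [b/a, +inf)
        elif b < 0:
            return ('infeasible', None)      # 0*x <= b with b < 0: impossible
    if L > R:
        return ('infeasible', None)
    if c > 0:
        return ('optimal', R) if R < math.inf else ('unbounded', (L, math.inf))
    if c < 0:
        return ('optimal', L) if L > -math.inf else ('unbounded', (-math.inf, R))
    return ('optimal', max(L, min(R, 0.0)))  # c == 0: any feasible point works
```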

Linear programming in the plane
First idea:
- compute the feasible region, an intersection of half-planes; doable in O(n log n) time using duality and convex hulls, and the feasible region is a convex polygon
- find an optimal point: the extreme point along c can be found in O(log n) time
- overall, O(n log n) time
This lecture: an expected O(n)-time algorithm.
Main lesson: we can find an extreme point faster than computing the whole feasible region. This difference is more dramatic in higher dimensions.

Preliminary
- we only consider bounded linear programs
- we make sure that our linear program is bounded by enforcing two additional constraints m_1 and m_2
- objective function: f(x, y) = c_1 x + c_2 y; let M be a large number
  - if c_1 ≥ 0 then m_1 is x ≤ M; if c_1 < 0 then m_1 is x ≥ −M
  - if c_2 ≥ 0 then m_2 is y ≤ M; if c_2 < 0 then m_2 is y ≥ −M
- in practice, a bound often comes naturally; for instance, in our first example it is easy to see that M = 30 is sufficient

New constraints
[Figure: the constraints m_1 (x ≤ M) and m_2 (y ≤ M) form a corner containing the feasible region, with c pointing into it.]

Notation
- the i-th constraint is a_{i,1} x + a_{i,2} y ≤ b_i
- it defines a half-plane h_i
- l_i is the line delimiting h_i
[Figure: the line l_i bounding the half-plane h_i.]

General position
- we assume that c is not orthogonal to any line l_i
- then there is a unique solution to the linear program, and also a unique solution to the subprogram given by any subset of the constraints
- this can be achieved by simulating a small rotation of c: when there are several solutions, take the lexicographically smallest one

Algorithm
- a randomized incremental algorithm
- we first compute a random permutation (h_1, h_2, ..., h_n) of the constraints
- we denote H_i = {m_1, m_2, h_1, h_2, ..., h_i}
- we denote by v_i the vertex of the feasible region of H_i that maximizes the objective function; in other words, v_i is the solution to the linear program restricted to the first i constraints
- v_0 is simply the vertex of the boundary of m_1 ∩ m_2
- idea: knowing v_{i−1}, we insert h_i and find v_i

Example
[Figures: five snapshots of the incremental algorithm. v_0 is the corner of m_1 ∩ m_2; inserting h_1 moves the optimum to v_1; inserting h_2 leaves it unchanged (v_2 = v_1); inserting h_3 moves the optimum to v_3; inserting h_4 leaves it unchanged (v_4 = v_3). In each snapshot the level line f(x, y) = f(v_i) passes through the current optimum of the shrinking feasible region.]

Algorithm
- a randomized incremental algorithm: before inserting h_i, we already know v_{i−1}
- how do we find v_i? we distinguish two cases: v_{i−1} ∈ h_i and v_{i−1} ∉ h_i

First case
- first case: v_{i−1} ∈ h_i
[Figure: the half-plane h_i contains v_{i−1}.]
- then v_i = v_{i−1} (proof?)

Second case
- second case: v_{i−1} ∉ h_i
[Figure: the line l_i separates v_{i−1} from the half-plane h_i.]
- then v_{i−1} is not in the feasible region of H_i
- so v_i ≠ v_{i−1}

Second case
- what do we know about v_i?
[Figure: the new optimum v_i lies on l_i, on the boundary of the feasible region for H_{i−1}.]
- v_i ∈ l_i (proof?)
- how do we find v_i?

Second case
- assume a_{i,2} ≠ 0; then the equation of l_i is y = (b_i − a_{i,1} x)/a_{i,2}
- we replace y by (b_i − a_{i,1} x)/a_{i,2} in all the constraints of H_i and in the objective function
- we obtain a one-dimensional linear program
- if it is feasible, its solution gives us the x-coordinate of v_i, and we obtain the y-coordinate using the equation of l_i
- if this linear program is infeasible, then the original 2D linear program is infeasible too, and we are done
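
The substitution step is mechanical; here is a hedged Python sketch of it (the function name and the triple representation of constraints are mine). It produces input for a 1D solver such as solve_1d_lp above.

```python
def restrict_to_line(constraints, objective, i):
    """Restrict the 2D program to the line l_i: a_{i,1}x + a_{i,2}y = b_i.

    Assumes a_{i,2} != 0, as on the slide. Each constraint is a triple
    (a1, a2, b) meaning a1*x + a2*y <= b; objective is (c1, c2).
    Returns the 1D objective coefficient and a list of (a, b) meaning a*x <= b.
    """
    ai1, ai2, bi = constraints[i]
    one_d = []
    for j, (a1, a2, b) in enumerate(constraints):
        if j == i:
            continue
        # Substitute y = (bi - ai1*x)/ai2 into a1*x + a2*y <= b; replacing a
        # variable by an equal expression never flips the inequality sign.
        one_d.append((a1 - a2 * ai1 / ai2, b - a2 * bi / ai2))
    c1, c2 = objective
    c = c1 - c2 * ai1 / ai2    # objective along the line; constant term dropped
    return c, one_d
```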

Analysis
- in the case v_{i−1} ∈ h_i we spend O(1) time
- in the case v_{i−1} ∉ h_i we spend O(i) time to solve a one-dimensional linear program with i + 2 constraints
- so the algorithm runs in O(n²) time in the worst case (exercise: give an example where it runs in Ω(n²) time)
- what is the expected running time? we need to know how often the second case happens
- we define the random variable X_i:
  - X_i = 0 in the first case (v_i = v_{i−1})
  - X_i = 1 in the second case (v_i ≠ v_{i−1})

Analysis
- when X_i = 0 we spend O(1) time at the i-th step; when X_i = 1 we spend O(i) time
- so the expected time of the i-th step is
  Pr[X_i = 0] · O(1) + Pr[X_i = 1] · O(i) = O(1 + i · Pr[X_i = 1])
- by linearity of expectation, the expected running time E[T(n)] of the algorithm is
  E[T(n)] = O( Σ_{i=1}^{n} (1 + i · Pr[X_i = 1]) )

Analysis
- we denote C_i = ∩H_i; in other words, C_i is the feasible region at step i
- v_i is adjacent to two edges of C_i; these edges correspond to two constraints h and h′
[Figure: the vertex v_i of C_i incident to the two edges supported by h and h′.]
- if v_i ≠ v_{i−1} then h_i = h or h_i = h′

Analysis
- what is the probability that h_i = h or h_i = h′?
- we use backwards analysis: we assume that H_i is fixed, with no other assumption
- then h_i is a constraint chosen uniformly at random from H_i \ {m_1, m_2}
- so the probability that h_i = h or h_i = h′ is at most 2/i
- so Pr[X_i = 1] ≤ 2/i and
  E[T(n)] = O( Σ_{i=1}^{n} (1 + i · 2/i) ) = O(n)
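
Putting the pieces together, here is a sketch of the whole 2D algorithm, reusing solve_1d_lp and restrict_to_line from above. It is an illustration under simplifying assumptions (floating-point tolerance, no lexicographic tie-breaking), not the lecture's definitive implementation.

```python
import random

def seidel_2d(constraints, objective, M=1e9):
    """Expected O(n)-time 2D LP: maximize c1*x + c2*y subject to
    triples (a1, a2, b) meaning a1*x + a2*y <= b.

    M plays the role of the bounding constraints m_1, m_2.
    """
    c1, c2 = objective
    m1 = (1.0, 0.0, M) if c1 >= 0 else (-1.0, 0.0, M)   # x <= M or -x <= M
    m2 = (0.0, 1.0, M) if c2 >= 0 else (0.0, -1.0, M)   # y <= M or -y <= M
    H = [m1, m2] + random.sample(list(constraints), len(constraints))
    v = (M if c1 >= 0 else -M, M if c2 >= 0 else -M)    # v_0: corner of m1, m2
    for i in range(2, len(H)):
        a1, a2, b = H[i]
        if a1 * v[0] + a2 * v[1] <= b + 1e-9:           # first case: keep v
            continue
        if a2 != 0:                                     # second case: v_i on l_i
            c, one_d = restrict_to_line(H[:i + 1], objective, i)
        else:                                           # vertical l_i: x = b/a1
            c = c2
            one_d = [(q2, r - q1 * (b / a1))
                     for k, (q1, q2, r) in enumerate(H[:i + 1]) if k != i]
        status, t = solve_1d_lp(c, one_d)
        if status == 'infeasible':
            return None                                 # whole program infeasible
        assert status == 'optimal'                      # m_1, m_2 bound f above
        v = (t, (b - a1 * t) / a2) if a2 != 0 else (b / a1, t)
    return v
```

On the house example, seidel_2d([(10_000, 8_000, 168_000), (4, 2, 60), (5, 10, 150), (-1, 0, 0), (0, -1, 0)], (200_000, 250_000), M=30) returns approximately (8.0, 11.0), matching the solution found earlier.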

Conclusion
Lemma. We can solve a 2-dimensional linear program with n constraints in expected O(n) time, assuming that we have bounding constraints m_1, m_2 on the optimal solution.
- did we use a general position assumption, such as no three of the lines l_1, ..., l_n passing through a common point?
- can we get rid of the assumption on m_1, m_2?

Higher dimensions
- each constraint is a half-space; can we compute their intersection and get the feasible region?
- in higher dimension, the feasible region has Ω(n^⌊d/2⌋) vertices in the worst case, so computing the feasible region requires Ω(n^⌊d/2⌋) time: too much
- here, we give an O(n) expected-time algorithm for d = O(1)

Preliminary
- a hyperplane in R^d is a set with equation α_1 x_1 + α_2 x_2 + ... + α_d x_d = β, where (α_1, α_2, ..., α_d) ∈ R^d \ {0}
- in general position, d hyperplanes intersect at one point
- each constraint h_i is a half-space, bounded by a hyperplane ∂h_i
- we assume general position, in particular that no hyperplane is orthogonal to c

Algorithm
- we generalize the 2D algorithm
- we first add d constraints m_1, m_2, ..., m_d that make the linear program bounded:
  - if c_i ≥ 0 then m_i is x_i ≤ M
  - if c_i < 0 then m_i is x_i ≥ −M
- we pick a random permutation (h_1, h_2, ..., h_n) of H
- then H_i is {m_1, m_2, ..., m_d, h_1, h_2, ..., h_i}
- we maintain v_i, the solution to the linear program with constraints H_i and objective function f
- v_0 is the vertex of ∩_{i=1}^{d} m_i

Algorithm
- compute the vertices v_0, v_1, ..., v_n
- inserting h_i is done in the same way as in R^2:
  - if v_{i−1} ∈ h_i then v_i = v_{i−1}
  - if v_{i−1} ∉ h_i then v_i lies on ∂h_i: we find v_i by solving the linear program with the i − 1 + d constraints m_1, ..., m_d, h_1, ..., h_{i−1} restricted to ∂h_i, which is a (d − 1)-dimensional linear program
- if this linear program is infeasible, then the original linear program is infeasible too, and we are done
- this step can be done in expected O(i) time (by induction on the dimension)

Analysis
- what is the probability that v_i ≠ v_{i−1}?
- fix the d constraints of H_i that define v_i
- the probability that v_i ≠ v_{i−1} is bounded by the probability that one of these d constraints was inserted last
- by backwards analysis, this probability is at most d/i
- so the expected running time of our algorithm is
  E[T(n)] = O( Σ_{i=1}^{n} (1 + i · d/i) ) = O(dn) = O(n), assuming that d = O(1)
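
To see where the O(d! n) bound on the next slide comes from, one can write out the recurrence hidden in this analysis; the following derivation is my reconstruction, not from the slides.

```latex
% Step i tests v_{i-1} \in h_i in O(d) time; with probability at most d/i it
% substitutes into i + d constraints (O(di) time) and recurses one dimension down:
\[
  T_d(n) \;\le\; \sum_{i=1}^{n}\Bigl( O(d) + \tfrac{d}{i}\bigl( O(d\,i) + T_{d-1}(i+d) \bigr) \Bigr),
  \qquad T_1(n) = O(n).
\]
% Unrolling with T_{d-1}(i) = O((d-1)!\,i) gives \tfrac{d}{i} \cdot O((d-1)!\,i) = O(d!),
% so T_d(n) = O(d^2 n + d!\,n) = O(d!\,n).
```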

Conclusion
- this algorithm can be made to handle unbounded linear programs and degenerate cases
- a careful implementation runs in O(d! n) expected time, so it is only useful in low dimension
- it can be generalized to other types of problems:
  - LP-type problems (presentation topic?)
  - smallest enclosing disk (presentation topic?)
- sometimes we can linearize a problem and then use a linear programming algorithm, e.g. finding an enclosing annulus with minimal area

The big result
Theorem. Let d be a constant. A linear program in R^d with n constraints can be solved in expected O(n) time.
In particular:
Corollary. Let d be a constant. We can decide in expected O(n) time whether the intersection of n half-spaces in R^d is empty or not. If it is nonempty, we can also find a point in the intersection within the same time bound.