CS 372: Computational Geometry
Lecture 10: Linear Programming in Fixed Dimension
Antoine Vigneron, King Abdullah University of Science and Technology
November 7, 2012

Outline
1. Introduction
2. One-dimensional case
3. Two-dimensional linear programming
4. Generalization to higher dimension

Outline
This lecture is on linear programming, in the special case of fixed dimension, meaning d = O(1) variables. Even without this restriction, there are polynomial-time algorithms; for any fixed dimension, we give a simple linear-time algorithm.
References: textbook Chapter 4, and Dave Mount's lecture notes, lectures 8 to 10.

Example
A factory can make two types of products: X and Y. A product of type X requires 10 hours of manpower, 4 l of oil, and 5 m³ of storage. A product of type Y requires 8 hours, 2 l of oil, and 10 m³ of storage. A product X can be sold for $200 and a product Y for $250. You have 168 hours of manpower available, as well as 60 l of oil and 150 m³ of storage. How many products of each type should you make so as to maximize their total price?

Formulation
Let x and y denote the number of products of type X and Y, respectively. Maximize the price f(x, y) = 200x + 250y under the constraints
x ≥ 0
y ≥ 0
10x + 8y ≤ 168
4x + 2y ≤ 60
5x + 10y ≤ 150.
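As a quick sanity check of this formulation (not part of the original slides), the snippet below solves the same instance with SciPy's linprog; since linprog minimizes, the objective is negated. SciPy being available is an assumption.

```python
# Hypothetical check of the factory example with SciPy (assumed installed).
from scipy.optimize import linprog

c = [-200, -250]          # negate to maximize 200x + 250y
A_ub = [[10, 8],          # manpower: 10x + 8y <= 168
        [4, 2],           # oil:       4x + 2y <= 60
        [5, 10]]          # storage:   5x + 10y <= 150
b_ub = [168, 60, 150]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # expected: approximately [8, 11] and 4350
```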

Geometric Interpretation
[Figure: the feasible region bounded by the lines 10x + 8y = 168, 4x + 2y = 60 and 5x + 10y = 150, with a level line f(x, y) = constant.]

Geometric Interpretation
[Figure: the same feasible region, with the level line f(x, y) = constant pushed to the optimal vertex (x, y).]

Solution
From the previous slide, at the optimum: x = 8, y = 11. Luckily these are integers, so this is the solution to our problem. If we add the constraint that all variables are integers, we are doing integer programming; we do not deal with it in CS 372. We consider only linear inequalities, no other constraints. Our example was a special case where the linear program has an integer solution, hence it is also a solution to the integer program.

Problem Statement
Maximize the objective function f(x_1, x_2, ..., x_d) = c_1 x_1 + c_2 x_2 + ... + c_d x_d under the constraints
a_{1,1} x_1 + ... + a_{1,d} x_d ≤ b_1
a_{2,1} x_1 + ... + a_{2,d} x_d ≤ b_2
...
a_{n,1} x_1 + ... + a_{n,d} x_d ≤ b_n.
This is linear programming in dimension d. Equivalently, the goal could be to minimize f.
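The same problem in compact matrix form (a standard restatement, not taken from the slides):

```latex
\max_{x \in \mathbb{R}^d} \; c^{\mathsf{T}} x
\quad \text{subject to} \quad A x \le b,
\qquad A \in \mathbb{R}^{n \times d},\ b \in \mathbb{R}^n,\ c \in \mathbb{R}^d .
```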

Geometric Interpretation
Each constraint represents a half-space in R^d. The intersection of these half-spaces forms the feasible region, which is a convex polyhedron in R^d.
[Figure: a feasible region and one of its constraints.]

Convex Polyhedra
Definition (Convex polyhedron). A convex polyhedron is an intersection of half-spaces in R^d. It may also be called a convex polytope. A convex polyhedron is not necessarily bounded. Special case: a convex polygon is a bounded convex polytope in R^2.

Convex Polyhedra in R^3
[Figure: a tetrahedron, a cube, a cone.]
Faces of a convex polyhedron in R^3: vertices, edges and facets. Example: a cube has 8 vertices, 12 edges and 6 facets.

Geometric Interpretation
Let c = (c_1, c_2, ..., c_d). We want to find a point v_opt of the feasible region such that c is an outer normal at v_opt, if there is one.
[Figure: the feasible region with the objective direction c and the optimal vertex v_opt.]

Infeasible Linear Programs
The feasible region may be empty. In this case there is no solution to the linear program, and the program is said to be infeasible. We would like to detect when this is the case.

Unbounded Linear Programs
The feasible region may be unbounded in the direction of c. In this case, we say that the linear program is unbounded. Then we want to find a ray ρ in the feasible region along which f takes arbitrarily large values.
[Figure: an unbounded feasible region with such a ray ρ and the objective direction c.]

Degenerate Cases
A linear program may have an infinite number of optimal solutions.
[Figure: a level line f(x, y) = opt containing an entire edge of the feasible region, with objective direction c.]
In this case, we report only one solution.

Background
Many optimization problems in engineering, operations research, and economics are linear programs. A practical algorithm: the simplex algorithm, which takes exponential time in the worst case. There are polynomial-time algorithms, such as interior-point methods. Integer programming is NP-hard.

Background
Computational geometry techniques give good algorithms in low dimension: running time O(n) when d is constant, but exponential in d, namely O(3^(d²) n). This lecture: Seidel's algorithm. It is simple and randomized, with expected running time O(d! n); this is O(n) when d = O(1). In practice, it is very good in low dimension.

One-Dimensional Case
Maximize the objective function f(x) = cx under the constraints
a_1 x ≤ b_1
a_2 x ≤ b_2
...
a_n x ≤ b_n.

Interpretation
If a_i > 0 then constraint i corresponds to the interval (-∞, b_i/a_i]. If a_i < 0 then constraint i corresponds to the interval [b_i/a_i, +∞). The feasible region is an intersection of intervals, so the feasible region is an interval.

Interpretation
[Figure: the real line with the upper bound b_1/a_1 (from a_1 > 0) and the lower bound b_2/a_2 (from a_2 < 0); the feasible region is the interval [L, R].]

Algorithm
Case 1: there exist (i_1, i_2) such that a_{i_1} < 0 < a_{i_2}. Compute
R = min over {i : a_i > 0} of b_i/a_i,
L = max over {i : a_i < 0} of b_i/a_i.
It takes O(n) time. If L > R then the program is infeasible. Otherwise, if c > 0 then the solution is x = R, and if c < 0 then the solution is x = L.

Algorithm
Case 2: a_i > 0 for all i. Compute R = min_i b_i/a_i. If c > 0 then the solution is x = R. If c < 0 then the program is unbounded and the ray (-∞, R] is a solution.
Case 3: a_i < 0 for all i. Compute L = max_i b_i/a_i. If c < 0 then the solution is x = L. If c > 0 then the program is unbounded and the ray [L, +∞) is a solution.
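The Python sketch below (my own code and naming, not from the slides) implements the three cases above for maximizing c·x subject to a_i·x ≤ b_i, assuming c ≠ 0 and a_i ≠ 0 for every constraint.

```python
import math

def solve_1d_lp(c, constraints):
    """Maximize c*x subject to a*x <= b for each (a, b) in constraints.

    Returns ("optimal", x), ("unbounded", ray) or ("infeasible", None).
    Sketch of the case analysis from the slides; assumes c != 0 and a != 0.
    """
    L, R = -math.inf, math.inf              # feasible region is the interval [L, R]
    for a, b in constraints:
        if a > 0:
            R = min(R, b / a)               # x <= b/a
        else:
            L = max(L, b / a)               # x >= b/a
    if L > R:
        return ("infeasible", None)
    if c > 0:
        return ("optimal", R) if R < math.inf else ("unbounded", (L, math.inf))
    return ("optimal", L) if L > -math.inf else ("unbounded", (-math.inf, R))

# Example: maximize 3x subject to 2x <= 10 and -x <= 4 (i.e. x >= -4).
print(solve_1d_lp(3, [(2, 10), (-1, 4)]))   # -> ("optimal", 5.0)
```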

Two-Dimensional Linear Programming
First approach: compute the feasible region, in O(n log n) time by divide-and-conquer plus plane sweep (another method will be seen later, in the lecture on duality). The feasible region is a convex polyhedron; finding an optimal point in it can be done in O(log n) time. Overall, this takes O(n log n) time. This lecture: an expected O(n)-time algorithm.

Preliminary
We only consider bounded linear programs. We make sure that our linear program is bounded by enforcing two additional constraints m_1 and m_2. The objective function is f(x, y) = c_1 x + c_2 y. Let M be a large number.
If c_1 ≥ 0 then m_1 is x ≤ M; if c_1 < 0 then m_1 is x ≥ -M.
If c_2 ≥ 0 then m_2 is y ≤ M; if c_2 < 0 then m_2 is y ≥ -M.
In practice, such a bound often comes naturally. For instance, in our first example, it is easy to see that M = 30 is sufficient.

New Constraints
[Figure: the bounding constraints m_1 (x ≤ M) and m_2 (y ≤ M) for an objective direction c with nonnegative coordinates.]

Notation
The i-th constraint is a_{i,1} x + a_{i,2} y ≤ b_i. It defines a half-plane h_i. Let ℓ_i denote the line that bounds h_i.
[Figure: the half-plane h_i and its bounding line ℓ_i.]

Algorithm
A randomized incremental algorithm. We first compute a random permutation (h_1, h_2, ..., h_n) of the constraints. We denote H_i = {m_1, m_2, h_1, h_2, ..., h_i}. We denote by v_i a vertex of the feasible region of H_i that maximizes the objective function; in other words, v_i is a solution to the linear program where we only consider the first i constraints (together with m_1 and m_2). The vertex v_0 is simply the vertex of the boundary of m_1 ∩ m_2. Idea: knowing v_{i-1}, we insert h_i and find v_i.

Example
[Figure: the starting vertex v_0, at the corner of m_1 ∩ m_2, with the objective direction c, a level line f(x, y) = constant, and the feasible region.]

Example
[Figure: after inserting h_1, the optimum moves to v_1, on the level line f(x, y) = f(v_1).]

Example
[Figure: inserting h_2 does not change the optimum: v_2 = v_1.]

Example
[Figure: inserting h_3 moves the optimum to v_3, on the level line f(x, y) = f(v_3).]

Example
[Figure: inserting h_4 does not change the optimum: v_4 = v_3.]

Algorithm
This is a randomized incremental algorithm. Before inserting h_i, we only assume that we know v_{i-1}. How can we find v_i?

First Case
First case: v_{i-1} ∈ h_i.
[Figure: the new half-plane h_i contains v_{i-1}.]
Then v_i = v_{i-1}. Proof?

Second Case
Second case: v_{i-1} ∉ h_i.
[Figure: the new half-plane h_i, bounded by ℓ_i, does not contain v_{i-1}; the feasible region for H_{i-1} is shown.]
Then v_{i-1} is not in the feasible region of H_i, so v_i ≠ v_{i-1}. What do we know about v_i?

Second Case
There exists an optimal solution v_i ∈ ℓ_i.
[Figure: the new optimum v_i lies on the line ℓ_i bounding h_i.]
Proof? How can we find v_i?

Second Case
Assume that a_{i,2} ≠ 0. Then the equation of ℓ_i is y = (b_i - a_{i,1} x) / a_{i,2}. We replace y with (b_i - a_{i,1} x) / a_{i,2} in all the constraints of H_i and in the objective function. We obtain a one-dimensional linear program, with variable x. If it is feasible, its solution gives us the x-coordinate of v_i, and we obtain the y-coordinate using the equation of ℓ_i. If this linear program is infeasible, then the original 2D linear program is infeasible too and we are done.
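To make the incremental step concrete, here is a compact Python sketch of the whole 2D algorithm (my own code and naming, not the textbook's pseudocode). It adds the bounding constraints m_1, m_2, processes the other constraints in random order, and in the second case maximizes the objective along ℓ_i directly, assuming general position (no constraint line orthogonal to c).

```python
# Sketch of the 2D randomized incremental algorithm (my own code, not the
# textbook's pseudocode). Assumes general position: no constraint line is
# orthogonal to c, and (a, b) != (0, 0) for every constraint a*x + b*y <= r.
import math
import random

def solve_2d_lp(c, constraints, M=1e9):
    """Maximize c[0]*x + c[1]*y subject to a*x + b*y <= r for (a, b, r) in constraints.

    Returns ("optimal", (x, y)) or ("infeasible", None). The artificial
    constraints m1, m2 bound the program, as on the slides.
    """
    m1 = (1.0, 0.0, M) if c[0] >= 0 else (-1.0, 0.0, M)   # x <= M or x >= -M
    m2 = (0.0, 1.0, M) if c[1] >= 0 else (0.0, -1.0, M)   # y <= M or y >= -M
    v = (M if c[0] >= 0 else -M, M if c[1] >= 0 else -M)  # v0: vertex of m1 ∩ m2

    H = [m1, m2]
    order = list(constraints)
    random.shuffle(order)                    # random insertion order

    for (a, b, r) in order:
        if a * v[0] + b * v[1] <= r + 1e-9:  # first case: v_{i-1} satisfies h_i
            H.append((a, b, r))
            continue
        # Second case: the new optimum lies on the line ell_i: a*x + b*y = r.
        # Parametrize ell_i as P + t*D and maximize along it (a 1D LP in t).
        P = (0.0, r / b) if abs(b) > abs(a) else (r / a, 0.0)
        D = (-b, a)
        c1d = c[0] * D[0] + c[1] * D[1]      # objective slope along the line
        lo, hi = -math.inf, math.inf         # feasible interval [lo, hi] for t
        feasible = True
        for (p, q, s) in H:                  # previously inserted constraints
            coef = p * D[0] + q * D[1]
            rhs = s - (p * P[0] + q * P[1])
            if abs(coef) < 1e-12:            # constraint parallel to ell_i
                if rhs < -1e-9:
                    feasible = False
                    break
            elif coef > 0:
                hi = min(hi, rhs / coef)     # t <= rhs/coef
            else:
                lo = max(lo, rhs / coef)     # t >= rhs/coef
        if not feasible or lo > hi + 1e-9:
            return ("infeasible", None)
        t = hi if c1d > 0 else lo            # c1d != 0 by general position
        v = (P[0] + t * D[0], P[1] + t * D[1])
        H.append((a, b, r))
    return ("optimal", v)

# Factory example (x >= 0 and y >= 0 written as -x <= 0, -y <= 0):
cons = [(10, 8, 168), (4, 2, 60), (5, 10, 150), (-1, 0, 0), (0, -1, 0)]
print(solve_2d_lp((200, 250), cons, M=30))   # approximately ("optimal", (8.0, 11.0))
```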

Analysis
The first case is done in O(1) time: just check whether v_{i-1} ∈ h_i. The second case takes O(i) time: a one-dimensional linear program with i + 2 constraints. So the algorithm runs in O(n²) time. Is there a worst-case example where it runs in Ω(n²) time? What is the expected running time? We need to know how often the second case happens. We define the random variable X_i:
X_i = 0 in the first case (v_i = v_{i-1}),
X_i = 1 in the second case (v_i ≠ v_{i-1}).

Analysis
When X_i = 0 we spend O(1) time at the i-th step. When X_i = 1 we spend O(i) time. So the expected running time is
E[T(n)] = O( ∑_{i=1}^{n} (1 + i · E[X_i]) ).
Note: E[X_i] is simply the probability that X_i = 1 in our case.

Analysis
We denote C_i = ∩H_i; in other words, C_i is the feasible region at step i. The vertex v_i is adjacent to two edges of C_i, and these edges correspond to two constraints h and h′.
[Figure: the vertex v_i of C_i and its two incident edges, defined by h and h′.]
If v_i ≠ v_{i-1}, then h_i = h or h_i = h′.

Analysis
What is the probability that h_i = h or h_i = h′? We use backwards analysis. We assume that H_i is fixed, so h_i is chosen uniformly at random in H_i \ {m_1, m_2}. So the probability that h_i = h or h_i = h′ is at most 2/i (it could be 1/i or 0 if v_i is defined by m_1 and/or m_2). So E[X_i] ≤ 2/i, and
E[T(n)] = O( ∑_{i=1}^{n} (1 + i · 2/i) ) = O(n).

Generalization to Higher Dimension
First attempt: each constraint is a half-space; can we compute their intersection and get the feasible region? In R^3 it can be done in O(n log n) time (not covered in CS 372). In higher dimension, the feasible region has Ω(n^⌊d/2⌋) vertices in the worst case, so computing the feasible region requires Ω(n^⌊d/2⌋) time. Here, we will give an O(n) expected-time algorithm for finding one optimal point of the feasible region, when d = O(1).

Preliminary
A hyperplane in R^d has equation α_1 x_1 + α_2 x_2 + ... + α_d x_d = β, where (α_1, α_2, ..., α_d) ∈ R^d \ {(0, ..., 0)}. In general position, d hyperplanes intersect at one point. Each constraint h_i is a half-space, bounded by a hyperplane. We assume general position in the sense that: any d of these bounding hyperplanes intersect at exactly one point; the intersection of any d + 1 of them is empty; none of them is orthogonal to c.

Algorithm
We generalize the 2D algorithm. We first find d constraints m_1, m_2, ..., m_d that make the linear program bounded:
If c_i ≥ 0 then m_i is x_i ≤ M.
If c_i < 0 then m_i is x_i ≥ -M.
We pick a random permutation (h_1, h_2, ..., h_n) of H. Then H_i is {m_1, m_2, ..., m_d, h_1, h_2, ..., h_i}. We maintain v_i, the solution to the linear program with constraints H_i and objective function f. The vertex v_0 is the vertex of ∩_{i=1}^{d} m_i.

Algorithm
We assume d = O(1). Inserting h_i is done in the same way as in R^2: if v_{i-1} ∈ h_i then v_i = v_{i-1}. Otherwise, v_i lies on the hyperplane bounding h_i, and we find it by solving a linear program with i + d constraints in R^{d-1}. If this linear program is infeasible, then the original linear program is infeasible too, so we are done. This can be done in expected O(i) time (by induction).

Analysis
What is the probability that v_i ≠ v_{i-1}? By our general position assumption, v_i belongs to exactly d hyperplanes that bound constraints in H_i. The probability that v_i ≠ v_{i-1} is the probability that one of these d constraints was inserted last; by backwards analysis, it is at most d/i. So the expected running time of our algorithm is
E[T(n)] = O( ∑_{i=1}^{n} (1 + i · d/i) ) = O(dn) = O(n).
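The bound above hides the dependence on d of the recursive (d-1)-dimensional solves; one standard way to see where the O(d! n) bound quoted next comes from is the following recurrence (my reconstruction, not from the slides), where T(d, n) is the expected running time in dimension d with n constraints:

```latex
% Sketch, under the general position assumptions above.
T(1, n) = O(n),
\qquad
T(d, n) \;\le\; \sum_{i=1}^{n} \left( O(d) + \frac{d}{i}\Bigl( O(d\, i) + T(d-1,\, i) \Bigr) \right).
% Writing T(d-1, i) <= C_{d-1} * i, each term of the sum is O(d^2) + d*C_{d-1},
% so T(d, n) <= C_d * n with C_d = O(d*C_{d-1} + d^2), hence C_d = O(d!).
```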

Conclusion
This algorithm can be made to handle unbounded linear programs and degenerate cases. A careful implementation of this algorithm runs in expected O(d! n) time, so it is only useful in low dimension. It can be generalized to other types of problems; see the textbook for the smallest enclosing disk. Sometimes we can linearize a problem and use a linear programming algorithm; we will see such cases in homework or tutorial (example: finding the enclosing annulus with minimum area).