# / Approximation Algorithms

**Lecturer:** Michael Dinitz &nbsp; **Topic:** Linear Programming &nbsp; **Date:** 2/24/15 &nbsp; **Scribe:** Runze Tang


## 9.1 Linear Programming

Suppose we are trying to approximate a minimization problem (everything will hold for maximization problems too, but with the inequalities reversed). A general way to prove an $\alpha$-approximation is as follows:

1. Prove $OPT \geq LB$.
2. Prove $ALG \leq \alpha \cdot LB$.

Then we can conclude $ALG \leq \alpha \cdot OPT$. This is how essentially all of our analyses have worked so far.

## 9.2 Integer Linear Programs

An integer linear program (ILP) consists of:

- a set $\{x_1, x_2, \dots, x_n\}$ of variables, each of which must be an integer,
- $m$ linear inequalities over the variables,
- possibly a linear objective function over the variables.

## 9.3 LP Rounding

We want to round a solution of the LP to get a feasible solution to the ILP. The general steps are as follows:

1. Write the ILP.
2. Relax it to an LP (i.e. remove the integrality constraints).
3. Solve the LP, getting a solution $x^*$.
4. Round $x^*$ to integer values to get a solution to the ILP.

Key ideas of rounding:

1. $LP \leq OPT$, since the LP is a relaxation. Slightly more formally, any integral solution is obviously a solution to the LP, and hence the optimal LP value is at most the optimal ILP value, which equals OPT.

2. We design our rounding in order to guarantee that the integral solution we get back costs at most $\alpha \cdot LP$.

Putting these together, we get that $ALG \leq \alpha \cdot OPT$. And instead of having to find some mysterious lower bound LB to compare the algorithm to, we can compare the output of the algorithm to the fractional LP solution, and only analyze how much our rounding increased the cost.

## 9.4 Weighted Vertex Cover

### 9.4.1 Problem Description

- Input: a graph $G = (V, E)$ and a cost function $c : V \to \mathbb{R}^+$.
- Feasible solution: $S \subseteq V$ such that for every $\{u, v\} \in E$, either $u \in S$ or $v \in S$.
- Objective: minimize $\sum_{v \in S} c(v)$, i.e. minimize $c(S)$.

### 9.4.2 Equivalent Integer Linear Program

$$
\begin{aligned}
\text{minimize:} \quad & \sum_{v \in V} c(v)\, x_v \\
\text{subject to:} \quad & x_u + x_v \geq 1 && \text{for each edge } \{u, v\} \in E \\
& x_v \in \{0, 1\} && \text{for each vertex } v \in V
\end{aligned}
$$

It is easy to see that this ILP is an exact formulation of the weighted vertex cover problem. For one direction, suppose that we are given a vertex cover $S$. Then setting $x_v = 1$ if $v \in S$ and $x_v = 0$ if $v \notin S$ gives a solution to the ILP with cost exactly $c(S)$. For the other direction, suppose that we are given a solution $x$ to the ILP. Then let $S = \{v \in V : x_v = 1\}$. Then $S$ is a vertex cover with cost exactly equal to the objective value of the ILP on $x$. Hence the ILP and weighted vertex cover are equivalent.

### 9.4.3 Relax to a Linear Program

Now we drop the integrality constraints to get the LP:

$$
\begin{aligned}
\text{minimize:} \quad & \sum_{v \in V} c(v)\, x_v \\
\text{subject to:} \quad & x_u + x_v \geq 1 && \text{for each edge } \{u, v\} \in E \\
& 0 \leq x_v \leq 1 && \text{for each vertex } v \in V
\end{aligned}
$$

**Theorem** If $x$ is feasible for the ILP, then $x$ is feasible for the LP.

*Proof:* $x$ satisfies the edge constraints of the LP since it satisfies them for the ILP, and $x_v \in \{0, 1\}$ for each $v \in V$ due to the integrality constraints of the ILP, hence $0 \leq x_v \leq 1$.
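To make step 3 of the rounding pipeline concrete, here is a small sketch (not from the notes) of solving this LP relaxation with `scipy.optimize.linprog`. The instance, a triangle $K_3$ with unit costs, is made up for illustration.

```python
# Sketch: solve the weighted vertex cover LP relaxation with scipy.
# The instance (a triangle K_3 with unit costs) is made up for illustration.
from scipy.optimize import linprog

def vertex_cover_lp(n, edges, cost):
    """Return an optimal fractional solution x* and its value."""
    # linprog wants A_ub @ x <= b_ub, so x_u + x_v >= 1 becomes -x_u - x_v <= -1.
    A_ub, b_ub = [], []
    for u, v in edges:
        row = [0.0] * n
        row[u] = row[v] = -1.0
        A_ub.append(row)
        b_ub.append(-1.0)
    res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
    return list(res.x), res.fun

# On K_3 the LP optimum is 3/2, achieved (uniquely) by x_v = 1/2 for every vertex.
x_star, lp_value = vertex_cover_lp(3, [(0, 1), (1, 2), (0, 2)], [1.0, 1.0, 1.0])
```

Note that the fractional optimum can be strictly below any integral cover's cost (here $3/2$ versus $2$), which is exactly the gap a rounding step has to absorb.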

**Corollary** $OPT(LP) \leq OPT(ILP)$.

*Proof:* By the theorem above, the optimal solution to the ILP is feasible for the LP.

**Corollary** If we find a solution of cost at most $\alpha \cdot OPT(LP)$, then we have an $\alpha$-approximation algorithm.

*Proof:* $cost(SOL) \leq \alpha \cdot OPT(LP) \leq \alpha \cdot OPT(ILP)$.

### 9.4.4 LP Rounding

As always, we first solve the LP to get an optimal fractional solution $x^*$. We can do this in polynomial time since the LP has size polynomial in the size of the instance. We then round $x^*$ to an integral solution $\bar{x}$ as follows: for each $v \in V$, we set $\bar{x}_v = 1$ if $x^*_v \geq \frac{1}{2}$, and set $\bar{x}_v = 0$ otherwise.

**Theorem** $\bar{x}$ is a feasible solution to the ILP.

*Proof:* For any $\{u, v\} \in E$, we have $x^*_u + x^*_v \geq 1$, which implies that $\max(x^*_u, x^*_v) \geq \frac{1}{2}$. So either $\bar{x}_u = 1$ or $\bar{x}_v = 1$. Thus $\bar{x}_u + \bar{x}_v \geq 1$.

**Theorem** $c(\bar{x}) \leq 2\, c(x^*)$.

*Proof:* $c(\bar{x}) = \sum_{v \in V} c(v)\, \bar{x}_v = \sum_{\{v \in V :\, x^*_v \geq 1/2\}} c(v) \leq \sum_{v \in V} c(v) \cdot 2 x^*_v = 2\, c(x^*)$.

**Corollary** This rounding is a 2-approximation algorithm.

## 9.5 Weighted Set Cover

### 9.5.1 Problem Description

The weighted set cover problem is the set cover problem with a cost on each set.

### 9.5.2 Equivalent Integer Linear Program

$$
\begin{aligned}
\text{minimize:} \quad & \sum_{S \in \mathcal{S}} c(S)\, x_S \\
\text{subject to:} \quad & \sum_{\{S :\, e \in S\}} x_S \geq 1 && \text{for each element } e \in U \\
& x_S \in \{0, 1\} && \text{for each set } S \in \mathcal{S}
\end{aligned}
$$

This is clearly an exact formulation: it is easy to see that solutions to the ILP correspond to set covers with the same cost, and vice versa.
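The $\frac{1}{2}$-threshold rounding for vertex cover above takes only a few lines; in this sketch the fractional solution is the $K_3$ LP optimum $x^*_v = \frac{1}{2}$, chosen purely for illustration.

```python
# Sketch of the 1/2-threshold rounding for vertex cover; the instance and
# fractional solution (the K_3 LP optimum) are for illustration only.
def round_half(x_frac):
    # Set x_v = 1 exactly when x*_v >= 1/2.
    return [1 if xv >= 0.5 else 0 for xv in x_frac]

def is_vertex_cover(edges, x_int):
    # Feasible iff every edge has at least one endpoint selected.
    return all(x_int[u] + x_int[v] >= 1 for u, v in edges)

edges = [(0, 1), (1, 2), (0, 2)]
x_frac = [0.5, 0.5, 0.5]       # optimal fractional solution on K_3
x_int = round_half(x_frac)     # every coordinate crosses the threshold
cost_int, cost_frac = sum(x_int), sum(x_frac)
```

Here the rounded cover costs 3 while the LP optimum is 1.5, so the factor-2 guarantee is tight on this instance.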

### 9.5.3 Relax to a Linear Program

$$
\begin{aligned}
\text{minimize:} \quad & \sum_{S \in \mathcal{S}} c(S)\, x_S \\
\text{subject to:} \quad & \sum_{\{S :\, e \in S\}} x_S \geq 1 && \text{for each element } e \in U \\
& 0 \leq x_S \leq 1 && \text{for each set } S \in \mathcal{S}
\end{aligned}
$$

### 9.5.4 LP Rounding

In a previous lecture on set cover we defined the frequency of an element $e$ to be $f_e = |\{S \in \mathcal{S} : e \in S\}|$ and defined $f = \max_{e \in U} f_e$. Given a solution $x^*$ to the LP, set $\bar{x}_S = 1$ if $x^*_S \geq \frac{1}{f}$, and set $\bar{x}_S = 0$ otherwise.

**Theorem** $\bar{x}$ is a feasible solution to the ILP.

*Proof:* Since each element is in at most $f$ sets, the LP constraint $\sum_{\{S :\, e \in S\}} x^*_S \geq 1$ implies that $\max_{\{S :\, e \in S\}} x^*_S \geq \frac{1}{f}$. Thus there exists at least one $S$ which contains $e$ such that $\bar{x}_S = 1$. So $\sum_{\{S :\, e \in S\}} \bar{x}_S \geq 1$.

**Corollary** This is an $f$-approximation algorithm.

*Proof:* By the previous theorem $\bar{x}$ is a feasible set cover. For its cost, note that $\bar{x}_S = 1$ only when $x^*_S \geq \frac{1}{f}$, so $\bar{x}_S \leq f x^*_S$ for every $S$. Hence $c(\bar{x}) = \sum_{S \in \mathcal{S}} c(S)\, \bar{x}_S \leq f \sum_{S \in \mathcal{S}} c(S)\, x^*_S = f \cdot c(x^*)$.

## 9.6 Integrality Gap

**Definition** The integrality gap of an LP is
$$\sup_{\text{instance } I} \frac{OPT(I)}{LP(I)}.$$

The integrality gap of an LP measures the strength of the relaxation. If we prove that $ALG \leq \alpha \cdot OPT$ by proving that $ALG \leq \alpha \cdot LP$ (as we have been doing with rounding), then we cannot give a ratio $\alpha$ that is better than the integrality gap (or else, on the instance $I$ which achieves the integrality gap, we would be able to round the LP solution to a value less than OPT, giving a contradiction).

### 9.6.1 Integrality Gap for Weighted Vertex Cover

Consider the instance $G = K_n$, the complete graph on $n$ vertices, with $c(v) = 1$ for all $v \in V$.

**Theorem** The integrality gap is at least $2\left(1 - \frac{1}{n}\right)$.

*Proof:* $OPT = n - 1$, since if there are two nodes not in the vertex cover the edge between them will not be covered. But in the LP, if we set $x_v = \frac{1}{2}$ for every $v \in V$ we get a valid LP solution with cost $\frac{n}{2}$. Hence $IG \geq \frac{n-1}{n/2} = 2\left(1 - \frac{1}{n}\right)$.
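The $\frac{1}{f}$ rounding for set cover from Section 9.5.4 can be sketched the same way; the instance below (four elements in a cycle of four sets, so every element has frequency 2) and the fractional solution are made up for illustration.

```python
# Sketch of the 1/f rounding for weighted set cover; the instance (each
# element in exactly two sets, so f = 2) and x* are made up for illustration.
universe = {1, 2, 3, 4}
sets = {"A": {1, 2}, "B": {2, 3}, "C": {3, 4}, "D": {4, 1}}

# f = maximum frequency: the largest number of sets any element belongs to.
f = max(sum(1 for s in sets.values() if e in s) for e in universe)

# A feasible fractional solution: each element is covered to total exactly 1.
x_star = {"A": 0.5, "B": 0.5, "C": 0.5, "D": 0.5}

# Round up every set whose fractional value is at least 1/f.
x_bar = {name: 1 if val >= 1 / f else 0 for name, val in x_star.items()}

covered = set().union(*(sets[name] for name, v in x_bar.items() if v == 1))
```

As the corollary's proof promises, the rounded cost (4) is at most $f$ times the fractional cost (here $2 \times 2$).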

### 9.6.2 Integrality Gap for Max Independent Set

The ILP is:

$$
\begin{aligned}
\text{maximize:} \quad & \sum_{v \in V} x_v \\
\text{subject to:} \quad & x_u + x_v \leq 1 && \text{for each edge } \{u, v\} \in E \\
& x_v \in \{0, 1\} && \text{for each vertex } v \in V
\end{aligned}
$$

**Theorem** The integrality gap is at least $\frac{n}{2}$.

*Proof:* Consider the instance $G = K_n$, the complete graph. Then $OPT = 1$, since every pair of vertices is connected. For the LP, set $x_v = \frac{1}{2}$ for every $v \in V$; this is feasible and has value $\frac{n}{2}$. Hence the integrality gap is at least $\frac{n/2}{1} = \frac{n}{2}$.

## 9.7 Solving LPs

This is a bit outside the scope of this class, but we will talk a little bit about algorithms for solving LPs. This section is pretty informal, but everything can be made formal.

### 9.7.1 Simplex

Note that the feasible region of an LP is a polytope (by definition) and hence is convex (for every two points in the feasible region, their midpoint is also feasible). The oldest heuristic for solving LPs is the simplex algorithm. We won't talk much about this algorithm, but the high-level view is straightforward. Given a polytope, there is a natural graph associated with it where the vertices of the graph are the vertices of the polytope (points which are tight for $d$ linearly independent constraints of the polytope, where $d$ is the dimension) and two vertices are adjacent if they are also adjacent in the polytope (the line segment between them is tight for $d - 1$ linearly independent constraints). The algorithm starts by finding a vertex of the polytope, and then moves to a neighbor with decreased cost as long as this is possible. By linearity and convexity, once it gets stuck it has found the optimal solution.

Unfortunately simplex does not run in polynomial time: even with a polynomially-sized LP, the number of vertices of the polytope can be exponential. Simplex does well in practice, but poorly in theory.

### 9.7.2 Interior Point Methods

While simplex only moves along the outer faces of the polytope, there are algorithms known as interior-point methods which make moves inside the polytope.
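The $K_n$ gap instance for max independent set can be checked numerically; this sketch (not from the notes) brute-forces OPT on $K_4$ and compares it to the fractional solution $x_v = \frac{1}{2}$.

```python
# Illustration of the K_n integrality gap for max independent set (n = 4):
# brute-force OPT versus the fractional LP solution x_v = 1/2.
from itertools import combinations

n = 4
edges = [(u, v) for u, v in combinations(range(n), 2)]  # complete graph K_4

def independent(subset):
    # An independent set contains no edge with both endpoints inside it.
    return all(not (u in subset and v in subset) for u, v in edges)

opt = max(
    len(s)
    for r in range(n + 1)
    for s in map(set, combinations(range(n), r))
    if independent(s)
)

x = [0.5] * n           # feasible for the LP: x_u + x_v = 1 <= 1 on every edge
lp_value = sum(x)       # n/2
gap = lp_value / opt    # n/2, matching the theorem
```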
There are now known to be interior-point methods which have polynomial running time and are also extremely fast in practice. So in both theory and in practice, we can solve LPs efficiently.

### 9.7.3 Ellipsoid

We will spend a bit more time talking about an algorithm known as Ellipsoid, which works well in theory but poorly in practice. Nevertheless, it has some properties which are extremely nice theoretically, so we will feel free to use it when designing algorithms.

As a first step, we will reduce optimization to feasibility. Suppose that we are given an algorithm which can determine whether a given LP is feasible, i.e. it returns YES if the feasible region is non-empty and NO otherwise. Now suppose that we are given an LP which we want to solve. For any value $T$, we can use our feasibility algorithm to see if there is an LP solution with cost at most $T$ by adding a single new constraint (the objective is at most $T$). So we can do a binary search over possible values of $T$ to find, in polynomial time, the smallest $T$ for which the feasible region is nonempty. Hence if we have such a feasibility algorithm, we can also solve LPs with objective functions.

#### The algorithm

The ellipsoid algorithm is an algorithm for testing feasibility. Informally, it is the following.

**Algorithm 1** Using Ellipsoid to Check Feasibility

- Input: convex constraints.
- Output: a feasible solution if one exists.

1. Find an ellipsoid containing the polytope. Let $x$ be the center of the ellipsoid. Set $k = 0$.
2. While $x$ is not feasible and $k < \text{maxiter}$:
   1. Set $k = k + 1$.
   2. Find a constraint violated by $x$.
   3. Draw a hyperplane parallel to this violated constraint through $x$. This divides the ellipsoid into two parts of equal volume, one of which is on the wrong side of the hyperplane and so is entirely infeasible.
   4. Consider the part which is on the correct side of the violated constraint (i.e. the part in which the polytope must appear if it is nonempty). Find a new ellipsoid containing this part. Let $x$ be the center of the new ellipsoid.
3. Return $x$, which is a feasible solution; or report that no feasible solution exists.
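The reduction from optimization to feasibility can be sketched directly as a binary search; here `feasible` is a hypothetical stand-in oracle (answering "is there a solution with objective at most $T$?"), not a real ellipsoid implementation, and we assume the search starts with an infeasible `lo` and a feasible `hi`.

```python
# Hedged sketch of reducing optimization to feasibility via binary search
# over the target value T. `feasible` is a stand-in oracle for illustration.
def minimize_with_oracle(feasible, lo, hi, eps=1e-6):
    """Return (approximately) the smallest T in [lo, hi] with feasible(T) True.

    Assumes feasible(lo) is False, feasible(hi) is True, and feasibility is
    monotone in T (adding "objective <= T" only shrinks the region as T drops).
    """
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if feasible(mid):    # a solution of cost <= mid exists: search lower
            hi = mid
        else:                # no solution of cost <= mid: search higher
            lo = mid
    return hi

# Toy example: minimize x subject to x >= 3, so the region
# {x : x >= 3, x <= T} is nonempty exactly when T >= 3.
best = minimize_with_oracle(lambda T: T >= 3, 0.0, 10.0)
```

In the lecture's setting the oracle call is one run of the ellipsoid algorithm on the LP with the extra constraint "objective $\leq T$".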
It is not hard to see that this algorithm is correct: if the polytope is nonempty it will eventually find a feasible point, and if the polytope is empty then it will never find a feasible point. The hard part is proving that it takes only polynomial time in the worst case, i.e. that we can set maxiter to a polynomial value. While the proof is complicated, the key idea is that, due to the geometry of ellipsoids, in each iteration the volume of the ellipsoid we are considering decreases by a constant factor.

### 9.7.4 Separation

The ellipsoid algorithm has an extremely nice property: in order to make it work, all that we need to be able to do is find a violated constraint given some point $x$ (or prove that $x$ is feasible). This is called the separation problem. If the LP is small then this is simple: we can just check all of the constraints.

But sometimes our LPs are not small, and ellipsoid still lets us solve them! If there are an exponential number of constraints, we cannot even write down all of them in polynomial time, so we certainly cannot use simplex or interior-point methods to solve the LP. But if we can separate, then ellipsoid lets us solve the LP in polynomial time despite not even being able to write it down. Let's do a somewhat trivial example of this: the spanning tree polytope.

### 9.7.5 Spanning Tree Polytope

Consider the minimum spanning tree problem. Obviously we know multiple simple and fast algorithms for it. But suppose we want to solve it via an LP. While it's not immediately obvious how to write such an LP, it turns out that the following formulation is good:

$$
\begin{aligned}
\text{minimize:} \quad & \sum_{e \in E} c(e)\, x_e && & (9.7.1) \\
\text{subject to:} \quad & \sum_{e \in E(S, \bar{S})} x_e \geq 1 && \text{for each cut } \emptyset \neq S \subsetneq V & (9.7.2) \\
& 0 \leq x_e \leq 1 && \text{for each } e \in E & (9.7.3)
\end{aligned}
$$

Note that the tree constraints are hidden in the optimization problem above, since we can always make the objective function smaller by throwing away an edge. Although the LP has exponentially many constraints, we can separate in polynomial time. Given $x$, first run a min-cut algorithm with the values $x_e$ as edge capacities. If the min cut is at least 1, then $x$ is a feasible solution; otherwise the minimum cut $S$ corresponds to a violated constraint. Hence we can separate, so by the ellipsoid algorithm we can optimize.
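A separation oracle for constraint (9.7.2) can be sketched as follows. For illustration only, this sketch brute-forces the minimum cut over all vertex subsets of a tiny graph; the point of the lecture is that on large graphs a polynomial-time min-cut algorithm plays exactly this role. The triangle instances below are made up.

```python
# Separation oracle sketch for the spanning-tree LP cut constraints: given x,
# find the minimum cut under capacities x_e. Brute force over all cuts here;
# in general one would run a polynomial-time min-cut algorithm instead.
from itertools import combinations

def separate(n, x):
    """x: dict {(u, v): value}. Return None if x satisfies every cut
    constraint, else a violated cut S (a set of vertices)."""
    best_cut, best_val = None, float("inf")
    for r in range(1, n):
        for S in combinations(range(n), r):
            S = set(S)
            # Total x-weight of edges crossing the cut (S, V \ S).
            val = sum(v for (a, b), v in x.items() if (a in S) != (b in S))
            if val < best_val:
                best_cut, best_val = S, val
    return None if best_val >= 1 else best_cut

# Triangle with x_e = 1/2 everywhere: every cut has weight exactly 1, feasible.
ok = separate(3, {(0, 1): 0.5, (1, 2): 0.5, (0, 2): 0.5})
# Lowering two edges makes the cut {2} carry weight 0.2 + 0.2 = 0.4 < 1.
bad = separate(3, {(0, 1): 0.5, (1, 2): 0.2, (0, 2): 0.2})
```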


### 1. Suppose you are given a magic black box that somehow answers the following decision problem in polynomial time:

1. Suppose you are given a magic black box that somehow answers the following decision problem in polynomial time: Input: A CNF formula ϕ with n variables x 1, x 2,..., x n. Output: True if there is an

### Ellipsoid Algorithm :Algorithms in the Real World. Ellipsoid Algorithm. Reduction from general case

Ellipsoid Algorithm 15-853:Algorithms in the Real World Linear and Integer Programming II Ellipsoid algorithm Interior point methods First polynomial-time algorithm for linear programming (Khachian 79)

### The Shortest Path Problem. The Shortest Path Problem. Mathematical Model. Integer Programming Formulation

The Shortest Path Problem jla,jc@imm.dtu.dk Department of Management Engineering Technical University of Denmark The Shortest Path Problem Given a directed network G = (V,E,w) for which the underlying

### Theorem 3.1 (Berge) A matching M in G is maximum if and only if there is no M- augmenting path.

3 Matchings Hall s Theorem Matching: A matching in G is a subset M E(G) so that no edge in M is a loop, and no two edges in M are incident with a common vertex. A matching M is maximal if there is no matching

### COVERING POINTS WITH AXIS PARALLEL LINES. KAWSAR JAHAN Bachelor of Science, Bangladesh University of Professionals, 2009

COVERING POINTS WITH AXIS PARALLEL LINES KAWSAR JAHAN Bachelor of Science, Bangladesh University of Professionals, 2009 A Thesis Submitted to the School of Graduate Studies of the University of Lethbridge

### Minimum Weight Constrained Forest Problems. Problem Definition

Slide 1 s Xiaoyun Ji, John E. Mitchell Department of Mathematical Sciences Rensselaer Polytechnic Institute Troy, NY, USA jix@rpi.edu, mitchj@rpi.edu 2005 Optimization Days Montreal, Canada May 09, 2005

### MergeSort, Recurrences, Asymptotic Analysis Scribe: Michael P. Kim Date: April 1, 2015

CS161, Lecture 2 MergeSort, Recurrences, Asymptotic Analysis Scribe: Michael P. Kim Date: April 1, 2015 1 Introduction Today, we will introduce a fundamental algorithm design paradigm, Divide-And-Conquer,

### princeton univ. F 15 cos 521: Advanced Algorithm Design Lecture 2: Karger s Min Cut Algorithm

princeton univ. F 5 cos 5: Advanced Algorithm Design Lecture : Karger s Min Cut Algorithm Lecturer: Pravesh Kothari Scribe:Pravesh (These notes are a slightly modified version of notes from previous offerings

### Convex Optimization CMU-10725

Convex Optimization CMU-10725 2. Linear Programs Barnabás Póczos & Ryan Tibshirani Please ask questions! Administrivia Lecture = 40 minutes part 1-5 minutes break 35 minutes part 2 Slides: http://www.stat.cmu.edu/~ryantibs/convexopt/

### Homework 3 Solutions

CS3510 Design & Analysis of Algorithms Section A Homework 3 Solutions Released: 7pm, Wednesday Nov 8, 2017 This homework has a total of 4 problems on 4 pages. Solutions should be submitted to GradeScope

### Ramsey s Theorem on Graphs

Ramsey s Theorem on Graphs 1 Introduction Exposition by William Gasarch Imagine that you have 6 people at a party. We assume that, for every pair of them, either THEY KNOW EACH OTHER or NEITHER OF THEM

### Ray Projection for Optimizing Polytopes with Prohibitively Many Constraints in Set-Covering Column Generation

Ray Projection for Optimizing Polytopes with Prohibitively Many Constraints in Set-Covering Column Generation Daniel Porumbel Abstract A recurrent task in mathematical programming consists of optimizing

### David G. Luenberger Yinyu Ye. Linear and Nonlinear. Programming. Fourth Edition. ö Springer

David G. Luenberger Yinyu Ye Linear and Nonlinear Programming Fourth Edition ö Springer Contents 1 Introduction 1 1.1 Optimization 1 1.2 Types of Problems 2 1.3 Size of Problems 5 1.4 Iterative Algorithms

### Finding Strongly Connected Components

Yufei Tao ITEE University of Queensland We just can t get enough of the beautiful algorithm of DFS! In this lecture, we will use it to solve a problem finding strongly connected components that seems to

### arxiv: v2 [cs.dm] 3 Dec 2014

The Student/Project Allocation problem with group projects Aswhin Arulselvan, Ágnes Cseh, and Jannik Matuschke arxiv:4.035v [cs.dm] 3 Dec 04 Department of Management Science, University of Strathclyde,