Combinatorial Optimization


Combinatorial Optimization Frank de Zeeuw EPFL 2012

Today
Introduction
Graph problems: what combinatorial things will we be optimizing?
Algorithms: what kind of solution are we looking for?
Linear Programming: the main tool that we will use.
Example: Matchings

This course
Problem sets:
weekly problem set discussed in class on Tuesday
optionally hand in one problem of your choice on Tuesday (not for the first set)
each handed-in problem counts for 1% of your final grade, if that improves it (i.e. it counts only if it is higher than your final exam grade)
Final Exam: more information when the time comes

Lecture outline (tentative)
1. Introduction
2. Trees
3. Paths
4. Flows
5. Matchings
6. Matchings
7. Integer Programming
8. Matroids
9. Matroids
10. Matroids
11. NP-hardness
12. More problems?
13. Travelling Salesman Problem

Texts
I will post rough lecture notes online that cover the lectures.
Online notes by others (see links on course website):
Schrijver (great, but doesn't cover everything)
Plotnik (great, but very different order)
Several others
Books (several copies in the library):
Korte, Vygen - Combinatorial Optimization (not the easiest to read, least expensive)
Cook, Cunningham, Pulleyblank, Schrijver - Combinatorial Optimization (easier to read, very expensive)
Schrijver - Combinatorial Optimization (encyclopedic, 3 big books, very expensive...)
Matoušek, Gärtner - Understanding and Using Linear Programming (useful if you have it already)

Combinatorial Optimization
The goal of this course is to look for algorithms that find certain combinatorial objects with optimal value. Most often, the objects are subgraphs of a graph G = (V, E), usually with a weight function w : E → R. Then we want to find a subgraph H of a given type such that its total weight w(H) = ∑_{e ∈ H} w(e) is larger/smaller than that of any other such subgraph. Some simple examples:
Shortest Path Problem: Given a weighted directed graph and a, b ∈ V, find a path P from a to b such that w(P) is minimum.
Maximum Spanning Tree Problem: Given an undirected weighted graph, find a spanning tree T such that w(T) is maximum.
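
The Maximum Spanning Tree Problem above can be solved greedily. As a concrete illustration, here is a sketch of Kruskal's algorithm run with heaviest edges first; the edge-list representation and the union-find helper are our own choices, not fixed by the lecture:

```python
def max_spanning_tree(n, edges):
    """n vertices 0..n-1; edges is a list of (weight, u, v) tuples."""
    parent = list(range(n))  # union-find structure tracking components

    def find(x):  # representative of x's component, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edges first
        ru, rv = find(u), find(v)
        if ru != rv:          # u and v in different components: no cycle created
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

# Example: a 4-cycle with one diagonal.
edges = [(4, 0, 1), (1, 1, 2), (3, 2, 3), (2, 3, 0), (5, 0, 2)]
T = max_spanning_tree(4, edges)
print(sum(w for w, _, _ in T))  # 12
```

The same code with `reverse=True` removed would solve the minimum spanning tree variant.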

Matchings (in undirected graphs)
A matching is a set M ⊆ E such that m ∩ m′ = ∅ for all distinct m, m′ ∈ M. It is maximal if it is not contained in any larger matching. It is maximum if there is no matching with more edges.
Maximum Matching Problem: In an unweighted graph, find a maximum matching.
Maximum Weighted Matching Problem: In a weighted graph, find a matching M such that w(M) is maximum.

Flows
A network is a directed graph G = (V, E) with two special vertices s, t, and a capacity function c : E → R₊. Given a network, a flow is a function f : E → R₊ such that
f(e) ≤ c(e) for all e ∈ E;
∑_{uv ∈ E} f(uv) = ∑_{vw ∈ E} f(vw) for all v ∈ V \ {s, t}.
Maximum Flow Problem: Given a network, find a flow f such that ∑_{sv ∈ E} f(sv) is maximum.
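
The two flow conditions can be checked mechanically. Here is a small sketch; the dictionary representation of capacities and flows is an assumption for illustration:

```python
def is_feasible_flow(flow, cap, s, t):
    # Capacity condition: 0 <= f(e) <= c(e) for every edge.
    if any(flow[e] > cap[e] or flow[e] < 0 for e in cap):
        return False
    # Conservation: flow in = flow out at every vertex other than s and t.
    vertices = {u for u, v in cap} | {v for u, v in cap}
    for v in vertices - {s, t}:
        inflow = sum(f for (a, b), f in flow.items() if b == v)
        outflow = sum(f for (a, b), f in flow.items() if a == v)
        if inflow != outflow:
            return False
    return True

cap = {('s', 'a'): 2, ('a', 't'): 1, ('a', 'b'): 1, ('b', 't'): 1}
flow = {('s', 'a'): 2, ('a', 't'): 1, ('a', 'b'): 1, ('b', 't'): 1}
print(is_feasible_flow(flow, cap, 's', 't'))  # True
```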

Hamilton cycle
A Hamilton cycle of a graph is a cycle that goes through all the vertices of the graph.
Travelling Salesman Problem: Given a weighted complete graph, find a Hamilton cycle of minimum weight.

Other problems (picked randomly, not exactly involving graphs)
Minimum Set Cover Problem: Given a set S with subsets S₁, ..., S_m, find a minimum-size I ⊆ [m] such that ∪_{j ∈ I} S_j = S.
Bin-Packing Problem: Given a list of numbers 0 ≤ a₁, ..., a_n ≤ 1, find the minimum k ∈ N for which there is an assignment f : [n] → [k] with ∑_{i : f(i) = j} a_i ≤ 1 for all j ∈ [k].

Algorithms
We will treat algorithms informally, usually describing them mathematically. For instance, here is an example I will talk about later today:
1. Set M := ∅.
2. Pick e ∈ E such that e ∩ m = ∅ for all m ∈ M; if not possible, then stop and return M.
3. Set M := M ∪ {e}.
4. Go back to 2.
Note that this is not a precise implementation: it doesn't say how to pick e, how to check that the intersection is empty, or how to represent the graph as a data structure. But mathematically those things are not problematic, and we will focus on the mathematical concepts here.
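
The four numbered steps can be sketched directly in code. Representing each edge as a frozenset of its two endpoints (our own choice of data structure) makes the disjointness test literal:

```python
def greedy_matching(edges):
    M = set()                              # 1. M := ∅
    for e in edges:                        # 2. pick e with e ∩ m = ∅ for all m ∈ M
        if all(not (e & m) for m in M):
            M.add(e)                       # 3. M := M ∪ {e}
    return M                               # stop once no edge can be added

# Path on 4 vertices: 1-2, 2-3, 3-4; scanning in this order matches 1-2 and 3-4.
edges = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]
print(len(greedy_matching(edges)))  # 2
```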

Algorithms
Since we're not precise about implementation or data structures, we also won't be precise about running times (the worst-case number of steps that the algorithm might take, as a function of the input; for example, the algorithm above has running time O(|V|)). What we will care about most is whether running times are polynomial or not, i.e. whether they are of the form O(|V|^k) for some fixed k. If so, we say the algorithm is polynomial, or that it is in P. In combinatorics, naive algorithms are often exponential, i.e. not polynomial. For instance, if an algorithm runs over all subsets of the vertices, its running time is O(2^|V|).

Algorithms
We introduce, very roughly, several notions related to polynomial running times. The first are properties of decision problems (YES/NO questions). A decision problem is:
in P if there is an algorithm that solves any instance in polynomial time;
in NP if, when the answer to an instance is YES, there is a certificate proving it that can be checked in polynomial time;
NP-complete if it is in NP and any problem in NP can be polynomially reduced to it.
Also, a calculation problem (a question whose answer can be represented as a number, including any optimization problem) is NP-hard if any NP-complete problem can be polynomially reduced to it.

Algorithms
You should be aware of the following NP-facts:
NP does not stand for "non-polynomial" (and it doesn't mean that), but for "nondeterministic polynomial".
P ⊆ NP.
To become a millionaire, prove P = NP or the opposite.
Deciding whether or not a graph has a Hamilton cycle is NP-complete.
Travelling Salesman, Set Cover and Bin-Packing are all NP-hard.
Linear Programming is in P, but Integer Programming is NP-hard.

Approximation algorithms
Consider a general minimization problem, asking for a feasible object S with minimum value v(S). Write S* for an optimum solution.
A k-approximation algorithm (with 1 ≤ k ∈ R) for a minimization problem is a polynomial algorithm that returns a feasible object S such that v(S) ≤ k · v(S*).
A k-approximation algorithm for a maximization problem is a polynomial algorithm that returns a feasible object S such that v(S) ≥ (1/k) · v(S*).

Linear Programming
A Linear Programming problem looks like
maximize c^T x subject to Ax ≤ b and x ≥ 0,
with x ∈ R^n a vector of variables, c ∈ R^n, b ∈ R^m, and A ∈ R^{m×n}. Different versions are possible:
maximize could be minimize;
Ax ≤ b could be Ax ≥ b or Ax = b;
x ≥ 0 could be x ≤ 0, or just x ∈ R^n;
any mix of the above.

Duality
Suppose x ≥ 0 satisfies Ax ≤ b, or written in rows, b_i ≥ ∑_j A_ij x_j. Then for any y with y_i ≥ 0 also
∑_i y_i b_i ≥ ∑_i ∑_j y_i A_ij x_j = ∑_j x_j (∑_i A_ij y_i).
If ∑_i A_ij y_i ≥ c_j for all j, then ∑_i y_i b_i ≥ ∑_j c_j x_j, so b^T y ≥ max c^T x. This gives the dual LP problem (to the primal problem we started with):
minimize b^T y subject to A^T y ≥ c and y ≥ 0.
Note that we need y ≥ 0, otherwise the inequalities would have flipped.
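
The chain of inequalities above can be verified numerically. The following sketch uses a small made-up A, b, c with hand-picked feasible x and y; none of these numbers come from the lecture:

```python
A = [[1, 2],
     [3, 1]]
b = [4, 6]
c = [1, 1]

x = [1, 1]      # primal feasible: Ax = [3, 4] <= b, x >= 0
y = [0.5, 0.5]  # dual feasible:   A^T y = [2, 1.5] >= c, y >= 0

Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
ATy = [sum(A[i][j] * y[i] for i in range(2)) for j in range(2)]
assert all(Ax[i] <= b[i] for i in range(2))    # primal feasibility
assert all(ATy[j] >= c[j] for j in range(2))   # dual feasibility

cTx = sum(c[j] * x[j] for j in range(2))
bTy = sum(b[i] * y[i] for i in range(2))
print(cTx, bTy, cTx <= bTy)  # 2 5.0 True -- exactly what weak duality promises
```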

Duality
Primal: maximize c^T x subject to Ax ≤ b and x ≥ 0.
Dual: minimize b^T y subject to A^T y ≥ c and y ≥ 0.
By the previous slide, if x is a primal feasible solution and y a dual feasible solution, then c^T x ≤ b^T y (this is called weak duality).
Duality Theorem: a primal feasible x and a dual feasible y are optimal solutions if and only if c^T x = b^T y.
Note that if the primal looks different, so will the dual. For instance, if the primal has Ax = b, then the dual does not have y ≥ 0 (y is free). To avoid memorizing all the different combinations, you should be able to redo the previous slide.

Complementary slackness
Theorem: Feasible x and y are optimal solutions to the primal and dual problems in the forms above if and only if
for all i: ∑_j A_ij x_j = b_i or y_i = 0.
In other words, either the ith primal constraint is tight, or y_i = 0. By symmetry, something similar holds for the dual constraints: for all j, either ∑_i A_ij y_i = c_j or x_j = 0.
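
As a sanity check, both conditions can be tested on a one-constraint LP whose optimum is clear by inspection: maximize x₁ subject to x₁ ≤ 1, x₁ ≥ 0, with dual minimize y₁ subject to y₁ ≥ 1, y₁ ≥ 0. The instance is made up for illustration:

```python
A, b, c = [[1]], [1], [1]
x, y = [1], [1]   # optimal primal and dual solutions (both have value 1)

# Primal side: for each i, constraint i is tight or y_i = 0.
primal_cs = all(sum(A[i][j] * x[j] for j in range(len(x))) == b[i] or y[i] == 0
                for i in range(len(b)))
# Symmetric dual side: for each j, dual constraint j is tight or x_j = 0.
dual_cs = all(sum(A[i][j] * y[i] for i in range(len(y))) == c[j] or x[j] == 0
              for j in range(len(c)))
print(primal_cs and dual_cs)  # True
```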

LP Algorithms
The most natural algorithm for solving LP problems is the simplex algorithm. It uses the fact that Ax ≤ b describes a polytope in R^n, and that optimal solutions are among the vertices of the polytope. In the algorithm, you start at any vertex, then repeatedly move along an edge to a better vertex, eventually ending up at an optimal one. But the simplex algorithm does not have a polynomial running time (actually, some but not all versions are proven to be non-polynomial, while no version is known to be polynomial). There are other algorithms, like the ellipsoid algorithm, which are proven to be polynomial. So solving linear programs is in P. In this course, we will treat these algorithms as black boxes, i.e. we should be aware of the fact that they exist, but we won't look any closer at them.

Integer programming
An Integer Programming problem looks like
maximize c^T x subject to Ax ≤ b, x ≥ 0, and x ∈ Z^n.
Solving IP problems is NP-hard, so unless P = NP, there is no polynomial algorithm for it. There is no duality theorem for an IP problem (directly), but weak duality still holds. The relaxation of an IP problem is the LP problem you get by removing the condition x ∈ Z^n. An optimum for the relaxation provides a bound for the IP optimum, but usually there is a gap.
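
The gap can be seen on a tiny example: for matchings on a triangle, the integral optimum is 1 (any two edges share a vertex), while the fractional point x_e = 1/2 on all three edges is feasible for the relaxation (each vertex sees total 1) with value 3/2. A brute-force sketch:

```python
from itertools import product

edges = [(0, 1), (1, 2), (2, 0)]  # the triangle

def feasible(x):  # each vertex covered by total weight at most 1
    return all(sum(xe for (u, v), xe in zip(edges, x) if w in (u, v)) <= 1
               for w in range(3))

# IP optimum: best integral solution over {0,1}^3.
ip_opt = max(sum(x) for x in product([0, 1], repeat=3) if feasible(x))

# The fractional point (1/2, 1/2, 1/2) is feasible for the relaxation.
assert feasible([0.5, 0.5, 0.5])
print(ip_opt, 1.5)  # 1 1.5 -- the relaxation beats every integral solution
```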

IP & Combinatorial Optimization
All the combinatorial optimization problems that we will see are in fact IP problems. For some problems, we can prove that the relaxation has the same optima as the IP problem (we say the polytope is integral), which immediately implies that there is a polynomial algorithm. We'll still study these more closely, to look for nicer algorithms than an indirect LP algorithm, and because we'll learn much that we'll need for harder problems.

Main tools
The techniques that we will use to find algorithms for combinatorial optimization are, roughly speaking:
Greedy heuristic (matroids)
Linear Programming (especially duality)
Graph Theory (a.k.a. insight...)

Example: Maximum Matching Problem
Greedy Algorithm: Repeatedly add any edge that you can add without losing the matching property. More formally:
1. Set M := ∅.
2. Pick e ∈ E such that e ∩ m = ∅ for all m ∈ M; if not possible, then stop and return M.
3. Set M := M ∪ {e}.
4. Go back to 2.
M will be maximal, but need not be maximum. Is it a k-approximation for some k?

Integer programming formulation of the Maximum Matching Problem:
maximize ∑_{e ∈ E} x_e, with x ≥ 0, x ∈ Z^E, and ∑_{e ∋ v} x_e ≤ 1 for all v ∈ V.
Dual:
minimize ∑_{v ∈ V} y_v, with y ≥ 0, y ∈ Z^V, and y_u + y_v ≥ 1 for all uv ∈ E.
Then if y is optimal, we must have y_v ∈ {0, 1}, and the set C = {v ∈ V : y_v = 1} is what is called a vertex cover: a set of vertices that touches every edge.

By weak duality, a primal feasible solution {x_e} and a dual feasible solution {y_v} always satisfy ∑_e x_e ≤ ∑_v y_v. In other words, any matching M and cover C satisfy |M| ≤ |C| (which you can easily prove without duality). Any maximal matching M easily gives you a cover: C = ∪_{m ∈ M} m, with |C| = 2|M|.
Proof: If C is not a cover, then M is not maximal.
Therefore, if M* is a maximum matching, then |M*| ≤ |C| = 2|M|, so |M| ≥ |M*|/2, which means that the greedy algorithm is a 2-approximation.
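
The argument above in code: take any maximal matching, collect its endpoints, and check that they cover every edge. This is a sketch; edges are frozensets of their endpoints by our own convention:

```python
def greedy_matching(edges):
    # Greedy maximal matching: add any edge disjoint from all matched edges.
    M = set()
    for e in edges:
        if all(not (e & m) for m in M):
            M.add(e)
    return M

def cover_from_matching(edges):
    M = greedy_matching(edges)
    C = set().union(*M)               # C = union of matched endpoints, |C| = 2|M|
    assert all(e & C for e in edges)  # C touches every edge, so it is a cover
    return M, C

# Path on 5 vertices.
edges = [frozenset(p) for p in [(1, 2), (2, 3), (3, 4), (4, 5)]]
M, C = cover_from_matching(edges)
print(len(C) == 2 * len(M))  # True
```

Combined with |M*| ≤ |C| for any cover C, this certifies |M| ≥ |M*|/2 for the matching found.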

Bipartite maximum matching
Can we find an exact algorithm for the Maximum Matching Problem? Let's try it for bipartite graphs. If the algorithm is not greedy, it must have some kind of backtracking followed by improvement: drop a matched edge and add two new ones. More generally, we can do this for any augmenting path.

Bipartite maximum matching
This suggests an algorithm: find an augmenting path, then augment along it to get a larger matching; repeat. This will work because:
Theorem: If a matching in a bipartite graph G = (V, E) is not maximum, then it has an augmenting path.

Bipartite maximum matching
Theorem: If a matching M in a bipartite graph is not maximum, then it has an augmenting path.
Proof: Suppose M′ is a matching with |M′| > |M|. Consider G′ = (V, M ∪ M′). We have
deg_{G′}(v) ≤ deg_M(v) + deg_{M′}(v) ≤ 1 + 1 = 2,
which implies that all components of G′ are paths or cycles. These must have alternating M-edges and M′-edges. By bipartiteness, all cycles are even, so they have as many M-edges as M′-edges. Since |M′| > |M|, one of the paths must have more edges from M′ than from M, which means it is an augmenting path.

Bipartite maximum matching
Find an augmenting path, then augment along it to get a larger matching; repeat. This is not quite an algorithm, because we haven't said how to find an augmenting path. Here's how. Given a maximal-but-not-maximum matching in a bipartite graph with parts A and B, let S be the set of unmatched vertices in A and T the set of unmatched vertices in B. [figure: the bipartite graph with parts A, B and the unmatched sets S, T marked]

If we define a digraph with the edges from S downward, matching edges upward, and the other edges downward, [figure: the same bipartite graph with its edges oriented] then a dipath from S to T will be an augmenting path. The first homework will ask you for such an algorithm.
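
One possible sketch of the whole algorithm in this directed form, using BFS to find a dipath from S to T. The adjacency representation (each A-vertex mapped to its B-neighbours, with labels on the two sides distinct) is assumed, and this is just one way to do it, not necessarily the intended homework solution:

```python
from collections import deque

def max_bipartite_matching(adj):
    match = {}  # maps both endpoints of each matching edge to each other
    while True:
        # BFS in the digraph, starting from S = unmatched A-vertices;
        # prev records the tree of directed edges used to reach each vertex.
        free_A = [a for a in adj if a not in match]
        prev = {a: None for a in free_A}
        queue = deque(free_A)
        end = None
        while queue and end is None:
            a = queue.popleft()
            for b in adj[a]:
                if b in prev:
                    continue
                prev[b] = a                 # non-matching edge a -> b (downward)
                if b not in match:
                    end = b                 # reached T: augmenting path found
                    break
                prev[match[b]] = b          # matching edge b -> match[b] (upward)
                queue.append(match[b])
        if end is None:
            return match                    # no augmenting path: matching is maximum
        while end is not None:              # augment: flip the edges along the path
            a = prev[end]
            match[end], match[a] = a, end
            end = prev[a]

adj = {1: ['x', 'y'], 2: ['x'], 3: ['y']}
m = max_bipartite_matching(adj)
print(len(m) // 2)  # 2 -- the size of a maximum matching here
```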