Exact Algorithms for NP-hard problems


24 May 2012

Outline:
1. Why do we need exponential algorithms?
2. …
3. …

Why the P-border?
1. Practical reasons (Jack Edmonds, 1965): "For practical purposes the difference between algebraic and exponential order is more crucial than the difference between [computable and not computable]..."
2. Algebraic reasons (Alan Cobham, 1964): "If we formalize the definition relative to various general classes of computing machines we seem always to end up with the same well-defined class of functions. Thus we can give a mathematical characterization of P having some confidence it characterizes correctly our informally defined class. This class then turns out to have several natural closure properties."


P, NP and NP-hard:
- P: there exists an algorithm that finds a solution to the problem and whose execution time is bounded by O(n^d).
- NP: there exists an algorithm that verifies a solution to the problem and whose execution time is bounded by O(n^d).
- NP-hard: if this problem is polynomial, then P = NP.

(Diagram: classical NP-complete problems and the reductions between them.) SAT, 3-SAT, COLORING, 0-1 INT PROG, CLIQUE, SET PACKING, VERTEX COVER, CLIQUE COVER, EXACT COVER, SET COVER, FEEDBACK ARC SET, HITTING SET, FEEDBACK NODE SET, 3D MATCHING, HAMILTONIAN CIRCUIT, STEINER TREE, KNAPSACK, HAMILTONIAN CYCLE.

Example: matching problems. A matching is a set of pairwise non-adjacent edges.
Proposition: Maximum-size Matching is in P.
Proposition: Minimum-size Maximal Matching is NP-hard.

Exercise I
a) Find a maximum-size matching on a (1 × n)-grid.
b) Find a minimum-size maximal matching on a (1 × n)-grid.

Solutions
a) Find a maximum-size matching on a (1 × n)-grid.

b) Find a minimum-size maximal matching on a (1 × n)-grid.
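(A brief worked note, my own and not part of the original slides, assuming the (1 × n)-grid is a path on n vertices: a maximum matching simply takes every other edge, about n/2 edges in total, while a minimum maximal matching takes roughly every third edge, about n/3 edges; both can be found greedily in polynomial time on these special instances, even though Minimum Maximal Matching is NP-hard in general.)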


General Principle. Aim: maximize f(S) among all the sets S that verify some property. Problem: this is NP-hard. Solution: design an algorithm that computes a solution S* which is not too far from the optimal. An R-approximation algorithm guarantees that f(OPT) / f(S*) ≤ R.

Exercise II
A vertex cover is a subgraph H ⊆ G such that any edge of the initial graph G is incident to a vertex of the cover H.

Question: how to design a 2-approximation algorithm for Vertex Cover? Clue: Maximum Matching is in P.

Solution: any vertex cover has size at least that of any maximal matching, so f(OPT) ≥ M; all the vertices incident to a maximal matching form a vertex cover, so f(S*) = 2M. Hence f(S*) ≤ 2 · f(OPT).

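Below is a minimal Python sketch of this 2-approximation, assuming a greedy construction of a maximal matching (the function name and the toy graph are mine, not from the lecture):

def vertex_cover_2_approx(edges):
    """Greedily build a maximal matching; its endpoints form a cover of size <= 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge not matched yet: take both endpoints
            cover.add(u)
            cover.add(v)
    return cover

# tiny usage example: on the path 0-1-2-3-4 this returns {0, 1, 2, 3}, while OPT = {1, 3}
print(vertex_cover_2_approx([(0, 1), (1, 2), (2, 3), (3, 4)]))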

Inapproximability. A ratio of 2 isn't very sexy, yet it is still among the best situations: many important problems do not admit any constant approximation ratio unless P = NP. Examples: Min Coloring, Max Clique, Min Dominating Set, the Traveling Salesman Problem...

Because we can. Jack Edmonds, 1965 (again): "It would be unfortunate for any rigid criterion to inhibit the practical development of algorithms which are either not known or known not to conform nicely to the criterion..." Modern computers can perform 2^50 operations in reasonable time.

TSP (Traveling Salesman Problem): given a complete graph with weighted edges, find a path of minimum cost that visits every vertex exactly once.
Proposition: there exists an algorithm that solves TSP within 2^n steps.
Assume we have an instance with 50 vertices: why would we use heuristics or approximation when we can solve it exactly?


Exponential algorithms can even be faster.

Size of the instance    n^3         1.1^n
n = 30                  27 000      18
n = 50                  125 000     118
n = 100                 10^6        13 781
n = 200                 8 · 10^6    2 · 10^8

Pause. Any questions before we start the main course?

Exercise: brute force algorithms. Design an exact (exponential) algorithm for the following problems:
1. Minimum Vertex Cover (running time 2^n)
2. Traveling Salesman Problem (running time n!)

Solution:
1. Enumerate every subset of vertices and keep the smallest one that is a vertex cover.
2. A path is just a way to order the vertices: n possibilities for the first vertex, then n − 1 for the second one, etc.
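A minimal Python sketch of both brute-force algorithms (the function names and the toy instances are mine):

from itertools import combinations, permutations

def brute_force_vertex_cover(n, edges):
    """Try all subsets of {0..n-1} by increasing size and return the first vertex cover."""
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            chosen = set(subset)
            if all(u in chosen or v in chosen for u, v in edges):
                return chosen

def brute_force_tsp(w):
    """Try all n! orderings of the vertices and return the cost of the cheapest path."""
    n = len(w)
    return min(sum(w[p[i]][p[i + 1]] for i in range(n - 1))
               for p in permutations(range(n)))

# tiny usage examples
print(brute_force_vertex_cover(4, [(0, 1), (1, 2), (2, 3)]))   # -> {0, 2}
print(brute_force_tsp([[0, 2, 9], [2, 0, 6], [9, 6, 0]]))      # path 0-1-2 costs 2 + 6 = 8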


Example: Minimum Set Cover. Given a set S and a family of subsets F ⊆ 2^S, find the smallest subfamily X ⊆ F that covers S. Trivial algorithm in 2^m, with m = |F|.

Example: Minimum Set Cover. Recursive algorithm:
1. If all subsets have size two or less, the problem is polynomial.
2. Otherwise, if some element x belongs to only one set T, take T.
3. Otherwise, if some set T contains only one element x, discard T.
4. Otherwise, branch on some subset T of maximum cardinality: either take it or discard it. Solve the reduced problem in both cases.

Example: Minimum Set Cover.
Branching recurrence: SOL(S, F) = min( SOL(S \ T, F \ {T}) + 1, SOL(S, F \ {T}) )
Bounding value: |T| ≥ 3
Branching inequation: T(n + m) ≤ T(n + m − 1) + T(n + m − 4)
Bound on complexity: T(n + m) ≤ 1.38^(n + m)
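A minimal Python sketch of this branch-and-reduce algorithm (the code and the toy instance are mine; for simplicity the "all sets have size two or less" polynomial case is not special-cased, we simply keep branching, so the sketch illustrates correctness rather than the 1.38^(n+m) bound):

def min_set_cover(universe, family):
    """Size of a minimum cover of `universe` by sets of `family` (branch and reduce)."""
    def solve(S, F):
        if not S:
            return 0
        F = [T & S for T in F]                  # keep only the still-useful part of each set
        F = [T for T in F if T]
        if not S <= set().union(*F):
            return float('inf')                 # some element can no longer be covered
        # reduction: an element that belongs to a single set forces that set
        for x in S:
            containing = [T for T in F if x in T]
            if len(containing) == 1:
                T = containing[0]
                return 1 + solve(S - T, [U for U in F if U is not T])
        # reduction: a set with a single element is useless, since its element also
        # lies in another set (otherwise the previous reduction would have fired)
        for T in F:
            if len(T) == 1:
                return solve(S, [U for U in F if U is not T])
        # branch on a set T of maximum cardinality: either take it or discard it
        T = max(F, key=len)
        rest = [U for U in F if U is not T]
        return min(1 + solve(S - T, rest), solve(S, rest))
    return solve(frozenset(universe), [frozenset(T) for T in family])

# tiny usage example: the optimum picks {1, 2, 3} and {4, 5}
print(min_set_cover({1, 2, 3, 4, 5}, [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))   # -> 2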

General Principle. Find a recurrence (branch): SOL(I) = f(SOL(I_1), SOL(I_2), ...) with |I_j| < |I|. Find a stop condition (cut): φ(I) implies the problem is polynomial, and |I| < k implies φ(I).

General Principle. Translate the recurrence numerically: T(n) ≤ T(n − k_1) + T(n − k_2) + ..., with n = |I| and k_j = |I| − |I_j|. We know that the solution is T(n) = x^n, where x is the (largest positive) solution of: 1 = x^(−k_1) + x^(−k_2) + ...
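The branching factor x can be computed numerically; here is a small sketch (my own helper, not from the lecture) that solves 1 = Σ x^(−k_j) by bisection and checks it on the two recurrences appearing in this lecture:

def branching_factor(ks, tol=1e-12):
    """Largest real x >= 1 with sum(x**-k for k in ks) == 1, found by bisection.

    f(x) = sum(x**-k) is decreasing for x > 0 and is >= 1 at x = 1,
    so the root is unique and lies in [1, hi] once f(hi) <= 1.
    """
    lo, hi = 1.0, 2.0
    while sum(hi ** -k for k in ks) > 1:        # grow the bracket until f(hi) <= 1
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(mid ** -k for k in ks) > 1:
            lo = mid
        else:
            hi = mid
    return hi

print(branching_factor([1, 4]))      # Set Cover branching above: ~1.3803
print(branching_factor([3, 3, 3]))   # T(n) <= 3 T(n - 3): 3^(1/3) ~ 1.4423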

Exercise: Minimum Independent Dominating Set
Let G be a graph. An independent dominating set is a subgraph H ⊆ G such that:
- H contains no edge;
- every vertex of G is in H or adjacent to a vertex of H.

1) Find a recursive algorithm whose recurrence equation is T(n) ≤ (k + 1) · T(n − k − 1).
2) Which k is the worst case?
3) What is the overall running time of this algorithm?

1) Pick a vertex of minimum degree k: either add it or one of its neighbors to the solution, then discard the neighbors of the selected vertex.
2) The worst case is k = 2.
3) The overall complexity is 3^(n/3).
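A minimal Python sketch of this branching algorithm (the adjacency-dictionary representation, the names and the toy instance are mine):

def min_independent_dominating_set(adj):
    """Size of a minimum independent dominating set; adj maps each vertex to its neighbour set."""
    if not adj:
        return 0
    # pick a vertex v of minimum degree k; some vertex of N[v] = {v} plus its neighbours
    # must belong to the solution, otherwise v would not be dominated
    v = min(adj, key=lambda u: len(adj[u]))
    best = float('inf')
    for u in {v} | adj[v]:
        removed = {u} | adj[u]                   # u is taken, so its neighbours are forbidden
        sub = {w: adj[w] - removed for w in adj if w not in removed}
        best = min(best, 1 + min_independent_dominating_set(sub))
    return best

# tiny usage example: a path on 5 vertices, optimum {1, 3} of size 2
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(min_independent_dominating_set(path))   # -> 2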


Example: TSP

Example: TSP. For any subset W ⊆ V and any vertex i ∈ W, let P(W, i) be the shortest Hamiltonian path in W between 1 and i. If j ∈ W is the vertex just before i in P(W, i), we just need to compute P(W \ {i}, j) for every such j:
P(W, i) = min over j ∈ W \ {i} of ( P(W \ {i}, j) + w(e_ij) )

Example: TSP. If we store the results in a big table, filled by increasing size of W, each step of the recurrence is polynomial. Thus the running time is, up to a polynomial factor, the size of the table, which is 2^n.
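A minimal Python sketch of this dynamic programme over subsets (bitmask representation, starting vertex 0 instead of 1; the cost matrix and names are mine). The table is filled by increasing W, written here as a forward update, which is equivalent to the recurrence above:

def shortest_hamiltonian_path(w):
    """Cheapest path starting at vertex 0 and visiting every vertex once; O(2^n * n^2) time."""
    n = len(w)
    INF = float('inf')
    # P[W][i] = cheapest path that starts at 0, visits exactly the vertices of the
    # bitmask W (which always contains 0 and i), and ends at i
    P = [[INF] * n for _ in range(1 << n)]
    P[1][0] = 0                                   # W = {0}: the path reduced to vertex 0
    for W in range(1 << n):
        if not W & 1:                             # every useful subset contains vertex 0
            continue
        for i in range(n):
            if P[W][i] == INF:
                continue
            for j in range(n):                    # extend the path with a new vertex j
                if not W & (1 << j):
                    P[W | (1 << j)][j] = min(P[W | (1 << j)][j], P[W][i] + w[i][j])
    full = (1 << n) - 1
    return min(P[full][i] for i in range(n))

# tiny usage example on 4 vertices: the best path is 0-1-3-2, of cost 2 + 4 + 3 = 9
w = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(shortest_hamiltonian_path(w))   # -> 9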

General Principle. Find a recurrence (branch): SOL(I) = f(SOL(I_1), SOL(I_2), ...) with |I_j| < |I|. Compute by increasing size and store all results in a table M. Typically, |M| = 2^|I|.

General Principle.
Advantages: the complexity is independent of the branching rule (provided it is polynomial); no subproblem is computed twice; good for poor branchings (TSP, coloring...).
Weaknesses: lots of useless subproblems are computed; huge memory requirement (equal to the running time); not efficient for subgraph mining, for example.


Inclusion-Exclusion. Reference: Björklund, Husfeldt and Koivisto, 2006.
c_k(V) = Σ_{X ⊆ V} (−1)^|X| · a(X)^k
where a(X) is the number of sets from some well-chosen family F that are disjoint from X. Here c_k(V) stands for the number of covers of V with exactly k items from F. Many cover or partition problems reduce to the computation of these coefficients (e.g., Min Coloring).

Inclusion-Exclusion. All the c_k(V) can be computed within: 4^n with brute force; 3^n with dynamic programming; 2^n with the zeta transform.

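As a sketch, the brute-force option can be written directly from the formula above. The family below is a toy choice of mine, standing in for the "well-chosen" family F; for Min Coloring, F would be the family of independent sets of the graph, and the graph is k-colorable exactly when c_k(V) > 0:

from itertools import combinations

def count_k_covers(n, family, k):
    """c_k(V): number of ordered k-tuples of sets from `family` whose union is V = {0..n-1}.

    Direct evaluation of c_k(V) = sum over X of (-1)^|X| * a(X)^k,
    where a(X) counts the sets of the family that are disjoint from X.
    """
    V = range(n)
    total = 0
    for r in range(n + 1):
        for X in combinations(V, r):
            Xs = set(X)
            a = sum(1 for S in family if not (S & Xs))   # sets disjoint from X
            total += (-1) ** r * a ** k
    return total

# tiny usage example: V = {0, 1, 2} and three pairs; six ordered pairs of sets cover V
family = [{0, 1}, {1, 2}, {0, 2}]
print(count_k_covers(3, family, 2))   # -> 6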

Measure and Conquer. Reference: Fomin, Grandoni and Kratsch, 2006. General idea: use a branch-and-bound algorithm; give different weights to the vertices; balance these weights to make the worst case better and the best case worse.

(Figure: vertex weights p_i and p_j.)

Partition and Measure. Reference: Bourgeois, Escoffier, Paschos and van Rooij, 2010. General idea: use a branch-and-bound algorithm; split the space of possible instances into polyhedra with specific properties; perform a measure-and-conquer analysis on each polyhedron; balance all the weights to satisfy the boundary conditions.

(Figure: the weight space (p_i, p_j) partitioned into regions S_0, S_1, S_2, with boundary conditions P_1 = 0, P_2 = 0 and W = k.)

Opening. There exist ways other than exact algorithms to deal with inapproximable problems: moderately exponential approximation, and parameterized algorithms running in f(k) · n^d time.