Traveling Salesman Problem (TSP) Input: undirected graph G=(V,E), c: E → R+ Goal: find a tour (Hamiltonian cycle) of minimum cost

Traveling Salesman Problem (TSP) Input: undirected graph G=(V,E), c: E → R+ Goal: find a tour (Hamiltonian cycle) of minimum cost Q: Is there a reasonable heuristic for this?

Traveling Salesman Problem (TSP) Input: undirected graph G=(V,E), c: E → Z+ Goal: find a tour (Hamiltonian cycle) of minimum cost Q: Is there a reasonable heuristic for this? Claim: Unless P=NP no good heuristic. Why?

TSP and Hamiltonian cycle Claim: Unless P=NP no good heuristic. Why? Can solve the NP-Complete Hamiltonian cycle problem using a good heuristic for TSP Proof: Given graph G=(V,E) create a new graph H = (V, E′) where H is a complete graph Set c(e) = 1 if e ∈ E, otherwise c(e) = B

TSP is hard Proof: Given graph G=(V,E) create a new graph H = (V, E′) where H is a complete graph Set c(e) = 1 if e ∈ E, otherwise c(e) = B If G has a Hamiltonian cycle, OPT = n; otherwise OPT ≥ n-1+B An approximation algorithm with ratio better than (n-1+B)/n would enable us to solve the Hamiltonian cycle problem

TSP is hard If G has a Hamiltonian cycle, OPT = n; otherwise OPT ≥ n-1+B An approximation algorithm with ratio better than (n-1+B)/n would solve the Hamiltonian cycle problem Can choose B to be any poly-time computable value (n², n³, ..., 2ⁿ). Conclusion: TSP is inapproximable
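A minimal sketch of this construction in Python (the function name and the edge-list representation are assumptions for illustration, not from the lecture):

def tsp_instance_from_graph(n, edges, B):
    # Hardness reduction: Hamiltonian cycle -> TSP instance.
    # Vertices are 0..n-1; edges is a collection of unordered pairs of G.
    # Returns costs on the complete graph: 1 on edges of G, B on non-edges.
    edge_set = {frozenset(e) for e in edges}
    cost = {}
    for u in range(n):
        for v in range(u + 1, n):
            cost[(u, v)] = 1 if frozenset((u, v)) in edge_set else B
    return cost

If G is Hamiltonian, the optimal tour costs n; otherwise every tour uses at least one non-edge and costs at least n-1+B, which is exactly the gap used above.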

Metric-TSP G=(V, E), c: E → Z+ Find a min-cost tour that visits all vertices Allow a vertex to be visited multiple times Equivalent to assuming the following: G is a complete graph and c satisfies the triangle inequality: for all vertices u,v,w, c(uv) + c(vw) ≥ c(uw) Why?
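One direction of this equivalence can be made concrete: replacing every cost by the shortest-path distance (the metric completion) gives a complete graph that obeys the triangle inequality, and a tour there corresponds to a closed walk in G that may revisit vertices. A minimal sketch, assuming the graph is given as an edge list with positive costs:

def metric_completion(n, edges):
    # Floyd-Warshall all-pairs shortest paths; d[u][v] is the cheapest way to
    # get from u to v, possibly passing through other vertices.
    INF = float('inf')
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:                  # undirected edge (u, v) of cost w
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d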

Nearest Neighbour Heuristic Natural Greedy Heuristic: 1. Start at an arbitrary vertex s 2. From the current vertex u, go to the nearest unvisited vertex v 3. When all vertices are visited, return to s

Nearest Neighbour Heuristic Natural Greedy Heuristic: 1. Start at an arbitrary vertex s 2. From the current vertex u, go to the nearest unvisited vertex v 3. When all vertices are visited, return to s Exercise 1: Not a constant factor approximation for any constant c Exercise 2*: Is an O(log n) approximation
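A minimal sketch of the nearest neighbour heuristic, assuming the instance is given as a symmetric cost matrix dist (an illustrative representation, not prescribed by the lecture):

def nearest_neighbour_tour(dist, start=0):
    # Greedy tour: from the current vertex, always move to the nearest
    # unvisited vertex; return to the start at the end.
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour, u = [start], start
    while unvisited:
        v = min(unvisited, key=lambda w: dist[u][w])   # nearest unvisited vertex
        tour.append(v)
        unvisited.remove(v)
        u = v
    tour.append(start)                                 # close the cycle
    return tour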

MST based algorithm 1. Compute an MST of G, say T 2. Obtain an Eulerian graph H=2T by doubling edges of T 3. An Eulerian tour of 2T gives a tour in G

MST based algorithm [figure: the tree T, the doubled graph 2T, and an Euler tour of 2T]

MST based algorithm Claim: the MST heuristic is a 2-approximation algorithm c(T) = Σ_{e ∈ E(T)} c(e) ≤ OPT c(2T) = 2·c(T) ≤ 2·OPT
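A minimal sketch of the MST-based algorithm on a complete metric instance given as a distance matrix; instead of explicitly building 2T and an Euler tour, it uses the equivalent shortcut of listing the MST vertices in DFS preorder (the shortcutting relies on the triangle inequality):

def mst_double_tree_tour(dist):
    # 2-approximation for Metric-TSP: Prim's MST, then DFS preorder of the tree,
    # which is the shortcut Euler tour of the doubled tree 2T.
    n = len(dist)
    in_tree = [False] * n
    best = [float('inf')] * n          # cheapest connection cost to the tree
    parent = [-1] * n
    children = [[] for _ in range(n)]
    best[0] = 0.0
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if parent[u] != -1:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v], parent[v] = dist[u][v], u
    # DFS preorder of T = Euler tour of 2T with repeated vertices shortcut.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    tour.append(0)                     # close the cycle
    return tour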

Christofides heuristic Can we convert T into an Eulerian graph? 1. Compute an MST of G, say T. Let S be the vertices of odd degree in T (Note: |S| is even) 2. Find a minimum cost perfect matching M on S in G 3. Add M to T to obtain Eulerian graph H 4. Compute an Eulerian tour of H

Christofides heuristic [figure: T, T + M, and an Euler tour]

Christofides heuristic [figure: the Euler tour and its shortcut]

3/2 approx for Metric-TSP Lemma: c(M) ≤ OPT/2 c(H) = c(T) + c(M) ≤ OPT + OPT/2 = (3/2)·OPT
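A minimal sketch of Christofides' heuristic; using networkx is an assumption for illustration (the lecture does not prescribe a library), and the minimum-cost perfect matching on the odd-degree vertices is obtained by negating weights and asking for a maximum-weight matching of maximum cardinality:

import networkx as nx

def christofides_tour(dist):
    # dist: n x n symmetric matrix satisfying the triangle inequality.
    n = len(dist)
    G = nx.Graph()
    for u in range(n):
        for v in range(u + 1, n):
            G.add_edge(u, v, weight=dist[u][v])
    T = nx.minimum_spanning_tree(G, weight='weight')        # step 1: MST
    odd = [v for v in T.nodes if T.degree(v) % 2 == 1]      # S: odd-degree vertices
    # Step 2: min-cost perfect matching on S via max matching on negated weights.
    G_odd = nx.Graph()
    for i, u in enumerate(odd):
        for v in odd[i + 1:]:
            G_odd.add_edge(u, v, weight=-dist[u][v])
    M = nx.max_weight_matching(G_odd, maxcardinality=True, weight='weight')
    # Step 3: H = T + M is Eulerian. Step 4: Euler tour, then shortcut repeats.
    H = nx.MultiGraph(T)
    H.add_edges_from(M)
    seen, tour = set(), []
    for u, _ in nx.eulerian_circuit(H, source=0):
        if u not in seen:
            seen.add(u)
            tour.append(u)
    tour.append(tour[0])
    return tour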

Proof of Lemma A tour in G of cost OPT implies a tour on S of cost at most OPT (why?) [figure: T and a tour on S]

Proof of Lemma A tour in G of cost OPT implies a tour on S of cost at most OPT (why?) The tour on S decomposes into two perfect matchings M₁ and M₂ on S, so c(M₁) + c(M₂) ≤ OPT, and hence the min-cost matching M satisfies c(M) ≤ OPT/2 [figure: the matchings M₁ and M₂]

Practice and local search Local search heuristics perform extremely well in practice 2-Opt: if it improves the tour, remove two edges of the tour and reconnect the two resulting paths the other way (see the sketch below)
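A minimal sketch of 2-Opt local search, assuming the tour is a closed vertex list (first vertex repeated at the end) and the instance is a symmetric distance matrix:

def two_opt(tour, dist):
    # Repeatedly reverse a segment of the tour (a 2-Opt move) while doing so
    # strictly decreases the tour cost.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                a, b = tour[i - 1], tour[i]        # tour edge (a, b) to remove
                c, d = tour[j], tour[j + 1]        # tour edge (c, d) to remove
                # Replacing (a,b) and (c,d) by (a,c) and (b,d) reverses tour[i..j].
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour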

Research Problem Christofides heuristic: 1976 No improvement in 30 years! Open Problem: Improve 3/2 for Metric-TSP Candidate: the Held-Karp linear programming relaxation is conjectured to give a ratio of 4/3; an important open problem

TSP in Directed Graphs G = (V, A), A: arcs, c: A → R+ non-negative arc costs TSP: find a min-cost directed Hamiltonian cycle ATSP: asymmetric TSP (allow a vertex to be visited multiple times) Equivalent to assuming c satisfies c(u,v) + c(v,w) ≥ c(u,w) for all u,v,w Note: c(u,v) might not equal c(v,u) (asymmetry)

Example

Heuristic ideas? The Metric-TSP heuristics relied strongly on symmetry

Directed cycles and their use Wlog assume that G is a complete directed graph with c(uv) + c(vw) ≥ c(uw) for all u,v,w Given S ⊆ V, G[S] is the induced graph on S OPT(G): cost of an optimal tour in G Observation: OPT(G[S]) ≤ OPT(G) for any S ⊆ V

Directed cycles and their use Consider a directed cycle C on S Pick an arbitrary vertex u ∈ S V′ = (V \ S) ∪ {u}

Directed cycles and their use Consider a directed cycle C on S Pick an arbitrary vertex u ∈ S V′ = (V \ S) ∪ {u} Find a solution C′ in G[V′] Can extend the solution to G using C

Cycle cover Cycle cover of G: a collection of cycles C₁, C₂, ..., Cₖ such that each vertex is in exactly one cycle

Min-cost cycle cover cycle cover cost: sum of costs of edges in cover Claim: a minimum cost cycle cover in a directed graph can be computed in polynomial time See Hw 0, Prob 5
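A minimal sketch of one standard way to get the claim (an assumption about the intended construction; the lecture defers the proof to Hw 0, Prob 5): a cycle cover is exactly an assignment of a distinct successor to every vertex, so it reduces to a min-cost perfect matching between a "tail" copy and a "head" copy of the vertex set, i.e. an assignment problem:

import numpy as np
from scipy.optimize import linear_sum_assignment

def min_cost_cycle_cover(cost):
    # cost[i][j]: cost of arc (i, j). Returns (succ, total) where succ[i] = j
    # means arc (i, j) is in the cover; every vertex gets out-degree and
    # in-degree exactly 1, so the chosen arcs form vertex-disjoint cycles.
    c = np.asarray(cost, dtype=float).copy()
    # Forbid self-loops with a finite penalty larger than any self-loop-free cover.
    np.fill_diagonal(c, c.max() * len(c) + 1.0)
    rows, cols = linear_sum_assignment(c)       # min-cost perfect bipartite matching
    succ = {int(i): int(j) for i, j in zip(rows, cols)}
    return succ, float(sum(cost[i][j] for i, j in succ.items()))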

Cycle Shrinking Algorithm [Frieze-Galbiati-Maffioli 82] log₂ n approximation 1. Find a minimum cost cycle cover (poly-time computable) 2. Pick a proxy node for each cycle 3. Recursively solve the problem on the proxies and extend using the cycles
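A minimal recursive sketch of the cycle shrinking algorithm, assuming a complete asymmetric cost matrix that satisfies the triangle inequality and reusing min_cost_cycle_cover from the sketch above; the proxy choice (the first vertex of each cycle) is arbitrary, as in the lecture:

import numpy as np

def cycles_of(succ):
    # Split a successor map (out-degree and in-degree 1 everywhere) into cycles.
    seen, cycles = set(), []
    for s in succ:
        if s in seen:
            continue
        cyc, v = [], s
        while v not in seen:
            seen.add(v)
            cyc.append(v)
            v = succ[v]
        cycles.append(cyc)
    return cycles

def cycle_shrinking_atsp(cost, vertices=None):
    # Returns the tour as a vertex list; the cycle closes from the last vertex
    # back to the first.
    if vertices is None:
        vertices = list(range(len(cost)))
    if len(vertices) == 1:
        return vertices[:]
    sub = np.asarray(cost)[np.ix_(vertices, vertices)]
    succ, _ = min_cost_cycle_cover(sub)                 # step 1: cycle cover
    cycles = [[vertices[i] for i in cyc] for cyc in cycles_of(succ)]
    proxies = [cyc[0] for cyc in cycles]                # step 2: one proxy per cycle
    proxy_tour = cycle_shrinking_atsp(cost, proxies)    # step 3: recurse on proxies
    # Extend: replace each proxy by a walk around its whole cycle; jumping from
    # the end of a cycle to the next proxy is paid for by the triangle inequality.
    by_proxy = {cyc[0]: cyc for cyc in cycles}
    return [v for p in proxy_tour for v in by_proxy[p]]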

Example

Analysis V₀ = V Vᵢ: set of proxy nodes after iteration i Gᵢ = G[Vᵢ] xᵢ: cost of a min-cost cycle cover in Gᵢ Claim: xᵢ ≤ OPT (why?)

Analysis xᵢ: cost of a min-cost cycle cover in Gᵢ Claim: xᵢ ≤ OPT (why?) Total cost = x₁ + x₂ + ... + xₖ ≤ OPT·log n, where k is the number of iterations Why is k ≤ log n?

Analysis Conclusion: log n approximation for ATSP Running time: each iteration computes a min-cost cycle cover ≤ log n iterations since |Vᵢ| ≤ |V|/2ⁱ Time dominated by the first iteration: O(n^2.5) Can be reduced to O(n^2) with a small loss in the approximation ratio

ATSP Best known ratio: 0.842 log n Open problem: Is there a constant factor approximation for ATSP? (open for 25 years!)

Formal definition of NPO problems P, NP: language/decision classes NPO: NP Optimization problems, function class

Optimization problem Π is an optimization problem: Π is either a min or a max type problem Instances I of Π are a subset of Σ* |I|: size of instance I For each I there is a set of feasible solutions S(I) For each I and solution S ∈ S(I) there is a real/rational number val(S, I) Goal: given I, find OPT(I) = min_{S ∈ S(I)} val(S, I)

NPO: NP Optimization problem Π is an NPO problem if: Given x ∈ Σ*, one can check whether x is an instance of Π in poly(|x|) time For each I and S ∈ S(I), |S| is poly(|I|) There is a poly-time decision procedure that, for each I and x ∈ Σ*, decides whether x ∈ S(I) val(S, I) is a poly-time computable function

NPO and NP Π in NPO, a minimization problem For a rational number B define L(Π, B) = { I : OPT(I) ≤ B } Claim: L(Π, B) is in NP L(Π, B): the decision version of Π

Approximation Algorithm/Ratio Minimization problem Π: A is an approximation algorithm with (relative) approximation ratio α iff A is a polynomial time algorithm and for all instances I of Π, A produces a feasible solution A(I) such that val(A(I), I) ≤ α · OPT(I) (Note: α ≥ 1) Remark: α can depend on the size of I, hence technically it is α(|I|). Example: α(|I|) = log n

Maximization problems Maximization problem Π: A is an approximation algorithm with (relative) approximation ratio α iff A is a polynomial time algorithm and for all instances I of Π, A produces a feasible solution A(I) such that val(A(I), I) ≥ α · OPT(I) (Note: α ≤ 1) Very often people use 1/α (≥ 1) as the approximation ratio

Relative vs Additive approximation The approximation ratio is defined as a relative measure Why not additive? An additive α approximation implies: for all I, val(A(I), I) ≤ OPT(I) + α For most NPO problems no additive approximation ratio is possible because OPT(I) has a scaling property

Scaling property Example: Metric-TSP Take instance I, create instance I′ with all edge costs increased by a factor of β For each S ∈ S(I) = S(I′), val(S, I′) = β · val(S, I) OPT(I′) = β · OPT(I) Suppose there exists an additive α approximation for Metric-TSP; then by choosing β sufficiently large, we can obtain an exact algorithm!

Scaling property Given I, run the additive approx alg on I′ to get a solution S ∈ S(I′); return S for I val(S, I′) ≤ OPT(I′) + α val(S, I) = val(S, I′)/β ≤ OPT(I′)/β + α/β = OPT(I) + α/β Choose β s.t. α/β < 1; for integer data, val(S, I) < OPT(I) + 1 implies val(S, I) = OPT(I)

Some simple additive approximations Planar-graph coloring: Fact: it is NP-complete to decide whether a planar graph admits a 3-coloring Fact: one can always color using 4 colors Edge coloring: Vizing's theorem: the edge coloring number is either Δ(G) or Δ(G) + 1 Fact: NP-complete to decide which!

Hardness of approximation Given an optimization problem Π (say minimization) Q: What is the smallest α such that Π has an approximation ratio of α? α*(Π): the approximability threshold of Π α*(Π) = 1 implies Π is polynomial time solvable

Hardness of Approximation P = NP ⇒ all NPO problems have poly-time exact algorithms (Why?) P ≠ NP ⇒ many NPO problems have α*(Π) > 1 (Which ones?) Approximation algorithms: upper bounds on α*(Π) Hardness of approximation: lower bounds on α*(Π)

Proving hardness of approximation Unless P = NP, α*(Π) > ?? Direct: from an NP-Complete problem Indirect: via gap reductions

Example: k-center (metric) k-center problem: given an undirected graph G = (V, E) and integer k, choose k centers/vertices in V Goal: minimize the maximum distance to a center: min_{S ⊆ V, |S| = k} max_{v ∈ V} dist_G(v, S), where dist_G(v, S) = min_{u ∈ S} dist_G(u, v)

Example: k-center k-center problem: given an undirected graph G = (V, E) and integer k, choose k centers/vertices in V Goal: minimize the maximum distance to a center: min_{S ⊆ V, |S| = k} max_{v ∈ V} dist_G(v, S) Theorem: Unless P=NP there is no (2-ε)-approximation for k-center for any ε > 0 Theorem: There is a 2-approximation for k-center

k-center hardness of approximation Given an undirected graph G=(V,E) Dominating set: S ⊆ V s.t. for all v ∈ V, either v ∈ S or v is adjacent to some u ∈ S Dominating Set Problem: Given G = (V, E), is there a dominating set in G of size k? NP-Complete

k-center hardness of approximation Dominating Set: decision version Given G = (V, E) is there a dominating set in G of size k? Use DS to prove hardness for k-center

k-center hardness of approximation Dominating Set: Given G = (V, E), is there a dominating set in G of size k? Given an instance I of DS, create an instance I′ of k-center: the graph and k remain the same

k-center hardness of approximation If I has a DS of size k (that is, I is in the language) then OPT(I′) = 1 (why?) If I does not have a DS of size k (that is, I is not in the language) then OPT(I′) ≥ 2 (why?) Therefore a (2-ε)-approximation for k-center can be used to solve DS Unless P=NP, α*(k-center) ≥ 2-ε

Algorithm for k-center Given G = (V, E) and integer k Pick k centers from V so that the maximum distance of any vertex to a center is minimized Let R* be the optimal distance

Example Let R* be the optimal distance

2-approx for k-center Assume we know R* Initialize all vertices to be uncovered for i = 1 to k do: if no uncovered vertex, BREAK; pick an uncovered vertex u as a center; mark all vertices within distance 2R* of u as covered; end for Output the set of centers

2-approx for k-center Assume we know R* Initialize all vertices to be uncovered for i = 1 to k do: if no uncovered vertex, BREAK; pick an uncovered vertex u as a center; mark all vertices within distance 2R* of u as covered; end for Output the set of centers Claim: All vertices are covered by the chosen centers
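A minimal sketch of this algorithm, assuming all-pairs graph distances are available as a matrix dist (e.g. computed by BFS from every vertex). Since R* is not actually known, the sketch uses the standard workaround of trying every pairwise distance as a guess (R* is one of them) and keeping the smallest guess for which at most k centers suffice:

def kcenter_with_guess(dist, k, R):
    # Greedy covering with guess R for R*: pick an uncovered vertex as a center
    # and cover everything within distance 2R of it. Returns the centers, or
    # None if more than k centers would be needed (the guess R is too small).
    uncovered = set(range(len(dist)))
    centers = []
    while uncovered:
        if len(centers) == k:
            return None
        u = next(iter(uncovered))                 # any uncovered vertex
        centers.append(u)
        uncovered -= {v for v in uncovered if dist[u][v] <= 2 * R}
    return centers

def kcenter_2approx(dist, k):
    # Try every pairwise distance as the guess R and return the centers for the
    # smallest guess that works; that guess is at most R*, so the radius is <= 2R*.
    n = len(dist)
    for R in sorted({dist[u][v] for u in range(n) for v in range(n)}):
        centers = kcenter_with_guess(dist, k, R)
        if centers is not None:
            return centers
    return None                                   # unreachable for k >= 1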

Another 2-approx for k-center S = ∅ for i = 1 to k do: let u be the vertex in G farthest from S (for S = ∅, an arbitrary vertex); add u to S Output S Also known as Gonzalez's algorithm
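A minimal sketch of Gonzalez's farthest-point algorithm under the same assumption of a precomputed all-pairs distance matrix (and k ≥ 1):

def gonzalez_kcenter(dist, k):
    # Repeatedly add the vertex farthest from the current set of centers S.
    n = len(dist)
    centers = [0]                                    # arbitrary first center
    d_to_S = list(dist[0])                           # distance of each vertex to S
    for _ in range(k - 1):
        u = max(range(n), key=lambda v: d_to_S[v])   # farthest vertex from S
        centers.append(u)
        d_to_S = [min(d_to_S[v], dist[u][v]) for v in range(n)]
    return centers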

Exercises Prove that the k-center algorithms yield a 2-approximation Compare running times of algorithms Which one would you prefer in practice? Why?