
Last topic: Summary; Heuristics and Approximation Algorithms

Topics we studied so far:
- Strength of formulations; improving formulations by adding valid inequalities
- Relaxations and dual problems; obtaining good bounds on the optimal value of an IP
- Cutting plane algorithms (an exact general-purpose algorithm based on strengthening LP relaxations; does not produce a feasible solution until termination)
- Branch-and-bound algorithms (an exact general-purpose algorithm using primal and dual bounds to intelligently enumerate feasible solutions; does not produce a feasible solution until one of the subproblems is pruned by optimality)

Solving IPs in real life: special classes of problems
- Can we devise specialized algorithms for a particular class of problems?
- What if there is a need to get an OK feasible solution quickly?

Heuristic algorithms

A heuristic or approximation algorithm may be used when...
- a feasible solution is required rapidly, within a few seconds or minutes;
- an instance is so large that it cannot be formulated as an IP or MIP of reasonable size;
- in a branch-and-bound algorithm, good primal bounds on subproblem solutions need to be found sufficiently quickly.

Heuristic algorithms:
- Provide a feasible solution without any guarantee on its optimality, its quality, or even the algorithm's running time.
- Well-designed heuristics for particular problem classes have been empirically shown to find good solutions fast.
- Examples of heuristic algorithms:
  - Lagrangian: solve the Lagrangian relaxation with a good value of $u$. If the solution $x(u)$ is not feasible, make adjustments while keeping the objective function deterioration small.
  - Greedy: construct a feasible solution from scratch, choosing at each step the decision bringing the best immediate reward.
  - Local search: start with a feasible solution $x$ and compare it with neighboring feasible solutions. If a better neighbor $y$ is found, move to $y$ and repeat the same procedure. If no such neighbor is found, the process stops: a local optimum has been found. (A generic skeleton is sketched below.)
- In practice, these are often used in combination: a greedy procedure or other heuristic constructs a feasible solution, and a local search then improves on it.
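To make the local-search template concrete, here is a minimal generic skeleton in Python. It is an illustration, not part of the original slides; `neighbors` and `cost` are hypothetical problem-specific callables supplied by the user.

```python
def local_search(x, neighbors, cost):
    """Generic local search: move to a strictly better neighbor of x
    until none exists, i.e., until x is a local optimum."""
    while True:
        # Best neighboring feasible solution (None if x has no neighbors).
        best = min(neighbors(x), key=cost, default=None)
        if best is None or cost(best) >= cost(x):
            return x  # local optimum: no neighbor improves on x
        x = best      # move to the better neighbor and repeat
```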

Examples of greedy heuristics

Example 1: 0-1 Knapsack

$\max\left\{\sum_{j=1}^n c_j x_j : \sum_{j=1}^n a_j x_j \le b,\ x \in \mathbb{B}^n\right\}$, where $a_j, c_j > 0$ for all $j$. Assume the items are ordered so that $c_j/a_j \ge c_{j+1}/a_{j+1}$, $j = 1,\dots,n-1$.

The algorithm:
- Initialization: start with no items in the knapsack.
- For $j = 1,\dots,n$: if the space remaining in the knapsack is sufficient to accommodate the $j$th item, include it; otherwise, skip it.

For example, if $c = (12, 8, 17, 11, 6, 2, 2)$, $a = (4, 3, 7, 5, 3, 2, 3)$, and $b = 9$, the solution obtained by the greedy heuristic is $x^G = (1, 1, 0, 0, 0, 1, 0)$ with $z^G = c^T x^G = 22$ (the optimal solution is $x_1 = x_4 = 1$ with $z = 23$). A runnable sketch on this instance appears after Example 2.

Example 2: STSP, $G = (V, E)$, $c_{ij} = c_{ji}$

Greedy algorithm I (nearest neighbor):
- Initialization: let $S = \{1\}$, $N = V \setminus S$, $v_1 = 1$.
- For $j = 2,\dots,|V|$: find $v_j = \arg\min_{i \in N} c_{v_{j-1},i}$. Set $S = S \cup \{v_j\}$, $N = N \setminus \{v_j\}$.
- Connect $v_n$ to $v_1$. Resulting tour: $v_1, v_2, \dots, v_n, v_1$.

Greedy algorithm II (cheapest edge):
- Initialization: let $S = \emptyset$, $N = E$. Sort the edges so that $c_{e_k} \le c_{e_{k+1}}$ for $k = 1,\dots,|E|-1$. Set $j = 0$.
- Repeat until $S$ is a tour: increment $j$; if $S \cup \{e_j\}$ contains no subtours and no nodes of degree more than 2, set $S = S \cup \{e_j\}$.
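Here is a minimal sketch of the greedy knapsack heuristic from Example 1, run on the instance above (an illustration, not from the original slides):

```python
def greedy_knapsack(c, a, b):
    """Greedy 0-1 knapsack: items assumed sorted by c[j]/a[j], nonincreasing.
    Include each item in turn if it still fits in the remaining capacity."""
    x, remaining = [0] * len(c), b
    for j in range(len(c)):
        if a[j] <= remaining:
            x[j], remaining = 1, remaining - a[j]
    return x, sum(cj * xj for cj, xj in zip(c, x))

# Instance from Example 1: greedy gives z_G = 22; the optimum is 23.
x_G, z_G = greedy_knapsack(c=(12, 8, 17, 11, 6, 2, 2), a=(4, 3, 7, 5, 3, 2, 3), b=9)
print(x_G, z_G)  # [1, 1, 0, 0, 0, 1, 0] 22
```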

Examples of greedy heuristics

Example 3: Minimum Spanning Tree problem

Given a connected graph $G = (V, E)$ with edge costs $c_e$, $e \in E$, construct a minimum-cost spanning tree, i.e., a subgraph $\bar G = (V, \bar E)$ which is connected and contains no cycles.

Prim's algorithm:
- Initialization: start with a graph $(V_1, E_1)$, where $V_1 = \{1\}$ and $E_1 = \emptyset$.
- For $k = 1,\dots,|V|-1$: consider all edges $\{i, j\}$ with $i \in V_k$ and $j \notin V_k$, and choose the one with smallest cost. Let $V_{k+1} = V_k \cup \{j\}$ and $E_{k+1} = E_k \cup \{\{i, j\}\}$.

Theorem: Prim's algorithm finds an optimal solution of the MST problem.
Proof: relies on the cut optimality necessary and sufficient condition for MSTs.

Examples of local search

Local search heuristics: start with a feasible solution $x$ and compare it with neighboring feasible solutions. If a better neighbor $y$ is found, move to $y$ and repeat the same procedure. If no such neighbor is found, the process stops: a local optimum has been found. To define a local search, one needs a starting solution and a definition of a neighborhood.

Example 1: local search heuristic for cost-minimizing UFL

Observation: if a set of facilities $S \ne \emptyset$ is built, it is easy to find the cheapest feasible assignment of clients to these facilities. The total cost of such a solution is $c(S) + \sum_{j \in S} f_j$, where $c(S) = \sum_{i=1}^m \min_{j \in S} c_{ij}$.

Local search: start with a set of depots $S \ne \emptyset$. Define the neighborhood of $S$ to be all non-empty sets that can be obtained by either adding or deleting one depot to/from $S$ (note: $S$ has at most $n$ neighbors). A sketch of this search follows.
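A minimal sketch of the UFL local search just described (an illustration, not from the slides): `f[j]` holds the fixed cost $f_j$, `C[i][j]` the assignment cost $c_{ij}$, and `S` is a set of opened depots.

```python
def ufl_cost(S, f, C):
    """Cost of opening depot set S: assignment cost c(S) plus fixed costs."""
    return sum(min(row[j] for j in S) for row in C) + sum(f[j] for j in S)

def ufl_local_search(S, f, C):
    """Add/delete-one-depot local search for the uncapacitated facility
    location problem; stops at a locally optimal depot set."""
    n = len(f)
    while True:
        # Neighborhood: all nonempty sets differing from S by one depot.
        nbrs = [S | {j} for j in range(n) if j not in S]
        nbrs += [S - {j} for j in S if len(S) > 1]
        best = min(nbrs, key=lambda T: ufl_cost(T, f, C))
        if ufl_cost(best, f, C) >= ufl_cost(S, f, C):
            return S  # local optimum
        S = best
```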

Examples of local search

Example 2: 2OPT heuristic for STSP

Given a set of edges $S$ that forms a TSP tour, there is no other tour that differs from $S$ by exactly one edge. If two (disjoint) edges are removed from $S$, there is exactly one other tour that contains the remaining edges. In the 2OPT heuristic for STSP, the neighborhood of $S$ consists of all tours that can be obtained by removing two (disjoint) edges from $S$ and replacing them with two other edges. Note: $S$ has on the order of $n^2$ neighbors. Thus, at every iteration, the algorithm may have to compare up to $O(n^2)$ tours. (A sketch follows after the next slide.)

Adapting local search for global optimization

Local search algorithms find a solution that is a local optimum, i.e., best in its neighborhood. Globally optimal solutions may not lie in that neighborhood! The following are search algorithms designed to seek a globally optimal solution:
- Tabu search
- Simulated annealing
- Genetic algorithms
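A minimal 2OPT sketch (an illustration, not from the slides): the tour is a list of node indices and `c` a symmetric cost matrix. Removing two disjoint tour edges and reconnecting amounts to reversing a segment of the tour.

```python
def tour_length(tour, c):
    """Length of the closed tour, including the edge back to the start."""
    return sum(c[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, c):
    """2OPT local search: reverse the segment between two non-adjacent
    tour edges whenever doing so shortens the tour; stop at a local optimum."""
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            # j = len(tour)-1 with i = 0 would remove two adjacent edges; skip it.
            for j in range(i + 2, len(tour) - (1 if i == 0 else 0)):
                new = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(new, c) < tour_length(tour, c):
                    tour, improved = new, True
    return tour
```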

Approximation algorithms

Essentially, heuristics with a provable guarantee on the quality of the obtained solution.

$\epsilon$-Approximation Algorithm: a heuristic algorithm for a class of problems constitutes an $\epsilon$-approximation algorithm if, for each instance of a problem in this class with optimal value $z$, the heuristic algorithm returns a feasible solution of value $z^H$ such that $z^H \le (1+\epsilon)z$ for a minimization problem, or $z^H \ge (1-\epsilon)z$ for a maximization problem.

Additionally, keep an eye on the running time of the algorithm: make sure it grows at a reasonable rate as larger problems and/or smaller values of $\epsilon$ are considered (e.g., polynomial, not exponential, growth).

Examples of approximation algorithms: integer knapsack

$z = \max\left\{\sum_{j=1}^n c_j x_j : \sum_{j=1}^n a_j x_j \le b,\ x \in \mathbb{Z}^n_+\right\}$, where $a_1,\dots,a_n, b \in \mathbb{Z}_+$. Assume that $a_j \le b$ for all $j$, and $c_1/a_1 \ge c_j/a_j$ for all $j$.

The greedy solution $x^H$ has $x^H_1 = \lfloor b/a_1 \rfloor$, with $z^H \ge c_1 \lfloor b/a_1 \rfloor$.

Theorem 12.1: $z^H/z \ge 1/2$. (An $\epsilon$-approximation with $\epsilon = 1/2$.)

Proof: The solution to the LP relaxation has $x^{LP}_1 = b/a_1$ and $z^{LP} = c_1 b/a_1 \ge z$. Note that $b/a_1 \le 2\lfloor b/a_1 \rfloor$ (since $\lfloor b/a_1 \rfloor \ge 1$), so
$$\frac{z^H}{z} \ge \frac{z^H}{z^{LP}} \ge \frac{c_1 \lfloor b/a_1 \rfloor}{c_1 b/a_1} = \frac{\lfloor b/a_1 \rfloor}{b/a_1} \ge \frac{\lfloor b/a_1 \rfloor}{2 \lfloor b/a_1 \rfloor} = \frac{1}{2}.$$
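A quick numeric illustration of Theorem 12.1 (not from the slides; the instance is made up): the greedy value $c_1 \lfloor b/a_1 \rfloor$ is always at least half of the LP bound $c_1 b/a_1 \ge z$.

```python
# Items sorted so c[0]/a[0] is the best ratio; every a[j] <= b.
c, a, b = [7, 4, 3], [5, 3, 4], 12

z_H = c[0] * (b // a[0])       # greedy: pack copies of item 1 only
z_LP = c[0] * b / a[0]         # LP relaxation bound, z <= z_LP
print(z_H, z_LP, z_H / z_LP)   # 14 16.8 0.833..., indeed >= 1/2
```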

Examples of approximation algorithms: STSP with triangle inequality

Consider an instance of STSP on a complete graph with $0 \le c_{ij} \le c_{ik} + c_{jk}$ for all $i, j, k$.

Tree Heuristic:
- Find a minimum spanning tree (let its length be $z^T \le z$).
- Construct a walk that starts at node 1, visits all nodes, and returns to node 1 using only edges of the tree. The length of the walk is $2 z^T$, since every edge is traversed twice.
- Convert the walk into a tour by skipping intermediate nodes that have already been visited. Due to the triangle inequality, the length does not increase.

Hence $z^H \le 2 z^T \le 2z$: an $\epsilon$-approximation with $\epsilon = 1$. This can be improved to $\epsilon = 1/2$ by finding a minimum perfect matching on the nodes that have odd degree in the tree and using its edges in the tour (Christofides' algorithm). A sketch of the tree heuristic appears after the next slide.

Examples of approximation algorithms: 0-1 knapsack

$$z = \max\left\{\sum_{j=1}^n c_j x_j : \sum_{j=1}^n a_j x_j \le b,\ x \in \mathbb{B}^n\right\}$$

This can be solved in time proportional to $n^2 c_{\max}$ (with dynamic programming, covered below if time remains).

Idea behind an $\epsilon$-approximation: if $c_1 = 105$, $c_2 = 37$, $c_3 = 85$, the solution is not much different from the solution of the problem with $c_1 = 100$, $c_2 = 30$, $c_3 = 80$; the latter is equivalent to $\hat c_1 = 10$, $\hat c_2 = 3$, $\hat c_3 = 8$.

- Let $c_j$, $j = 1,\dots,n$, be the original coefficients;
- replace the $t$ least significant digits with 0's to obtain $\bar c_j$, $j = 1,\dots,n$;
- set $\hat c_j = \bar c_j / 10^t$, $j = 1,\dots,n$. Note: $c_j - 10^t \le \bar c_j \le c_j$.
- Solve the problem with coefficients $\hat c_j$, $j = 1,\dots,n$.
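A minimal sketch of the tree heuristic (an illustration, not from the slides): `c` is a symmetric cost matrix satisfying the triangle inequality, with nodes numbered from 0.

```python
def tree_heuristic(c):
    """Tree heuristic for metric STSP: build an MST with Prim's algorithm,
    then walk it depth-first and shortcut repeated nodes, so z_H <= 2 z."""
    n = len(c)
    in_tree, adj = {0}, {i: [] for i in range(n)}
    while len(in_tree) < n:
        # Cheapest edge leaving the current tree (Prim's step).
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: c[e[0]][e[1]])
        adj[i].append(j); adj[j].append(i)
        in_tree.add(j)
    # Depth-first traversal order = the walk with repeated nodes skipped.
    tour, stack, seen = [], [0], set()
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            tour.append(v)
            stack.extend(adj[v])
    return tour  # the tour closes by returning from tour[-1] to tour[0]
```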

Quality of approximation for 0-1 knapsack

Let $S$ ($S'$) be the optimal solution to the original (modified) instance. Then
$$\sum_{j \in S} c_j \ge \sum_{j \in S'} c_j \ge \sum_{j \in S'} \bar c_j \ge \sum_{j \in S} \bar c_j \ge \sum_{j \in S} (c_j - 10^t) \ge \sum_{j \in S} c_j - n 10^t,$$
so
$$\frac{z - z^H}{z} = \frac{\sum_{j \in S} c_j - \sum_{j \in S'} c_j}{z} \le \frac{n 10^t}{c_{\max}}$$
(using $z \ge c_{\max}$, which holds since each item fits in the knapsack by itself).

So, to obtain an $\epsilon$-approximation in polynomial (in $n$) time:
- If $c_{\max} < n/\epsilon$, solve the original problem. Time: $n^2 c_{\max} \le n^3/\epsilon$.
- If $c_{\max} \ge n/\epsilon$, find a nonnegative integer $t$ such that $\epsilon/10 < n 10^t / c_{\max} \le \epsilon$, and apply the algorithm to $\hat c_j$, $j = 1,\dots,n$. Note: $\hat c_{\max} = 10^{-t} \bar c_{\max} \le 10^{-t} c_{\max} < 10 n/\epsilon$, so the time is $n^2 \hat c_{\max} \le 10 n^3/\epsilon$. (A sketch of the rounding step appears after the next slide.)

Dynamic Programming

Background: the shortest path problem
- Consider a directed graph $D = (V, A)$ with nonnegative arc costs $c_e$ for $e \in A$.
- Given a starting node $s$, the goal is to find a shortest (directed) path from $s$ to every other node in the graph.

Dynamic programming: a sequential approach
- Think of the problem as a sequence of decisions.
- A stage refers to the number of decisions already made.
- A state contains information about decisions already made.
- An arc in the network represents making the next decision.
- The cost of the arc represents the incremental cost of the decision.
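Here is a sketch of the coefficient-rounding step (an illustration, not from the slides): given $\epsilon$ and an instance with $c_{\max} \ge n/\epsilon$, pick the largest integer $t \ge 0$ with $n\,10^t/c_{\max} \le \epsilon$; then $\epsilon/10 < n\,10^t/c_{\max}$ holds automatically.

```python
def round_coefficients(c, eps):
    """Zero out the t least significant digits of each c_j and divide by
    10^t, with t chosen so that eps/10 < n * 10^t / c_max <= eps.
    Assumes c_max >= n / eps, so t = 0 is always feasible."""
    n, c_max = len(c), max(c)
    t = 0
    while n * 10 ** (t + 1) / c_max <= eps:  # largest t still within eps
        t += 1
    return [cj // 10 ** t for cj in c], t    # c_hat_j = floor(c_j / 10^t)

c_hat, t = round_coefficients([105, 37, 85], eps=0.4)
print(t, c_hat)  # with n = 3, c_max = 105: t = 1, c_hat = [10, 3, 8]
```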

Observations about the shortest path problem

Observation 1: If the shortest path from $s$ to $t$ passes through node $p$, then the subpaths from $s$ to $p$ and from $p$ to $t$ are shortest paths between the respective points.

Observation 2: Let $d(v)$ denote the length of a shortest path from $s$ to $v$. Then $d(v) = \min_{i \in V^-(v)} \{d(i) + c_{iv}\}$, where $V^-(v)$ denotes the set of predecessors of $v$, i.e., nodes $i$ with $(i, v) \in A$.

Observation 3: Given an acyclic graph, the nodes can be ordered so that $i < j$ for every arc $(i, j) \in A$. The shortest paths from node 1 to all other nodes can then be found by applying the above recursion for $v = 2,\dots,|V|$.

Example: TSP viewed as a shortest path problem

Start at node 1, and at each step, or stage, choose which node to go to next. To make the decision at each stage, one needs to know the set $S$ of nodes already visited and the current location $k$; the pair $(S, k)$ is a state. The starting state is $(\{1\}, 1)$. Let $C(S, k)$ be the minimum cost over all paths from 1 to $k$ that visit all nodes in $S$ exactly once. Note: state $(S, k)$ can be reached from any state $(S \setminus \{k\}, m)$ with $m \in S \setminus \{k\}$ at cost $c_{mk}$. Therefore:
$$C(S, k) = \min_{m \in S \setminus \{k\}} \left( C(S \setminus \{k\}, m) + c_{mk} \right),\ k \in S, \qquad C(\{1\}, 1) = 0.$$
The cost of the optimal tour is $\min_k \left( C(\{1,\dots,n\}, k) + c_{k1} \right)$. (A bitmask implementation of this recursion is sketched below.)
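A bitmask implementation of the recursion (an illustration, not from the slides): nodes are numbered from 0, with node 0 playing the role of node 1, and a subset $S$ is encoded as an integer bitmask.

```python
from itertools import combinations

def tsp_dp(c):
    """Held-Karp dynamic program: C[(S, k)] is the minimum cost of a path
    from node 0 to k that visits exactly the nodes in bitmask S."""
    n = len(c)
    C = {(1, 0): 0}  # S = {0} is bitmask 1; C({0}, 0) = 0
    for size in range(2, n + 1):
        for subset in combinations(range(1, n), size - 1):
            S = 1 | sum(1 << k for k in subset)  # node 0 always included
            for k in subset:
                C[(S, k)] = min(C[(S ^ (1 << k), m)] + c[m][k]
                                for m in (0,) + subset
                                if m != k and (S ^ (1 << k), m) in C)
    full = (1 << n) - 1  # all n nodes visited
    return min(C[(full, k)] + c[k][0] for k in range(1, n))

c = [[0, 1, 15, 6], [1, 0, 7, 3], [15, 7, 0, 12], [6, 3, 12, 0]]
print(tsp_dp(c))  # 26, achieved by the tour 0-1-2-3-0
```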

Solving TSP as a dynamic program

Recall: $C(S, k) = \min_{m \in S \setminus \{k\}} (C(S \setminus \{k\}, m) + c_{mk})$, $k \in S$, and $C(\{1\}, 1) = 0$.
- Number of states: $2^n$ choices for $S$ and $O(n)$ choices for $k$, so the total number of states is $O(n 2^n)$.
- Computational effort: each time $C(S, k)$ is computed using the above equation, $O(n)$ arithmetic operations are needed. Hence, a total of $O(n^2 2^n)$ operations are required: exponential, but better than enumerating all $n!$ tours! (Realistically, this can handle at most about 20 nodes.)

Constructing dynamic programming algorithms:
1. View the choice of a feasible solution as a sequence of decisions occurring in stages, such that the total cost is the sum of the costs of the individual decisions.
2. Define the state as a summary of all relevant past decisions.
3. Determine which state transitions are possible. Let the cost of each state transition be the cost of the corresponding decision.
4. Write a recursion on the optimal cost from the origin state to a destination state.

Example: 0-1 knapsack as a dynamic program

$\max\left\{\sum_{j=1}^n c_j x_j : \sum_{j=1}^n a_j x_j \le b,\ x \in \mathbb{B}^n\right\}$, where $a, c \in \mathbb{Z}^n_+$ and $b \in \mathbb{Z}_+$.

Stages: at stage $i = 1,\dots,n$, decide whether to take item $i$.
State: at stage $i$, the state is $(i, u)$: the value accumulated selecting from the first $i$ items. Let $W(i, u)$ be the least possible weight that has to be accumulated in order to reach state $(i, u)$.
State transitions: from state $(i, u)$, one can transition either to state $(i+1, u)$ or to state $(i+1, u + c_{i+1})$.
Cost recursion: let $W(i, u) = \infty$ if it is impossible to accumulate value $u$ with items 1 through $i$. Also, $W(0, 0) = 0$ and $W(0, u) = \infty$ for $u \ne 0$. Then
$$W(i, u) = \min\{W(i-1, u),\ W(i-1, u - c_i) + a_i\}, \quad i = 1,\dots,n.$$

0-1 knapsack as a dynamic program, continued

Finding the optimal solution:
- Let $c_{\max} = \max_{i=1,\dots,n} c_i$. Note that if $u > n c_{\max}$, no state of the form $(i, u)$ is reachable. Hence, at every stage there are at most $n c_{\max}$ states. (This is due to the integrality of the $c_j$'s.)
- Initially, we are in state $(0, 0)$.
- For $i = 1,\dots,n$, compute $W(i, u)$ for all $u \le n c_{\max}$ using the recursion. The total computational time is $O(n^2 c_{\max})$.
- The optimal value is $u^\star = \max\{u : W(n, u) \le b\}$.

Another approach: define $C(i, w)$ as the maximum value that can be obtained using the first $i$ items and accumulating a total weight of $w$. Then $C(i, w) = \max\{C(i-1, w),\ C(i-1, w - a_i) + c_i\}$. Running time: $O(nb)$. (A sketch of this formulation follows.)
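A minimal sketch of the $O(nb)$ weight-indexed formulation (an illustration, not from the slides); for simplicity the table entry is the best value at total weight at most $w$, rather than exactly $w$, a standard equivalent variant. On the instance from the greedy example it recovers the optimum $z = 23$.

```python
def knapsack_dp(c, a, b):
    """0-1 knapsack DP over weights, implementing
    C(i, w) = max{ C(i-1, w), C(i-1, w - a_i) + c_i } in O(nb) time.
    C[w] holds the best value achievable with total weight <= w."""
    C = [0] * (b + 1)
    for ci, ai in zip(c, a):
        # Sweep weights downward so each item is used at most once.
        for w in range(b, ai - 1, -1):
            C[w] = max(C[w], C[w - ai] + ci)
    return C[b]

print(knapsack_dp([12, 8, 17, 11, 6, 2, 2], [4, 3, 7, 5, 3, 2, 3], 9))  # 23
```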