A parallel GRASP for the Steiner problem in graphs using a hybrid local search

A parallel GRASP for the Steiner problem in graphs using a hybrid local search Maurício G. C. Resende Algorithms & Optimization Research Dept. AT&T Labs Research Florham Park, New Jersey mgcr@research.att.com http://www.research.att.com/~mgcr INFORMS Philadelphia Meeting November 1999 Joint work with S. Martins, C. C. Ribeiro, and P. M. Pardalos slide 1

Steiner Problem in Graphs (SPG) Given a graph G = (V,E) with n vertices and m edges, a subset S of the vertices, and edge weights w_1, w_2, ..., w_m. SPG: Find a subgraph of G that is connected, contains all vertices of S, and is of minimum weight. slide 2

Steiner Problem in Graphs Classic combinatorial optimization problem (Hwang, Richards, & Winter, 1992). [Figure: a small example instance, with the terminal nodes from set S highlighted.] slide 3

Steiner Problem in Graphs [Figure: two feasible solutions for the example instance; the second has smaller total weight and is the optimum.] slide 4

Steiner Problem in Graphs: Complexity NP-complete (Karp, 1972). Remains NP-complete for: grid graphs, bipartite graphs, chordal & split graphs. Polynomial-time algorithms exist for special graph classes, e.g. permutation graphs, distance-hereditary graphs, homogeneous graphs. slide 5

Steiner Problem in Graphs: Applications Telecommunications network design, VLSI design, computational biology (phylogenetic trees & DNA codes), reliability. Examples of applications are found in the books by Voss (1990), Hwang, Richards, & Winter (1992), and Du & Pardalos (1993). slide 6

GRASP Feo & Resende (1989, 1995) best_obj = ∞; repeat many times { x = grasp_construction( ); /* bias towards greediness; good, diverse solutions */ x = local_search(x); if ( obj_function(x) < best_obj ) { x* = x; best_obj = obj_function(x); } } slide 7

GRASP construction Repeat until a solution is constructed: For each candidate element, apply a greedy function to the element. Rank all elements according to their greedy function values. Place well-ranked elements in a restricted candidate list (RCL). Select an element from the RCL at random & add it to the solution. slide 8

GRASP local search There is no guarantee that constructed solutions are locally optimal w.r.t. simple neighborhood definitions. It is usually beneficial to apply a local search algorithm to find a locally optimal solution. slide 9

GRASP local search Let N(x) be the set of solutions in the neighborhood of solution x, f(x) be the objective function value of solution x, and x_0 be an initial feasible solution built by the construction procedure. Local search to find a local minimum: while ( there exists y ∈ N(x) with f(y) < f(x) ) { x = y; } slide 10
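
To make the loop above concrete, here is a minimal, generic first-improvement local search in Python (an illustration, not the authors' code); neighbors and f are assumed hooks supplied by the caller.

```python
def local_search(x0, neighbors, f):
    """Generic first-improvement local search for minimization.
    x0        -- initial feasible solution (e.g., from GRASP construction)
    neighbors -- function returning an iterable over N(x)
    f         -- objective function"""
    x = x0
    improved = True
    while improved:
        improved = False
        for y in neighbors(x):
            if f(y) < f(x):          # improving neighbor found: move to it
                x, improved = y, True
                break
    return x                          # locally optimal w.r.t. N
```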

Spanning tree based construction procedure First, we describe the spanning tree based construction procedure. Based on the distance network heuristic (Choukhmane, 1978; Kou et al., 1981; Plesník, 1981; Iwainsky et al., 1986). Uses the distance modification of Mehlhorn (1988). Uses a randomized variant of Kruskal's minimum spanning tree (MST) algorithm (1956). slide 11

Spanning tree based construction procedure Distance network heuristic 1) Construct the distance network D_G(S). 2) Compute an MST of D_G(S). 3) Construct the graph T_D from the MST by replacing each edge of the MST by a shortest path in G (obs: T_D is not necessarily a tree). 4) Consider G_T, the subgraph of G induced by the vertices of T_D. Compute an MST of G_T: T_S. 5) Delete from T_S the non-terminals of degree 1, one at a time. slide 12
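
A compact sketch of the five DNH steps, written here with networkx for brevity (the function name and the use of networkx are assumptions, not the original implementation).

```python
import itertools
import networkx as nx

def distance_network_heuristic(G, S):
    """Sketch of the five DNH steps for a weighted nx.Graph G and a
    terminal set S. Illustration only, not the original implementation."""
    # 1) Distance network D_G(S): complete graph on S weighted by shortest-path lengths.
    D = nx.Graph()
    paths = {}
    for s, t in itertools.combinations(S, 2):
        p = nx.shortest_path(G, s, t, weight="weight")
        paths[(s, t)] = p
        D.add_edge(s, t, weight=sum(G[u][v]["weight"] for u, v in zip(p, p[1:])))
    # 2) MST of the distance network.
    mst_D = nx.minimum_spanning_tree(D, weight="weight")
    # 3) Replace each MST edge by a shortest path in G (the result need not be a tree).
    vertices = set()
    for s, t in mst_D.edges():
        vertices.update(paths.get((s, t)) or paths.get((t, s)))
    # 4) MST of the subgraph of G induced by those vertices.
    T_S = nx.minimum_spanning_tree(G.subgraph(vertices), weight="weight")
    # 5) Repeatedly delete non-terminal leaves (degree-1 vertices).
    leaves = [v for v in T_S if T_S.degree(v) == 1 and v not in S]
    while leaves:
        T_S.remove_nodes_from(leaves)
        leaves = [v for v in T_S if T_S.degree(v) == 1 and v not in S]
    return T_S
```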

Distance network heuristic (DNH) [Figure: worked example showing the input graph G, the distance network D_G(S), the graph T_D obtained by expanding MST edges into shortest paths, and the resulting Steiner tree T_S, which here is optimal.] slide 13

Mehlhorn's modification Mehlhorn adapted the DNH by replacing the metric used to build the distance matrix, improving its complexity. For all s ∈ S, N(s) contains the non-terminal vertices of V that are closer to s than to any other vertex in S. The graph G'(S,E') is defined, where E' = { (s,t) : s,t ∈ S and there exists (u,v) ∈ E with u ∈ N(s), v ∈ N(t) } and w'(s,t) = min { d(s,u) + w(u,v) + d(v,t) : (u,v) ∈ E, u ∈ N(s), v ∈ N(t) }. An MST of G' is also an MST of D_G(S). slide 14
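
The regions N(s) can be computed with a single multi-source shortest-path sweep. Below is a small illustrative sketch (the name voronoi_regions and the adjacency-dict format are assumptions).

```python
import heapq

def voronoi_regions(adj, S):
    """Compute Mehlhorn's regions with one multi-source Dijkstra sweep.
    adj maps each vertex to a list of (neighbor, weight) pairs; S is the
    terminal set. Returns dist[v] = d(v, S) and nearest[v] = the terminal s
    with v in N(s). Illustrative sketch; names are assumptions."""
    dist = {v: float("inf") for v in adj}
    nearest = {}
    heap = []
    for s in S:
        dist[s] = 0.0
        nearest[s] = s
        heapq.heappush(heap, (0.0, s, s))
    while heap:
        d, v, src = heapq.heappop(heap)
        if d > dist[v]:
            continue                          # stale heap entry
        for u, w in adj[v]:
            if d + w < dist[u]:
                dist[u] = d + w
                nearest[u] = src              # u belongs to region N(src)
                heapq.heappush(heap, (d + w, u, src))
    return dist, nearest
```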

Making DNH into a GRASP construction method In Kruskal's algorithm, instead of selecting the least-weight feasible edge: Build a restricted candidate list (RCL) consisting of low-weight edges. Select, at random, an edge from the RCL. RCL = { e : w(e) ≤ w_min + α (w_max − w_min) }, where w_min and w_max are the smallest and largest weights among the candidate edges and 0 ≤ α ≤ 1. slide 15
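
A sketch of the resulting randomized Kruskal step, which in this construction would be applied to Mehlhorn's graph G' (DisjointSet and randomized_kruskal are illustrative names, not the authors' code).

```python
import random

class DisjointSet:
    """Union-find structure used by Kruskal's algorithm."""
    def __init__(self, vertices):
        self.parent = {v: v for v in vertices}
    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]   # path halving
            v = self.parent[v]
        return v
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def randomized_kruskal(vertices, edges, alpha=0.1, rng=random):
    """Kruskal with a restricted candidate list: instead of always taking the
    cheapest feasible edge, pick a random edge whose weight is within
    alpha * (w_max - w_min) of the cheapest. edges is a list of (w, u, v)."""
    dsu = DisjointSet(vertices)
    remaining = sorted(edges)                 # ascending by weight
    tree = []
    while remaining and len(tree) < len(vertices) - 1:
        feasible = [e for e in remaining if dsu.find(e[1]) != dsu.find(e[2])]
        if not feasible:
            break
        w_min, w_max = feasible[0][0], feasible[-1][0]
        cutoff = w_min + alpha * (w_max - w_min)
        w, u, v = rng.choice([e for e in feasible if e[0] <= cutoff])
        dsu.union(u, v)
        tree.append((u, v, w))
        remaining.remove((w, u, v))
    return tree
```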

Local Search Procedures We consider two local search methods: a vertex-based approach and a path-based approach. slide 16

Vertex based local search The neighbors of T_S are all MSTs obtained either by adding to T_S a non-terminal vertex not in T_S, or by deleting from T_S a non-terminal vertex that is in T_S. Weights used in the MST computations are those of the original graph. slide 17

Vertex based local search Given an MST T of a graph G, the computation of a new MST of the graph G + {v} can be done in O(|V|) time (Minoux, 1990). To compute an MST of G − {v}, we use Kruskal's algorithm. slide 18

Vertex based local search LocalSearch(T) { for all v not in T { compute the cost of the MST with v inserted; } if cost(best MST found) < cost(T) { T = best MST found; LocalSearch(T); } slide 19

Vertex based local search else { /* if no improving insertion is possible */ for all v in T { compute the cost of the MST with v deleted; } if cost(best MST found) < cost(T) { T = best MST found; LocalSearch(T); } } } slide 20
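
For illustration, the insertion phase of this neighborhood can be sketched as follows; the sketch rebuilds the MST of the induced subgraph with networkx instead of using the O(|V|) incremental update cited above (all names are assumptions).

```python
import networkx as nx

def tree_weight(T):
    """Total edge weight of a tree stored as an nx.Graph."""
    return T.size(weight="weight")

def vertex_insertion_step(G, S, T):
    """One step of the insertion phase of the vertex-based neighborhood:
    for every non-terminal v outside T, rebuild the MST of the subgraph of G
    induced by the vertices of T plus v, and keep the best improvement.
    Sketch only; the slides use an incremental MST update instead."""
    best_T, best_cost = T, tree_weight(T)
    current = set(T.nodes())
    for v in G.nodes():
        if v in current or v in S:
            continue                       # only non-terminal vertices not in T
        candidate = nx.minimum_spanning_tree(
            G.subgraph(current | {v}), weight="weight")
        cost = tree_weight(candidate)
        if cost < best_cost:
            best_T, best_cost = candidate, cost
    return best_T
```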

Path based local search We use the key path exchange local search of Verhoeven et al. (1996). slide 21

Path based local search We need two definitions. A key vertex is a Steiner vertex with degree at least 3. A key path is a path in a Steiner tree T_S of which all intermediate vertices are Steiner vertices with degree 2 in T_S, and whose end vertices are either terminal or key vertices. A Steiner tree has at most |S| − 2 key vertices and 2|S| − 3 key paths. slide 22
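
The decomposition of a Steiner tree into key paths can be sketched as below (an illustration under the assumption that the tree is an nx.Graph-like object; not the authors' code).

```python
def key_paths(T, S):
    """Decompose a Steiner tree T (an nx.Graph) with terminal set S into key
    paths. Crucial vertices are terminals and key vertices (non-terminals of
    degree >= 3); interior vertices of a key path are degree-2 non-terminals.
    Illustrative sketch only."""
    crucial = {v for v in T if v in S or T.degree(v) >= 3}
    paths, seen = [], set()
    for v in crucial:
        for u in T.neighbors(v):
            if (v, u) in seen:
                continue                      # this key path was already walked
            path, prev = [v, u], v
            while path[-1] not in crucial:    # walk through degree-2 non-terminals
                nxt = [w for w in T.neighbors(path[-1]) if w != prev][0]
                prev = path[-1]
                path.append(nxt)
            seen.add((path[-1], path[-2]))    # block the walk from the other end
            paths.append(path)
    return paths
```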

Path based construction procedure A minimal Steiner tree consists of key paths that are shortest paths between key vertices or terminals. [Figure: a Steiner tree decomposed into key paths l_1, l_2, l_3 (T = { l_1, l_2, l_3 }), with terminal vertices, non-terminal vertices, and key vertices marked.] slide 23

Path based local search Let T = {l_1, l_2, …, l_K} be a Steiner tree, and let C_i and C'_i be the components that result from the removal of l_i from T. The neighborhood is N(T) = { C_i ∪ C'_i ∪ sp(C_i, C'_i) : i = 1, 2, …, K }, where sp(C_i, C'_i) is a shortest path joining the two components. Observe that C_i ∪ C'_i ∪ l_i = T. N(T) contains at most 2|S| − 3 neighbors. slide 24

Neighborhood N(T) [Figure: for T = { l_1, l_2, l_3 }, the three neighbors obtained by removing l_1, l_2, or l_3 and reconnecting the two resulting components with a shortest path sp.] slide 25

Path based local search LocalSearch( T = {l_1, l_2, …, l_K} ) { for ( i = 1, 2, …, K ) { if ( length(sp(C_i, C'_i)) < length(l_i) ) { T = T − { l_i } ∪ sp(C_i, C'_i); if necessary { update T to be a set of key paths; } LocalSearch(T); } } } slide 26

Path based local search Solutions only have neighbors with lower or equal cost. A replacement of a key path in T can lead to the same Steiner tree if no shorter path exists. This implies that a local minimum has no neighbors other than itself. slide 27

Comparing the two local search strategies Use John Beasley's OR-Library. OR-Library series C & D, reduced graphs using the reduction tests of Duin & Volgenant (1989). IBM RISC/6000 390. C implementation, IBM xlc compiler v. 3.1.3 with flags -O3 -qstrict. 512 GRASP iterations, fixed RCL parameter α = 0.1. slide 28

Comparing the two local search strategies

vertex-based local search
Series   # opt    avg. err.   max err.   cpu time
C        18/20    0.17%       2.65%      52.3s
D        14/20    0.6%        2.4%       225.51s

path-based local search
Series   # opt    avg. err.   max err.   cpu time
C        15/20    0.39%       3.13%      27.0s
D        10/20    0.54%       4.47%      114.60s

slide 29

Comparing the two local search strategies Both variants find good solutions. Optimal solutions are found on a large portion of the instances. Average error < 1% off optimal. Worst error < 5% off optimal. The node-based neighborhood produces the best-quality solutions. Computation time of the node-based neighborhood search is twice that of the path-based neighborhood search. slide 30

Hybrid local search repeat { T = GRASP construction; if T is a new Steiner tree (checked by hashing) { T = path-based search(T); if w(T) < λ · best_weight (λ > 1) { T = node-based search(T); } if w(T) < best_weight { save tree: T* = T; best_weight = w(T); } } } slide 31
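
A minimal sketch of this hybrid strategy, assuming construct, path_search, node_search, and weight are supplied hooks and that a tree can be hashed by its edge set (all names are illustrative).

```python
import random

def hybrid_grasp(construct, path_search, node_search, weight,
                 iterations=512, lam=1.01, rng=random):
    """Hybrid GRASP loop: path-based search on every new tree; node-based
    search only when the result is within a factor lam of the incumbent."""
    best_T, best_w = None, float("inf")
    seen = set()
    for _ in range(iterations):
        T = construct(rng)
        key = frozenset(T)                    # hash of the tree's edge set
        if key in seen:
            continue                          # tree seen before: skip local search
        seen.add(key)
        T = path_search(T)
        if weight(T) < lam * best_w:          # promising: run the costlier search
            T = node_search(T)
        if weight(T) < best_w:
            best_T, best_w = T, weight(T)     # new incumbent
    return best_T, best_w
```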

Simple parallelization The most straightforward scheme for parallel GRASP is the distribution of iterations over different processors. Care is required so that two iterations never start off with the same random number generator seed. Run the generator and record all N_g seeds in a seed() array; start iteration i with seed(i). slide 32
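
A small sketch of the seed-array idea (make_seed_array and master_seed are illustrative names).

```python
import random

def make_seed_array(n_iterations, master_seed=0):
    """Pre-generate one seed per GRASP iteration so that no two iterations,
    on any processor, ever start from the same generator state."""
    gen = random.Random(master_seed)
    return [gen.randrange(2**31 - 1) for _ in range(n_iterations)]

# Iteration i, wherever it runs, then uses its own generator:
#   rng = random.Random(seeds[i])
```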

MPI implementation IBM SP with 32 RS6000 390 processors with 256 Mbytes of memory each. We used p = 1, 2, 4, 8, 16 slave processors, each running a total of 512/p GRASP iterations with λ = 1.01. OR-Library test problems: Series C, D, & E. The master passes a block of seeds to each slave and waits for the slave to finish its block; the slave passes back the best solution of its block to the master. slide 33
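
For illustration only, the master/slave exchange could be sketched with mpi4py as below; the original code was written in C with MPI, so the library choice and the run_block hook are assumptions.

```python
# Sketch of the master/slave seed-block scheme using mpi4py (assumed library).
# run_block(seed_block) is an assumed hook that runs one GRASP iteration per
# seed and returns (best_weight, best_tree) for its block.
from mpi4py import MPI

def parallel_grasp(run_block, seeds):
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    n_slaves = size - 1                       # rank 0 is the master
    if rank == 0:
        blocks = [seeds[i::n_slaves] for i in range(n_slaves)]
        for slave in range(1, size):
            comm.send(blocks[slave - 1], dest=slave, tag=1)   # hand out seeds
        results = [comm.recv(source=slave, tag=2) for slave in range(1, size)]
        return min(results, key=lambda r: r[0])               # best over all blocks
    else:
        block = comm.recv(source=0, tag=1)
        comm.send(run_block(block), dest=0, tag=2)            # report block's best
        return None
```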

Computational results: hybrid local search Series C: 500 nodes, 625 to 12500 edges, and 5 to 250 terminal nodes. Series D: 1000 nodes, 1250 to 25000 edges, and 5 to 500 terminal nodes. Series E: 2500 nodes, 3125 to 62500 edges, and 5 to 1250 terminal nodes.

Series   # opt    avg. err.   max err.
C        17/20    0.3%        3.03%
D        16/20    0.18%       2.19%
E        13/20    0.6%        3.09%

slide 34

Computational results: hybrid local search Series C: all three sub-optimal solutions found were off from the optimal by only one. Series D: two of the four sub-optimal solutions found were off from the optimal by only one. Series E: of the seven sub-optimal solutions found, six were less than 1% from optimal. slide 35

State of the art Parallel GRASP found 46 of the 60 optimal solutions. Two state-of-the-art tabu search implementations found: 42 (Ribeiro & Souza, to appear in Networks) and 44 (Bastos & Ribeiro, MIC'99). slide 36

Elapsed time [Figure: elapsed time in seconds (log scale) versus number of processors, for Series C, D, and E.] slide 37

Speedup [Figure: speedup versus number of processors, for Series C, D, and E.] slide 38

Computational results: hybrid local search For some instances, the speedup was almost linear. For some others, parallelization did not contribute much (because the memory structures used made the sequential algorithm very fast). The main contribution of the parallel implementation was on the notably difficult instances. With more than 16 processors, it appears that more speedup is possible. slide 39