
European Journal of Operational Research 202 (2010) 401-411
Contents lists available at ScienceDirect
European Journal of Operational Research
journal homepage: www.elsevier.com/locate/ejor

Production, Manufacturing and Logistics

An adaptive memory methodology for the vehicle routing problem with simultaneous pick-ups and deliveries

Emmanouil E. Zachariadis a,*, Christos D. Tarantilis b, Chris T. Kiranoudis a

a Department of Process Analysis and Plant Design, School of Chemical Engineering, National Technical University of Athens, 15780 Athens, Greece
b Department of Management Science and Technology, Athens University of Economics and Business, Greece
* Corresponding author. E-mail addresses: ezach@mail.ntua.gr (E.E. Zachariadis), tarantil@aueb.gr (C.D. Tarantilis), kyr@chemeng.ntua.gr (C.T. Kiranoudis).

Article history: Received 18 July 2008; Accepted 12 May 2009; Available online 18 May 2009.
Keywords: Vehicle routing; Simultaneous pick-ups and deliveries; Adaptive memory.

Abstract: This paper deals with a routing problem variant in which customers simultaneously require delivery and pick-up services. The examined problem is referred to as the Vehicle Routing Problem with Simultaneous Pick-ups and Deliveries (VRPSPD). The VRPSPD is an NP-hard combinatorial optimization problem, and practical large-scale instances cannot be solved by exact solution methodologies within acceptable computational times. Our interest was therefore focused on metaheuristic solution approaches. Specifically, we introduce an Adaptive Memory (AM) algorithmic framework which collects and combines promising solution features to generate high-quality solutions. The proposed strategy employs an innovative memory mechanism to systematically maximize the amount of routing information extracted from the AM, in order to drive the search towards diverse regions of the solution space. Our metaheuristic development was tested on numerous VRPSPD instances involving from 50 to 400 customers. It proved to be effective and efficient, as it produced high-quality solutions while requiring limited computational effort. Furthermore, it managed to produce several new best solutions. © 2009 Elsevier B.V. All rights reserved.
0377-2217/$ - see front matter © 2009 Elsevier B.V. All rights reserved. doi:10.1016/j.ejor.2009.05.015

1. Introduction

This article examines a variant of the Vehicle Routing Problem (VRP) in which customers require both delivery and pick-up services. The examined problem is known as the VRP with Simultaneous Pick-ups and Deliveries (VRPSPD), and arises in practical transportation operations that involve bi-directional flows of products. Briefly, the VRPSPD model is defined on a directed graph G = (V, A), where V = {v_0, v_1, ..., v_n} is the vertex set and A = {(v_i, v_j) : v_i, v_j ∈ V, v_i ≠ v_j} is the arc set. Vertex v_0 represents the central depot, which is the base of a homogeneous fleet of vehicles, each having a maximum carrying load Q. The remaining n vertices of V ({v_1, v_2, ..., v_n}) correspond to the customer set. With every customer v_i is associated a pair of non-negative product quantities, namely the delivery quantity d_i and the pick-up quantity p_i. Moving along an arc (v_i, v_j) of A incurs a travel cost c_ij. The goal of the VRPSPD is to determine a set of routes that satisfies the following requirements: (a) every route originates and terminates at the central depot v_0, (b) each customer is serviced exactly once by a single route, (c) all customer delivery and pick-up demands are fully satisfied, (d) at no point of any route may the transported quantity of goods exceed the vehicle capacity Q, and (e) the total cost of the route set is minimized. For integer programming formulations of the VRPSPD, the interested reader is referred to the works of Dethloff (2001) and Tang Montané and Galvão (2006).
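To make requirements (d) and (e) concrete, here is a minimal Python sketch for a single route, under our own assumptions about the data layout (a route is a list of customer indices, `cost` is a matrix indexed by vertex, and index 0 is the depot); it is an illustration, not code from the paper.

```python
from itertools import pairwise  # Python 3.10+

def route_cost(route, cost):
    """Ingredient of requirement (e): travel cost of one depot-to-depot route."""
    tour = [0] + route + [0]                        # 0 denotes the depot v_0
    return sum(cost[i][j] for i, j in pairwise(tour))

def route_is_feasible(route, delivery, pickup, Q):
    """Requirement (d): the on-board load never exceeds the capacity Q."""
    load = sum(delivery[c] for c in route)          # goods loaded at the depot
    if load > Q:
        return False
    for c in route:
        load += pickup[c] - delivery[c]             # drop d_c, collect p_c
        if load > Q:
            return False
    return True
```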
Our research on the VRPSPD is motivated both by the growing commercial importance of reverse logistics and by the high computational complexity of routing problem variants. In terms of commercial importance, there are numerous practical applications which require bi-directional transportation of products. In the grocery industry, for example, goods must flow to stores, while at the same time outdated products are collected and sent back to the production sites to be appropriately processed. Furthermore, pro-environmental practices such as the recycling of empty packaging and other reusable materials or equipment are another central factor creating the need for reverse product flows. From the theoretical point of view, the VRPSPD is an NP-hard combinatorial optimization problem, as it generalizes the standard version of the VRP. Large-scale VRPSPD instances encountered in real-life business activities cannot be efficiently tackled by exact solution approaches. Therefore, our interest was focused on metaheuristic methodologies, which are able to produce high-quality solutions within reasonable computing times even for large-scale practical instances.

The purpose of this paper is to present an Adaptive Memory (AM) method for dealing with the VRPSPD. The proposed AM collects high-quality solution characteristics obtained through the search process. These characteristics are combined to produce new solutions that are subsequently improved by a Tabu Search (TS) (Glover, 1986) method, accelerated by using the granularity concept introduced by Toth and Vigo (2003).

Our methodology's central aim is to systematically exploit wide portions of the information stored in the AM. Towards this aim, we employ an innovative memory component for keeping track of the way in which the AM contributes to the exploration of the search space. In terms of the challenging VRPSPD feasibility constraints, we present metrics which capture the load fluctuation of vehicles along their routes. These metrics allow checking the feasibility of tentative local search moves in constant time. To assess the effectiveness of the proposed solution approach, we tested it on three VRPSPD data sets of diverse characteristics and scales. It produced fine results, improving several best-known solutions.

The rest of the article is outlined as follows: Section 2 provides a brief literature review of VRPSPD solution methodologies, followed by a detailed description of the proposed metaheuristic solution approach in Section 3. The computational results of our method are discussed in Section 4. Finally, Section 5 concludes the paper.

2. Literature review

The first work addressing the VRPSPD was published by Min (1989). It considered a practical problem with 22 customers and 2 vehicles based at a central depot. The proposed solution approach starts by clustering the customer population into two disjoint sets and then, for each set, solving the corresponding Travelling Salesman Problem (TSP). Feasibility is guaranteed by penalizing any infeasible arcs present in the generated solution and resolving the TSP. Dethloff (2001) employs a polynomial construction heuristic for the VRPSPD which makes use of various insertion criteria. Nagy and Salhi (2005) also propose an insertion-based algorithm for solving the corresponding VRP, treating pick-ups and deliveries in an integrated manner. Capacity infeasibilities are eliminated by applying heuristic routines taken from VRP methodologies and specially adapted to the problem examined. The presented algorithm is also capable of dealing with instances where multiple depots are considered. Crispim and Brandão (2005) present a hybrid metaheuristic local search strategy based on TS and Variable Neighbourhood Descent (VND). Another hybrid metaheuristic is proposed by Chen and Wu (2006). Specifically, their work proposes an insertion-based procedure for obtaining initial solutions, which are later improved by a hybridization of TS and record-to-record travel. Tang Montané and Galvão (2006) propose a tabu search procedure which explores four different neighbourhood structures. The balance between intensification and diversification of the search is controlled by a frequency penalization scheme. More recent works on the VRPSPD have been published by Bianchessi and Righini (2007), Wassan et al. (2007), and Zachariadis et al. (2009). Bianchessi and Righini (2007) present and compare the performance of various constructive, local search and TS algorithms designed for the VRPSPD, whereas Wassan et al. (2007) present a reactive TS framework which uses the general 1-1 and 1-0 exchange operators together with a problem-specific move which reverses complete routes.
The proposed dynamic control of the tabu list size achieves an effective balance between the intensification and diversification of the conducted search, as seen from the high-quality results obtained. Zachariadis et al. (2009) propose a hybrid metaheuristic approach based on TS controlled by a guiding mechanism for diversifying the conducted search. Finally, the most recent articles on the VRPSPD have been published by Gajpal and Abad (2009) and Ai and Kachitvichyanukul (2009). The former work proposes an Ant Colony System methodology which employs a construction rule as well as two multi-route local search schemes, whereas the latter paper presents a Particle Swarm Optimization (PSO) algorithm. A practical VRPSPD variant has been tackled by Privé et al. (2006), who considered a problem arising from the distribution of soft drinks and the collection of recyclable containers. It involves heterogeneous vehicles, time windows, and capacity and volume constraints. The objective function of the examined distribution model combines routing costs and the revenue resulting from the sale of recyclable material. To solve this real-life problem, the authors propose three construction heuristics and an improvement method.

3. The proposed algorithm

The proposed solution framework is an evolutionary procedure based on the AM rationale introduced by Rochat and Taillard (1995) for solving the VRP. As the search evolves, the AM collects promising solution components which are periodically combined to generate new solutions. In their work, Rochat and Taillard (1995) build new solutions by extracting routes contained in the AM and applying an improvement heuristic procedure. On the other hand, as in the case of the highly effective BoneRoute (Tarantilis and Kiranoudis, 2002) and SEPAS (Tarantilis, 2005), our algorithm generates new solutions by extracting promising sequences of nodes (bones) present in the AM. However, to exploit a greater amount of the information held in the AM, the proposed algorithm has two key differences compared to the two former works:

- The AM itself keeps track of how its solution components have contributed to the generation of new solutions. This is accomplished by an additional memory mechanism which records the extraction frequency of each bone in the AM.
- The length of the bones (i.e. the number of nodes contained in a bone) extracted from the AM is not fixed.

These methodological innovations are aimed at injecting sufficient diversification into the conducted search. As will be thoroughly explained in the following, this is achieved by the proposed additional memory mechanism, which records the AM contribution to the overall search and has a dual role: firstly, it induces diversification in a much more systematic way than the approach of Rochat and Taillard (1995), which probabilistically extracts solution components from the AM to generate new partial solutions. Furthermore, the use of this memory component eliminates the risk of the elitist behaviour which could otherwise result from the deterministic criterion employed by our algorithm for extracting the bones that form new solutions. In the following, we provide a brief description of the overall framework, followed by an analytic description of every algorithmic component.

3.1. Overall structure of the proposed algorithm

The AM we employ consists of a route pool that does not include any duplicate entries. To fill this pool, the AM initialization phase (Phase I) is initiated.
After the pool is filled, the AM exploitation step (Phase II) begins. Phase II is a cyclic procedure. During each cycle, solution components are extracted and combined to create an initial feasible VRPSPD solution, which is then improved by a TS process. Our TS method, which is the most time-demanding component of the overall algorithm, is accelerated using the concept of granularity proposed by Toth and Vigo (2003). The solution obtained by the TS method is used to update the AM. The proposed algorithm terminates after the completion of mcycles cycles of Phase II, returning the best solution encountered through the search process.
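As a reading aid, the sketch below outlines this two-phase flow in Python. The four callables stand in for the building blocks named in Fig. 1 (savings construction, tabu search, bone extraction, memory update); they and the solution object with `.routes` and `.cost` attributes are assumptions of ours, not the authors' code.

```python
def adaptive_memory_search(construct, improve, extract, update, msize, mcycles):
    """Schematic Phase I / Phase II loop of the proposed framework (sketch)."""
    # Phase I: fill the route pool from constructed-and-improved solutions.
    best = improve(construct())
    pool, pool_size = [], msize * len(best.routes)
    while len(pool) < pool_size:
        sol = improve(construct())
        update(pool, sol, pool_size)
        best = min(best, sol, key=lambda s: s.cost)

    # Phase II: extract bones into a seed solution, improve it, feed it back.
    for _ in range(mcycles):
        sol = improve(extract(pool))
        update(pool, sol, pool_size)
        best = min(best, sol, key=lambda s: s.cost)
    return best
```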

The overall algorithmic framework is presented in Fig. 1.

[Fig. 1. Schematic representation of the overall algorithmic framework: Phase I (adaptive memory initialization) chains the Paessens construction algorithm with tabu search to fill the adaptive memory; Phase II (adaptive memory exploitation) cycles through the bone extractor, tabu search and the adaptive memory updater, and returns the final solution.]

3.2. Phase I - Route pool initialization

As mentioned earlier, the AM used consists of a pool of routes which does not contain any duplicate entries. Let PoolSize denote the total number of routes in the pool. To construct these PoolSize routes, we iteratively employ a two-step method:

Step 1 - Generation of feasible VRPSPD initial solutions. To obtain an initial feasible VRPSPD solution, we use the weighted savings heuristic proposed by Paessens (1988). The savings function used is:

s(v_i, v_j) = c_i0 + c_0j - g·c_ij + f·|c_i0 - c_0j|,

where the f and g parameter values are uniformly distributed within [0, 1] and (0, 3], respectively, as proposed by Paessens (1988). The stochastic setting of f and g plays a significant role in the pool initialization phase, as it drives the pool to contain routes of diverse characteristics, later to be exploited by the improvement phase. Note that customer insertion points are taken into consideration only if they lead to routes that respect the capacity constraints of the problem.
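The following sketch shows how such a randomized saving can be computed; the function names and cost-matrix layout are our own assumptions, and the surrounding route-merging logic of the savings heuristic is omitted.

```python
import random

def draw_savings_weights(rng=random):
    """One (f, g) pair per constructed solution: f ~ U[0, 1], g ~ U(0, 3]
    (the open endpoint of g is immaterial for this sketch)."""
    return rng.uniform(0.0, 1.0), rng.uniform(0.0, 3.0)

def paessens_saving(c, i, j, f, g):
    """Weighted saving s(v_i, v_j) = c_i0 + c_0j - g*c_ij + f*|c_i0 - c_0j|
    obtained by serving j directly after i instead of via the depot (index 0)."""
    return c[i][0] + c[0][j] - g * c[i][j] + f * abs(c[i][0] - c[0][j])
```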
Step 2 - Improvement of the initial solutions using tabu search. The solution produced by the construction heuristic is improved by an iterative TS method. At each iteration, one of the classical quadratic (O(n^2)) 1-0 exchange, 1-1 exchange (Waters, 1987) and 2-opt (Croes, 1958) neighbourhood structures is stochastically selected. All three neighbourhood structures share the same probability of selection. The solution is then transformed by applying the best move defined by the selected neighbourhood structure, provided this move does not lead to any capacity constraint violation. The deterministic criterion of moving towards new solutions within the search space causes cycling phenomena to occur (i.e. continuously revisiting the same solutions). To avoid such situations, when a move is performed, its reversal is declared tabu for a number of iterations equal to tabuten. Moves declared tabu are not considered when solution neighbourhoods are explored, unless they improve the quality of the best solution generated so far (aspiration criterion). Our TS implementation terminates after the completion of mni non-improving iterations, returning the highest-quality solution obtained through the search process.

To accelerate the search process, we reduce the size of the neighbourhoods explored, or in other words the number of candidate moves evaluated per iteration, by using the concept of granularity introduced by Toth and Vigo (2003). To do so, we first employ the Clarke and Wright (1964) savings heuristic, specially modified to satisfy the capacity requirements of the VRPSPD model. The granularity threshold θ is then evaluated as:

θ = (λ·z) / (n + K),

where z is the objective function value of the solution obtained by this VRPSPD-adapted Clarke and Wright (1964) heuristic, K is the number of routes contained in that solution, and λ is the sparsification parameter, set to 2.5 as proposed by Toth and Vigo (2003). Let A' denote the subset of the arc set A defined as:

A' = {(v_i, v_j) ∈ A : c_ij ≤ θ ∨ i = 0 ∨ j = 0}.

The moves considered during neighbourhood exploration are only those leading to the generation of arcs contained in A'. In other words, the arcs of A' serve as direct move generators, so that for the considered 1-0 exchange, 1-1 exchange and 2-opt move types, neighbourhood evaluations require O(|A'|) instead of O(n^2) (Toth and Vigo, 2003).
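A minimal sketch of this sparsification step, under our own naming: given the cost matrix, the Clarke-Wright solution value z and its route count K, it computes θ and the reduced arc set A'.

```python
def granular_arcs(c, n, z_cw, K, lam=2.5):
    """Granularity sketch: theta = lam*z/(n+K); keep cheap arcs and all depot arcs.
    c is an (n+1)x(n+1) cost matrix with the depot at index 0."""
    theta = lam * z_cw / (n + K)
    arcs = [(i, j)
            for i in range(n + 1)
            for j in range(n + 1)
            if i != j and (i == 0 or j == 0 or c[i][j] <= theta)]
    return theta, arcs
```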

As mentioned earlier, the two-step procedure for initializing the route pool is applied iteratively until PoolSize different routes have been produced. The number of routes contained in the pool is determined using the following rationale: let NR denote the number of routes contained in the first solution produced. The size of the pool is then given by PoolSize = msize·NR. When these PoolSize unique routes have been generated, they are tagged with the cost of the solution to which they belong, and they are stored in the pool in increasing cost-tag order.

3.3. Phase II - Adaptive memory exploitation

After the pool is filled with PoolSize unique feasible routes, the AM exploitation phase commences. It consists of three stages interacting with the AM, as shown in Fig. 1. In the following, we provide analytic descriptions of these three stages.

Stage I - Bone extractor. Our AM is composed of routes, each of them consisting of bones (vertex sequences) of variable length. For every bone b in the AM, let len_b, freq_b and sz_b denote its length (number of vertices), its frequency (number of occurrences of b in the route pool), and the cost tag of the best solution that contains b, respectively. In addition, let textr_b denote the total number of times that bone b has been extracted from the AM. This metric plays a central role in the bone extraction phase, as it keeps record of how much each bone has contributed to guiding the exploration of the solution space. In this way, it helps the overall algorithm to exploit large parts of the information present in the AM and diversifies the conducted search. Finally, let MaxLen denote the length of the longest bone contained in the AM, or in other words the number of vertices visited by the longest route.

The bone extractor is initiated by stochastically selecting a value for the maximum length (MaxBoneLen) of the candidate bones. MaxBoneLen is uniformly distributed within [MaxLen/2, MaxLen] and rounded to the next integer value. To be candidates for extraction, bones must satisfy two preconditions: (a) their length must not exceed MaxBoneLen, and (b) they must not contain any customer present in any bone already extracted. The bone extracted is the candidate maximizing

U1(b) = len_b / (1 + textr_b).

Ties are broken by maximizing

U2(b) = freq_b / (1 + textr_b),

while the final criterion for breaking any further ties is the minimization of

U3(b) = sz_b / (1 + textr_b).

Using this combination of utility functions, together with the variable MaxBoneLen bound and the memory role of the textr_b metric, eliminates the risk of repeatedly selecting a small subset of the bones present in the AM, which could result in an overall elitist behaviour of the proposed algorithm. Bones are extracted iteratively, using the mechanism described above. Stage I terminates once every customer is contained in the set of extracted bones. The resulting bone set can be directly transformed into a set of feasible VRPSPD routes (by inserting the depot vertex at the start or the end of each bone, if it is not already there), which in turn constitutes a feasible VRPSPD solution. This is because: (a) every customer is visited exactly once, (b) the capacity constraint is not violated at any point, since the routes are partial segments of other feasible VRPSPD routes, and (c) the size of the vehicle fleet is treated as a decision variable, and therefore no limitation is imposed on the total number of routes.

To better describe the bone extractor stage, Fig. 2 presents the generation of a feasible VRPSPD solution for a VRPSPD instance with 9 customers and a depot. For convenience of presentation, consider that Fig. 2 demonstrates the first call to the bone extraction method, so that textr_b is 0 for every bone b, MaxBoneLen is equal to 4, and the routes of the AM are presented in increasing cost-tag order. The first bone to be extracted is v_0 v_2 v_4 v_6, which is the longest bone, with length equal to 4 and frequency equal to 2. No other bone with 4 vertices can be selected, as all of them contain customers already included in the extracted bone. Moving to the 3-vertex bones, the first one extracted is v_9 v_1 v_0, with frequency equal to 2, followed by v_0 v_5 v_8, which appears once in the route pool. Then, the two-vertex bone v_7 v_0 is extracted, as it belongs to the route with the lowest cost tag. The last bone used to create a feasible VRPSPD solution is the single-vertex bone v_3, which appears once in the AM. After these five bones are extracted, the depot vertex is inserted wherever necessary so that a complete feasible VRPSPD solution is formed.

[Fig. 2. Schematic representation of bone extraction: the route pool with its cost tags (900 to 1300) on the left, and the bones extracted to form the new VRPSPD solution on the right.]
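A small Python sketch of this selection rule, using a bone record of our own design; the lexicographic comparison implements U1, then U2, then (minimized) U3.

```python
from dataclasses import dataclass

@dataclass
class Bone:
    nodes: tuple          # customer sequence (depot excluded)
    freq: int             # freq_b: occurrences in the route pool
    sz: float             # sz_b: cost tag of the best solution containing the bone
    textr: int = 0        # textr_b: times the bone has been extracted so far

    @property
    def length(self):     # len_b
        return len(self.nodes)

def candidate_bones(bones, covered, max_bone_len):
    """Preconditions (a) and (b): short enough, and disjoint from extracted customers."""
    return [b for b in bones
            if b.length <= max_bone_len and not (set(b.nodes) & covered)]

def pick_bone(candidates):
    """Maximize U1 = len/(1+textr); break ties by max U2 = freq/(1+textr),
    then by min U3 = sz/(1+textr)."""
    def key(b):
        damp = 1.0 + b.textr
        return (b.length / damp, b.freq / damp, -b.sz / damp)
    return max(candidates, key=key)
```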
Stage II - Improvement of the extracted solution. The solution generated by the bone extractor stage contains promising vertex sequences that belong to good routes stored in the AM. To exploit these solution components and obtain high-quality solutions, we use the TS method described in the second step of Phase I.

Stage III - Adaptive memory updater. As mentioned earlier, the AM contains unique routes sorted in increasing order of their corresponding solution cost. When a new solution sol is generated by the TS method of Stage II, the AM is updated via the following mechanism: for every route rt of sol, we check whether it is already contained in the route pool. If rt is already present in the AM, we check its solution cost tag. If this cost tag is greater than the cost of the new solution, z_sol, it is set equal to z_sol, and the route is repositioned in the AM accordingly. If rt is not present in the AM and z_sol is lower than the cost tag of the last route of the pool, rt replaces this last route by being inserted in the appropriate position. In this way, the pool has constant size and always includes every route that belongs to the highest-quality solutions generated through the search.
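The update rule can be written down compactly; the sketch below uses a list of (cost tag, route) pairs of our own design and is only meant to mirror the mechanism described above, not to reproduce the authors' implementation.

```python
def update_pool(pool, sol_routes, z_sol):
    """Stage III sketch. `pool` is a constant-length list of (cost_tag, route_tuple)
    pairs, kept sorted by cost tag and free of duplicate routes."""
    for route in map(tuple, sol_routes):
        idx = next((i for i, (_, r) in enumerate(pool) if r == route), None)
        if idx is not None:
            if pool[idx][0] > z_sol:           # known route found in a better solution
                pool[idx] = (z_sol, route)
        elif z_sol < pool[-1][0]:              # new route displaces the worst cost tag
            pool[-1] = (z_sol, route)
        pool.sort(key=lambda entry: entry[0])  # restore increasing cost-tag order
```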

3.4. Feasibility issues

The TS method of the proposed framework applies three local search operators, namely the 1-0 exchange, 1-1 exchange and 2-opt move types. The cardinality of the tentative moves defined by these operators is O(n^2), whereas the cost evaluation for each tentative move requires constant time, O(1). For the classical VRP, feasibility investigation also requires constant time, so that the considered neighbourhood structures can be exhaustively examined in O(n^2). For the VRPSPD, however, the load of vehicles fluctuates along the route, causing the feasibility checking process to require additional complexity if it is not efficiently designed (Gendreau et al., 1999; Gribkovskaia et al., 2007). To speed up the feasibility checking process, we use the following approach: let x be a point of the vector encoding a given route RT, and let Z_RT denote the number of customers serviced by RT. Obviously, x varies from 0 to Z_RT (x = 0, 1, 2, ..., Z_RT), corresponding to the depot (x = 0) and the Z_RT customer visits. In addition, let d_RT(x) and p_RT(x) denote the delivery and pick-up quantity, respectively, of the vertex lying at point x of route RT. To model the vehicle load fluctuations along each route RT, we introduce the following metrics:

SPB_RT(x) = Σ_{q=0,...,x-1} p_RT(q), x = 0, 1, ..., Z_RT (sum of the pick-up quantities of the route RT vertices lying before point x).

SDA_RT(x) = Σ_{q=x+1,...,Z_RT} d_RT(q), x = 0, 1, ..., Z_RT (sum of the delivery quantities of the route RT vertices lying after point x).

L_RT(x) = SPB_RT(x) + p_RT(x) + SDA_RT(x), x = 0, 1, ..., Z_RT (load of the vehicle when it leaves the vertex at point x of route RT).

MAX_LB_RT(x) = max_{q=0,1,...,x} {L_RT(q)}, x = 0, 1, ..., Z_RT (maximum load of the RT segment beginning at the depot and terminating at point x + 1 when x < Z_RT, or at the depot when x = Z_RT).

MAX_LA_RT(x) = max_{q=x,x+1,...,Z_RT} {L_RT(q)}, x = 0, 1, ..., Z_RT (maximum load of the RT segment beginning at point x and terminating at the depot).

MAXIM_LA_RT(x, y) = max_{q=x,x+1,...,x+y} {L_RT(q)}, x = 0, 1, ..., Z_RT, y = 0, 1, ..., Z_RT - x (maximum load of the RT segment beginning at point x and terminating at point x + y + 1 when x + y < Z_RT, or at the depot when x + y = Z_RT).

MINIM_LA_RT(x, y) = min_{q=x,x+1,...,x+y} {L_RT(q)}, x = 0, 1, ..., Z_RT, y = 0, 1, ..., Z_RT - x (minimum load of the RT segment beginning at point x and terminating at point x + y + 1 when x + y < Z_RT, or at the depot when x + y = Z_RT).

All seven demand metrics are stored in array structures which offer constant-time retrieval. They are updated every time a local search move modifies the characteristics of a route, at the expense of O(Z^2) complexity, where Z is the number of customers assigned to the modified route. With the use of these demand metrics, feasibility investigation can be performed in constant time, O(1), for all three local search operators, as presented in detail below.
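As an illustration, the one-dimensional metrics can be tabulated for a route in a single pass, as in the Python sketch below (data layout and names are ours). The two-dimensional MAXIM_LA and MINIM_LA tables, which the paper also stores, are simply running maxima/minima of L over the points x, ..., x + y and are omitted here for brevity.

```python
def load_profiles(route, delivery, pickup):
    """Per-route load bookkeeping for O(1) feasibility tests (sketch).
    Point 0 is the depot; points 1..Z are the customer visits of `route`."""
    Z = len(route)
    d = [0] + [delivery[v] for v in route]   # d_RT(x)
    p = [0] + [pickup[v] for v in route]     # p_RT(x)

    SPB = [0] * (Z + 1)                      # pick-ups collected strictly before point x
    for x in range(1, Z + 1):
        SPB[x] = SPB[x - 1] + p[x - 1]
    SDA = [0] * (Z + 1)                      # deliveries still due strictly after point x
    for x in range(Z - 1, -1, -1):
        SDA[x] = SDA[x + 1] + d[x + 1]
    L = [SPB[x] + p[x] + SDA[x] for x in range(Z + 1)]   # load when leaving point x

    MAX_LB = list(L)                         # max load over points 0..x
    for x in range(1, Z + 1):
        MAX_LB[x] = max(MAX_LB[x - 1], L[x])
    MAX_LA = list(L)                         # max load over points x..Z
    for x in range(Z - 1, -1, -1):
        MAX_LA[x] = max(MAX_LA[x + 1], L[x])
    return {"SPB": SPB, "SDA": SDA, "L": L, "MAX_LB": MAX_LB, "MAX_LA": MAX_LA}
```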
3.4.1. Feasibility checking for the 1-0 exchange move

3.4.1.1. Intra-route 1-0 exchange. Let RT denote the route involved in the move, k denote the RT point from which the relocated customer is removed, and l denote the RT point after which the customer is re-inserted. If k < l (the relocated customer moves forward), the relocation is feasible if:

MAXIM_LA_RT(k + 1, l - k - 1) + d_RT(k) - p_RT(k) ≤ Q.

If k > l (the relocated customer moves backwards), the move is feasible if:

MAXIM_LA_RT(l, k - l - 2) - d_RT(k) + p_RT(k) ≤ Q.

3.4.1.2. Inter-route 1-0 exchange. Assume that the relocated customer is removed from point k of route RT1 and is inserted after point l of route RT2. The move is feasible if:

MAX_LB_RT2(l) + d_RT1(k) ≤ Q and MAX_LA_RT2(l) + p_RT1(k) ≤ Q.

(Only the insertion route needs to be checked, since removing a customer from a feasible route can only decrease the loads along it.)

3.4.2. Feasibility checking for the 1-1 exchange move

3.4.2.1. Intra-route 1-1 exchange. Assume that the move swaps the positions of the vertices lying at points k and l of route RT, with k < l. Then the intra-route 1-1 exchange is feasible if:

MAXIM_LA_RT(k, l - k - 1) + d_RT(k) - p_RT(k) - d_RT(l) + p_RT(l) ≤ Q.

3.4.2.2. Inter-route 1-1 exchange. Assume that the move exchanges the positions of the customers lying at point k of route RT1 and point l of route RT2, respectively. Then, for this 1-1 exchange move to be feasible, the following six conditions must hold:

MAX_LB_RT1(k - 1) + d_RT2(l) - d_RT1(k) ≤ Q,
L_RT1(k - 1) - d_RT1(k) + p_RT2(l) ≤ Q,
MAX_LA_RT1(k + 1) + p_RT2(l) - p_RT1(k) ≤ Q (applicable when k < Z_RT1),
MAX_LB_RT2(l - 1) + d_RT1(k) - d_RT2(l) ≤ Q,
L_RT2(l - 1) - d_RT2(l) + p_RT1(k) ≤ Q,
MAX_LA_RT2(l + 1) + p_RT1(k) - p_RT2(l) ≤ Q (applicable when l < Z_RT2).

3.4.3. Feasibility checking for the 2-opt exchange move

3.4.3.1. Intra-route 2-opt. Assume that the intra-route 2-opt operator connects the customers lying at points k and l (k < l) of route RT by reversing the route path beginning at point k + 1 and terminating at point l. Then the examined intra-route 2-opt move is feasible if:

L_RT(k) + L_RT(l) - MINIM_LA_RT(k, l - k - 1) ≤ Q.

3.4.3.2. Inter-route 2-opt. Assume that the inter-route 2-opt move connects the route RT1 segment beginning at the depot and terminating at point k to the route RT2 segment beginning at point l + 1 and terminating at the depot, and the route RT2 segment beginning at the depot and terminating at point l to the route RT1 segment beginning at point k + 1 and terminating at the depot. The move is feasible if the following four conditions hold:

MAX_LB_RT1(k) - SDA_RT1(k) + SDA_RT2(l) ≤ Q,
MAX_LA_RT2(l + 1) - SPB_RT2(l) - p_RT2(l) + SPB_RT1(k) + p_RT1(k) ≤ Q (applicable when l < Z_RT2),
MAX_LB_RT2(l) - SDA_RT2(l) + SDA_RT1(k) ≤ Q,
MAX_LA_RT1(k + 1) - SPB_RT1(k) - p_RT1(k) + SPB_RT2(l) + p_RT2(l) ≤ Q (applicable when k < Z_RT1).
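Tying this to the arrays from the previous sketch: the inter-route relocation test of Section 3.4.1.2, for instance, reduces to two comparisons. This is a sketch under the same assumptions as before, not the authors' code.

```python
def inter_route_relocation_feasible(metrics2, l, d_k, p_k, Q):
    """O(1) test for removing a customer with demands (d_k, p_k) from one route
    and inserting it after point l of a second route (Section 3.4.1.2).
    `metrics2` is the dict returned by load_profiles() for the target route."""
    return (metrics2["MAX_LB"][l] + d_k <= Q and   # prefix of the target route also carries d_k
            metrics2["MAX_LA"][l] + p_k <= Q)      # point l onwards also carries p_k
```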

3.5. The dual adaptive memory representation

The bone extraction operation may be significantly time consuming (or even practically impossible for large problem instances) if the AM is not effectively designed. To keep the computations required by the bone extractor stage to a minimum, we represent the AM dually. Specifically, in addition to the standard representation of the routes sorted in increasing cost-tag order, the AM information is also recorded in a sorted tree structure, each node of which represents a bone present in the route pool. For every bone b of the AM, the textr_b, len_b, freq_b and sz_b values are also kept in the corresponding tree nodes. The dual representation of the AM is illustrated in Fig. 3. Every time the AM updater changes any of the route pool contents, the tree structure is modified accordingly, while every time a bone is extracted from the AM, its textr value is incremented by 1.

[Fig. 3. The dual representation of the adaptive memory: the route-set representation with its cost tags on the left, and the tree-of-bones representation, whose nodes store the len, freq, sz and textr values of every bone, on the right.]

4. Computational results

The proposed algorithm was implemented in Visual C# and executed on a single core of an Intel T5500 processor (1.66 GHz) under Windows XP. To evaluate the effectiveness of the metaheuristic design, as well as to fix the algorithmic parameters, we used three VRPSPD data sets varying in problem size and characteristics. In the following, we briefly present these three benchmark data sets, discuss the standard parameter setting of the algorithm, and finally provide analytic computational results for every benchmark problem.

4.1. Benchmark data sets

The first data set used for testing our algorithm consists of 14 VRPSPD instances (with no limit on the total route length) generated from 7 problems originally proposed by Christofides et al. (1979) for the VRP with capacity constraints (CVRP), involving from 50 to 199 customers. The cost matrix is obtained by calculating the Euclidean distances between vertices. To modify these CVRP instances for the VRPSPD, Salhi and Nagy (1999) used the following rationale: for every customer v_i, let x_i and y_i denote the x and y coordinates of its location, and dem_i the amount of goods demanded. The ratio r_i is calculated as r_i = min{(x_i/y_i), (y_i/x_i)}. To obtain the first seven VRPSPD instances (X-series), the delivery quantity d_i and pick-up quantity p_i of customer v_i are set equal to r_i·dem_i and (1 - r_i)·dem_i, respectively. The other seven VRPSPD problems used in our computational experiments (Y-series) are generated by swapping the d_i and p_i values of every customer. Note that this scheme for generating the Y-series of problems was also used by Dethloff (2001), Crispim and Brandão (2005), Zachariadis et al. (2009), and Gajpal and Abad (2009). On the contrary, Nagy and Salhi (2005), Chen and Wu (2006), Tang Montané and Galvão (2006), and Wassan et al. (2007) generate the Y-series of problems by exchanging the d_i and p_i values of every other customer. Furthermore, previous works on the VRPSPD do not contain any detailed information on the precision used for calculating the delivery and pick-up quantities. As high-quality VRPSPD solutions tend to be composed of nearly fully loaded routes, the precision used for calculating customer demands heavily affects the objective value of the solutions produced. For this reason, we have used two configurations of the 14-instance data set, differentiated in terms of the precision used for evaluating customer demands.
For the first configuration (rounded), also considered by Tang Montané and Galvão (2006), the delivery and pick-up quantities were rounded to the nearest integer value, while for the second configuration (unrounded) the r_i, d_i and p_i calculations were performed using 32-bit arithmetic. These two benchmark problem configurations were also used in our previous VRPSPD study (Zachariadis et al., 2009).
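For illustration, the derivation just described can be written as the following small Python sketch; the function name and argument layout are ours, and it assumes strictly positive coordinates, as in the benchmark files.

```python
def salhi_nagy_demands(x_i, y_i, dem_i, series="X", rounded=False):
    """Derive a VRPSPD (delivery, pick-up) pair from a CVRP customer:
    r = min(x/y, y/x); X-series: (r*dem, (1-r)*dem); Y-series swaps the pair."""
    r = min(x_i / y_i, y_i / x_i)
    d, p = r * dem_i, (1.0 - r) * dem_i
    if series == "Y":
        d, p = p, d
    if rounded:                          # 'rounded' benchmark configuration
        d, p = round(d), round(p)
    return d, p
```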

The second data set was generated and presented by Dethloff (2001). It consists of 40 test problems involving 50 customers and a central depot. These 40 problems are classified into two categories, namely SCA and CON, differentiated according to the geographic distribution of the customer population. For the SCA instance group, the coordinate pair of each customer is uniformly distributed within [0, 100]. On the other hand, for the CON instances, half of the customers' location coordinates are uniformly distributed within [0, 100], whereas the locations of the other half are uniformly distributed within [100/3, 200/3]. The CON scheme aims at representing urban spaces, where large parts of the population are concentrated in small regions of the total area examined. In terms of the demand of customer v_i, the delivery quantity d_i is uniformly distributed within [0, 100]. The pick-up quantity is set equal to (0.5 + k_i)·d_i, where k_i is uniformly distributed over the [0, 1] interval. As far as the cost matrix is concerned, the c_ij values are obtained as the Euclidean distances between vertices v_i and v_j.

Finally, the third VRPSPD data set consists of 18 test problems and was introduced by Tang Montané and Galvão (2006). These involve from 100 to 400 customers and were derived from the Solomon and extended Solomon CVRP test problems (Solomon, 1987; Gehring and Homberger, 1999). To adapt these CVRP instances to the VRPSPD, Tang Montané and Galvão (2006) randomly generated discrete pick-up quantities within the interval originally used to generate the delivery demands. The cost matrix is given by the Euclidean distances of the vertex pairs.
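A hedged sketch of the Dethloff (2001) sampling scheme described above, one customer at a time; approximating "half of the customers" with a fair coin is our simplification, not part of the original generator.

```python
import random

def dethloff_customer(group="SCA", rng=random):
    """Sample one customer of a Dethloff-style instance: SCA coordinates ~ U[0, 100];
    in CON, roughly half the customers are concentrated in [100/3, 200/3]."""
    concentrated = group == "CON" and rng.random() < 0.5
    lo, hi = (100 / 3, 200 / 3) if concentrated else (0.0, 100.0)
    x, y = rng.uniform(lo, hi), rng.uniform(lo, hi)
    d = rng.uniform(0.0, 100.0)               # delivery quantity d_i
    p = (0.5 + rng.uniform(0.0, 1.0)) * d     # pick-up quantity p_i = (0.5 + k_i) * d_i
    return (x, y), d, p
```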
4.2. Parameter setting

To settle on a robust parameter setting, we performed extensive tests using all benchmark instances described in Section 4.1. Four algorithmic parameters had to be fixed. Two of them are involved in the TS method, namely tabuten, which defines the iteration horizon for which the reversal of a performed move is declared tabu, and mni, which sets the termination criterion for a TS execution. The third examined parameter is msize, which determines the size of the route pool, while the fourth is mcycles, which sets the termination criterion for the overall algorithmic framework. As no obvious correlation between the optimal settings of these parameters was observed, we varied each of them individually, in the order presented in Table 1, measuring the algorithm's performance both in terms of effectiveness and efficiency. The tests for deciding on the standard parameter setting are summarized in Table 1.

Regarding the tabuten parameter, we used values taken from {5, 10, 20, 50, 100}. The value 5 proved too small to effectively avoid cycling phenomena, while values 50 and 100 led to poor-quality solutions, as they did not let the TS intensify into promising solution space areas. The best solution scores were obtained with tabuten set to 10 and 20, without any noticeable difference in the computational time required for these values. We therefore fixed tabuten at 20, following the computational experience of our previous VRPSPD study (Zachariadis et al., 2009). As far as the mni parameter is concerned, our aim was to keep it at the lowest possible level, as it controls the computational effort required by the TS block which is repeatedly executed during Phase II, and therefore determines the overall algorithmic speed. After setting mni to 10, 20, 30, 50, and 100 non-improving iterations, the best balance between solution quality and speed was achieved for the value of 30; therefore we set mni = 30.

The msize parameter defines the size of the route pool. After testing our algorithm with msize values taken from {5, 10, 20, 50}, we made the following observations: the value 5 led to short computational times for reaching the final solutions, but the quality of these solutions was rather poor. This is because the size of the route pool was too small to store sufficiently diverse characteristics of the solution space. On the contrary, setting msize to 50 led to a significant increase in the CPU effort required for obtaining good solutions, without any important effect on the quality of the final solutions. In general, the best algorithmic behaviour was observed when setting msize to 10 and 20. For the standard parameter setting of our methodology, we used msize = 15. Finally, the mcycles parameter was tested using values from {1000, 5000, 10,000, 20,000, 50,000}. For the smallest-scale instances of 50 customers, 5000 and sometimes 1000 cycles were enough for the route pool to converge to the same region of the solution space and obtain the best solutions. On the contrary, when solving the largest 400-customer instances, solution improvement was occasionally observed even after the completion of 20,000 cycles. To keep the total computational times within acceptable bounds, we set the termination condition of the algorithm to the completion of 10,000 Phase II cycles.

Table 1. Parameter setting summary.
Parameter   Description                                                                    Range tested     Value selected
tabuten     Number of TS iterations for which the reversal of a performed move is tabu     [5, 100]         20
mni         Non-improving iterations after which the TS method is terminated               [10, 100]        30
msize       Factor used for determining the size of the route pool                         [5, 50]          15
mcycles     Number of completed Phase II cycles after which the algorithm terminates       [1000, 50,000]   10,000

4.3. Computational results on benchmark instances

Let VLBR (Variable Length BoneRoute) denote the proposed algorithmic framework. The computational results of VLBR on the unrounded configuration of the Salhi and Nagy (1999) instances are presented in Table 2. Table 2 also compares the solution scores of VLBR with those achieved by the algorithms of Chen and Wu (2006) (denoted by C&W), Wassan et al. (2007) (denoted by W&W&N), and Zachariadis et al. (2009) (denoted by GTS). Note that for the C&W and W&W&N methods we provide the solution scores of only the X-series benchmark problems, as a different scheme for generating the Y-series of VRPSPD instances was used. As mentioned in Section 4.1, secure comparisons of solution quality can only be made between VLBR and GTS: the works of Chen and Wu (2006) and Wassan et al. (2007) do not contain any detailed information on the precision used for evaluating the various delivery and pick-up demands, which heavily affects the objective value of the solutions produced. However, some general comparative remarks can be made: VLBR reached the highest-quality solution scores for four out of the seven X-series instances, whereas the remaining three best solution scores were obtained by W&W&N. The average deviation between the solution costs obtained by VLBR and GTS is 0.07% (calculated relative to the GTS scores). In terms of the number of vehicles required by the solutions obtained by VLBR and W&W&N, we see that for five solutions both algorithms require the same fleet size, whereas W&W&N uses one less vehicle for instances CMT3X and CMT5X. Regarding the fleet sizes required by GTS and VLBR, we observe that both methods required the same number of vehicles for all 14 test problems.

Table 2. Results for the unrounded demand configuration of the Salhi and Nagy (1999) data set.

Instance  n    C&W                  W&W&N               GTS                  VLBR                 %Dev
               z        veh  t      z        veh  t     z        veh  t      z        veh  t
CMT1X     50   478.59   3    7.7    468.30   3    0.3   469.80   3    2.9    469.80   3    2.1    0.00
CMT1Y     50   -        3    7.8    -        3    2     469.80   3    3.9    469.80   3    3.8    0.00
CMT2X     75   688.51   6    24.9   668.77   6    5     684.21   6    7.4    684.21   6    5.4    0.00
CMT2Y     75   -        6    12.0   -        6    6     684.21   6    8.0    684.21   6    6.8    0.00
CMT3X     100  744.77   5    94.1   729.63   4    65    721.27   5    11.6   721.27   5    11.9   0.00
CMT3Y     100  -        5    120.7  -        4    15    721.27   5    13.5   721.27   5    11.0   0.00
CMT12X    100  678.46   6    46.8   644.70   5    22    662.22   5    11.8   662.22   5    9.3    0.00
CMT12Y    100  -        6    56.4   -        6    12    662.22   5    7.6    662.22   5    4.8    0.00
CMT11X    120  858.57   4    321.1  861.97   4    10    838.66   4    17.8   833.92   4    21.2   0.57
CMT11Y    120  -        5    230.7  -        4    129   837.08   4    14.3   833.92   4    14.4   0.38
CMT4X     150  887.00   7    502.0  876.50   7    69    852.46   7    27.8   852.46   7    29.6   0.00
CMT4Y     150  -        7    406.3  -        7    73    852.46   7    31.2   852.46   7    27.4   0.00
CMT5X     199  1089.22  10   1055.8 1044.51  9    16    1030.55  10   51.7   1030.55  10   62.8   0.00
CMT5Y     199  -        10   771.7  -        9    132   1030.55  10   58.8   1030.55  10   47.7   0.00
Average t: 261.3 (C&W), 39.7 (W&W&N), 19.2 (GTS), 18.4 (VLBR); average %Dev: 0.07.

Values written in bold represent higher-quality solutions. C&W: the algorithm of Chen and Wu (2006) (Pentium IV 1.6 GHz, C, unrounded cost matrix); W&W&N: the algorithm of Wassan et al. (2007) (Sun Fire V440 server, UltraSPARC-IIIi 1062 MHz, Fortran, unrounded cost matrix); GTS: the algorithm of Zachariadis et al. (2009) (Pentium IV 2.4 GHz, Visual C#, unrounded cost matrix); VLBR: the proposed algorithm (T5500 1.66 GHz, Visual C#, unrounded cost matrix); z: the objective function value of the best solution obtained; veh: number of vehicles; t: time elapsed when the final solution was generated (seconds); %Dev: the percentage gap between the scores obtained by the GTS and VLBR methods (relative to the GTS scores). A dash indicates a value not reported.

The results obtained for the rounded configuration of the Salhi and Nagy (1999) data set are presented in Table 3. Table 3 also compares the performance of VLBR, GTS and the method of Tang Montané and Galvão (2006) (denoted by T&G). Again, no analytic comparisons can be made with the solution costs of the T&G metaheuristic, as they consider a rounded cost matrix. Furthermore, we provide the T&G solution scores only for the X-series of instances, as a different generation scheme was used for the Y-series test problems. We observe that VLBR reached the highest-quality solutions for all 14 test problems. The average deviation between the solution scores achieved by the VLBR and GTS methods is 0.08% (relative to the GTS scores). Regarding both demand configurations (rounded and unrounded) of the Salhi and Nagy (1999) data set, the gap between the solution scores achieved for a pair of X and Y problems can be interpreted as a measure of the robustness of the examined algorithm (Zachariadis et al., 2009). We see that VLBR managed to robustly produce identical-cost solutions for all 14 pairs of X and Y problems.

Table 3. Results for the rounded demand configuration of the Salhi and Nagy (1999) data set.
Instance  n    T&G                 GTS                  VLBR                 %Dev
               z      veh  t       z        veh  t      z        veh  t
CMT1X     50   472    3    3.7     470.48   3    4.1    470.48   3    3.3    0.00
CMT1Y     50   -      3    4.4     470.48   3    3.2    470.48   3    3.2    0.00
CMT2X     75   695    7    6.9     682.39   6    6.5    682.39   6    6.0    0.00
CMT2Y     75   -      7    7.6     682.39   6    7.9    682.39   6    8.3    0.00
CMT3X     100  721    5    11.0    719.06   5    10.5   718.40   5    10.1   0.09
CMT3Y     100  -      5    12.0    719.06   5    13.3   718.40   5    11.2   0.09
CMT12X    100  675    6    12.2    658.83   5    12.0   658.83   5    13.0   0.00
CMT12Y    100  -      6    12.8    660.47   5    10.4   658.83   5    11.1   0.25
CMT11X    120  900    4    18.2    831.09   4    16.8   829.07   4    16.2   0.24
CMT11Y    120  -      5    18.1    829.85   4    15.3   829.07   4    14.5   0.09
CMT4X     150  880    7    24.6    854.21   7    23.0   852.46   7    21.3   0.20
CMT4Y     150  -      7    29.1    852.46   7    28.7   852.46   7    29.5   0.00
CMT5X     199  1098   11   51.5    1030.56  10   57.6   1030.56  10   53.0   0.00
CMT5Y     199  -      10   56.2    1031.69  10   53.8   1030.56  10   57.9   0.11
Average t: 19.2 (T&G), 18.8 (GTS), 18.5 (VLBR); average %Dev: 0.08.

Values written in bold represent higher-quality solutions. T&G: the algorithm of Tang Montané and Galvão (2006) (Athlon 2.0 GHz, Pascal, rounded cost matrix); GTS: the algorithm of Zachariadis et al. (2009) (Pentium IV 2.4 GHz, Visual C#, unrounded cost matrix); VLBR: the proposed algorithm (T5500 1.66 GHz, Visual C#, unrounded cost matrix); z: the objective function value of the best solution obtained; veh: number of vehicles; t: time elapsed when the final solution was generated (seconds); %Dev: the percentage gap between the scores obtained by the GTS and VLBR methods (relative to the GTS scores). A dash indicates a value not reported.

Table 4 presents the computational results for the Dethloff VRPSPD instances. For this data set, the scores achieved by the presented algorithms can be safely compared, as the customer demand quantities have been explicitly specified.

Table 4. Results for the Dethloff (2001) data set.

Instance  n   T&G                GTS                VLBR               %Dev
              z        veh  t    z        veh  t    z        veh  t
SCA3-0    50  640.55   4    3.4  636.06   4    2.8  635.62   4    2.5  0.07
SCA3-1    50  697.84   4    3.3  697.84   4    2.1  697.84   4    2.5  0.00
SCA3-2    50  659.34   4    3.5  659.34   4    2.6  659.34   4    2.9  0.00
SCA3-3    50  680.04   4    3.3  680.04   4    3.1  680.04   4    2.3  0.00
SCA3-4    50  690.50   4    3.4  690.50   4    2.7  690.50   4    2.9  0.00
SCA3-5    50  659.90   4    3.7  659.90   4    2.6  659.90   4    3.0  0.00
SCA3-6    50  653.81   4    3.4  651.09   4    4.4  651.09   4    3.1  0.00
SCA3-7    50  659.17   4    3.3  659.17   4    3.0  659.17   4    2.8  0.00
SCA3-8    50  719.47   4    3.4  719.47   4    3.4  719.47   4    3.5  0.00
SCA3-9    50  681.00   4    3.4  681.00   4    3.9  681.00   4    4.7  0.00
SCA8-0    50  981.47   9    4.1  961.50   9    3.2  961.50   9    2.7  0.00
SCA8-1    50  1077.44  9    4.3  1050.20  9    3.6  1049.65  9    3.8  0.05
SCA8-2    50  1050.98  10   4.2  1039.64  9    4.7  1039.64  9    3.9  0.00
SCA8-3    50  983.34   9    4.2  983.34   9    3.3  983.34   9    2.6  0.00
SCA8-4    50  1073.46  9    4.1  1065.49  9    2.7  1065.49  9    2.4  0.00
SCA8-5    50  1047.24  9    4.0  1027.08  9    4.5  1027.08  9    3.4  0.00
SCA8-6    50  995.59   9    3.9  971.82   9    2.7  971.82   9    2.7  0.00
SCA8-7    50  1068.56  10   4.2  1052.17  9    4.3  1051.28  9    5.1  0.08
SCA8-8    50  1080.58  9    3.9  1071.18  9    3.4  1071.18  9    3.6  0.00
SCA8-9    50  1084.80  9    4.2  1060.50  9    4.1  1060.50  9    4.8  0.00
CON3-0    50  631.39   4    3.6  616.52   4    3.9  616.52   4    4.7  0.00
CON3-1    50  554.47   4    3.3  554.47   4    3.0  554.47   4    2.2  0.00
CON3-2    50  522.86   4    3.5  519.26   4    3.3  518.00   4    3.1  0.24
CON3-3    50  591.19   4    3.3  591.19   4    2.8  591.19   4    3.2  0.00
CON3-4    50  591.12   4    3.5  589.32   4    3.1  588.79   4    2.3  0.09
CON3-5    50  563.70   4    3.4  563.70   4    3.5  563.70   4    3.7  0.00
CON3-6    50  506.19   4    3.3  500.80   4    3.0  499.05   4    3.7  0.35
CON3-7    50  577.68   4    3.5  576.48   4    2.4  576.48   4    1.9  0.00
CON3-8    50  523.05   4    3.7  523.05   4    5.0  523.05   4    3.8  0.00
CON3-9    50  580.05   4    3.4  580.05   4    3.1  578.25   4    2.2  0.31
CON8-0    50  860.48   9    4.2  857.17   9    3.4  857.17   9    4.4  0.00
CON8-1    50  740.85   9    3.9  740.85   9    3.7  740.85   9    3.3  0.00
CON8-2    50  723.32   9    3.8  713.44   9    2.9  712.89   9    2.7  0.08
CON8-3    50  811.23   10   4.1  811.07   10   3.8  811.07   10   2.8  0.00
CON8-4    50  772.25   9    3.8  772.25   9    3.0  772.25   9    2.8  0.00
CON8-5    50  756.91   9    4.0  756.91   9    5.8  754.88   9    5.7  0.27
CON8-6    50  678.92   9    4.0  678.92   9    4.0  678.92   9    3.4  0.00
CON8-7    50  814.50   9    4.0  811.96   9    2.5  811.96   9    2.5  0.00
CON8-8    50  775.59   9    3.7  767.53   9    4.2  767.53   9    3.2  0.00
CON8-9    50  809.00   9    4.1  809.00   9    3.9  809.00   9    3.8  0.00
Average t: 3.7 (T&G), 3.4 (GTS), 3.3 (VLBR); average %Dev: 0.04.

Values written in bold represent higher-quality solutions. T&G: the algorithm of Tang Montané and Galvão (2006) (Athlon 2.0 GHz, Pascal); GTS: the algorithm of Zachariadis et al. (2009) (Pentium IV 2.4 GHz, Visual C#); VLBR: the proposed algorithm (T5500 1.66 GHz, Visual C#); z: the objective function value of the best solution obtained; veh: number of vehicles; t: time elapsed when the final solution was generated (seconds); %Dev: the percentage gap between the VLBR and the previous best solution values (relative to the previously best scores).

VLBR achieved an average 0.04% improvement over the GTS best solution scores, which is satisfactory considering the small size of the instances (50 customers). Specifically, it produced nine improving solutions, while for the remaining 31 problems it consistently reached the GTS solution values. The fleet size required by VLBR is identical to that required by GTS for all 40 test problems. The computational results obtained by the VLBR algorithm for the Tang Montané and Galvão data set are provided in Table 5.
Again, the solution values obtained by T&G, GTS and VLBR can be fairly compared, as all benchmark data details are clearly defined. The VLBR method improved the best solution scores for 13 of the 18 benchmark instances examined. For the remaining 5 problems, it generated solutions identical to those of the GTS algorithm. The average improvement over the previously published best solution scores is 0.57%. For the smallest problems, with n = 100, the average score improvement is 0.16%, whereas for n = 200 and n = 400 it becomes 0.41% and 1.13%, respectively. The greatest solution quality improvement is observed for the 400-customer instance c2_4_1 (2.24%). This clearly indicates that as the problem scale grows, the VLBR framework becomes more effective than the T&G and GTS methods. Large-scale problems contain a wide variety of desirable routing characteristics which, if appropriately combined, lead to high-quality solutions. Therefore, the relative advantage of the VLBR method when dealing with such large problems is mainly attributed to its special feature of exploiting a broad spectrum of the information stored in the AM. In terms of fleet size, again, it is identical to that required by the GTS method for all 18 problems. The CPU times required by the VLBR method to obtain the best solutions are satisfactory. On average, it appears to be faster than the GTS method, mainly because of the granularity concept used to accelerate the exploration of solution neighbourhoods, which is the decisive factor in terms of overall algorithmic speed. Note that even for the largest instances, involving up to 400 customers, the CPU time elapsed when the best solution was produced was less than 6 minutes (345.3 seconds for instance r1_4_1).