A combination of clustering algorithms with Ant Colony Optimization for large clustered Euclidean Travelling Salesman Problem

TRUNG HOANG DINH, ABDULLAH AL MAMUN
Department of Electrical and Computer Engineering
National University of Singapore, NUS
Address: 10 Kent Ridge Crescent, Singapore 117584

Abstract: The Ant Colony System (ACS) algorithm has been found attractive for solving combinatorial optimization problems such as the Travelling Salesman Problem (TSP). The run-time of this algorithm increases with the number of nodes. In this paper, we propose an efficient method that reduces the run-time for very large-scale Euclidean TSP instances and yet preserves the quality of solution for certain clustered cases. Moreover, the proposed method has a simple parallel implementation. It shows excellent performance in both run-time and quality of solution, especially on large clustered instances. The effectiveness of the proposed method is underscored by applying it to two different kinds of benchmark problems.

Key-Words: travelling salesman problem, ant colony optimization, clustering algorithms, combinatorial optimization.

1 Introduction

The classic, well-known NP-hard Travelling Salesman Problem (TSP) has served as a rich testing ground for many important algorithmic ideas during the past few decades. Interested readers may refer to Lawler et al. [8] for a fascinating history. In Euclidean TSP, nodes lie in $\mathbb{R}^2$ (or more generally, in $\mathbb{R}^d$ for some $d$), and the distance is defined by the Euclidean norm. Like general TSP, it belongs to the NP-hard class [9, 7].

Many algorithms have been developed to solve TSP. They follow one of two strategies: exact methods or approximation methods. Exact algorithms such as branch and bound and branch and cut produce solutions whose optimality is mathematically proven, but may require a large amount of run-time for large instances. The other approach is to design approximation algorithms, whose performance is evaluated by both run-time and quality of solutions; many algorithms have been developed along this line. One approach to designing approximation algorithms is based on meta-heuristics such as the genetic algorithm (GA), simulated annealing (SA), and ant algorithms. Most ant algorithms, which have been successfully applied to many combinatorial optimization problems [3], follow a general scheme called Ant Colony Optimization (ACO, see [2]). Ant System (AS) [5], AS's variants, e.g., Ant-Q [6] and the rank-based ant system [1], and ACS [4] are some examples of ant algorithms applied to the TSP. We chose ACS because it is one of the most successful ant algorithms applied to the TSP, has outperformed GA and SA, and also meets the conditions of the theorem on reducing run-time that will be discussed later.

Our method for solving very large Euclidean TSP instances is divided into three separate stages (in some cases the first and the third stage can be combined into a single stage). In the first stage, the original instance is partitioned into a few clusters of smaller size. Each cluster is considered a sub-TSP, and in the second stage, ACS is applied to find the optimal solution of each sub-TSP. We also design an algorithm to combine the sub-tours found in the second stage into a final solution of the original TSP instance.

For the same settings, the run-time of our approach is much shorter than that of ACS applied to the original TSP. We find that the proposed method not only retains ACS's strengths in finding optimal solutions but is also better suited to large-scale Euclidean TSP instances, outperforming ACS in both run-time and quality of solution, especially for clustered instances. Note, however, that our results indicate a decrease of run-time in practical implementation, not in the complexity of the ACS algorithm.

In this article, the ACS algorithm and the TSP problem are recalled briefly in the next section. The necessary theorems on reducing run-time and guaranteeing quality of solution under certain conditions are presented in section 3, followed by experimental results and discussion in section 4. The algorithm is tested on benchmark problems.

2 The TSP Problem and the ACS Algorithm

2.1 The TSP Problem

The TSP is formally defined as follows: let $V = \{a, \dots, z\}$ be a set of cities, $A = \{(r, s) : r, s \in V\}$ the set of edges, and $\delta(r, s)$ the cost measure associated with edge $(r, s) \in A$. The objective is to find a minimum-cost closed tour that goes through each city exactly once. When all cities in $V$ are given by their coordinates and $\delta(r, s)$ is the Euclidean distance between any $r$ and $s$ ($r, s \in V$), the problem is called a Euclidean TSP (ETSP for short). If $\delta(r, s) \neq \delta(s, r)$ for at least one edge $(r, s)$, the TSP becomes an asymmetric TSP (ATSP).

2.2 Ant Colony System

Ant Colony System (ACS) is a nature-inspired algorithm. Natural ants are capable of finding the shortest road from their nest to a food source by using the pheromone information they deposit on the ground. Details of the algorithm can be found in [4].

How ACS works: three rules are used in ACS, (1) the state transition rule, (2) the global pheromone updating rule, and (3) the local pheromone updating rule. The ACS algorithm can be described as follows (interested readers may refer to the appendix of [4] for more details):

    Initialize
    Loop                /* each pass at this level is called an iteration */
        Each ant is positioned on a starting node
        Loop            /* each pass at this level is called a step */
            Each ant applies the state transition rule to incrementally
            build a solution, and the local pheromone updating rule
        Until all ants have built a complete solution
        A global pheromone updating rule is applied
    Until end condition

3 The 3-stage Method

3.1 The First Stage - Clustering

The objective of the first stage is to divide the original instance into segments such that the distance between any two segments is as large as possible. We propose a simple technique used in other fields such as pattern recognition and data mining. The proposed clustering technique is based on the following definition: given a positive constant $\Theta$, points $i$ and $j$ belong to the same cluster if $d(i, j) \le \Theta$, where $d(i, j)$ is the Euclidean distance between $i$ and $j$. However, to achieve the run-time advantage discussed later, a system parameter $K_{max}$ is used at this stage to limit the number of elements in any cluster to the upper bound $K_{max}$.

The existence of the number $\Theta$: the optimal tour can be separated into sequences of consecutive nodes (preserving the same order as in the optimal tour). There always exists a number $\theta_i$ such that, when $\Theta \ge \theta_i$, the nodes of the $i$-th sequence are partitioned into the same cluster.
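The following is a minimal Python sketch of this clustering rule (the paper gives no pseudocode for this stage; the function name, the union-find representation, and the greedy handling of the $K_{max}$ cap are our own assumptions):

    import math

    def threshold_clusters(points, theta, k_max):
        """Group points so that pairs within distance theta tend to share a
        cluster, while greedily refusing merges that would exceed k_max.
        points: list of (x, y) tuples; returns a list of index lists."""
        n = len(points)
        parent = list(range(n))   # union-find forest
        size = [1] * n

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # path halving
                i = parent[i]
            return i

        for i in range(n):
            for j in range(i + 1, n):
                if math.dist(points[i], points[j]) <= theta:
                    ri, rj = find(i), find(j)
                    if ri != rj and size[ri] + size[rj] <= k_max:
                        parent[rj] = ri            # merge, respecting K_max
                        size[ri] += size[rj]

        clusters = {}
        for i in range(n):
            clusters.setdefault(find(i), []).append(i)
        return list(clusters.values())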
In some cases, especially for clustered instances, the distance $d_{ij}$¹ between any two sequences $i$ and $j$ is greater than $\max\{\mathrm{diameter}_i, \mathrm{diameter}_j\}$²; in other words, if $A = \min_{i,j}\{d_{ij}\} > B = \max_i\{\mathrm{diameter}_i\}$, then there exists a number $\Theta \in (B, A)$ which may serve as a candidate value in the proposed clustering algorithm. This also helps to explain why the quality of solution on clustered instances is improved so much by the 3-stage method.

¹ In this article, the distance between two sets is defined as the smallest distance between two elements, one from each set.
² By the diameter of a set we mean the distance between the two points of the set that are farthest apart.
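The gap condition above can be tested directly. The helper below (our own illustration, not from the paper) computes $A$ and $B$ for a given partition and returns the midpoint of $(B, A)$ as a candidate $\Theta$ when the gap exists:

    import math
    from itertools import combinations

    def candidate_theta(clusters):
        """clusters: list of lists of (x, y) points. Returns a Theta in
        (B, A) if A = min inter-cluster distance exceeds B = max cluster
        diameter, else None."""
        def diameter(c):
            return max((math.dist(p, q) for p, q in combinations(c, 2)),
                       default=0.0)

        def set_distance(c1, c2):
            return min(math.dist(p, q) for p in c1 for q in c2)

        B = max(diameter(c) for c in clusters)
        A = min(set_distance(c1, c2) for c1, c2 in combinations(clusters, 2))
        return (A + B) / 2 if A > B else None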

3.2 The Second Stage - ACS for Individual Clusters

After the original TSP is separated into several smaller clusters in stage 1, the ACS algorithm is applied to find the optimal tour of each part. Clearly, as the number of cities and/or the number of iterations increases, this second stage becomes the most time-consuming of the three stages. Let $A$ be the time required for the second stage and $B$ the time required to solve the original instance without clustering, assuming the same algorithm (ACS) with the same settings is used in both cases. We can find an upper bound and a strict lower bound for $A/B$.

Proposition 1. Given $c_i > 0$, $i = 1..k$, such that $\sum_{i=1}^{k} c_i = 1$, then $1 > \sum_{i=1}^{k} c_i^m \ge \frac{1}{k^{m-1}}$ holds for all $m \ge 2$.

Proof. It is clear that $0 < c_i^m < c_i$ for all $i = 1..k$, hence the left-hand inequality holds. Let $f(x) = x^m$, $m \ge 2$; the second derivative of $f$ is $f'' = m(m-1)x^{m-2} > 0$ for $x > 0$, so $f$ is convex on $(0, \infty)$. By Jensen's inequality³,
$$f\left(\frac{\sum_{i=1}^{k} c_i}{k}\right) \le \frac{\sum_{i=1}^{k} f(c_i)}{k}, \quad\text{i.e.,}\quad \left(\frac{1}{k}\right)^m \le \frac{\sum_{i=1}^{k} c_i^m}{k},$$
hence $\sum_{i=1}^{k} c_i^m \ge \frac{1}{k^{m-1}}$. Equality holds when $c_i = \frac{1}{k}$ for all $i$.

³ http://mathworld.wolfram.com/jensensinequality.html

Proposition 2. For any numbers $a, b, c, d > 0$, we have $\max\left(\frac{a}{b}, \frac{c}{d}\right) \ge \frac{a+c}{b+d} \ge \min\left(\frac{a}{b}, \frac{c}{d}\right)$. Equality takes place if and only if $\frac{a}{b} = \frac{c}{d}$.

This proposition leads easily to the next corollary.

Corollary 1. For any $a_i, b_i > 0$, $i = 1..n$, $\max_i\left\{\frac{a_i}{b_i}\right\} \ge \frac{\sum_{i=1}^{n} a_i}{\sum_{i=1}^{n} b_i} \ge \min_i\left\{\frac{a_i}{b_i}\right\}$. The equalities hold simultaneously when $\frac{a_1}{b_1} = \frac{a_2}{b_2} = \dots = \frac{a_n}{b_n}$.

Let $\Omega = \left\{F : \mathbb{R}^+ \to \mathbb{R}^+ \,\middle|\, F(x) = \sum_{i=2}^{m} \alpha_i x^i,\ m \in \mathbb{N},\ m \ge 2,\ \alpha_i > 0,\ i = 2..m\right\}$.

Theorem 1. Assume the run-time function $F$ of the algorithm $\Upsilon$ applied in the second stage belongs to $\Omega$, so that for input size $n$ the run-time of $\Upsilon$ is $F(n)$. Then
$$1 > \frac{A}{B} > \frac{1}{k^{m-1}},$$
where $k$ is the number of parts resulting from the clustering stage, $m$ is the degree of the polynomial $F$, $A$ is the total run-time to find the optimal solutions of all clusters using $\Upsilon$, and $B$ is defined as $A$ is but for the original instance (without clustering). The proof of this theorem is given in appendix A.

Remark: assuming the run-times of the first and third stages are insignificant compared to that of the second stage, this theorem says that the proposed 3-stage method is faster than applying $\Upsilon$ directly, but by a factor of less than $k^{m-1}$. The original instance should be clustered into parts of approximately equal size to approach the limit $k^{m-1}$.

Using the same symbols as in the theorem, the following corollary presents the result for the case where the size of every part is bounded.

Corollary 2. If the size of every part is bounded in the interval $[1, K_{max}]$ then $\frac{A}{B} \le \epsilon$, where $\epsilon = \frac{K_{max}}{n}$, $0 < \epsilon \le 1$, and $n$ is the size of the original instance.

Proof. Let $\beta_j = \frac{c_j}{n} \le \frac{K_{max}}{n} = \epsilon$ (with $a_i$, $b_i$, $c_j$ as defined in the proof of Theorem 1 in appendix A). We have
$$\frac{a_i}{b_i} = \sum_{j=1}^{k} \beta_j^i \le \epsilon^{i-1} \sum_{j=1}^{k} \beta_j = \epsilon^{i-1} \le \epsilon, \quad i \ge 2. \quad (1)$$
The equality takes place if $\beta_j = \epsilon$ for all $j$, hence $k = \frac{1}{\epsilon} = \frac{n}{K_{max}}$. Substituting (1) into Corollary 1, we obtain $\frac{A}{B} \le \max_{i \ge 2}\left\{\frac{a_i}{b_i}\right\} \le \max_{i \ge 2}\{\epsilon^{i-1}\} = \epsilon$.

Summary: most practical implementations of $\Upsilon$ will not have a run-time function $F$ of the exact form described by $\Omega$. But if the input size is large enough and $F$ is a polynomial, the terms of degree lower than the degree of $F$ are insignificant compared to the highest-degree term, and the above results remain reasonable. In addition, if $F$ is not a polynomial but of another form, e.g. a
logarithmic function, we can approximate $F$ by a polynomial $G$ whose leading coefficient is positive; this case then reduces to the former one.
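To make the bound of Theorem 1 concrete, the short script below (our own illustration, not part of the paper) evaluates a polynomial run-time model $F(x) = x^2 + x^3$ from $\Omega$ on one instance of size $n$ and on $k$ equal clusters of size $n/k$, and checks that the ratio $A/B$ lies in $(\frac{1}{k^{m-1}}, 1)$:

    def F(x):
        # a run-time model in Omega: alpha_2 = alpha_3 = 1, degree m = 3
        return x**2 + x**3

    n, k, m = 10_000, 10, 3
    B = F(n)                              # solving the original instance
    A = sum(F(n // k) for _ in range(k))  # solving k equal-size clusters

    print(A / B)               # ~ 0.01001, slightly above the bound below
    print(1 / k**(m - 1))      # 0.01, approached because clusters are equal
    assert 1 / k**(m - 1) < A / B < 1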

3.3 The Third Stage - Combining Individual Solutions

After the solutions of the sub-TSPs are obtained from the second stage, we combine them into a feasible solution, i.e., a closed tour of the original instance. The most general approach is to choose some edges of each sub-tour to be removed and to create bridges linking the cluster to the other clusters, such that when the chosen edges are removed the bridging edges complete a closed tour. From now on we shall call the edges that bridge between clusters the linking edges. If we search for a final tour by increasing the number of chosen edges of each sub-tour up to its total number of edges, the search becomes exhaustive and, of course, practically impossible, since the Euclidean TSP is NP-hard. We therefore design a greedy combining method that restricts the number of chosen edges to one per sub-tour, and we point out conditions under which this combining method is good enough.

A sufficient condition for the search space to include the optimal solution in the case of two clusters: for the number of clusters $k = 2$, we have the following result.

Theorem 2. Let $A$ and $B$ be the two clusters, $d_A$ and $d_B$ the diameters of $A$ and $B$, respectively, and $d_{AB}$ the distance between $A$ and $B$. If there exists $n \in \mathbb{N}$, $2 \le n \le \min\{|A|, |B|\}$, such that
$$d_{AB} \ge d_n = \frac{\min\{d_A, d_B\}}{2(n-1)} + d_A + d_B \quad (**)$$
then the search space of the proposed 3-stage method contains the optimal final tour, and that space is formed by fewer than $2n$ linking edges. The proof of this theorem is given in appendix B.

Remark: a clear corollary of this theorem is that if $n = 2$ and (**) holds, then the optimal final tour must belong to the search space formed by using only two linking edges, one chosen edge from each sub-tour. This means that the 3-stage method searches for the optimal tour in this reduced space, which also contributes to reducing its run-time.

An efficient combining algorithm for clustered instances: if the number of clusters is $k > 2$ and the distances among clusters are large enough compared with the diameter of any cluster, we have the following theorems, under the assumption that the distances between any three clusters satisfy the triangle inequality.

Theorem 3. In the optimal final tour, no two clusters are joined by two or more linking edges.

Theorem 4. In the optimal final tour, no cluster has four or more linking edges (linking it to four other distinct clusters). In other words, to build the optimal tour from sub-tours, each cluster has exactly two linking edges, which link it to two different clusters.

Remark: Theorems 3 and 4 are useful, especially for clustered instances. Based on these results, the following implementation of 3-stage-ACS is proposed for clustered instances:

Step 1: Partition the original instance into smaller parts.
Step 2: Find $k$ linking edges for the $k$ clusters according to Theorems 3 and 4; this can itself be viewed, roughly, as a TSP over $k$ nodes.
Step 3: Apply ACS to each cluster, with the constraint that its sub-tour always contains a fixed edge whose two endpoints are the nodes linking this cluster to two other distinct clusters.

4 Experiment

4.1 Large ETSP Instances for Testing

We test the effectiveness of the proposed 3-stage method by applying the algorithm to two different benchmark problems. The first is the TSPLIB⁴. We consider some large Euclidean instances with the number of cities between 650 and 3795. The second benchmark is a group of instances in which cities are randomly distributed in clusters on the square $[0, 10^6]^2$.
These instances, with the number of cities between 1000 and 5000, were generated by the Instance Generator Code of the 8th DIMACS Implementation Challenge⁵.

⁴ http://www.iwr.uni-heidelberg.de/groups/comopt/software/tsplib95
⁵ http://www.research.att.com/~dsj/chtsp/download.html

4.2 Comparison Between 3-stage-ACS and Other Algorithms

We compare the proposed 3-stage-ACS with ACS. Because run-time is also used to compare computational efficiency, both algorithms were run on the same computer (a Dell PC with a Pentium IV 2.4 GHz processor and 256 MB of RAM), with the same code implementing ACS (except for the parts particularly designed for 3-stage-ACS, namely the clustering and combining parts), and with the
same parameter settings as discussed in [4]: $m = 10$, $\beta = 2$, $q_0 = 0.98$, $\alpha = \rho = 0.1$, and $\tau_0 = 1/(n \cdot L_{nn})$. In addition, since all test instances are very large, a candidate list of length $cl = 20$ is used in both cases.

Table 1: A comparison of 3-stage-ACS and ACS on randomly generated clustered instances of 1000-5000 cities. Each trial was stopped after 5000 iterations; averages are over 5 trials. (*) is the ratio of the run-time of ACS to that of 3-stage-ACS; k is the number of clusters; the last column is the relative difference [(2)-(1)]/(1).

    No. of   3-stage-ACS                                   ACS                                   (*)        [(2)-(1)]
    cities   average        std dev    best (1)    k       average       std dev    best (2)    ACS/3-st   /(1)
    1000     870553.80      740.93     747792      6       2302227.67    8829.5     268397      3.68       3.58%
    1500     3840435.67     70049.8    373567      6       443592.20     104250.7   39707       5.63       1.7%
    2000     6435565.40     8262.62    6307804     8       724049.3      8748.32    690889      5.08       3.69%
    2500     784082.93      68272.57   774068      3       8592455.20    85965.00   8324584     6.35       3.29%
    5000     25300826.40    8262.62    2547562     26      2655075.20    23956.92   26202524    1.82       4.20%

Table 2: A comparison of 3-stage-ACS and ACS on large benchmark instances. Averages are over 5 trials; each trial was stopped after 5000 iterations. (*) is the ratio of the run-time of ACS to that of 3-stage-ACS; k is the number of clusters.

    prob.    3-stage-ACS                                ACS                                 Optimum   (*)        [(2)-(1)]
    name     average     std dev   best (1)    k       average     std dev   best (2)     known     ACS/3-st   /(1)
    p654     35554.67    297.48    353         3       35860.67    438.98    3520         34643     1.96       0.02%
    fl3795   30804.33    20.27     30689       4       31088.33    64.97     30842        28772     2.44       0.50%

Memory storage requirement: for large-scale TSP instances, the memory needed to store the pheromone and cost matrices may contribute most to the memory requirement of the algorithm. With an input of $N$ cities, the amount of memory to store these two matrices is $CN(N-1)$ bytes, where $C$ is a system-dependent number of bytes for storing a real-valued number and the two matrices store only their upper triangles. The proposed 3-stage method, however, takes approximately $Cn(n-1)$ bytes during the execution of ACS, where $n$ is the size of the largest cluster. From the point of view of memory requirement, therefore, 3-stage-ACS is more efficient than ACS.

Experimental results: to compare the run-time of 3-stage-ACS with that of ACS, we take the factor (total run-time of ACS)/(total run-time of 3-stage-ACS); the larger this factor, the faster 3-stage-ACS is. Because the optimal solutions of the generated clustered Euclidean TSP instances are unknown, we use a relative performance factor to compare the solution quality of ACS with that of 3-stage-ACS: the difference between the smallest cost found by ACS and the smallest cost found by 3-stage-ACS, divided by the latter. Each instance was run for 5 trials of 5000 iterations each, and for clusters of size less than 50 we set $cl = 10$, because such a cluster need not be treated as a large instance.

For clustered Euclidean TSPs: as shown in Table 1, 3-stage-ACS always produces better solutions than standard ACS, in much shorter time. For example, solving a 5000-city instance took 6354.6 seconds with ACS but only 383.3 seconds with the proposed 3-stage-ACS algorithm. This supports the claim that 3-stage-ACS outperforms ACS on clustered Euclidean TSP.
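For reference, here is a compact, self-contained Python sketch of ACS with the parameter values above ($m = 10$ ants, $\beta = 2$, $q_0 = 0.98$, $\alpha = \rho = 0.1$, $\tau_0 = 1/(n \cdot L_{nn})$). It follows the three rules of section 2.2 but omits the candidate list, so it should be read as an illustrative rendering of [4], not the authors' exact code; it also assumes distinct city coordinates:

    import math, random

    def acs(cities, iters=5000, m=10, beta=2.0, q0=0.98, alpha=0.1, rho=0.1):
        n = len(cities)
        dist = [[math.dist(a, b) for b in cities] for a in cities]

        def tour_len(t):
            return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

        # nearest-neighbour tour gives L_nn, hence tau0 = 1 / (n * L_nn)
        nn = [0]
        while len(nn) < n:
            last = nn[-1]
            nn.append(min((j for j in range(n) if j not in nn),
                          key=lambda j: dist[last][j]))
        tau0 = 1.0 / (n * tour_len(nn))
        tau = [[tau0] * n for _ in range(n)]

        best, best_len = nn, tour_len(nn)
        for _ in range(iters):
            for _ant in range(m):
                start = random.randrange(n)
                tour, unvisited = [start], set(range(n)) - {start}
                while unvisited:
                    i = tour[-1]
                    w = {j: tau[i][j] * (1.0 / dist[i][j]) ** beta
                         for j in unvisited}
                    if random.random() < q0:      # exploitation
                        j = max(w, key=w.get)
                    else:                         # biased exploration
                        r = random.random() * sum(w.values())
                        for j, wj in w.items():
                            r -= wj
                            if r <= 0:
                                break
                    # local pheromone update on the traversed edge
                    tau[i][j] = tau[j][i] = (1 - rho) * tau[i][j] + rho * tau0
                    tour.append(j)
                    unvisited.remove(j)
                L = tour_len(tour)
                if L < best_len:
                    best, best_len = tour, L
            # global pheromone update along the best-so-far tour
            for i in range(n):
                a, b = best[i], best[(i + 1) % n]
                tau[a][b] = tau[b][a] = (1 - alpha) * tau[a][b] + alpha / best_len
        return best, best_len

With 10 ants and 5000 iterations, this pure-Python version is far too slow for instances of thousands of cities, which is precisely the regime where the clustering of stage 1 pays off.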
For benchmark Euclidean TSPs: because most large benchmark instances do not satisfy the sufficient conditions of Theorems 3 and 4, the combining algorithm for such instances incorporates a small but important change that helps improve the quality of the final solution. After the same processing as for the previous type of instance, the representative tour (found at Step 2 in the remark above) is replaced by a closed tour that resembles a TSP tour (it starts from a city, visits all the other cities, and comes back to the starting city), except that a city in the closed tour may be visited more than once, as long as its total length is less than that of the replaced tour. As shown in Table 2, 3-stage-ACS outperformed ACS on both benchmark instances p654 and fl3795, in both average and best solution; for fl3795, however, ACS seems more stable than 3-stage-ACS, as can be seen from the standard deviations.

5 Conclusion and Future Work

The 3-stage ACS proposed in this article shows promising results, increasing efficiency in both
run-time and quality of solution for large clustered Euclidean TSP. The proposed method runs faster than conventional ACS without clustering. Moreover, the proposed algorithm can be converted into a parallel version with few changes to its serial version. The results presented in this article underscore the idea that both decreased run-time and guaranteed quality of solution are achieved when the proposed method is applied to problems whose solutions can be partitioned into sub-solutions, and vice versa.

References

[1] B. Bullnheimer, R.F. Hartl, and Ch. Strauss. A new rank based version of the ant system: a computational study. Central European Journal of Operations Research, 7(1):25-38, 1999.
[2] M. Dorigo and G. Di Caro. The ant colony optimization meta-heuristic. In D. Corne, M. Dorigo, and F. Glover, editors, New Ideas in Optimization. McGraw-Hill, 1999.
[3] M. Dorigo, G. Di Caro, and L.M. Gambardella. Ant algorithms for discrete optimization. Artificial Life, 5:137-172, 1999.
[4] M. Dorigo and L.M. Gambardella. Ant colony system: A cooperative learning approach to the travelling salesman problem. IEEE Transactions on Evolutionary Computation, 1:53-66, 1997.
[5] M. Dorigo, V. Maniezzo, and A. Colorni. Ant system: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, 26(1):29-41, 1996.
[6] L.M. Gambardella and M. Dorigo. Ant-Q: A reinforcement learning approach to the traveling salesman problem. In International Conference on Machine Learning, pages 252-260, 1995.
[7] M.R. Garey, R.L. Graham, and D.S. Johnson. Some NP-complete geometric problems. In Proc. ACM Symposium on Theory of Computing, pages 10-22, 1976.
[8] E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, and D.B. Shmoys. The Traveling Salesman Problem. John Wiley, 1985.
[9] C.H. Papadimitriou. Euclidean TSP is NP-complete. Theoretical Computer Science, 4:237-244, 1977.

Appendix A

Lemma 1. If $n, \alpha_i > 0$, $i = 2..m$, and $k > 1$, then
$$\frac{\sum_{i=2}^{m} \frac{1}{k^{i-1}}\, \alpha_i n^i}{\sum_{i=2}^{m} \alpha_i n^i} > \frac{1}{k^{m-1}}.$$

Proof (of Lemma 1). From Corollary 1,
$$\frac{\sum_{i=2}^{m} \frac{1}{k^{i-1}}\, \alpha_i n^i}{\sum_{i=2}^{m} \alpha_i n^i} \ge \min_{i=2..m}\left\{\frac{\frac{1}{k^{i-1}}\, \alpha_i n^i}{\alpha_i n^i}\right\} = \min_{i=2..m}\left\{\frac{1}{k^{i-1}}\right\} = \frac{1}{k^{m-1}}.$$
Since the sequence $\{\frac{1}{k^{i-1}}\}_i$ is strictly monotonic, the equality cannot take place, and the lemma is proved.

Proof (of Theorem 1). From the assumption, $B = F(n) = \sum_{i=2}^{m} \alpha_i n^i = \sum_{i=2}^{m} b_i$, where $b_i = \alpha_i n^i$, $i = 2..m$. Similarly,
$$A = \sum_{j=1}^{k} F(c_j) = \sum_{j=1}^{k} \sum_{i=2}^{m} \alpha_i c_j^i = \sum_{i=2}^{m} \Big(\sum_{j=1}^{k} \alpha_i c_j^i\Big) = \sum_{i=2}^{m} a_i,$$
where $a_i = \sum_{j=1}^{k} \alpha_i c_j^i$, $i = 2..m$, and $c_j$ is the size of part $j$ (cluster $j$). According to Proposition 1, noting that $n = \sum_{j=1}^{k} c_j$, we obtain
$$1 > \frac{\sum_{j=1}^{k} c_j^i}{\big(\sum_{j=1}^{k} c_j\big)^i} \ge \frac{1}{k^{i-1}}, \qquad\text{hence}\qquad 1 > \frac{a_i}{b_i} = \frac{\alpha_i \sum_{j=1}^{k} c_j^i}{\alpha_i \big(\sum_{j=1}^{k} c_j\big)^i} \ge \frac{1}{k^{i-1}}, \quad i \ge 2. \quad (2)$$
Combining with Corollary 1, we have
$$1 > \max_{i=2..m}\left\{\frac{a_i}{b_i}\right\} \ge \frac{A}{B} = \frac{\sum_{i=2}^{m} a_i}{\sum_{i=2}^{m} b_i} \ge \frac{\sum_{i=2}^{m} \frac{1}{k^{i-1}}\, \alpha_i n^i}{\sum_{i=2}^{m} \alpha_i n^i}.$$
The right-hand equality happens when $c_1 = c_2 = \dots = c_k = \frac{n}{k}$. Combining inequality (2) with Lemma 1, we get $1 > \frac{A}{B} > \frac{1}{k^{m-1}}$, and the theorem is proved.

Appendix B

Proof (of Theorem 2). Let $T_n$ be the set of all final tours obtained by combining all pairs of $n$-tuples of edges, one tuple from each sub-tour, $2 \le n \le \min\{|A|, |B|\}$. Set $S_n = \min\{\text{cost of tour } t : t \in T_n\}$; we will prove that if (**) holds then
$$S_n > S_1. \quad (3)$$
Choose any $n$ edges in each sub-tour, with lengths $a_i$ in $A$ and $b_i$ in $B$, $i = 1..n$. Let $x_i$, $i = 1..2n$, be the length of the $i$-th linking edge, and let $\xi$ be the total length of the remaining edges of the two sub-tours, i.e., those not chosen for deletion. We need to consider two cases.

Case 1: there exists at least one pair of chosen edges, of lengths $a$ and $c$, such that two linking edges, of lengths $e$ and $f$, join only the four vertices of these two edges, as shown in Fig. 1.
The length of the final tour is
$$t_2 = \xi + \sum_{i=1}^{2n-2} x_i + (e + f),$$
where the $x_i$ here run over the remaining $2n-2$ linking edges. From the assumption (**) and the facts that $x_i \ge d_{AB}$, $a_i \le d_A$, and $b_i \le d_B$, we have
$$\sum_{i=1}^{2n-2} x_i \ge (2n-2)\, d_{AB} > 2(n-1)(d_A + d_B) \ge n(d_A + d_B) \ge \sum_{i=1}^{n} (a_i + b_i),$$

hence
$$t_2 > \xi + \sum_{i=1}^{n} (a_i + b_i) + (e + f - a - c) \ge S_1.$$

[Figure 1: a pair of chosen edges such that the two linking edges join only the four vertices of these two edges.]
[Figure 2: no pair of chosen edges exists such that the two linking edges join only the four vertices of these two edges.]

Case 2: if Case 1 does not occur, we can assume without loss of generality that $d_A \ge d_B$ and that there are two chosen edges of the sub-tour of cluster $A$ and two of cluster $B$, linked as shown in Fig. 2. The length of the final tour is, in this case,
$$t_2 = \xi + \sum_{i=1}^{2n-4} x_i + (e + g + h + i),$$
where $e, g, h, i$ are the lengths of the four linking edges in the figure and $a, b, c, d$ the lengths of the four chosen edges they are incident to. Due to $h + x_1 > f$ and
$$\sum_{i=1}^{2n-4} x_i + (g + i) \ge d_B + 2(n-1)(d_A + d_B) \ge \sum_{i=1}^{n-2} (a_i + b_i) + (b + d) + x_1,$$
we obtain
$$t_2 > \xi + \sum_{i=1}^{n-2} (a_i + b_i) + (b + d) + (e + f - a - c) \ge S_1.$$

The two cases above establish inequality (3). Since $d_n = \frac{\min\{d_A, d_B\}}{2(n-1)} + d_A + d_B$ is a monotonically decreasing sequence, if (**) takes place at $n = n_0$ then $d_{AB} \ge d_{n_0} > d_n$ for all $n > n_0$, and hence $S_n > S_1$ for all $n > n_0$; in other words, inequality (3) holds for all $n \ge n_0$, so the optimal final tour lies in the restricted search space formed by using fewer than $n_0$ chosen edges in each sub-tour. Theorem 2 is completely proved.
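As a small numerical illustration of condition (**) (ours, not the paper's): the threshold $d_n$ decreases toward $d_A + d_B$ as $n$ grows, so a larger inter-cluster gap certifies a smaller search space. The snippet below finds the smallest $n_0$ for which (**) holds:

    def smallest_n0(d_A, d_B, d_AB, n_max):
        """Return the smallest n in [2, n_max] satisfying (**):
        d_AB >= min(d_A, d_B) / (2*(n-1)) + d_A + d_B, or None."""
        for n in range(2, n_max + 1):
            if d_AB >= min(d_A, d_B) / (2 * (n - 1)) + d_A + d_B:
                return n
        return None

    # Clusters of diameters 3 and 4 separated by 8 units:
    # d_2 = 3/2 + 7 = 8.5 > 8, but d_3 = 3/4 + 7 = 7.75 <= 8.
    print(smallest_n0(3.0, 4.0, 8.0, 100))   # -> 3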