Solving a combinatorial problem using a local optimization in ant based system

C-M. Pintea and D. Dumitrescu
Babeş-Bolyai University of Cluj-Napoca, Department of Computer Science
Kogalniceanu 1, 400084 Cluj-Napoca, Romania
{cmpintea, ddumitr}@cs.ubbcluj.ro

Abstract

Local optimizations introduced to obtain improved tours for the Traveling Salesman Problem have a great impact on the final solution. That is why we introduce a new ant system algorithm with a new local pheromone updating rule, in which the tours are further improved using k-opt techniques. The tests use different parameter sets in order to obtain solutions close to the optimal values.

Keywords: Ant Systems, Heuristics, k-opt algorithms, Traveling Salesman Problem.

1 Introduction

The Ant Colony System (ACS) meta-heuristic, a particular class of ant algorithms, was introduced by Dorigo and Gambardella [5, 6]. Ant System (AS) [6] was the first ant algorithm, designed as a set of three variants differing in the way the pheromone trail is updated by the ants: ant-density, ant-quantity and ant-cycle. A number of algorithms, including meta-heuristic ones, were inspired by ant-cycle, the best performing of the three. ACS is based on three modifications of Ant System: a different node transition rule, a different pheromone trail updating rule, and the use of local and global pheromone updating rules to favor exploration.

One of the most studied discrete optimization problems is the Traveling Salesman Problem (TSP). Easy to formulate and with a large number of applications, TSP is one of the most popular hard-to-solve problems. Approximate algorithms based on improvement heuristics start from a tour and iteratively improve it by changing some of its parts at each iteration. The best known tour improvement methods are based on edge exchanges, such as the k-opt and Lin-Kernighan algorithms.

Other versions of Ant System (AS) are Elitist Ant System [3], Ant Colony System (ACS) [4], MAX-MIN Ant System (MMAS) [10], the rank-based version of Ant System [1] and Best-Worst Ant System [2]. These algorithms have been used for solving the classical TSP.

Based on Ant Colony System, we propose a new algorithm, Inner Ant System (IAS). IAS includes a new local pheromone update rule, followed by the 2-opt and 3-opt methods to improve the solution. The inner-update rule [8] is used as the local rule. The results of numerical experiments for Inner Ant System are compared with Ant Colony System and MAX-MIN Ant System.

2 Ant Colony System for TSP

The first researcher of the Traveling Salesman Problem was, perhaps, K. Menger [7]. A well-known definition of TSP is the following: given a complete graph with weights on the edges (arcs), find a Hamiltonian cycle of minimum total weight.

Ant systems applied to the Traveling Salesman Problem (TSP) are stated as follows. Given a set of n nodes, the TSP is the problem of finding a minimal-length closed path that visits each node once. An instance of the TSP is represented by a fully connected graph G(U, E), where U is the set of nodes and E is the set of connections between the nodes.
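To make the representation above concrete, the following is a minimal Python sketch, not taken from the paper, of how a Euclidean TSP instance and the length of a closed tour can be encoded; the helper names (distance_matrix, tour_length) and the random-instance setup are assumptions introduced here for illustration.

import math
import random

def distance_matrix(coords):
    """Euclidean distance matrix for a list of (x, y) node coordinates."""
    n = len(coords)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d[i][j] = math.dist(coords[i], coords[j])
    return d

def tour_length(tour, d):
    """Length of a closed tour: visits every node once and returns to the start."""
    return sum(d[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

# Illustrative usage on a small random Euclidean instance.
random.seed(0)
coords = [(random.random(), random.random()) for _ in range(10)]
d = distance_matrix(coords)
print(tour_length(list(range(10)), d))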

First, consider what happens when an ant comes across an obstacle and has to decide the best route to take around it. Initially, there is an equal probability as to which way the ant will turn in order to negotiate the obstacle. If we assume that one route around the obstacle is shorter than the alternative, then the ants taking the shorter route will arrive at a point on the other side of the obstacle before the ants which take the longer route. If we now consider other ants coming in the opposite direction, when they come across the same obstacle they are faced with the same decision as to which way to turn. However, as ants walk they deposit a pheromone trail. The ants that have already taken the shorter route will have laid a trail on it, so ants arriving at the obstacle from the other direction are more likely to follow that route, as it carries a deposit of pheromone. Over a period of time, the shortest route accumulates high levels of pheromone, so that all ants are more likely to follow it. There is a positive feedback which reinforces this behavior: the more ants follow a particular route, the more desirable it becomes.

A pseudo-code of the ACS algorithm, as in [4], follows.

procedure ACS algorithm
begin
  Set parameters, initialize pheromone trails
  Loop
    Each ant is positioned on a starting node
    Loop
      Each ant applies a state transition rule to incrementally
      build a solution and a local pheromone updating rule
    Until all ants have built a complete solution
    A global pheromone updating rule is applied
  Until End condition
end

Table 1: Algorithmic skeleton of the ACS algorithm
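As an illustration only, the skeleton of Table 1 can be sketched in Python roughly as below. This is a minimal sketch of the generic ACS control flow, not the authors' implementation; the class and method names (AntColonySketch, construct_solution, global_update) are assumptions, the solution construction is left as a placeholder, and the sketch reuses the tour_length helper introduced earlier.

import random

class AntColonySketch:
    """Rough control-flow sketch of the ACS skeleton in Table 1 (illustrative only)."""

    def __init__(self, d, n_ants=10, max_tours=100):
        self.d = d                      # distance matrix
        self.n = len(d)
        self.n_ants = n_ants
        self.max_tours = max_tours
        self.tau = [[1.0] * self.n for _ in range(self.n)]  # pheromone trails

    def run(self):
        best_tour, best_len = None, float("inf")
        for _ in range(self.max_tours):               # outer loop of Table 1
            tours = [self.construct_solution() for _ in range(self.n_ants)]
            for tour in tours:
                length = tour_length(tour, self.d)
                if length < best_len:
                    best_tour, best_len = tour, length
            self.global_update(best_tour, best_len)   # global pheromone update
        return best_tour, best_len

    def construct_solution(self):
        # Placeholder: a random permutation stands in for the real
        # state-transition rule and local pheromone update.
        tour = list(range(self.n))
        random.shuffle(tour)
        return tour

    def global_update(self, tour, length, rho=0.1):
        # Reinforce only the edges of the best tour (ACS-style global rule).
        for k in range(len(tour)):
            i, j = tour[k], tour[(k + 1) % len(tour)]
            self.tau[i][j] = (1 - rho) * self.tau[i][j] + rho * (1.0 / length)
            self.tau[j][i] = self.tau[i][j]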

3 MAX-MIN Ant System

One of the most robust ant systems is the MAX-MIN Ant System (MMAS) of T. Stützle and H. H. Hoos, published in 2000 in Future Generation Computer Systems [10]. The work shows how to achieve the best performance of ACS-type algorithms by combining an improved exploitation of the best solutions found during the search with an effective mechanism for avoiding early search stagnation. MAX-MIN Ant System, which has been specifically developed to meet these requirements, differs in three key aspects from AS. To exploit the best solutions found during an iteration or during the run of the algorithm, after each iteration only one single ant adds pheromone. This ant may be the one which found the best solution in the current iteration (the iteration-best ant) or the one which found the best solution from the beginning of the trial (the global-best ant). To avoid stagnation of the search, the range of possible pheromone trails on each solution component is limited to an interval [τ_min, τ_max]. The pheromone trails are initialized to τ_max, achieving in this way a higher exploration of solutions at the start of the algorithm.

3.1 Pheromone trail updating

In MMAS only one single ant is used to update the pheromone trails after each iteration. Using a single ant for the pheromone trail update was also proposed in ACS. MMAS focuses on the use of the iteration-best solutions. The use of only one solution is the most important means of search exploitation in this algorithm. By this choice, solution elements which frequently occur in the best found solutions get a large reinforcement. Still, a judicious choice between the iteration-best and global-best ant for updating the pheromone trails controls the way the history of the search is exploited.
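The paper does not write out the MMAS update rule explicitly; as a hedged reminder, the standard formulation from [10] (notation assumed here, not taken from this text) has the single updating ant modify the trails as

\tau_{ij}(t+1) = (1-\rho)\,\tau_{ij}(t) + \Delta\tau_{ij}^{best},
\qquad
\Delta\tau_{ij}^{best} =
\begin{cases}
1/L^{best} & \text{if edge } (i,j) \text{ belongs to the chosen best tour,}\\
0 & \text{otherwise,}
\end{cases}

where L^best is the length of either the iteration-best or the global-best tour and ρ is the evaporation rate.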

3.2 Pheromone trail limits

For each city, one of the exiting arcs may have a much higher pheromone level than the others. An ant will then prefer this solution component over all alternatives, and further reinforcement will be given to it in the pheromone trail update. The ants will construct the same solution over and over again and the exploration of the search space stops. Such a stagnation situation should be avoided. One way of achieving this is to influence the probabilities for choosing the next solution component, which depend directly on the pheromone trails and the heuristic information.

MAX-MIN imposes explicit limits τ_min and τ_max on the minimum and maximum pheromone trails; after each iteration one has to ensure that the pheromone trails respect these limits. The maximum pheromone trail, τ_max, is set to an estimate of the asymptotically maximum value. To determine reasonable values for τ_min, we use the following assumptions. The best solutions are found shortly before search stagnation occurs; in such a situation the probability of reconstructing the global-best solution in one algorithm iteration is significantly higher than zero, and better solutions may be found close to the best solution found. The main influence on the solution construction is determined by the relative difference between the upper and lower pheromone trail limits, rather than by the relative differences of the heuristic information. Good values for τ_min can be found by relating the convergence of the algorithm to the minimum trail limit. When MMAS has converged, the best solution found is constructed with a probability significantly higher than zero. In this situation, an ant constructs the best solution found if, at each choice point, it makes the right decision and chooses a solution component with maximum pheromone trail τ_max. In fact, the probability of choosing the corresponding solution component at a choice point depends directly on τ_max and τ_min.

3.3 Pheromone trail initialization

In MAX-MIN we initialize the pheromone trails in such a way that after the first iteration all pheromone trails correspond to τ_max. This can easily be achieved by setting τ_0 to some arbitrarily high value. After the first iteration of MAX-MIN, the trails will be forced to take values within the imposed bounds; in particular, they will be set to τ_max. This type of trail initialization is chosen to increase the exploration of solutions during the first iterations of the algorithm.
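The trail-limit mechanism of Sections 3.1-3.3 can be sketched as below. This is an illustrative sketch, not the authors' code; the helper name and the limit values in the comments are assumptions, with the tau_max choice following the common recommendation in [10].

def mmas_pheromone_update(tau, best_tour, best_len, rho, tau_min, tau_max):
    """Evaporate all trails, reinforce the edges of the chosen best tour,
    then clamp every trail to the interval [tau_min, tau_max]."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1 - rho)                       # evaporation
    for k in range(len(best_tour)):                      # deposit by a single ant
        i, j = best_tour[k], best_tour[(k + 1) % len(best_tour)]
        tau[i][j] += 1.0 / best_len
        tau[j][i] = tau[i][j]
    for i in range(n):                                   # enforce the trail limits
        for j in range(n):
            tau[i][j] = min(max(tau[i][j], tau_min), tau_max)

# A common (assumed, following [10]) way to set the limits:
#   tau_max = 1 / (rho * L_gb)     with L_gb the global-best tour length
#   tau_min = tau_max / (2 * n)    a simple proportional choice; [10] derives a
#                                  more precise value from a parameter p_best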

4 Local search improvement

The class of approximate algorithms may be subdivided into three classes:

- tour construction algorithms,
- tour improvement algorithms,
- composite algorithms.

The tour construction algorithms gradually build a tour by adding a new city at each step. The tour improvement algorithms improve upon a tour by performing various exchanges. The composite algorithms combine these two features.

A simple example of a tour construction algorithm is the Nearest Neighbor algorithm: start in an arbitrary city; as long as there are cities that have not yet been visited, visit the nearest city that has not yet appeared in the tour; finally, return to the first city. A number of other construction algorithms have been developed to remedy the shortcomings of this greedy approach (see, for example, [2], [12] and [13]).

A simple example of a tour improvement algorithm is the 2-opt algorithm: start with a given tour; replace two links of the tour with two other links in such a way that the new tour is shorter; continue in this way until no more improvements are possible. Figure 1 shows a 2-opt exchange of links, a 2-opt move. A 2-opt move keeps the tour feasible and corresponds to a reversal of a subsequence of the cities.

Figure 1: 2-opt Algorithm representation

A generalization of this simple principle forms the basis for one of the most effective approximate algorithms for solving the symmetric TSP, the Lin-Kernighan algorithm [1]. The algorithm is specified in terms of exchanges (or moves) that can convert one tour into another. Given a feasible tour, the algorithm repeatedly performs exchanges that reduce the length of the current tour, until a tour is reached for which no exchange yields an improvement. This process may be repeated many times from initial tours generated in some randomized way.
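As an illustration of the 2-opt move just described (not the authors' implementation; the function name two_opt and the reuse of the earlier tour's distance matrix d are assumptions of this sketch), a first-improvement 2-opt pass can be written as:

def two_opt(tour, d):
    """Repeatedly reverse a subsequence of the tour while this shortens it
    (a 2-opt improvement pass); returns a locally optimal tour."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for a in range(n - 1):
            for b in range(a + 2, n):
                i, j = tour[a], tour[a + 1]
                k, m = tour[b], tour[(b + 1) % n]
                if i == m:          # the two edges share a node; skip this pair
                    continue
                # Replace edges (i, j) and (k, m) by (i, k) and (j, m).
                if d[i][k] + d[j][m] < d[i][j] + d[k][m]:
                    tour[a + 1:b + 1] = reversed(tour[a + 1:b + 1])
                    improved = True
    return tour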

Let T be the current tour. At each iteration step the algorithm attempts to find two sets of links, X = {x_1, x_2, x_3} and Y = {y_1, y_2, y_3}, such that, if the links of X are deleted from T and replaced by the links of Y, the result is a better tour. This interchange of links is a 3-opt move.

Figure 2: 3-opt Algorithm representation

5 Inner Ant System

In the proposed model, the ants are endowed with a memory of their best tour in the local pheromone trail. The ants reinforce this local best tour with pheromone during an iteration, to mimic the search focusing of elitist ants. The new algorithm is called Inner Ant System (IAS). IAS introduces into the local search of ACS a pheromone trail update rule called the inner rule [8]. The solution is improved using the 2-opt and 3-opt heuristics. The Inner Ant System algorithm has a structure similar to Ant Colony System, except that the local pheromone update rule is replaced with the inner rule.

Let L^+ denote the length of the best tour, and let j range over the unvisited neighbors of node i. The inner rule used by the IAS algorithm can be expressed as

\tau_{ij}(t+1) = (1-\rho)\,\tau_{ij}(t) + \rho\,\frac{1}{n\,L^{+}}.

Using the IAS algorithm we try to improve the pheromone trail τ. Initially the ants are placed randomly in the nodes of the graph. At iteration t + 1 every ant moves to a new node and the parameters controlling the algorithm are updated. Assuming that the TSP is represented as a fully connected graph, each edge is labeled by a trail intensity. Let τ_ij(t) represent the intensity of the trail on edge (i, j) at time t.

When an ant decides which node to move to next, it does so with a probability based on the distance to that node and the amount of trail intensity on the connecting edge. The inverse of the distance to the next node is known as the visibility, η_ij, defined as η_ij = 1/d_ij, where d_ij is the distance between nodes i and j. At each time unit evaporation takes place; this stops the trail intensities from growing unboundedly. The evaporation rate is denoted by ρ, and its value lies between 0 and 1. In order to stop ants from visiting the same node twice in the same tour, a tabu list is maintained; it prevents ants from revisiting nodes they have already visited.

To favor the selection of an edge that has a high pheromone value τ and a high visibility value η, a function p^k_iu is considered. Let J^k_i be the set of unvisited neighbors of node i for ant k and let u ∈ J^k_i. The function may be defined as

p^k_{iu}(t) = \frac{\tau_{iu}(t)\,[\eta_{iu}(t)]^{\beta}}{\sum_{o \in J^k_i} \tau_{io}(t)\,[\eta_{io}(t)]^{\beta}}, \qquad (1)

where β is a parameter used for tuning the relative importance of edge length in selecting the next node.
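A minimal sketch of how the probability in (1) could be computed follows; it is illustrative only, and the function name transition_probabilities together with the dictionary-based data layout are assumptions, not from the paper.

def transition_probabilities(i, unvisited, tau, eta, beta):
    """Probability of moving from node i to each unvisited neighbour u,
    proportional to tau[i][u] * eta[i][u]**beta, as in equation (1)."""
    weights = {u: tau[i][u] * (eta[i][u] ** beta) for u in unvisited}
    total = sum(weights.values())
    return {u: w / total for u, w in weights.items()}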

Here p^k_iu is the probability of choosing j = u as the next node when q > q_0 (the current node being i), where q is a random variable uniformly distributed over [0, 1] and q_0 is a parameter similar to the temperature in simulated annealing, 0 ≤ q_0 ≤ 1. If q ≤ q_0, the next node j is chosen as

j = \arg\max_{u \in J^k_i} \{\tau_{iu}(t)\,[\eta_{iu}(t)]^{\beta}\}. \qquad (2)

After each transition the trail intensity is updated using the introduced inner correction rule as a local update rule:

\tau_{ij}(t+1) = (1-\rho)\,\tau_{ij}(t) + \rho\,\frac{1}{n\,L^{+}}. \qquad (3)

The global update rule is applied to the edges belonging to the best tour. Let L^+ be the length of the best tour. The correction rule is

\tau_{ij}(t+1) = (1-\rho)\,\tau_{ij}(t) + \rho\,\Delta\tau_{ij}(t), \qquad (4)

where Δτ_ij(t) is the inverse of the length of the best tour:

\Delta\tau_{ij}(t) = \frac{1}{L^{+}}. \qquad (5)

IAS, like Ant Colony System, makes use of the simultaneous exploration of different solutions. After a complete tour, each ant k lays a quantity of pheromone Δτ^k_ij(t) on each edge (i, j) that it has used at iteration t, according to formula (5). A positive feedback mechanism is employed as a search and optimization tool. A virtual pheromone, used as reinforcement, allows good solutions to be kept in memory. It is important to prevent solutions that are merely good, but not very good, from being reinforced, since this can lead to premature convergence.
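Putting rules (1)-(5) together, a single ant step in IAS could look roughly like the sketch below. This is an illustrative reconstruction, not the authors' code; choose_next_node, inner_local_update and global_update are names introduced here, and the pseudo-random proportional choice reuses transition_probabilities from the earlier sketch.

import random

def choose_next_node(i, unvisited, tau, eta, beta, q0):
    """Pseudo-random proportional rule: exploit with probability q0 (eq. 2),
    otherwise sample from the probabilities of equation (1)."""
    if random.random() <= q0:
        return max(unvisited, key=lambda u: tau[i][u] * (eta[i][u] ** beta))
    probs = transition_probabilities(i, unvisited, tau, eta, beta)
    r, acc = random.random(), 0.0
    for u, p in probs.items():
        acc += p
        if r <= acc:
            return u
    return u  # numerical safety: fall back to the last candidate

def inner_local_update(tau, i, j, rho, n, l_plus):
    """Inner rule (eq. 3), applied after each transition."""
    tau[i][j] = (1 - rho) * tau[i][j] + rho / (n * l_plus)
    tau[j][i] = tau[i][j]

def global_update(tau, best_tour, l_plus, rho):
    """Global rule (eqs. 4-5), applied to the edges of the best tour."""
    for k in range(len(best_tour)):
        i, j = best_tour[k], best_tour[(k + 1) % len(best_tour)]
        tau[i][j] = (1 - rho) * tau[i][j] + rho / l_plus
        tau[j][i] = tau[i][j]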

The result of the algorithm is the shortest tour found. The algorithm runs for a given number of iterations, t_max, to find a good solution, or the optimal one if possible.

6 Numerical experiments

For the numerical experiments we use Euclidean instances from the TSPLIB library [9]. This library provides optimal objective values for each of the instances. The parameter testing sets are shown in Table 2.

Set  β  q_0   ρ    ants  time            trials  max. tours
1    2  0.90  0.5  25    10 sec./trial   10      100
2    2  0.95  0.1  10    60 sec./trial   10      100

Table 2: Parameter sets
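For convenience, the two parameter sets of Table 2 can be written as configurations for the illustrative sketches given earlier; the dictionary layout and keyword names below are assumptions of those sketches, not of the paper.

# The two parameter sets of Table 2 (times are per trial, in seconds).
PARAMETER_SETS = {
    1: {"beta": 2, "q0": 0.90, "rho": 0.5, "n_ants": 25,
        "time_per_trial": 10, "trials": 10, "max_tours": 100},
    2: {"beta": 2, "q0": 0.95, "rho": 0.1, "n_ants": 10,
        "time_per_trial": 60, "trials": 10, "max_tours": 100},
}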

Table 3 contains the results with the first parameter set for Inner Ant System on some TSPLIB instances.

Instance  optimal  Best   Avg. best  Avg. iter.
gil262    2378     2378   2378.2     132.20
lin318    42029    42029  42029      119.80
rd400     15281    15281  15288.20   234.20
rat783    8806     8852   8881.70    131.40
d1291     50801    50874  50959.8    65.60

Table 3: The results of Inner Ant System for parameter set no. 1

We show some comparative aspects between the results of Inner Ant System and two of the most robust ant systems, Ant Colony System [4] and MAX-MIN Ant System [10], each followed by the 2-opt and 3-opt heuristics. For instances with a small number of nodes, the results are almost identical for the mentioned algorithms; that is why we discuss results for instances with a large number of nodes.

            ACS                            MMAS
Instance    Best   Avg. best  Avg. iter.  Best   Avg. best  Avg. iter.
gil262      2378   2378.30    44.80       2378   2378.20    132.70
lin318      42029  42069.60   108.00      42029  42042.40   121.70
rd400       15281  15288.30   202.10      15281  15284.60   67.90
rat783      8855   8891.50    121.20      8816   8822.20    156.10
d1291       50838  51005.40   62.50       50840  50917.90   80.80

Table 4: The results of ACS and MMAS for parameter set no. 1

Comparing these algorithms on the instances of Table 3 and Table 4, we see that IAS has better Average-best values than ACS on all considered instances. IAS also has better values for the Average-iterations on d1291 and for the Best try on rat783. Moreover, IAS has better results than MAX-MIN Ant System regarding Average-iterations for gil262, lin318 and rat783. A very good result is that, for lin318, the Average-best value obtained by IAS is exactly the optimal value. All the algorithms used, including IAS, obtained the same Best try for gil262 and lin318. For rd400 the Best try is the optimal value for all algorithms.

Table 5 shows the results with the second parameter set for Inner Ant System on the same TSPLIB instances as in Table 3.

Instance  optimal  Best   Avg. best  Avg. iter.
gil262    2378     2378   2378       1030.20
lin318    42029    42029  42047.80   2322.60
rd400     15281    15281  15282.10   509.50
rat783    8806     8806   8824.10    1436.60
d1291     50801    50801  50844.90   617.50

Table 5: The results of Inner Ant System for parameter set no. 2

The comparative results for the second parameter set, for Ant Colony System and MAX-MIN Ant System, are given in Table 6. Regarding IAS, for the second parameter set the values for all instances are improved, except the Average-best for lin318. It is very important to see that, for all instances, IAS obtains as Best value exactly the optimal value.

            ACS                            MMAS
Instance    Best   Avg. best  Avg. iter.  Best   Avg. best  Avg. iter.
gil262      2378   2378       1816.30     2378   2378       2100.70
lin318      42029  42064.40   1218.60     42029  42064.60   2252.00
rd400       15281  15293.50   1200.50     15281  15283.10   979.90
rat783      8808   8818.00    1433.60     8809   8814.10    1347.30
d1291       50820  50872.20   621.90      50824  50852.10   705.40

Table 6: The results of ACS and MMAS for parameter set no. 2

The Average-best is better for IAS than for the other two algorithms, except for rat783. In the following we give some comparative computational results on the Standard deviation of the best values, the Standard deviation of iterations and the Standard deviation of time. As we see from Figure 3 (and Figure 4), the new algorithm improves the solutions for the specified instances. The Standard deviation of the best values, very close to zero for IAS, shows that the best values are close to the optimal value.

Figure 3: The results of the Standard deviation of the Best try for parameter set no. 1 show, for IAS, that the best values are closer to the optimal value for the majority of instances.

In Figure 3, the better values are obtained with the IAS algorithm for the instances gil262, lin318, rd400 and d1291. We remark that for lin318, IAS has all the best values exactly equal to the optimal value, meaning that the standard deviation of the best values is zero.

In Figure 4, for the second parameter set, gil262 has a Standard deviation of the best value exactly zero for all algorithms. For rd400 the new algorithm has the smaller value.

Figure 4: The results of the Standard deviation of the best try for parameter set no. 2 show, for IAS, that gil262 has the best values exactly equal to the optimal value, as for the other two algorithms. For rd400, IAS has the smaller value, and for lin318 and rat783 it has better values than ACS.

The Standard deviation of iterations values show that, with the first parameter set, IAS uses few iterations over the trials in order to obtain the solutions. This characteristic also induces a short execution time. The results for the first parameter set are shown in Figure 5.

Figure 5: The results of the Standard deviation of iterations for parameter set no. 1 show, for IAS, better values for the instances rd400, rat783 and d1291. The number of iterations needed to obtain the best values is lower for these instances. For all five instances IAS has better values than ACS.

Figure 6: The results of the Standard deviation of iterations for parameter set no. 2 show, for IAS, better values for the instances gil262, rd400 and rat783. For these instances IAS needs fewer iterations in order to obtain values closer to the optimal value.

The Standard deviation of iterations in Figure 6 refers to the second parameter set; rat783, gil262 and rd400 have better results for IAS.

The Standard deviation of time values show that the IAS algorithm finds solutions in a relatively short time. This can be seen for d1291, rd400 and rat783, the larger instances tested, for the first parameter set in Figure 7.

Figure 7: The results of the Standard deviation of time for parameter set no. 1 are, with IAS, better than with ACS in four cases. IAS has the best values on the last three instances, a consequence of the fewer iterations shown in Figure 5.

Figure 8: The results of the Standard deviation of time for parameter set no. 2 are, with IAS, better for gil262, rd400 and rat783. As the iterations are fewer (Figure 6), less time is needed to obtain the best value.

For the second parameter set, rat783 has the same Standard deviation of time values in all cases. IAS has better values, in this case, for gil262 and rd400.

Regarding the parameter sets, we remark that IAS finds better solutions for the second parameter set, with β = 2, q_0 = 0.95, ρ = 0.1, 10 ants, 60 seconds per trial, ten trials and a maximum of one hundred tours. Increasing the time per trial and using a lower number of ants, we obtain better solutions than for the first parameter set. Inner Ant System performed well, finding good solutions in many cases. It may be possible to find better values for the parameters or an efficient combination with other algorithms; simulated annealing, tabu search and other heuristics could be used to improve IAS.

7 Conclusions

A new ant-colony-based technique called Inner Ant System for solving the TSP is proposed. A new local pheromone trail update rule is introduced, and the best tours are improved using the 2-opt and 3-opt heuristics. The experimental results obtained using Inner Ant System seem promising. Some results could be improved by finding and using better parameter values or by combining IAS with other algorithms.

References

[1] B. Bullnheimer, R. F. Hartl, C. Strauss, A new rank-based version of the Ant System: A computational study, Central European Journal for Operations Research and Economics, 7(1):25-38, 1999.

[2] O. Cordon, I. Fernández de Viana, F. Herrera, L. Moreno, A new ACO model integrating evolutionary computation concepts: The best-worst Ant System, in M. Dorigo, M. Middendorf and T. Stützle, editors, Proceedings of ANTS 2000, pages 22-29, Brussels, Belgium, 2000.

[3] M. Dorigo, Optimization, Learning and Natural Algorithms (in Italian), Ph.D. thesis, Dipartimento di Elettronica, Politecnico di Milano, Italy, 1992, 140 pp.

[4] M. Dorigo, L. M. Gambardella, Ant Colony System: A cooperative learning approach to the traveling salesman problem, IEEE Transactions on Systems, Man, and Cybernetics - Part B, 26(1):29-41, 1996.

[5] M. Dorigo, L. M. Gambardella, Ant Colonies for the Traveling Salesman Problem, BioSystems, 43:73-81, 1997.

[6] M. Dorigo, L. M. Gambardella, Ant Colony System: A cooperative learning approach to the Traveling Salesman Problem, IEEE Transactions on Evolutionary Computation, 1(1):53-66, 1997.

[7] K. Menger, Das Botenproblem, Ergebnisse eines Mathematischen Kolloquiums, 2:11-12, 1932.

[8] C-M. Pintea, D. Dumitrescu, Improving ant systems using a local updating rule, in Proceedings of SYNASC (Symbolic and Numeric Algorithms for Scientific Computing), IEEE, 2005.

[9] G. Reinelt, TSPLIB - A Traveling Salesman Problem Library, ORSA Journal on Computing, 3:376-384, 1991.

[10] T. Stützle, H. H. Hoos, MAX-MIN Ant System, Future Generation Computer Systems, 16(8):889-914, 2000.