Attractor of Local Search Space in the Traveling Salesman Problem


WEIQI LI
School of Management, University of Michigan - Flint
303 East Kearsley Street, Flint, Michigan 48502, U.S.A.

Abstract: - A local search technique is usually locally convergent and, as a result, outputs a local optimum. For many optimization problems, there is an attractor in the search space that drives the local search trajectories to converge into a small region. To gain insight into the basic properties of the attractor, this paper presents a framework for studying the attractor of the local search space for the traveling salesman problem (TSP). The experimental results provide empirical evidence for the existence of the attractor in the local search space for the TSP. The attractor takes shape from the local optima that are located around the globally optimal solution.

Key-Words: - solution attractor, local search heuristics, traveling salesman problem

1 Introduction

Many combinatorial optimization problems are known to be NP-hard, which means that no known algorithm can solve every instance of such a problem in time bounded by a polynomial in the problem size n, and it is unlikely that such an algorithm can be found [1][2]. Because complete enumeration of the solution space is in many cases impractical, local search algorithms have become a popular means of finding reasonable solutions to hard combinatorial optimization problems. A local search starts off with an initial solution and then continually tries to find a better solution by searching neighborhoods. Local search techniques are simple to implement and quick to execute, but their main disadvantage is that they are locally convergent: a local search algorithm outputs a final solution that may deviate from the global optimum, which is called a local optimum.
The theoretical foundations of local search heuristics have been established by mathematical, analytical, computational and experimental means. However, identifying and encoding the structure of the search space and the behavior of the local optima remains a difficult task, because many local search algorithms adopt a stochastic approach to searching a large solution space. Many researchers have tried to understand and exploit the search space. Some studied the global structure of the optimization surface by analyzing the relationships among local optima from the perspective of the best local optimum [3]-[6]. Others have used landscape theory to investigate the structure of search spaces that governs natural and artificial processes of optimization [7]-[10], to identify landscape characteristics that influence the effectiveness of heuristic search methods [11], and to prove convergence properties of the simulated annealing algorithm and other evolutionary algorithms for some optimization problems [12]-[14]. Here the landscape means the set of allowable configurations in an optimization problem.

The common opinion about local optima is that, in many combinatorial optimization problems, the set of local optima forms a "big valley", or globally convex, structure. Understanding the nature of the "big valley" has distinct implications for a wide range of theoretical and practical problems in heuristic theory, design and analysis, and the "big valley" structure of the search space can be exploited profitably. However, the analysis of a search space is not a simple task. The goal of this study was to investigate the "big valley" property of local search in the TSP.

This paper is organized as follows. Section 2 discusses the local search trajectory and the solution attractor. Section 3 describes the experimental procedure used in this study. Section 4 presents the experimental results. The final section concludes the paper.
2 Search Trajectory and Solution Attractor

Local search algorithms are iterative. Typically, a local search algorithm starts at an initial solution and then generates a sequence of iterations. Each iteration consists of a possible transition from the current solution s_i to a new solution s_j selected from the neighborhood N(s_i) of s_i. The local search process thus explores a subset of feasible solutions by repeatedly moving from the current solution to a neighboring solution. A search trajectory is the path in the solution space followed by the search process as it evolves with time. There is a great variety of ways in which candidate moves can be chosen for consideration, and in defining criteria for accepting candidate moves. Most heuristic search processes are based on randomization: the initial solution is randomly selected, and the next step is chosen from multiple possibilities, usually based on a random number. Thus, local optimality depends not only on the initial solution s_0 but also on the neighborhood function that is used. In this sense, a local optimum exhibits both initial-condition sensitivity and final-state sensitivity. The search trajectories are unstable everywhere in the solution space, yet bounded and possessed of structure. The behavior of a local search trajectory is apparently unpredictable and exhibits a transition from high-dimensional stochastic to low-dimensional chaotic behavior [15].

The long-term dynamics of a local search process are captured in limit sets. For many optimization problems, there is an attractor that drives the local search trajectories to converge into a small region of the solution space. Suppose that a local search process starts with M initial solutions. These initial solutions are confined to some volume V in the solution space, bounded by a "surface" through these initial points. As the local search process evolves, the volume V moves through the solution space, which resembles the motion of a compressible object: the shape of the volume is distorted and its size decreases.
The final volume occupied by the set of locally optimal points becomes very small. This final small volume is called the solution attractor, i.e. the set of attractive fixed points. With this attractor, equilibrium applies to a region rather than to a particular point. A characteristic feature of a local search process is loss of freedom [15], which makes all local search trajectories converge to a small region of the solution space. A particular trajectory will typically converge to one of the fixed points. It is no wonder that many empirical studies [3], [16]-[19], in attempts to understand the general characteristics of search spaces, found a "big valley" structure in the solution space. In fact, it is the attractor that causes the set of local optima to form the "big valley" structure. The globally optimal solution is also embodied in this attractor.

3 Experimental Methodology

The experiment in this study was designed with two goals in mind: to construct the attractor from a set of local optima, and to identify the properties of the attractor. The basic procedure is straightforward: randomly generate a set of locally optimal solutions for a TSP instance to construct the attractor, divide the attractor into several solution clusters to identify the core of the attractor, and finally search the solutions in the core. Fig. 1 summarizes the experimental procedure.

procedure TSP_Attractor(Q)
begin
    repeat
        s_i = Initial_Tour();
        s_j = Local_Search(s_i);
        Update(E, s_j);
    until StoppingCriterion = M;
    A = Find_Core(E);
    Exhausted_Search(A);
end

Fig. 1 The experimental procedure

In the procedure of Fig. 1, Q is a TSP instance, s_i is an initial solution constructed by Initial_Tour, and s_j is a local optimum output by a local search process. For a TSP instance, the solution space contains the tours the salesman may traverse. There are many ways to represent these tours in the computer.
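The multi-start loop of Fig. 1 can be sketched in Python. The version below is a minimal illustration, not the paper's implementation: it builds a small random symmetric instance, runs a simple first-improvement 2-opt local search (the paper's actual 2-opt samples neighbors randomly; this sketch scans them deterministically), and records each local optimum's directed edges in the hit-frequency matrix E.

```python
import random

def local_search_2opt(tour, d):
    """First-improvement 2-opt: reverse a segment whenever doing so shortens the tour."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # reversing the whole tour changes nothing
                a, b = tour[i], tour[i + 1]
                c, e = tour[j], tour[(j + 1) % n]
                # Replacing edges (a,b),(c,e) with (a,c),(b,e) shortens the tour?
                if d[a][c] + d[b][e] < d[a][b] + d[c][e]:
                    tour[i + 1:j + 1] = tour[i + 1:j + 1][::-1]
                    improved = True
    return tour

def tsp_attractor(d, M, rng):
    """Multi-start local search; E[i][j] counts how often a local optimum travels i -> j."""
    n = len(d)
    E = [[0] * n for _ in range(n)]
    for _ in range(M):
        s = list(range(n))
        rng.shuffle(s)                   # Initial_Tour()
        s = local_search_2opt(s, d)      # Local_Search(s_i)
        for k in range(n):               # Update(E, s_j)
            E[s[k]][s[(k + 1) % n]] += 1
    return E

# Small random symmetric instance (the paper uses n = 500, M = 5000).
rng = random.Random(1)
n, M = 20, 50
d = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        d[i][j] = d[j][i] = rng.randint(1, 1000)
E = tsp_attractor(d, M, rng)
```

Each of the M tours contributes exactly one outgoing edge per city, so every row of E sums to M; the nonzero entries of E are the sample of the attractor that Sections 3 and 4 analyze.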
A good representation allows researchers to trace the search trajectories and to maintain a functional link among the solutions. One natural representation of a TSP solution is a permutation, i.e. an ordered list of the cities to visit; in this way, the positions and order of the cities are taken into account. Another representation of a TSP solution is an n × n matrix E in which the element e_ij in row i and column j contains a "1" if and only if the tour goes from city i directly to city j. This procedure uses an n × n matrix E, named the hit-frequency matrix, to record the number of hits on each edge e_ij by a set of locally optimal tours. In principle, this matrix can encapsulate all of the information about the structure of the solution space, including both the micro-topology of each locally optimal tour and the macro-topology of the collection of these tours. In other words, the hit-frequency matrix should contain rich information about the solution attractor in the search space for the TSP.

If we could generate all possible search trajectories for a certain TSP instance, then, once all search trajectories had reached their local optima, we would obtain the real attractor for the problem; from the hit-frequency matrix, we could immediately recognize the attractor and easily identify the globally optimal tour. Unfortunately, this "all possible search trajectories" scenario is unrealistic, due to the enormous amount of experimental data required. A more realistic goal is to gather a moderate sample of local optima to build the solution attractor and infer statistical properties of the attractor.

There are two types of search trajectories. One type consists of the trajectories that go towards locally optimal points, called L-trajectories. The other includes the trajectories that go towards globally optimal solutions, called G-trajectories. All edges that are contained in a globally optimal tour are called G-edges. A search trajectory constantly adjusts itself along a G-trajectory by disregarding unfavorable edges and collecting G-edges. If it successfully collects all of the G-edges, it becomes a G-trajectory and its final tour is the globally optimal tour. If it collects only some of the G-edges, it is an L-trajectory and ends up at a local optimum. If a tour contains none of the G-edges, the search process can always improve the tour by exchanging edges. Local search algorithms let an individual search trajectory explore only a tiny fraction of the enormous solution space when n is large. Thus, it is very difficult for a particular search trajectory to select G-edges globally, and a search trajectory often ends at a local optimum. The more G-edges a local optimum contains, the closer it is to the global optimum. The locally optimal tours are linked to each other by sharing G-edges. In fact, the attractor of the search space for a local search algorithm in the TSP consists of all G-edges and some unfavorable edges. The mixture of unfavorable edges and G-edges in the solution attractor makes the analysis of the attractor significantly difficult, as there is so far no way to clearly identify the boundary between these two sets of edges.

The procedure in Fig. 1 uses a function Find_Core to cluster the edges, find the core of the attractor, and record it in the matrix A. Finally, the matrix A is searched by an exhaustive-enumeration process to generate all solutions in the core and reveal the properties of the core. The procedure generates M local optima and records the edges of these local optima in the hit-frequency matrix E. The hit-frequency matrix E provides a framework for analyzing the properties of the attractor. It acts as a coding system for all the information not directly representable in an individual local optimum. For a set of M local optima, define

    e_ij^t = 1 if the edge e_ij is in tour t, and 0 if it is not,  t = 1, 2, ..., M    (1)

Then the hit-frequency f(e_ij) is defined as the number of occurrences of the edge e_ij in these M local optima:

    f(e_ij) = sum_{t=1}^{M} e_ij^t    (2)

Although we defined a symmetric TSP, i.e. d_ij = d_ji, the hit-frequency matrix is defined as asymmetric, i.e. e_ij ≠ e_ji, considering the direction in which the edges are traversed. An example of a hit-frequency matrix is shown in Fig. 2, which records the solutions s_1 = (1,2,3,4,5), s_2 = (2,4,5,3,1), s_3 = (4,5,3,2,1), s_4 = (5,1,4,2,3) and s_5 = (3,2,4,5,1).

         1   2   3   4   5
    1    0   2   1   2   0
    2    1   0   2   2   0
    3    1   2   0   1   1
    4    0   1   0   0   4
    5    3   0   2   0   0

    Fig. 2 An example of a hit-frequency matrix

The basic idea behind the hit-frequency matrix is that the globally superior edges are hit by locally optimal tours with higher probability. Thus, the solution attractor is mainly formed by these superior edges. If we cluster the edges by their hit frequencies, as shown in Fig. 3(a), the most-hit edges are expected to form the core of the attractor, as shown in Fig. 3(b). In this framework, edge clusters can be constructed by selecting edges based on hit frequency. Each cluster occupies a portion of the attractor, and the core forms a highly exploitable region of the solution space. The global optimum lies more or less centrally in the core of the attractor.
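The hit-frequency matrix for the five example tours of Fig. 2 can be computed mechanically. The short Python sketch below (illustrative only, not the paper's code) treats each permutation as a cyclic tour and counts directed edges, exactly as in definitions (1) and (2):

```python
def hit_frequency(tours, n):
    """f[i][j] = number of tours that travel directly from city i to city j (1-based labels)."""
    f = [[0] * (n + 1) for _ in range(n + 1)]  # row/column 0 unused
    for t in tours:
        for k in range(len(t)):
            i, j = t[k], t[(k + 1) % len(t)]   # tours are cyclic: last city returns to first
            f[i][j] += 1
    return f

tours = [(1, 2, 3, 4, 5), (2, 4, 5, 3, 1), (4, 5, 3, 2, 1),
         (5, 1, 4, 2, 3), (3, 2, 4, 5, 1)]
f = hit_frequency(tours, 5)
```

For these five tours, the most-hit entries are f[4][5] = 4 and f[5][1] = 3: a first hint of how frequently-shared edges stand out in the matrix.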
Fig. 3 Constructing the attractor: (a) edges clustered by hit frequency into the 1/r most-hit edges and the 1/r least-hit edges; (b) the verge and the core of the attractor.

It is important to choose a proper r in Fig. 3(a). If r is too large, we may not find any feasible solution, or we may exclude the global solution from the most-hit cluster. If r is too small, the most-hit cluster may contain too many solutions for an exhaustive search of the core to be practical.

A set of local optima together contains information about other solutions, since each locally optimal solution shares its edges with other optimal solutions within the same region of the search space. For example, suppose that there are three local optima, s_1 = (1,2,3,4,5), s_2 = (1,5,3,2,4) and s_3 = (1,5,4,2,3), which are marked in a 5 × 5 matrix as shown in Fig. 4. These three local optima contain information about other solutions: for instance, from the matrix we can easily identify another solution, s_4 = (1,2,4,5,3). This new solution s_4 is much like a new feasible offspring derived by performing edge crossover and mutation in a genetic algorithm. This observation motivates this study to take advantage of the fact that certain partial configurations of solutions often occur as components of other solutions. The core of a solution attractor consists of the most promising partial configurations. A strategy of seeking "the most promising partial configurations" can help to circumvent the combinatorial explosion by manipulating only the most primitive elements of the solution space.

Fig. 4 A set of solutions contains information about other solutions

The hit-frequency matrix gives us important insights into the nature of the attractor and provides the opportunity to discover ways of considering only the promising solution region. When we restrict our attention to the smaller solution space represented by the attractor core, the number of possibilities for search is no longer prohibitive. The edges in the core are in fact globally superior.
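This combination effect is easy to check computationally. The sketch below (illustrative, not from the paper) collects the directed edges of s_1, s_2 and s_3 and enumerates every cyclic tour on five cities that can be assembled solely from those edges:

```python
from itertools import permutations

def edges(tour):
    """Directed edges of a cyclic tour (1-based city labels)."""
    return {(tour[k], tour[(k + 1) % len(tour)]) for k in range(len(tour))}

s1, s2, s3 = (1, 2, 3, 4, 5), (1, 5, 3, 2, 4), (1, 5, 4, 2, 3)
pool = edges(s1) | edges(s2) | edges(s3)  # union of the three local optima's edges

# Fix city 1 first to avoid counting rotations; keep tours built entirely from pool.
found = [(1,) + p for p in permutations((2, 3, 4, 5))
         if edges((1,) + p) <= pool]
```

Besides s_1, s_2 and s_3 themselves, the only other tour found is s_4 = (1,2,4,5,3), matching the example in the text: the three local optima implicitly encode a fourth solution.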
For example, if an edge is selected by 73% of the locally optimal solutions, then, although each of them selects the edge based on its own neighborhood structure, the edge is globally superior, since it is reached by these individual solutions along different search paths. It must be pointed out that a globally superior edge is not necessarily a G-edge. If all edges of a tour are G-edges, the tour is the global optimum; if a tour merely contains globally superior edges, it is simply a good tour. We know that the set of G-edges exists and is fixed, but we have no way to separate the G-edges from the other edges.

The experimental procedure carefully retains stochastic as well as deterministic aspects of the experiment. A process in stochastic motion has the ability to explore wide regions of the solution space. This is important because different regions correspond to different solutions of the process, and we want to sample as many of these regions as possible in order to maximize the search. We know that the initial segment of a search trajectory is in a stochastic state [15], which suggests that the multi-start technique is a good choice for creating this stochastic state. On the other hand, we also wish to make the search trajectories converge to an attractor. The greedy-descent mechanics enforce a control law on the search process: control is achieved by applying local search, which imposes a system constraint such that a locally optimal point is targeted on each search trajectory. Each trajectory follows a different search path, and all of these search trajectories are attracted to a compact set. In this way, the collection of local optima can form an ergodic ensemble.

4 Experimental Results

This study generated a general symmetric TSP instance consisting of n = 500 cities, with the distances d_ij = d_ji independently drawn from a uniform distribution of the integers over the interval [1, 1000].
The 2-opt algorithm was used to perform the local search. To guarantee the diversity of the sample of local optima, this study generated M = 5000 initial points and, as a result, obtained 5000 distinct local optima with values in the range [5021, 5204], as shown in Fig. 5. However, these 5000 solution points still represent a small sample of the population of solutions for the TSP instance.

The experiment relied heavily on randomization, the purpose being to sample the attractor randomly. All initial tours were constructed at random, so that each edge had an equal probability of being selected. In the 2-opt local search, the algorithm randomly selected a solution in the neighborhood of the current solution, and a move that gave the first improvement was chosen. The great advantage of the first-improvement pivoting rule is that it produces randomized local optima. However, the randomness in the procedure was interrupted by the underlying structure of the attractor: the final solution points condensed on certain edges that encapsulate the attractor. The long-term behavior of the hit-frequency matrix is loss of randomness and convergence into the attractor. Since this study did not intend to solve the TSP, but rather focused on the analysis of the solution attractor, the local search process was terminated when no improvement had been achieved during 1000 iterations. Thus, a final solution output by the local search process is not necessarily a true local optimum.

Fig. 5 The images of the initial tours (left) and the locally optimal tours (right)

The hit-frequency data were stored in a 500 × 500 matrix, each element of which recorded the number of hits on the edge e_ij by the group of tours. If we use black to represent an edge that is hit by one or more tours and white to represent an edge that is hit by none of the tours, we can display the image of the tours and thus visualize the solution space represented by them. As illustrated in Fig. 5, the left image shows the solution space covered by the 5000 initial tours, and the right image shows the solution space occupied by the 5000 local optima. The right image is a picture of the solution attractor. It is obvious that the solution space is dramatically reduced.

During the cluster-construction process, this study used a trial-and-error subroutine to find a proper r such that the number of marked (most-hit) edges in each column of the 500 × 500 matrix A was limited to the range [4, 8]. The marked edges in the matrix A form the core of the attractor. An exhaustive-enumeration algorithm was then used to generate all solutions that exist in the core. The experimental results are summarized in Table 1. The values of the 5000 local optima are in the range [5021, 5204]. In fact, the solution attractor constructed from these 5000 local optima contains many other solutions.
By using the exhaustive-enumeration algorithm, we found that the core of the attractor contained 68 solutions with values in the range [4872, 5087], among which 23 solutions had better values than the best solution found among the 5000 original local optima.

Table 1  Properties of the solution attractor

    Original 5000 local optima     Core of the attractor
    Best        Worst              Number of    Best        Worst       Number of
    solution    solution           solutions    solution    solution    better solutions
    5021        5204               68           4872        5087        23

The experimental results show evidence of an attractor that governs the local optima in the TSP solution space. The results also demonstrate that the set of most-hit edges forms the core of the attractor. Once such a core is identified, it is rather easy to find very high-quality solutions, possibly even the global solution, in the core.
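The exhaustive enumeration of the core can be done with a simple depth-first backtracking over the few candidate successors that the core matrix A allows for each city. The sketch below is an illustrative reconstruction under assumed data structures (a dict mapping each city to its marked most-hit successors), not the paper's implementation:

```python
def enumerate_core_tours(candidates, n):
    """Return every Hamiltonian cycle starting at city 0 whose every directed
    edge appears in the candidate lists (the marked edges of the core matrix A)."""
    tours = []
    visited = [False] * n
    visited[0] = True

    def extend(path):
        if len(path) == n:
            if 0 in candidates[path[-1]]:   # closing edge back to city 0 must be marked
                tours.append(tuple(path))
            return
        for nxt in candidates[path[-1]]:    # only a few successors per city
            if not visited[nxt]:
                visited[nxt] = True
                extend(path + [nxt])
                visited[nxt] = False

    extend([0])
    return tours

# Tiny assumed example: each city keeps only its few most-hit successor edges.
candidates = {0: [1, 4], 1: [2, 3], 2: [3, 0], 3: [4, 0], 4: [0, 2]}
tours = enumerate_core_tours(candidates, 5)
```

Because each city contributes only 4 to 8 marked edges, the branching factor stays small, which is what makes exhaustive enumeration inside the core practical even for n = 500. In this toy example, exactly two tours survive the restriction.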

5 Conclusion

This study presents a method for analyzing the solution attractor of local search in the TSP. The method uses the hit-frequency matrix to represent the attractor, in which useful information about its core can be used to identify high-quality solutions and even the global solution. It is clear that local optima hit some edges much more frequently than others, which reflects the distribution of edges in the attractor. The hit-frequency matrix acts as a long-term learning memory. Although this memory is created by individual local optima, local interactions among edges can lead to a pattern that contains global information about the TSP at hand. The movement of the data structure in the hit-frequency matrix is a globally iterative process. As this process repeatedly attempts to bring all search trajectories closer to the globally optimal point, the evolution of the hit-frequency matrix will ultimately reach a static final state (the attractor). Since the edges of the global solution are shared by the local optima, it is expected that the global optimum is located within the core of the solution attractor.

References:
[1] M. R. Garey and D. S. Johnson, Computers and Intractability, Freeman, 1979.
[2] C. H. Papadimitriou, Computational Complexity, Addison-Wesley, 1994.
[3] K. D. Boese, A. B. Kahng and S. Muddu, A New Adaptive Multistart Technique for Combinatorial Global Optimizations, Operations Research Letters, Vol.16, No.2, 1994, pp. 101-113.
[4] D. S. Johnson, C. R. Aragon, L. A. McGeoch and C. Schevon, Optimization by Simulated Annealing: An Experimental Evaluation; Part I, Graph Partitioning, Operations Research, Vol.37, 1989, pp. 865-892.
[5] M. Mezard and G. Parisi, A Replica Analysis of the Traveling Salesman Problem, Journal de Physique, Vol.47, 1986, pp. 1285-1296.
[6] G. R. Raidl, G. Kodydek and B. A. Julstrom, On Weight-biased Mutation for Graph Problems, in E. Aarts and J. K. Lenstra (Eds.),
Local Search in Combinatorial Optimization, John Wiley & Sons, 1997, pp. 204-213.
[7] S. Kauffman and S. Levin, Towards a General Theory of Adaptive Walks on Rugged Landscapes, Journal of Theoretical Biology, Vol.128, 1987, pp. 11-45.
[8] E. D. Weinberger, Correlated and Uncorrelated Fitness Landscapes and How to Tell the Difference, Biological Cybernetics, Vol.63, 1990, pp. 325-336.
[9] E. D. Weinberger, Local Properties of Kauffman's N-k Model: A Tunably Rugged Energy Landscape, Physical Review A, Vol.44, 1991, pp. 6399-6413.
[10] C. A. Macken, P. S. Hagan and A. S. Perelson, Evolutionary Walks on Rugged Landscapes, SIAM Journal on Applied Mathematics, Vol.51, 1991, pp. 799-827.
[11] T. Jones and S. Forrest, Fitness Distance Correlation as a Measure of Problem Difficulty for Genetic Algorithms, Proceedings of the Sixth International Conference on Genetic Algorithms, Morgan Kaufmann, 1995, pp. 184-192.
[12] P. Merz and B. Freisleben, Memetic Algorithms and the Fitness Landscape of the Graph Bi-partitioning Problem, in A. E. Eiben, T. Bäck, M. Schoenauer and H.-P. Schwefel (Eds.), Parallel Problem Solving from Nature, LNCS 1498, Springer-Verlag, 1998, pp. 765-774.
[13] P. Moscato, On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms, Caltech Concurrent Computation Program Report 826, Caltech, 1989.
[14] G. B. Sorkin, Efficient Simulated Annealing on Fractal Energy Landscapes, Algorithmica, Vol.6, 1991, pp. 367-418.
[15] W. Li and B. Alidaee, Dynamics of Local Search Heuristics for the Traveling Salesman Problem, IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol.32, No.2, 2002, pp. 173-184.
[16] T. C. Hu, V. Klee and D. Larman, Optimization of Globally Convex Functions, SIAM Journal on Control and Optimization, Vol.27, 1989, pp. 1026-1047.
[17] D. S. Johnson and L. A. McGeoch, The Traveling Salesman Problem: A Case Study, in E. Aarts and J. K. Lenstra (Eds.),
Local Search in Combinatorial Optimization, John Wiley & Sons, 1997, pp. 215-310.
[18] S. Kirkpatrick and G. Toulouse, Configuration Space Analysis of Travelling Salesman Problems, Journal de Physique, Vol.46, 1985, pp. 1277-1292.
[19] C. R. Reeves, Landscapes, Operators and Heuristic Search, Annals of Operations Research, Vol.86, 1998, pp. 473-490.