
INTERNATIONAL JOURNAL OF INFORMATION AND SYSTEMS SCIENCES
Volume 1, Number 3-4, Pages 275-282
© 2005 Institute for Scientific Computing and Information

A HYBRID ALGORITHM BASED ON PARTICLE SWARM OPTIMIZATION

XIAODONG DUAN, CUNRUI WANG, XIANGDONG LIU AND NANNAN WANG

(Received April 10, 2005; revised January 15, 2006)

Abstract. This paper presents a hybrid genetic algorithm (Genetic Algorithm based on Particle Swarm Optimization, GA-PSO) combining the idea of the particle swarm optimizer (PSO) with the genetic algorithm (GA). We also prove that the basic PSO is a special form of the crossover model with float encoding. However, other kinds of optimization problems, such as combinatorial optimization and constrained optimization, are not solved by PSO as well as by the GA. By hybridizing the way each particle exploits the information of its own experience to search the solution space with the crossover operator of the GA, we propose a new algorithm for combinatorial optimization and other optimization problems with discrete encoding. GA-PSO outperforms the basic GA with faster convergence and better solutions on the minimum spanning tree (MST) problem and on function optimization with discrete encoding.

Key Words. GA-PSO, Particle Swarm Optimization, Combinatorial Optimization, Genetic Algorithm.

1. Introduction

Genetic algorithms (GAs) are stochastic search methods inspired by the process of biological evolution [1]. Because of their robustness and their uniform approach to a large number of different classes of problems, GAs have been used in many applications. Particle Swarm Optimization (PSO) was originally designed by Kennedy and Eberhart in 1995 as a new method for function optimization [2]. The PSO idea is inspired by natural phenomena such as fish schooling, bird flocking and human social relations, making it a main offspring of swarm intelligence.

The GA can solve more kinds of optimization problems than PSO can, while PSO has proven to be a competitor to the standard GA for function optimization [3]. As is known, PSO does well in function optimization whose variables are real, that is, where a particle is a real-number representation. But other kinds of optimization problems, such as combinatorial optimization and constrained optimization, are not solved by PSO as well as by the GA. Although several discrete particle swarm algorithms (DPSO) [4][5] have been designed for problems with discrete encoding, the distance between two particles and the velocity of a particle must then be defined, and the operation of DPSO becomes so intricate that its efficiency is much lower than that of GAs. Thus two questions should be asked: first, why is PSO more efficient than GAs in real-valued function optimization, but not as good as GAs in other kinds of optimization with discrete encoding; second, how can a hybrid algorithm be found that combines the kinds of encoding used in the GA with the efficiency of PSO. Some research on hybrid algorithms combining PSO and the GA has already been carried out, by extending PSO with breeding and subpopulations [6].

That hybrid has been shown to achieve faster convergence and better solutions than GAs in function optimization. From our point of view, there is a need to observe how the particles of a swarm make use of all kinds of information, from themselves and from others, to improve the efficiency of the GA. Research has discovered that a particle's reserving and exploiting of its historical experience is one of the cores of PSO in achieving faster convergence. So a hybrid algorithm combining the GA model with the reserving of historical experience from PSO is presented. To make use of this experience, a crossover operator has been designed by which an individual in the GA inherits three kinds of information: the father's present encoding, the mother's present encoding and the encoding of the father's experience. GA-PSO was compared with the standard GA on typical benchmark problems.

The next section introduces the data structures, the crossover operator and the entire flow of the GA-PSO model. Section 3 introduces the experimental settings. The results are described in Section 4. The experimental results are discussed in Section 5 and finally the conclusion is given in Section 6.

2. GA-PSO model

Before introducing the modification of the GA with the PSO idea, the basic PSO is introduced first; then the data structure of individuals and the crossover operator in the GA-PSO model are described. Finally, the flow of the GA-PSO model is shown.

2.1. The standard PSO model. The PSO designed by Kennedy [2] involves casting n particles over the D-dimensional search space, each with an individual, initially random, location x_i = (x_{i1}, x_{i2}, ..., x_{iD}) and velocity vector v_i. The particles fly over the search space, remembering the best (most fit) solution encountered (p_i). At each iteration, every particle adjusts its velocity vector based on its momentum and on the influence of both its own best solution and the best solution of all particles (p_g), and then computes a new point to examine. The momentum of each particle tends to keep it from being trapped by a local, non-optimal extremum, yet because each particle considers both its own memory and that of its neighbors, the entire swarm tends to converge on a global extremum. The standard PSO formulae are:

(1) v_{id} = ω v_{id} + c_1 r_1 (p_{id} - x_{id}) + c_2 r_2 (p_{gd} - x_{id})

(2) x_{id} = x_{id} + v_{id},   i = 1, 2, ..., n,   d = 1, 2, ..., D

where ω is the inertia weight described in [7], and c_1 and c_2 are learning factors, which usually take constant values. If the velocity exceeds a certain limit, called V_max, it is clipped to that limit; implementation of the original algorithm also requires placing limits on the search area (X_max and X_min).
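To make the update rule concrete, here is a minimal Python sketch of equations (1) and (2); it is not from the paper, and the values of ω, c_1, c_2, V_max and the search-area bounds are illustrative assumptions.

```python
import numpy as np

def pso_step(x, v, p_best, g_best, omega=0.7, c1=1.5, c2=1.5,
             v_max=1.0, x_min=-10.0, x_max=10.0):
    """One velocity/position update following equations (1) and (2).

    x, v, p_best are (n, D) arrays; g_best is a (D,) array.
    omega, c1, c2, v_max and the bounds are illustrative, not the paper's.
    """
    n, d = x.shape
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    v = omega * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # eq. (1)
    v = np.clip(v, -v_max, v_max)        # velocity limit V_max
    x = np.clip(x + v, x_min, x_max)     # search-area limits X_min, X_max
    return x, v                          # eq. (2)
```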

2.2. Data structure of individuals in GA-PSO. In the GA-PSO model the data structure of an individual differs from that of the standard GA. It is inspired by the PSO idea that each individual remembers the best solution it has encountered. That is, the structure of each particle includes the encoding of the present solution (X_i) and the encoding of the historical experience (P_i). For example, if the problem is a TSP with five cities, the structure of particle i is shown in Figure 1.

FIGURE 1. Structures of individuals in GA-PSO and in the GA on a TSP with 5 cities. In GA-PSO, particle i carries both X_i = (5 1 4 3 2) and P_i = (1 3 5 2 4); in the GA, an individual carries only X_i = (5 1 4 3 2).

If the problem is function optimization with binary encoding, Figure 2 illustrates the different structures in GA-PSO and in the GA.

FIGURE 2. Structures of individuals in GA-PSO and in the GA on function optimization with binary encoding. In GA-PSO, a particle carries both a present bit string X_i and an experience bit string P_i; in the GA, an individual carries only X_i.

2.3. Crossover operator in GA-PSO. Each individual remembers its best solution, but how to make use of this information is very important in inheriting the PSO idea. Here ⊗_λ is defined as the crossover model under coefficient λ, which may be one-cut-point crossover, float crossover or any other crossover with a single crossover coefficient. For example, when one crossover takes place between X_1 and X_2, their respective children, called X'_1 and X'_2, are

(3) X'_1 = X_1 ⊗_{λ_1} (P_1 ⊗_{λ_2} X_2)

(4) X'_2 = X_2 ⊗_{λ_1} (P_2 ⊗_{λ_2} X_1)

After the crossover, only one child is randomly selected from the two breeding children as the particle that attends the next crossover, so the ancestor's experience is partly inherited by its only child. If λ_1 = λ_2, the algorithm becomes the standard GA with some probability. In this way each child (X'_i) is born under the influence of its parents, X_i and X_j (i ≠ j), and of its best ancestor (P_i), like a particle in PSO. We consider this crossover model to be a generalization of PSO, since it can be proved that the basic PSO is a special form of this crossover model with float encoding. The proof is as follows: the direction-based crossover adapted from [11] is

(5) x'_{id} = x_{id} + λ (x_{jd} - x_{id})

Substituting expression (5) into expression (3), we get

(6) x'_{id} = x_{id} + λ_1 {[p_{id} + λ_2 (x_{jd} - p_{id})] - x_{id}}

which can then be written as

(7) x'_{id} = x_{id} + (λ_1 - λ_1 λ_2)(p_{id} - x_{id}) + λ_1 λ_2 (x_{jd} - x_{id})

Because λ_1 and λ_2 are random numbers between 0 and 1.0, we replace (λ_1 - λ_1 λ_2) and λ_1 λ_2 with c_1 and c_2:

(8) x'_{id} = x_{id} + c_1 (p_{id} - x_{id}) + c_2 (x_{jd} - x_{id})

Expression (8) is the same as expression (1) except for the particle velocity v, so the crossover model of expressions (3) and (4) is more general than the basic PSO with real-number representation.
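As an illustration (ours, not the authors' code), the following sketch instantiates expressions (3) and (4) with float encoding, using the direction-based crossover of expression (5) as the ⊗_λ operator; λ_1 and λ_2 are drawn uniformly from [0, 1] as in the proof above.

```python
import random

def dir_crossover(a, b, lam):
    # Direction-based crossover, expression (5): per gene, a + lam * (b - a).
    return [ai + lam * (bi - ai) for ai, bi in zip(a, b)]

def gapso_crossover(x1, x2, p1, p2):
    """Expressions (3) and (4): each child blends a parent's current encoding
    with the other parent's encoding routed through its own experience."""
    lam1, lam2 = random.random(), random.random()
    child1 = dir_crossover(x1, dir_crossover(p1, x2, lam2), lam1)  # eq. (3)
    child2 = dir_crossover(x2, dir_crossover(p2, x1, lam2), lam1)  # eq. (4)
    # Only one of the two children is kept, selected at random, so the
    # ancestor's experience is partly inherited by a single child.
    return random.choice([child1, child2])
```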

2.4. The flow of the GA-PSO model. The other operators, such as roulette wheel selection and the mutation operator, are the same as in the standard GA; only the crossover operator of GA-PSO differs. But after the crossover and mutation operators have finished, the experience of each individual is refreshed if its fitness is better than that of its best ancestor. The details of the GA-PSO model, sketched in code below, are as follows.

Step 1: Randomly initialize the particle swarm; set n as the population size and max as the maximum number of iterations;
Step 2: Calculate the fitness of each particle and store the current position x_i as the experience p_i of each particle, i = 1, ..., n;
Step 3: Calculate the selection pressure of each particle i by proportional fitness assignment, i = 1, ..., n;
Step 4: Set i = 0; select the j-th particle from the swarm by roulette wheel selection; if j = i, reselect particle j in the same way;
Step 5: Cross particle i and particle j by the crossover model introduced in Section 2.3;
Step 6: If i < n then i++ and go to Step 4; otherwise go to Step 7;
Step 7: With the mutation probability, mutate some particles of the swarm;
Step 8: Calculate the fitness of each particle; if its current fitness is better than its best solution, refresh the experience of that particle, and refresh the best solution of all particles (p_g);
Step 9: If max > 0 then set max = max - 1 and go to Step 3; otherwise output the best solution.

After each individual (X_i) of the population is initialized, its P_i is empty, so a "copy experience" step is executed. In a similar way, "refresh P_i" must be executed after the mutation operator.
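The loop below is a minimal sketch of Steps 1-9, under the assumptions that individuals are Python lists, fitness is positive and to be maximized, and that `init`, `mutate` and the Section 2.3 crossover are supplied by the caller; the helper names are ours, not the paper's.

```python
import random

def gapso(fitness, init, mutate, crossover, n=50, max_iter=1000, pm=0.001):
    """Sketch of Steps 1-9. fitness(x) must return a positive score to be
    maximized (needed for proportional selection); crossover takes
    (x_i, x_j, p_i, p_j) and returns one child, as in Section 2.3."""
    x = [init() for _ in range(n)]                # Step 1: random swarm
    p = [xi[:] for xi in x]                       # Step 2: copy experience
    f_p = [fitness(xi) for xi in x]
    best = max(range(n), key=f_p.__getitem__)     # swarm best p_g
    for _ in range(max_iter):                     # Step 9: iteration budget
        f_x = [fitness(xi) for xi in x]           # Step 3: selection pressure
        new_x = []
        for i in range(n):                        # Steps 4-6
            j = i
            while j == i:                         # reselect while j == i
                j = random.choices(range(n), weights=f_x)[0]
            new_x.append(crossover(x[i], x[j], p[i], p[j]))  # Step 5
        x = [mutate(c) if random.random() < pm else c for c in new_x]  # Step 7
        for i in range(n):                        # Step 8: refresh p_i, p_g
            f = fitness(x[i])
            if f > f_p[i]:
                p[i], f_p[i] = x[i][:], f
        best = max(range(n), key=f_p.__getitem__)
    return p[best], f_p[best]
```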
3. Experimental settings

Both the standard GA and GA-PSO were tested on two benchmark problems, one a function optimization and the other a combinatorial optimization.

3.1. Combinatorial optimization. The combinatorial optimization problem is a degree-constrained minimum spanning tree (MST) problem [8]. Prüfer numbers are used for the genotype representation [9], and Figure 3 shows an example of an 8-node spanning tree with its Prüfer number.

FIGURE 3. A Prüfer number and its corresponding tree: an 8-node spanning tree with node labels 1-8, corresponding to the Prüfer number X_i = (2 5 6 8 2 5).
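For readers unfamiliar with the genotype, here is a short sketch (ours) of the standard Prüfer decoding, which maps the 6-element number of Figure 3 back to its 8-node tree; enforcing the degree-3 constraint (e.g., by penalty or repair) is left out.

```python
def prufer_to_tree(prufer):
    """Decode a Prüfer sequence over nodes 1..n (n = len(prufer) + 2)
    into the edge list of the corresponding spanning tree."""
    n = len(prufer) + 2
    degree = [1] * (n + 1)              # degree[v] = 1 + occurrences of v
    for v in prufer:
        degree[v] += 1
    edges = []
    for v in prufer:
        # attach the smallest-numbered current leaf to the next element
        leaf = min(u for u in range(1, n + 1) if degree[u] == 1)
        edges.append((leaf, v))
        degree[leaf] -= 1
        degree[v] -= 1
    a, b = [u for u in range(1, n + 1) if degree[u] == 1]
    edges.append((a, b))                # join the final two remaining nodes
    return edges

# The Prüfer number (2, 5, 6, 8, 2, 5) of Figure 3 decodes into an
# 8-node tree whose maximum degree is 3, matching the D-MST constraint.
print(prufer_to_tree([2, 5, 6, 8, 2, 5]))
```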
The Kruskal example graphs from [10] are used; the maximum degree of each node in the Kruskal graphs is 3. The edge weights of the graph are given in Table 1.

TABLE 1. Edge weights of the 9-vertex D-MST problem

i\j    1    2    3    4    5    6    7    8    9
1      0  224  224  361  671  300  539  800  943
2    224    0  200  200  447  283  400  728  762
3    224  200    0  400  566  447  600  922  949
4    361  200  400    0  400  200  200  539  583
5    671  447  566  400    0  600  447  781  510
6    300  283  447  200  600    0  283  500  707
7    539  400  600  200  447  283    0  361  424
8    800  728  922  539  781  500  361    0  500
9    943  762  949  583  510  707  424  500    0

3.2. Function optimization with binary encoding. The Camel function is widely known and used as a benchmark for optimization strategies; here it is used for finding the global minimum. The Camel function is given by

min f(x, y) = (4 - 2.1 x^2 + x^4/3) x^2 + x y + (-4 + 4 y^2) y^2,   where x, y ∈ [-10, 10].

The solution X for these functions is a binary string representation.
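A small sketch (ours) of the Camel function together with one possible binary decoding of a bit string into [-10, 10]^2; the paper does not specify its exact decoding scheme, so splitting the string into an x half and a y half is an assumption.

```python
def camel(x, y):
    # Six-hump camel function of Section 3.2 (global minimum ~ -1.031628).
    return (4 - 2.1 * x**2 + x**4 / 3) * x**2 + x * y + (-4 + 4 * y**2) * y**2

def decode(bits, lo=-10.0, hi=10.0):
    """Map a bit string to (x, y) in [lo, hi]^2; the first half of the
    string encodes x and the second half y (illustrative scheme)."""
    half = len(bits) // 2
    def to_real(b):
        return lo + int(b, 2) * (hi - lo) / (2**len(b) - 1)
    return to_real(bits[:half]), to_real(bits[half:])

# Fitness for GA-PSO could then be defined as -camel(x, y), so that
# maximizing fitness minimizes the function.
x, y = decode("0111011010011010")
print(camel(x, y))
```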

3.3. Other parameters. In order to get a fair comparison between the two models, the two algorithms use the same random initialization, selection and one-cut-point crossover, but inversion mutation is used for the MST and reversion mutation for the Camel function optimization. The population size is set to 50, the crossover probability to 0.25 and the mutation probability to 0.001. All experiments were repeated 100 times. The number of time steps was set from 20 to 1000 for the MST and the Camel function optimization. The categories and extrema are listed in Table 2.

TABLE 2. Category and extremum of each optimization problem

         Category   Extremum
Camel    Minimum    -1.031628
MST      Minimum    2256 [10]

4. Experimental results

Table 3 shows the results of the experiments. The table lists the benchmark problems and the average best fitness found over the 100 runs of each test case, together with the standard deviation of each value. The parameter settings in the GA-PSO model are the same as those of the standard GA described in the previous section. The average best fitness of each generation is shown in Figures 4 and 6 for the standard GA and the GA-PSO model; the graphs illustrate a representative set of experiments for each test case with the same settings as described in Section 3. Figures 5 and 7 show, for the corresponding test cases, the standard deviation of one group of fitness values at each generation for the two models. With the Camel function, GA-PSO achieved better solutions and much faster convergence than the standard GA, and Figure 4 shows that the GA-PSO model obtains much steadier results than the GA.

TABLE 3. Average best fitness value and standard deviation for the two optimization problems

         GA-PSO               GA
Camel    -1.0316 ± 2.14E-05   -1.0166 ± 0.019057
MST      2262.86 ± 19.185     2340.4 ± 57.566

In the experiments with the MST, Figures 6 and 7 show that the probability of obtaining the global minimum spanning tree with GA-PSO is higher than with the standard GA model.

FIGURE 4. Standard GA versus the GA-PSO model on the Camel function: average fitness per generation (generations 20-1000).

FIGURE 5. Standard GA versus the GA-PSO model on the Camel function: standard deviation of fitness per generation.

FIGURE 6. Standard GA versus the GA-PSO model on the MST: average fitness per generation (generations 10-950).

FIGURE 7. Standard GA versus the GA-PSO model on the MST: standard deviation of fitness per generation.

5. Discussion

From Figures 4 to 7 it is found that the GA-PSO model achieves much faster and better convergence than the GA, within about 100-200 time steps. This is very similar to the speed of convergence of the PSO model in function optimization. Since PSO and the GA-PSO model reach much faster and better solutions than the GA, this seems to indicate that reserving and extracting one's own experience is very important for the speed of convergence in function optimization. As the saying goes, "Experience teaches; experience does it."

6. Conclusion

In this paper a hybrid algorithm based on the PSO idea and the standard genetic algorithm is proposed. The hybrid model is basically the standard GA combined with the PSO method of reserving one's own experience. To make use of this experience we design a crossover operator between any two individuals by which individuals in the GA can get more experience from themselves and from others to find a better solution. The results with the GA-PSO model show great potential. On the two benchmark problems the GA-PSO model outperforms the basic GA with faster convergence and better solutions. The result not only indicates that the GA-PSO model is a better approach to combinatorial optimization and function optimization than the standard GA model, but also indicates why the PSO model can achieve faster and better convergence in some function optimizations: the answer is the agent's retaining and renewing of its own experience to make decisions before its next movement.

7. Future work

Future work should study the mutual relationship between the GA and PSO and how to exchange and exploit the information of the population to improve the efficiency of the optimization algorithm. The next step is to find out which crossover operator is better in GA-PSO. We believe that this model can become a powerful and adaptive algorithm.

Acknowledgement

Thanks for the support of the Excellent Young Teachers Program of the MOE, the Basic Research Foundation of the Educational Department of Liaoning Province, the Natural Science Foundation of Liaoning Province of China under Grant No. 20021069 and the National Natural Science Foundation of China under Grant No. 60573124.

References

[1] Holland, J., Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975; MIT Press, Cambridge, MA, 1992.
[2] J. Kennedy and R. C. Eberhart, Particle swarm optimization, Proceedings of the 1995 IEEE International Conference on Neural Networks, vol. 4, 1942-1948, IEEE Press.
[3] R. C. Eberhart and Y. Shi, Comparison between genetic algorithms and particle swarm optimization, Evolutionary Programming VII (1998), Lecture Notes in Computer Science 1447, 611-616, Springer.
[4] J. Kennedy and R. C. Eberhart, A discrete binary version of the particle swarm algorithm, Proc. Conf. on Systems, Man, and Cybernetics, Piscataway, NJ, 1997.

[5] C. K. Mohan and B. Al-kazemi, Discrete particle swarm optimization, Proc. Workshop on Particle Swarm Optimization, Indianapolis, IN: Purdue School of Engineering and Technology, IUPUI, 2001.
[6] Løvbjerg, M., Rasmussen, T. K., Krink, T., Hybrid particle swarm optimiser with breeding and subpopulations, Proceedings of the Genetic and Evolutionary Computation Conference, GECCO-2001, Morgan Kaufmann, San Francisco, 2001.
[7] Shi, Y. H., Eberhart, R. C., A modified particle swarm optimizer, IEEE International Conference on Evolutionary Computation, Anchorage, Alaska, May 4-9, 1998.
[8] Graham, R. L. and Hell, P., On the history of the minimum spanning tree problem, Ann. History of Computing, vol. 7, pp. 43-57, 1985.
[9] Gengui Zhou, Mitsuo Gen, An effective genetic algorithm approach to the quadratic minimum spanning tree problem, Computers and Operations Research, v.25, n.3, pp. 229-237, March 1998.
[10] M. Savelsbergh, Local search for routing problems with time windows, Annals of Operations Research, 4:285-305, 1985.
[11] Michalewicz, Z., Logan, T. D., and Swaminathan, S., Evolutionary operators for continuous convex parameter spaces, Proceedings of the 3rd Annual Conference on Evolutionary Programming, A. V. Sebald and L. J. Fogel (eds.), pp. 84-97.

Xiaodong Duan, Institute of Nonlinear Information & Technology, Dalian Nationalities University, Liaohe West Street, No. 18, China. Email: duanxd@dlnu.edu.cn. He received his BS in Computer Science from Nankai University in 1985 and his MS in Mathematics from Northeastern University, China, in 1988. He obtained a Ph.D. in Computer Software & Theory at Northeastern University. His research interests are in the areas of intelligent optimization, fractal theory and data mining. He has published several papers in these areas.

Cunrui Wang, Research Institute of Nonlinear Information & Technology, Dalian Nationalities University, China. Email: cunrui@gmail.com. He received his B.Sc. degree in Computer Science from Dalian Nationalities University in 2002 and his M.Sc. degree in Computer Software & Theory from Northeastern University, China, in 2005. His research interests are in the areas of swarm intelligence and data mining.

Institute of Nonlinear Information & Technology, Dalian Nationalities University, Dalian 116600, China
E-mail: duanxd@dlnu.edu.cn, cunrui@gmail.com, liuxd@dlnu.edu.cn, nannan@dlnu.edu.cn