Adaptive Imperialist Competitive Algorithm (AICA)

Marjan Abdechiri, Elec., Comp. & IT Department, Qazvin Azad University, Qazvin, Iran, Marjan.abdechiri@qiau.ac.ir
Karim Faez, Electrical Engineering Department, Amirkabir University of Technology, Tehran, Iran, Kfaez@aut.ac.ir
Helena Bahrami, Elec., Comp. & IT Department, Qazvin Azad University, Qazvin, Iran, Bahramihelena@yahoo.com

Abstract — The recently introduced Imperialist Competitive Algorithm (ICA) performs well on a number of optimization problems. The ICA is inspired by the socio-political process of imperialistic competition in the real world. In this paper, a new Adaptive Imperialist Competitive Algorithm (AICA) is proposed. For an effective search, the proposed algorithm changes the absorption policy dynamically to adapt the angle of the colonies' movement towards the imperialist's position. The ICA easily gets stuck in a local optimum when solving high-dimensional multimodal numerical optimization problems. To overcome this shortcoming, we use a probabilistic model that exploits the information in the colonies' positions to balance the exploration and exploitation abilities of the imperialistic competitive algorithm. Using this mechanism, the exploration capability is enhanced. Some well-known unconstrained benchmark functions are used to test the performance. We also use the AICA to adjust the weights of a three-layered perceptron neural network that predicts the maximum value of stock price changes in Tehran's Bourse Market. Simulation results show that this strategy can improve the performance of the algorithm significantly.

Keywords — Imperialist Competitive Algorithm; absorption policy; density probabilistic model.

I. INTRODUCTION

The global optimization problem arises in every field of science, engineering and business. Many Evolutionary Algorithms (EA) [1,2] have been proposed for solving it. Inspired by natural evolution, an EA mimics the evolution of a biological population that adapts to a changing environment, and finds the optimum of an optimization problem by evolving a population of candidate solutions. Some evolutionary algorithms for optimization are: the Genetic Algorithm (GA) [2,3,4,5,6,7], first proposed by Holland in 1962 [4]; Particle Swarm Optimization (PSO) [8,9], first proposed by Kennedy and Eberhart [8] in 1995; Simulated Annealing (SA) [10,11,12]; Cultural Evolutionary algorithms (CE) [13,14], first developed by Reynolds in the early 1990s [14]; and others. Optimization methods are extensively used to adjust the weights of multi-layered neural networks. While gradient descent is a very popular optimization method, it is plagued by slow convergence and susceptibility to local minima. Therefore, other approaches to improving NN training have been introduced. These include global optimization algorithms such as Simulated Annealing [15], Genetic Algorithms [16,17], Particle Swarm Optimization [18,19,20] and other evolutionary algorithms. Recently, a new algorithm, the Imperialist Competitive Algorithm (ICA), was proposed by Atashpaz-Gargari and Lucas [21] in 2007, inspired by a socio-human phenomenon. In this paper, we propose a new algorithm called the Adaptive Imperialist Competitive Algorithm (AICA) that uses a probability density function to dynamically adapt, during the iterations, the angle of the colonies' movement towards the imperialist's position. This mechanism enhances the global search capability of the algorithm.
This idea effectively improves the performance of the algorithm in solving optimization problems. We examined the proposed algorithm on several standard benchmark functions that are commonly used to test evolutionary algorithms. We also use the AICA to adjust the weights of a three-layered perceptron neural network to predict the maximum value of stock price changes in Tehran's Bourse Market [22]. The results of applying the proposed algorithm to the benchmark functions and to the neural network show good performance in terms of convergence speed and solution quality in comparison with the ICA, the PSO using a Sugeno function as the inertia weight decline curve [23], and the GA.

The rest of this paper is organized as follows. Section two provides an introduction to the ICA. In section three, the Adaptive Imperialist Competitive Algorithm is proposed. Section four is devoted to the empirical results of the proposed algorithm's implementation and its comparison with the results obtained by the ICA, PSO and GA algorithms. The last section concludes the paper.

II. INTRODUCTION TO THE IMPERIALIST COMPETITIVE ALGORITHM (ICA)

In this section, we introduce the ICA.

A. Imperialist Competitive Algorithm (ICA)

The Imperialist Competitive Algorithm (ICA) is an evolutionary algorithm, in the Evolutionary Computation field, based on human socio-political evolution. The algorithm starts with an initial random population called countries. Some of the best countries in the population are selected to be the imperialists, and the rest form the colonies of these imperialists. In an $N_{var}$-dimensional optimization problem, a country is a $1 \times N_{var}$ array of variable values:

$\mathrm{country} = [p_1, p_2, \ldots, p_{N_{var}}]$    (1)

The cost of a country is found by evaluating the cost function f at these variables:

$\mathrm{cost} = f(\mathrm{country}) = f(p_1, p_2, \ldots, p_{N_{var}})$    (2)

The algorithm starts with $N_{country}$ initial countries, and the best of them (the countries with minimum cost) are chosen as the imperialists. The remaining countries are colonies, each of which belongs to an empire. The initial colonies are divided among the imperialists according to their power. To distribute the colonies among the imperialists proportionally, the normalized cost of an imperialist is defined as

$C_n = c_n - \max_i\{c_i\}$    (3)

where $c_n$ is the cost of the n-th imperialist and $C_n$ is its normalized cost. An imperialist with a higher cost value has a lower normalized cost value. Having the normalized cost, the power of each imperialist is calculated as

$p_n = \left| \dfrac{C_n}{\sum_{i=1}^{N_{imp}} C_i} \right|$    (4)

and, based on it, the colonies are distributed among the imperialist countries. From another point of view, the normalized power of an imperialist corresponds to the portion of colonies it should possess. The initial number of colonies of an empire is therefore

$NC_n = \mathrm{round}(p_n \cdot N_{col})$    (5)

where $NC_n$ is the initial number of colonies of the n-th empire and $N_{col}$ is the total number of colonies. To distribute the colonies among the imperialists, $NC_n$ of the colonies are selected randomly and assigned to the n-th imperialist. The imperialist countries absorb the colonies towards themselves using the absorption policy. The absorption policy, shown in Fig. 1, is the main core of this algorithm and causes the countries to move towards their minimum optima. The imperialists absorb these colonies towards themselves with respect to their power, described in (6). The total power of each empire is determined by both of its parts: the power of the imperialist plus a percentage of the average power of its colonies,

$TC_n = \mathrm{Cost}(\mathrm{imperialist}_n) + \xi\,\mathrm{mean}\{\mathrm{Cost}(\mathrm{colonies\ of\ empire}_n)\}$    (6)

where $TC_n$ is the total cost of the n-th empire and $\xi$ is a positive number considered to be less than one. In the absorption policy, the colony moves towards the imperialist by x units. The direction of movement is the vector from the colony to the imperialist, as shown in Fig. 1; in this figure, the distance between the imperialist and the colony is denoted by d, and x is a random variable with uniform distribution,

$x \sim U(0, \beta \times d)$    (7)

where $\beta$ is greater than 1 and close to 2, so a proper choice, used in our implementation, is $\beta \approx 2$. In the ICA, to search different points around the imperialist, a random amount of deviation is added to the direction of the colony's movement towards the imperialist. In Fig. 1, this deflection angle is shown as $\theta$, which is chosen randomly with a uniform distribution,

$\theta \sim U(-\gamma, \gamma)$    (8)

While moving towards the imperialist, a colony may reach a position with a lower cost than its imperialist; in that case, the colony and the imperialist exchange positions.

Figure 1. Moving colonies toward their imperialist.

In this algorithm, the imperialistic competition plays an important role. During the imperialistic competition, the weak empires lose their power and their colonies. To model this competition, we first calculate the probability that each empire possesses the contested colonies, considering the total cost of the empire. The normalized total cost is

$NTC_n = TC_n - \max_i\{TC_i\}$    (9)

where $TC_n$ is the total cost of the n-th empire and $NTC_n$ is its normalized total cost.
Having the normalized total cost, the possession probability of each empire is calculated as

$p_{p_n} = \left| \dfrac{NTC_n}{\sum_{i=1}^{N_{imp}} NTC_i} \right|$    (10)

After a while, all the empires except the most powerful one collapse, and all the colonies come under the control of this unique empire.
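To make the absorption policy concrete, the following minimal NumPy sketch moves one colony towards its imperialist. It is not the authors' code: the way the deflection angle is realised in more than two dimensions, and the default value gamma = pi/4, are assumptions consistent with Eqs. (7) and (8).

```python
import numpy as np

rng = np.random.default_rng(0)

def assimilate(colony, imperialist, beta=2.0, gamma=np.pi / 4):
    """ICA absorption policy: move a colony towards its imperialist.

    The colony advances x ~ U(0, beta * d) units along the colony->imperialist
    vector, and the direction is deflected by theta ~ U(-gamma, gamma) inside a
    random 2-D plane containing that vector (one simple way to realise the
    deflection when the problem has more than two dimensions).
    """
    diff = imperialist - colony
    d = np.linalg.norm(diff)
    if d == 0.0:
        return colony.copy()
    direction = diff / d
    x = rng.uniform(0.0, beta * d)          # step length, Eq. (7)
    theta = rng.uniform(-gamma, gamma)      # deflection angle, Eq. (8)

    # Build a unit vector orthogonal to `direction` to span the deflection plane.
    rand = rng.standard_normal(direction.shape)
    rand -= rand.dot(direction) * direction
    ortho = rand / (np.linalg.norm(rand) + 1e-12)

    deflected = np.cos(theta) * direction + np.sin(theta) * ortho
    return colony + x * deflected
```

A colony that ends up with a lower cost than its imperialist would then swap roles with it, as described above.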

III. THE PROPOSED ADAPTIVE IMPERIALIST COMPETITIVE ALGORITHM

Like many evolutionary algorithms, the ICA suffers from a limited ability to search the problem space globally. During the search process, the algorithm may get trapped in local optima and move away from the global optimum, which causes premature convergence. In this paper, a new method is suggested that balances the exploration and exploitation abilities of the proposed algorithm using the information in the colonies' positions. In the ICA absorption policy described in the previous section, the colonies move towards the imperialists with an angle drawn from a random variable whose range $\gamma$ is constant. Because of this constant parameter, the colonies' movement has a monotonic nature and cannot adapt to the search process; if the algorithm is trapped in a local optimum, it cannot leave it and move towards the global optimum. To solve this problem, and to balance explorative and exploitative search, we define the parameter $\gamma$ adaptively and dynamically adjust the movement of the colonies towards the imperialists during the search process.

A. The definition of the adaptive movement angle in the absorption policy

As mentioned before, in the ICA the colonies move towards the imperialist with a random amount of deviation, and the parameter $\gamma$ controls this deviation. In this paper, we extract statistical information about the search space from the current population of solutions to provide an adaptive movement angle. We propose a probabilistic model to improve the global search capability. The probabilistic model P(x) that we use here is a Gaussian distribution model [24,25,26,27]. The joint probability distribution of all the countries is given by the product of the marginal (Gaussian) probabilities of the countries:

$P(X) = \prod_{k=1}^{N_{col}} N(x_k;\, \mu, \sigma)$    (11)

where $N(x;\mu,\sigma)$ denotes the Gaussian density with mean $\mu$ and standard deviation $\sigma$. The average, $\mu$, and the standard deviation, $\sigma$, of the colony countries of each empire are approximated as

$\mu = \dfrac{1}{N_{col}} \sum_{k=1}^{N_{col}} x_k, \qquad \sigma = \sqrt{\dfrac{1}{N_{col}} \sum_{k=1}^{N_{col}} (x_k - \mu)^2}$

In each iteration, the country densities are computed using the probabilistic model in Eq. (11). If the countries' density in the current iteration is higher than in the previous iteration, then with probability 85% the previous angle of movement of the countries towards their empires is shrunk and with probability 15% it is expanded, as in Eq. (15). Here, $\gamma_t$ is the current angle of movement, $\gamma_{t-1}$ is the previous angle, and $\lambda$ is the step size for shrinking and expanding the angle of movement; the value of this step size varies between 0.0001 and 0.1. Otherwise, if the countries' density in the current iteration is lower than in the previous iteration, then with probability 85% the previous angle of movement is expanded and with probability 15% it is shrunk, as in Eq. (16). A higher density in the current iteration suggests that the countries may be converging to an optimum point. So, in Eq. (15), depending on the density of the countries' distribution, we set the angle of movement so that each country can escape from the dense area with probability 15%, while with probability 85% the country moves towards its empire with a shrinking angle. In cases where the countries converge to a local optimum, this method helps them escape from the local optimum's trap with probability 15%; in this way, we add explorative search ability to the algorithm. In Eq. (16), if the countries' density in the current iteration is lower than in the previous iteration, each country moves towards its empire with a shrinking angle with probability 15% and with an expanding angle with probability 85%. This provides a more efficient search over the whole search space of the problem. The results show that the quality of the solutions and the convergence speed of the imperialist competitive algorithm with the adaptive absorption policy are better than those of the ICA, the PSO using a Sugeno function as inertia weight, and the GA, as shown in the analysis and conclusion sections. The steps of the AICA are:

(1) Initialize the empires and their colonies' positions randomly.
(2) Compute the adaptive $\gamma$ (the colonies' movement angle towards the imperialist's position) using the probabilistic model.
(3) Compute the total cost of all empires (related to the power of both the imperialist and its colonies).
(4) Pick the weakest colony (or colonies) from the weakest empire and give it (them) to the empire that has the highest likelihood of possessing it (imperialistic competition).
(5) Eliminate the powerless empires.
(6) If there is just one empire, stop; otherwise continue.
(7) Check the termination conditions.

Figure 2. The AICA algorithm.
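The following sketch illustrates one plausible reading of the adaptive absorption policy. It is not the authors' code: the density measure (mean likelihood of an empire's colonies under a factorised Gaussian), the additive shrink/expand step, and the clipping bounds on gamma are assumptions consistent with the description above.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_density(colonies):
    """Mean density of one empire's colonies under a factorised Gaussian
    model fitted to those colonies (the role played by Eq. (11))."""
    mu = colonies.mean(axis=0)
    sigma = colonies.std(axis=0) + 1e-12
    pdf = np.exp(-((colonies - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return pdf.prod(axis=1).mean()

def adapt_gamma(gamma_prev, density_now, density_prev, lam=0.01):
    """Adapt the movement-angle range gamma.

    If the colonies became denser, shrink gamma with probability 0.85 and
    expand it with probability 0.15 (Eq. (15)); otherwise do the opposite
    (Eq. (16)). lam is the step size, taken from the reported range
    [0.0001, 0.1]; the additive form of the update and the clip bounds
    are assumptions.
    """
    shrink = rng.random() < (0.85 if density_now > density_prev else 0.15)
    gamma = gamma_prev - lam if shrink else gamma_prev + lam
    return float(np.clip(gamma, 0.01, np.pi / 2))
```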

IV. ANALYSIS AND CONSIDERATION OF EMPIRICAL RESULTS

In this section, the proposed Adaptive Imperialist Competitive Algorithm (AICA) is applied to some well-known benchmark functions and to updating the weights of a three-layered perceptron neural network, in order to verify its performance, and it is compared with the ICA, the PSO using a Sugeno function as inertia weight, and the GA. These benchmarks are presented in Table I.

TABLE I. BENCHMARKS FOR SIMULATION
Sphere: $f(x)=\sum_{i=1}^{n} x_i^2$, range (-100, 100)
Rosenbrock: $f(x)=\sum_{i=1}^{n-1}\left[100\,(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$, range (-100, 100)
Rastrigin: $f(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$, range (-10, 10)
Griewank: $f(x)=\frac{1}{4000}\sum_{i=1}^{n} x_i^2-\prod_{i=1}^{n}\cos\left(x_i/\sqrt{i}\right)+1$, range (-600, 600)
Ackley: $f(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right)-\exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$, range (-32, 32)
Michalewicz: $f(x)=-\sum_{i=1}^{n}\sin(x_i)\,\sin^{2m}\left(i\,x_i^2/\pi\right)$, range (0, π)

We ran simulations to assess the convergence rate and the quality of the proposed algorithm's optimal solutions in comparison with the ICA, the PSO using a Sugeno function as inertia weight, and the GA; every benchmark was tested separately with 30 dimensions, and the average optimum value over 20 trials was recorded. All simulations ran for 1000 generations on the Sphere and Rosenbrock unimodal functions and the Rastrigin, Griewank, Ackley and Michalewicz multimodal functions. For the ICA and AICA algorithms, we set the parameters to 0.001, and the numbers of imperialists and colonies were set to 8 and 80, respectively. In the PSO algorithm, the parameters c1 and c2 are fixed to 1.5 and the number of particles is 80; choosing equal values for c1 and c2 gives the social and cognitive components an equal chance to take part in the search process. In the GA, the population size is 80, and the mutation and crossover rates are set to 0.01 and 0.5, respectively.

We applied the neural network trained with the AICA, ICA, PSO and GA algorithms to data from Tehran's Bourse Market. The inputs of this network are the volume of traded stocks and the last, lowest and highest prices at different times. The output of the network is an approximation of the highest prices of the traded stocks in Tehran's Bourse Market. In these simulations, we used a three-layered perceptron neural network with an input layer of 7 nodes, a hidden layer of 5 nodes and an output layer of one node. The dataset includes 1155 instances. Using the holdout method, which splits the data into two mutually exclusive sets, sometimes referred to as the training and test sets, we used 80% of the instances for training the neural network and the remaining 20% for testing. The neural network was trained by the AICA, ICA, PSO and GA algorithms and the results were compared with each other. The results of these experiments are presented in Tables II and III.

In Fig. 3, which belongs to the Sphere function, it can be seen that the quality of the global optimum solution and the convergence speed towards the optimum point are improved in comparison with the other three algorithms. In the log plot of the Sphere function, one of the compared algorithms converges faster than the AICA during the first 20 iterations, but after that the AICA wins the competition.

Figure 3. The cost of the Sphere function (comparative results, log scale).

For the Rosenbrock unimodal function, the AICA converges more slowly than the compared algorithms until the 200th iteration; after the 200th iteration, both the convergence speed and the quality of the optimum solution of the AICA recover.

Figure 4. The cost of the Rosenbrock function (comparative results, log scale).

As can be seen in Fig. 5, for the Rastrigin multimodal function the proposed AICA performs better than the ICA, PSO and GA algorithms.
The proposed algorithm has shown good performance on this function and has been able to escape from the local peaks and reach the global optimum.

Figure 5. The cost of the Rastrigin function (comparative results, log scale).
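For reference, a minimal NumPy sketch of the Table I benchmarks follows, using their canonical textbook definitions; the Michalewicz steepness m = 10 is an assumption, as the paper does not state it.

```python
import numpy as np

def sphere(x):      return np.sum(x ** 2)
def rosenbrock(x):  return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)
def rastrigin(x):   return np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x) + 10.0)

def griewank(x):
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def ackley(x):
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

def michalewicz(x, m=10):
    i = np.arange(1, x.size + 1)
    return -np.sum(np.sin(x) * np.sin(i * x ** 2 / np.pi) ** (2 * m))
```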

In Fig. 6, for the Michalewicz multimodal function, the proposed algorithm has shown good performance.

Figure 6. The cost of the Michalewicz function (comparative results).

In Fig. 7, for the Griewank multimodal function, the proposed algorithm shows a remarkable improvement, both in the quality of the optimum solution and in convergence speed, over the ICA, PSO and GA algorithms.

Figure 7. The cost of the Griewank function (comparative results).

In Fig. 8, for the Ackley multimodal function, the proposed algorithm again performs better, both in the quality of the optimum solution and in convergence speed, than the ICA, PSO and GA algorithms, and reaches a better optimum.

Figure 8. The cost of the Ackley function (comparative results).

Table II shows the average of the optimum value over 20 trials obtained by the proposed algorithm and the compared algorithms. The benchmarks were tested with 30 dimensions and the stopping condition was 1000 generations. The numerical results show that the proposed algorithm recovers the global optimum solution remarkably well.

TABLE II. AVERAGE OPTIMUM VALUE OVER 20 TRIALS FOR THE BENCHMARK FUNCTIONS
Sphere:        9.0371   18.8398   28.7678    1.4052
Rosenbrock:   32.6318    1.7112    3.6425   25.6721
Rastrigin:     4.6395   19.2644    0.0994    0.0052
Michalewicz: -16.8100  -18.2950  -20.9049  -21.4124
Griewank:     -0.3687   -0.3702   -0.8319   -2.3712
Ackley:        0.7863    2.6179    1.4384    6.1457

In Fig. 9, the comparison of the Mean Square Error (MSE) of the neural network trained by the AICA, ICA, PSO and GA algorithms indicates that the proposed algorithm trains the network much better than the other algorithms.

Figure 9. The comparison of Mean Square Errors (MSE) versus epoch.

Table III shows the training and test errors of the AICA, ICA, PSO and GA training algorithms. As can be observed, the AICA has the lowest training MSE compared with the other algorithms.
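How the metaheuristics adjust the network weights can be pictured as follows: each country (or particle/chromosome) is a flat weight vector for the 7-5-1 perceptron, and its cost is the MSE on the 80% training split. This is an illustrative sketch only; the tanh hidden activation, the bias handling, and the weight-vector layout are assumptions, not taken from the paper.

```python
import numpy as np

N_IN, N_HID, N_OUT = 7, 5, 1                       # 7-5-1 perceptron from the paper
N_WEIGHTS = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT

def forward(weights, X):
    """Forward pass of the 7-5-1 network; tanh hidden activation is assumed."""
    k = 0
    W1 = weights[k:k + N_IN * N_HID].reshape(N_IN, N_HID); k += N_IN * N_HID
    b1 = weights[k:k + N_HID]; k += N_HID
    W2 = weights[k:k + N_HID * N_OUT].reshape(N_HID, N_OUT); k += N_HID * N_OUT
    b2 = weights[k:k + N_OUT]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse_cost(weights, X, y):
    """Country cost used by the optimizer: MSE on the training split."""
    return float(np.mean((forward(weights, X).ravel() - y) ** 2))

def holdout_split(X, y, train_frac=0.8, seed=0):
    """Holdout method described in the paper: 80% training, 20% testing."""
    idx = np.random.default_rng(seed).permutation(len(X))
    cut = int(train_frac * len(X))
    return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]
```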

TABLE III. COMPARISON RESULTS OF THE TRAINED NEURAL NETWORKS
Train error   Test error   Train correlation   Test correlation   Training time (s)
0.0031        0.0063       0.9909              0.9784             511.880180
0.0016        0.0016       0.9949              0.9943             1363.970089
0.0021        0.0014       0.9936              0.9952             1165.571148
0.0011        6.2798       0.9964              0.9979             1178.611372

V. CONCLUSION

In this paper, an improved imperialist competitive algorithm called the Adaptive Imperialist Competitive Algorithm (AICA) was introduced. The proposed algorithm uses a probability density function to dynamically adapt, during the iterations, the angle of the colonies' movement towards the imperialist's position. This mechanism enhances the global search capability of the algorithm and balances its exploration and exploitation abilities using the information in the colonies' positions. We examined the proposed algorithm on several standard benchmark functions commonly used to test evolutionary algorithms. We also used the AICA to adjust the weights of a three-layered perceptron neural network to predict the maximum value of stock price changes in Tehran's Bourse Market. Experimental results show that the proposed algorithm is a promising method with better global convergence performance than the ICA, PSO and GA algorithms. In future work, we will study the effect of different probability models on the performance of the proposed algorithm.

REFERENCES

[1] H. Sarimveis and A. Nikolakopoulos, "A line up evolutionary algorithm for solving nonlinear constrained optimization problems", Computers & Operations Research, 32(6), pp. 1499-1514, 2005.
[2] H. Mühlenbein, M. Schomisch and J. Born, "The parallel genetic algorithm as function optimizer", Proceedings of the Fourth International Conference on Genetic Algorithms, University of California, San Diego, pp. 270-278, 1991.
[3] C. Bing-rui and F. Xia-ting, "Self-adapting chaos-genetic hybrid algorithm with mixed congruential method", Fourth International Conference, pp. 674-677, 2008.
[4] J. H. Holland, "ECHO: Explorations of evolution in a miniature world", in J. D. Farmer and J. Doyne (eds.), Proceedings of the Second Conference on Artificial Life, 1990.
[5] M. Gao, J. Xu, J. Tian and H. Wu, "Path planning for mobile robot based on chaos genetic algorithm", Fourth International Conference, pp. 409-413, 2008.
[6] M. Melanie, "An Introduction to Genetic Algorithms", Massachusetts: MIT Press, 1999.
[7] R. M. May, "Simple mathematical models with very complicated dynamics", Nature, 261:459, 1976.
[8] J. Kennedy and R. C. Eberhart, "Particle swarm optimization", in Proceedings of the IEEE International Conference on Neural Networks, Piscataway: IEEE, pp. 1942-1948, 1995.
[9] X. Yang, J. Yuan, J. Yuan and H. Mao, "A modified particle swarm optimizer with dynamic adaptation", Applied Mathematics and Computation, 189(2), pp. 1205-1213, 2007.
[10] B. E. Rosen and J. M. Goodwin, "Optimizing neural networks using very fast simulated annealing", Neural, Parallel & Scientific Computations, pp. 383-392, 1997.
[11] L. A. Ingber, "Simulated annealing: practice versus theory", Math. Comput. Modell., 18(11), pp. 29-57, 1993.
[12] M. F. Cardoso, R. L. Salcedo, S. F. Azevedo and D. Barbosa, "A simulated annealing approach to the solution of MINLP problems", Comput. Chem. Eng., 21(12), pp. 1349-1364, 1997.
[13] B. Franklin and M. Bergerman, "Cultural algorithms: concepts and experiments", in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 2, pp. 1245-1251, 2000.
[14] X. Jin and R. G. Reynolds, "Using knowledge-based evolutionary computation to solve nonlinear constraint optimization problems: a cultural algorithm approach", in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 3, pp. 1672-1678, 1999.
[15] B. E. Rosen and J. M. Goodwin, "Optimizing neural networks using very fast simulated annealing", Neural, Parallel & Scientific Computations, pp. 383-392, 1997.
[16] C. L. Wu and K. W. Chau, "A flood forecasting neural network model with genetic algorithm", International Journal of Environment and Pollution, 28(3-4), pp. 261-273, 2006.
[17] N. Muttil and K. W. Chau, "Neural network and genetic programming for modelling coastal algal blooms", International Journal of Environment and Pollution, 28(3-4), pp. 223-238, 2006.
[18] J. Kennedy and R. C. Eberhart, "Particle swarm optimization", in Proceedings of the IEEE International Conference on Neural Networks, Piscataway: IEEE, pp. 1942-1948, 1995.
[19] K. Lei, Y. Qiu and Y. He, "A new adaptive well-chosen inertia weight strategy to automatically harmonize global and local search ability in particle swarm optimization", ISSCAA, 2006.
[20] Y. Da and X. R. Ge, "An improved PSO-based ANN with simulated annealing technique", Neurocomput. Lett., 63, pp. 527-533, 2005.
[21] E. Atashpaz-Gargari and C. Lucas, "Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition", IEEE Congress on Evolutionary Computation (CEC 2007), pp. 4661-4667, 2007.
[22] http://www.IRBourse.com, the dataset used for training the neural network.
[23] S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi, "Optimization by simulated annealing", Science, 220(4598), pp. 671-680, 1983.
[24] A. Papoulis, "Probability, Random Variables and Stochastic Processes", McGraw-Hill, 1965.
[25] R. C. Smith and P. Cheeseman, "On the representation and estimation of spatial uncertainty", The International Journal of Robotics Research, 5(4), Winter 1986.
[26] T. K. Paul and H. Iba, "Linear and combinatorial optimizations by estimation of distribution algorithms", 9th MPS Symposium on Evolutionary Computation, IPSJ, Japan, 2002.
[27] Y. Bar-Shalom, X. Rong Li and T. Kirubarajan, "Estimation with Applications to Tracking and Navigation", John Wiley & Sons, 2001.