Particle Swarm Optimization in Scilab ver 0.1-7

S. SALMON, research engineer and PhD student at M3M - UTBM

Abstract

This document introduces Particle Swarm Optimization (PSO) in Scilab. PSO is a meta-heuristic optimization method created by Kennedy and Eberhart in 1995. Three PSO variants are implemented in this toolbox: the "Inertia Weight Model" by Shi & Eberhart (1998), and the "Radius" and "BSG-Starcraft" variants by the author. The source code is released under CC-BY-NC-SA.

1 Introduction

To treat optimization problems, two main families of methods are available (leaving hybrid methods aside): gradient-based methods, such as Newton or conjugate gradient methods; and meta-heuristic methods, such as Nelder-Mead, Torczon, simulated annealing, ant colonies or genetic algorithms. Gradient-based methods are sensitive to non-linearity and may not converge to a good solution, because they need derivative evaluations. Meta-heuristic methods are designed for such problems, since they require only direct evaluations of the objective function. To treat non-linear problems with a simple optimization process, Particle Swarm Optimization is therefore well adapted.

2 The Particle Swarm Optimization

The PSO method, published by Kennedy and Eberhart in 1995 [4], is based on a population of points initially deployed stochastically over a search field. Each member of this particle swarm is a candidate solution of the optimization problem. The swarm flies through the search field (of N dimensions), and each of its members is attracted by its personal best solution and by the best solution of its neighbours [3, 1]. Each particle has a memory storing all the data relating to its flight (location, speed and personal best solution). It can also inform its neighbours, i.e. communicate its speed and position; this ability is known as socialisation. At each iteration, the objective function is evaluated for every member of the swarm.
Then the leader of the whole swarm can be determined: it is the particle with the best personal solution. At the end, the process leads to the best global solution. This direct search method does not require any knowledge of the objective function derivatives. At each iteration, the location and speed of each particle are updated with the basic method proposed in [4] (Eq. 1):

v_{t+1} = v_t + R1·C1·(g - x_t) + R2·C2·(p - x_t)
x_{t+1} = x_t + v_{t+1}                                   (1)

where C1 and C2 are learning parameters, R1 and R2 are random numbers, g is the location of the leader and p the personal best location. This equation reveals the leader location to every particle.

3 The Inertia Weight Model

A variant of the PSO method was developed by Shi & Eberhart in 1998, in which a modification of the speed equation improves convergence by inserting a time-dependent variable: this is the "Inertia Weight Model" [7] (Eq. 2):

v_{t+1} = ω_t·v_t + R1·C1·(g - x_t) + R2·C2·(p - x_t)     (2)

Decreasing the variable ω slows the particles down around the leader location and provides a balance between exploration and exploitation. Particle trajectories have been studied in [3, 6, 2], and the parameter selection of the particle swarm in [7, 2].

4 The Radius improvement

This improvement, developed by the author and based on the "Inertia Weight Model", consists in stopping the optimization process when the swarm becomes too small. When optimizing a real system such as actuators, waveforms or other physical devices, there are material limitations due to sensor errors, milling defects and so on, and it becomes useless to continue the computation once the particles can no longer be distinguished by real measurement devices. A minimum radius is defined: while the swarm radius (using an infinity norm) is larger than this minimum, the optimization process continues. When the swarm radius becomes smaller than the minimum, a counter runs for 10 iterations. If one particle escapes from the minimum radius, the counter is reset; otherwise, the optimization process is stopped.

5 The BSG-Starcraft improvement

Based on the "Inertia Weight Model" and developed by the author, the Battlestar Galactica (BSG) - Starcraft improvement builds on two ideas inspired by the science fiction series Battlestar Galactica and the video game Starcraft.
Here are the two ideas: the particle leader (the carrier) has the ability to randomly send out some new particles (the raptors) to explore the space quickly; if one raptor finds a better position than the global best, the swarm jumps to this new location (an FTL jump), conserving the swarm geometry, and the carrier location becomes the raptor one. This improvement is still at the evaluation stage and could be useful when the swarm is initially far away from the objective function minimum.
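The update rules above are compact enough to sketch in a few lines. The following Python sketch (not the toolbox code; the function name, defaults and bounds are illustrative) implements the Inertia Weight update of Eq. 2 together with the Radius stopping rule of section 4:

```python
import numpy as np

def pso_inertia(f, dim, n_particles=20, itmax=200, wmax=0.9, wmin=0.4,
                c1=2.0, c2=2.0, lo=-10.0, hi=10.0, vmax=1.0, radius=1e-4):
    """Minimize f with the Inertia Weight Model (Eq. 2) plus the radius stop."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))       # locations
    v = rng.uniform(-vmax, vmax, (n_particles, dim))  # speeds
    pbest = x.copy()                                  # personal best locations
    pval = np.array([f(p) for p in x])                # personal best values
    g = pbest[pval.argmin()].copy()                   # leader (global best)
    counter = 0
    for t in range(itmax):
        w = wmax - (wmax - wmin) * t / itmax          # decreasing inertia weight
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + r1 * c1 * (g - x) + r2 * c2 * (pbest - x)   # Eq. 2
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
        # Radius improvement: stop once the swarm has spent 10 iterations
        # inside the minimum radius (infinity norm around the leader)
        counter = counter + 1 if np.abs(x - g).max() < radius else 0
        if counter >= 10:
            break
    return g, pval.min()

# usage on a small sphere problem: converges close to the origin
best_x, best_f = pso_inertia(lambda z: float(np.sum(z**2)), dim=2)
```

Vectorizing over the swarm keeps the loop body a direct transcription of Eq. 2; only the radius counter is extra state.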

6 How to use and comparison on test cases

6.1 How to use

The toolbox is available via ATOMS in Scilab or from the Scilab forge. These PSO methods are designed to be mono-objective. So, in order to take multi-objective systems into account, the user has to reduce the size of the objective function output, using for example an L2 norm.

First step: an objective function has to be created in a script.sce file, for example:

function f=script(x)
    f=60+sum((x.^2)-10*cos(2*%pi.*x)); // Rastrigin's function on R^6
endfunction

Second step: create a command file to set up the chosen PSO, and execute the file:

clear
lines(0)
objective='script';            // the objective function file
wmax=0.9;                      // initial inertia
wmin=0.4;                      // final inertia
itmax=200;                     // maximum iteration allowed
c1=2;                          // personal best knowledge factor
c2=2;                          // global best knowledge factor
N=20;                          // number of particles
D=6;                           // problem dimension
borne_sup=20*ones(1,D);        // location max. milestone
borne_inf=-1*borne_sup;        // location min. milestone
vitesse_max=ones(1,D);         // max. speed milestone
vitesse_min=-1*vitesse_max;    // min. speed milestone
radius=1e-4;                   // minimal radius

// executing PSO
// for inertial PSO
PSO_inertial(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max)
// for inertial radius PSO
PSO_inertial_radius(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max,radius)
// for BSG-Starcraft PSO
PSO_bsg_starcraft(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max)
// for BSG-Starcraft radius PSO
PSO_bsg_starcraft_radius(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max,radius)

6.2 Test cases

These test cases are chosen from the review by Molga and Smutnicki [5] and are designed to benchmark optimization algorithms on multi-modal and/or multi-dimensional problems and/or problems with many local extrema. Each test case is repeated 100 times for each optimization program.
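The L2-norm reduction mentioned in 6.1 can be sketched as follows (a hypothetical two-output system, written in Python for illustration):

```python
import math

def scalarize(objectives):
    """Collapse a multi-objective output into one value with an L2 norm,
    as suggested in section 6.1 (weights, if any, are problem-specific)."""
    return math.sqrt(sum(v * v for v in objectives))

# Example: a hypothetical system returning two residuals
print(scalarize([3.0, 4.0]))  # -> 5.0
```

The scalarized value can then be returned by the script.sce objective function and minimized by any of the mono-objective PSO routines above.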

6.2.1 Rastrigin's function

Rastrigin's function is defined by (Eq. 3):

f(x) = 10n + Σ_{i=1}^{n} [ x_i^2 - 10·cos(2πx_i) ]        (3)

The solution is located at zero of R^n, where the function value is also zero. The test case is a 20-dimension problem; the PSO parameters are defined in Table 1 and the results in Table 2.

Table 1: Common PSO parameters (limits in R^20) - location inf./sup. and speed inf./sup. bounds; radius 1e-3; max. iteration 800; particle number 20.

Table 2: Results for Rastrigin's function - mean final value and mean iteration for the Inertial, Radius, BSG-Starcraft and BSG-Starcraft radius PSO.

6.2.2 De Jong 1's function

De Jong 1's function is defined by (Eq. 4):

f(x) = Σ_{i=1}^{n} x_i^2                                  (4)

The solution is located at zero of R^n, where the function value is also zero. The test case is a 20-dimension problem; the PSO parameters are defined in Table 3 and the results in Table 4.

Table 3: Common PSO parameters (limits in R^20) - location inf./sup. and speed inf./sup. bounds; radius 1e-3; max. iteration 800; particle number 20.
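Both benchmark functions can be written directly from Eq. 3 and Eq. 4; this Python sketch checks the stated minima at the origin:

```python
import numpy as np

def rastrigin(x):
    """Eq. 3: f(x) = 10n + sum_i (x_i^2 - 10 cos(2 pi x_i))."""
    x = np.asarray(x, dtype=float)
    return 10 * x.size + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

def de_jong_1(x):
    """Eq. 4: the sphere function, f(x) = sum_i x_i^2."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2))

# Both minima sit at the origin of R^n with value 0
print(rastrigin(np.zeros(20)), de_jong_1(np.zeros(20)))  # -> 0.0 0.0
```

Note that the script.sce example in 6.1 is exactly `rastrigin` for n = 6, since 10n = 60 there.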

Table 4: Results for De Jong 1's function - mean final value and mean iteration for the Inertial, Radius, BSG-Starcraft and BSG-Starcraft radius PSO.

6.2.3 Ackley's function

Ackley's function is defined by (Eq. 5), with a = 30, b = 0.2 and c = 2π:

f(x) = -a·exp( -b·sqrt( (1/n)·Σ_{i=1}^{n} x_i^2 ) ) - exp( (1/n)·Σ_{i=1}^{n} cos(c·x_i) ) + a + exp(1)      (5)

The solution is located at zero of R^n, where the function value is also zero. The test case is a 20-dimension problem; the PSO parameters are defined in Table 5 and the results in Table 6.

Table 5: Common PSO parameters (limits in R^20) - location inf./sup. and speed inf./sup. bounds; radius 1e-3; max. iteration 800; particle number 20.

Table 6: Results for Ackley's function - mean final value and mean iteration for the Inertial, Radius, BSG-Starcraft and BSG-Starcraft radius PSO.

6.2.4 Drop wave's function

The drop wave function is defined by (Eq. 6):

f(x_1, x_2) = ( 1 + cos( 12·sqrt(x_1^2 + x_2^2) ) ) / ( 0.5·(x_1^2 + x_2^2) + 2 )      (6)

The solution is located at zero of R^2, where the function value is 1. The test case is a 2-dimension problem; the PSO parameters are defined in Table 7 and the results in Table 8.
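Eq. 5 and Eq. 6 can likewise be checked at the origin (Python sketch; note that a = 20 is also common for Ackley in the literature, while this document uses a = 30, and the value at the origin is 0 either way):

```python
import numpy as np

def ackley(x, a=30.0, b=0.2, c=2 * np.pi):
    """Eq. 5; at the origin the a and exp(1) terms cancel, giving 0."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-a * np.exp(-b * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(c * x)) / n) + a + np.exp(1))

def drop_wave(x1, x2):
    """Eq. 6; with this sign convention the optimum at the origin has value 1."""
    r2 = x1**2 + x2**2
    return (1 + np.cos(12 * np.sqrt(r2))) / (0.5 * r2 + 2)

print(ackley(np.zeros(20)))   # approximately 0 (up to floating-point rounding)
print(drop_wave(0.0, 0.0))    # -> 1.0
```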

Table 7: Common PSO parameters (limits in R^2) - location inf./sup. and speed inf./sup. bounds; radius 1e-3; max. iteration 800; particle number 20.

Table 8: Results for the drop wave function - mean final value and mean iteration for the Inertial, Radius, BSG-Starcraft and BSG-Starcraft radius PSO.

6.3 Inertia Weight Model vs BSG-Starcraft

Here we compare the ability of the PSO to converge to the global minimum of a 6-dimension Rastrigin's function in two cases: in the first case, the global minimum lies inside the initial swarm and the swarm is not too large (Table 9); in the second case, the swarm is large and far away from the minimum (Table 10). The maximum iteration number is set to 200.

Table 9: Case 1 parameters (limits in R^6) - location inf. -10; location sup. 10; speed inf. -1; speed sup. 1.

Table 10: Case 2 parameters (limits in R^6) - location inf. …; location sup. 100; speed inf. -10; speed sup. 10.

Table 11: Results for Inertia Weight Model vs BSG-Starcraft - mean final value for Case 1 and Case 2.

We can notice that the BSG-Starcraft model is at least as effective as the inertial PSO for small-range swarms, and really effective in the case of large-range swarms (Table 11).

7 Conclusion

Particle Swarm Optimization has been used in many optimization cases, both linear and non-linear. This optimization process appears to be effective and simple to use. Both proposed improvements are also effective and may be combined to create a new PSO model.

Appendix

Inertial PSO

// Created by Sebastien Salmon
// M3M - UTBM
// sebastien[.]salmon[@]utbm[.]fr
// 2010
// released under CC-BY-NC-SA
function PSO_inertial(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max)

    lines(0)

    // Setting up graphics axis
    scf(1)
    gcf()
    xtitle("objective function value vs Iteration")
    axe_prop=gca()
    axe_prop.x_label.text="iteration number"
    axe_prop.y_label.text="objective function value"

    // Declaring the objective function
    var=objective+'.sce';
    exec(var)

    // PSO parameters definition
    // using the inertia weight parameter improvement
    //wmax=0.9;   // initial weight parameter
    //wmin=0.4;   // final weight parameter
    // max iteration allowed
    //itmax=800;  // maximum iteration number
    // knowledge factors
    //c1=2;       // for personal best
    //c2=2;       // for global best
    // problem dimensions
    //N=20;       // number of particles
    //D=6;        // problem dimension

    // Allocation of memory and first computations
    // computation of the weight vector
    for i=1:itmax
        W(i)=wmax-((wmax-wmin)/itmax)*i;
    end

    // computation of location and speed of particles
    for i=1:D
        //borne_sup(i)=1;
        //borne_inf(i)=0;
        x(1:N,i)=borne_inf(i)+rand(N,1)*(borne_sup(i)-borne_inf(i));        // location
        //vitesse_min(i)=-0.3;
        //vitesse_max(i)=0.3;
        v(1:N,i)=vitesse_min(i)+(vitesse_max(i)-vitesse_min(i))*rand(N,1);  // speed
    end

    // actual iteration number
    j=1;

    // First evaluation of the objective function
    for i=1:N
        y=x(i,:,j);
        F(i,1,j)=script(y);   // mono-objective result
    end

    // Search for the minimum of the swarm
    [C,I]=min(F(:,1,j));

    // The first minimum is the global minimum since this is the first iteration
    gbest(1,:,j)=x(I,:,j);
    for p=1:N
        G(p,:,j)=gbest(1,:,j);   // creating a matrix of gbest, used for the speed computation
    end

    // The first minimum is also the best result since this is the first iteration
    Fbest(1,1,j)=F(I,1,j);   // global best
    Fb(1,1,j)=F(I,1,j);      // iteration best, used for comparison with the global best

    // Each particle is its own personal best at the first iteration
    for i=1:N
        pbest(i,:,j)=x(i,:,j);
    end

    // Speed and location computation for the next iteration
    v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j));   // speed
    x(:,:,j+1)=x(:,:,j)+v(:,:,j+1);                                                             // location

    // Entering the optimization loop
    while (j<itmax-1)
        j=j+1

        // Evaluation of the objective function
        for i=1:N
            y=x(i,:,j);
            F(i,1,j)=script(y);
        end

        // Search for the minimum of the swarm
        [C,I]=min(F(:,:,j));

        // Searching for the global minimum
        gbest(1,:,j)=x(I,:,j);   // hypothesis: this iteration is better than the last one
        Fb(1,1,j)=F(I,1,j);      // the iteration best result
        Fbest(1,:,j)=Fb(1,:,j);  // hypothesis: Fbest is the iteration best result
        if Fbest(1,1,j)<Fbest(1,1,j-1) then   // check if the actual Fbest is better than the previous one
            // a new global best has been found, keep it
        else
            gbest(1,:,j)=gbest(1,:,j-1);   // if not, restore the previous gbest
            Fbest(1,:,j)=Fbest(1,:,j-1);   // a new Fbest has not been found this time
        end
        for p=1:N
            G(p,:,j)=gbest(1,:,j);   // creating a matrix of gbest, used for the speed computation
        end

        // Computation of the new personal best
        for i=1:N
            [C,I]=min(F(i,1,:));
            if F(i,1,j)<C then
                pbest(i,:,j)=x(i,:,j);
            else
                pbest(i,:,j)=x(i,:,I(3));
            end
        end

        // Re-computation of Fbest for a reliability test
        y=gbest(1,:,j);
        Fbest(:,:,j)=script(y);

        // Speed and location computation for the next iteration
        v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j));   // speed
        x(:,:,j+1)=x(:,:,j)+v(:,:,j+1);                                                             // location

        // Plotting the Fbest curve to monitor the optimization
        for count=1:j
            Fbest_draw(count)=Fbest(1,1,count);
        end
        if modulo(j,25)==0 then
            // resetting graphics
            clf(1)
            scf(1)
            gcf()
            xtitle("objective function value vs Iteration")
            axe_prop=gca()
            axe_prop.x_label.text="iteration number"
            axe_prop.y_label.text="objective function value"
            scf(1)
            plot(Fbest_draw)
            drawnow()
        end

        // Temporary save in case of crash, very useful for long optimizations
        save('PSO_temp')
    end

    // Out of the optimization loop
    disp('Fbest : '+string(Fbest(:,:,j)))
    disp('Gbest : '+string(gbest(:,:,j)))

    save('results_pso')
endfunction

Radius PSO

// Created by Sebastien Salmon
// M3M - UTBM
// sebastien[.]salmon[@]utbm[.]fr
// 2010
// released under CC-BY-NC-SA
function PSO_inertial_radius(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max,radius)

    lines(0)

    // Setting up graphics axes
    scf(1)
    gcf()
    xtitle("objective function value vs Iteration")
    axe_prop=gca()
    axe_prop.x_label.text="iteration number"
    axe_prop.y_label.text="objective function value"
    scf(2)
    gcf()
    xtitle("swarm radius vs Iteration")
    axe_prop=gca()
    axe_prop.x_label.text="iteration number"
    axe_prop.y_label.text="swarm radius (log10)"

    // Declaring the objective function
    var=objective+'.sce';
    exec(var)

    // PSO parameters definition
    // swarm limit radius
    //radius=1e-4;  // limit radius of the swarm; below it the process stops
    // using the inertia weight parameter improvement
    //wmax=0.9;   // initial weight parameter
    //wmin=0.4;   // final weight parameter
    // max iteration allowed
    //itmax=800;  // maximum iteration number
    // knowledge factors
    //c1=2;       // for personal best
    //c2=2;       // for global best
    // problem dimensions
    //N=20;       // number of particles
    //D=6;        // problem dimension

    // Allocation of memory and first computations
    // computation of the weight vector
    for i=1:itmax
        W(i)=wmax-((wmax-wmin)/itmax)*i;
    end

    // computation of location and speed of particles
    for i=1:D
        //borne_sup(i)=1;
        //borne_inf(i)=0;
        x(1:N,i)=borne_inf(i)+rand(N,1)*(borne_sup(i)-borne_inf(i));        // location
        //vitesse_min(i)=-0.3;
        //vitesse_max(i)=0.3;
        v(1:N,i)=vitesse_min(i)+(vitesse_max(i)-vitesse_min(i))*rand(N,1);  // speed
    end

    // actual iteration number
    j=1;

    // actual measurability status
    mesurability=1;
    counter=0;

    // First evaluation of the objective function
    for i=1:N
        y=x(i,:,j);
        F(i,1,j)=script(y);   // mono-objective result
    end

    // Search for the minimum of the swarm
    [C,I]=min(F(:,1,j));

    // The first minimum is the global minimum since this is the first iteration
    gbest(1,:,j)=x(I,:,j);
    for p=1:N
        G(p,:,j)=gbest(1,:,j);   // creating a matrix of gbest, used for the speed computation
    end

    // The first minimum is also the best result since this is the first iteration
    Fbest(1,1,j)=F(I,1,j);   // global best
    Fb(1,1,j)=F(I,1,j);      // iteration best, used for comparison with the global best

    // Each particle is its own personal best at the first iteration
    for i=1:N
        pbest(i,:,j)=x(i,:,j);
    end

    // Speed and location computation for the next iteration
    v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j));   // speed
    x(:,:,j+1)=x(:,:,j)+v(:,:,j+1);                                                             // location

    // Entering the optimization loop
    while (j<itmax-1 & mesurability==1)
        j=j+1

        // Evaluation of the objective function
        for i=1:N
            y=x(i,:,j);
            F(i,1,j)=script(y);
        end

        // Search for the minimum of the swarm
        [C,I]=min(F(:,:,j));

        // Searching for the global minimum
        gbest(1,:,j)=x(I,:,j);   // hypothesis: this iteration is better than the last one
        Fb(1,1,j)=F(I,1,j);      // the iteration best result
        Fbest(1,:,j)=Fb(1,:,j);  // hypothesis: Fbest is the iteration best result
        if Fbest(1,1,j)<Fbest(1,1,j-1) then   // check if the actual Fbest is better than the previous one
            // a new global best has been found, keep it
        else
            gbest(1,:,j)=gbest(1,:,j-1);   // if not, restore the previous gbest
            Fbest(1,:,j)=Fbest(1,:,j-1);   // a new Fbest has not been found this time
        end
        for p=1:N
            G(p,:,j)=gbest(1,:,j);   // creating a matrix of gbest, used for the speed computation
        end

        // Computation of the new personal best
        for i=1:N
            [C,I]=min(F(i,1,:));
            if F(i,1,j)<C then
                pbest(i,:,j)=x(i,:,j);
            else
                pbest(i,:,j)=x(i,:,I(3));
            end
        end

        // Re-computation of Fbest for a reliability test
        y=gbest(1,:,j);
        Fbest(:,:,j)=script(y);

        // Speed and location computation for the next iteration
        v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j));   // speed
        x(:,:,j+1)=x(:,:,j)+v(:,:,j+1);                                                             // location

        // Computing measurability - generating capacity
        [C,I]=min(F(:,:,j));
        leader_temp(1,:,j)=x(I,:,j);   // getting the swarm leader
        for t=1:N
            dist_temp(:,t,j)=abs(x(t,:,j)-leader_temp(:,:,j));   // distance to the leader
        end
        for t=1:N
            dist(t,j)=max(dist_temp(:,t,j));   // infinity norm to the leader
        end
        max_dist(j)=max(dist(:,j));   // swarm radius
        if max_dist(j)>=radius & counter<10 then
            mesurability=1;
            counter=0;
        else
            counter=counter+1;
            if counter==10 then
                mesurability=0;
            end
        end
        rad_plot(j)=radius;
        if modulo(j,25)==0 then
            // resetting graphics
            clf(2)
            scf(2)
            gcf()
            xtitle("swarm radius vs Iteration")
            axe_prop=gca()
            axe_prop.x_label.text="iteration number"
            axe_prop.y_label.text="swarm radius (log10)"
            scf(2)
            plot(log10(max_dist(2:j)))
            plot(log10(rad_plot(2:j)),'r')
            drawnow()
        end

        // Plotting the Fbest curve to monitor the optimization
        for count=1:j
            Fbest_draw(count)=Fbest(1,1,count);
        end
        if modulo(j,25)==0 then
            // resetting graphics
            clf(1)
            scf(1)
            gcf()
            xtitle("objective function value vs Iteration")
            axe_prop=gca()
            axe_prop.x_label.text="iteration number"
            axe_prop.y_label.text="objective function value"
            scf(1)
            plot(Fbest_draw)
            drawnow()
        end

        // Temporary save in case of crash, very useful for long optimizations
        save('PSO_temp')
    end

    // Out of the optimization loop
    disp('Fbest : '+string(Fbest(:,:,j)))
    disp('Gbest : '+string(gbest(:,:,j)))
    save('results_pso_radius')
endfunction

BSG-Starcraft PSO

// Created by Sebastien Salmon
// M3M - UTBM
// sebastien[.]salmon[@]utbm[.]fr
// 2010
// released under CC-BY-NC-SA
function PSO_bsg_starcraft(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max)

    lines(0)

    // Setting up graphics axis
    scf(1)
    gcf()
    xtitle("objective function value vs Iteration")
    axe_prop=gca()
    axe_prop.x_label.text="iteration number"
    axe_prop.y_label.text="objective function value"

    // Declaring the objective function
    var=objective+'.sce';
    disp(var)
    exec(var)

    // PSO parameters definition
    // using the inertia weight parameter improvement
    //wmax=0.9;   // initial weight parameter
    //wmin=0.4;   // final weight parameter
    // max iteration allowed
    //itmax=800;  // maximum iteration number
    // knowledge factors
    //c1=2;       // for personal best
    //c2=2;       // for global best
    // problem dimensions
    //N=20;       // number of particles
    //D=6;        // problem dimension

    // Allocation of memory and first computations
    // computation of the weight vector
    for i=1:itmax
        W(i)=wmax-((wmax-wmin)/itmax)*i;
    end

    // computation of location and speed of particles
    for i=1:D
        //borne_sup(i)=1;
        //borne_inf(i)=0;
        x(1:N,i)=borne_inf(i)+rand(N,1)*(borne_sup(i)-borne_inf(i));        // location
        //vitesse_min(i)=-0.3;
        //vitesse_max(i)=0.3;
        v(1:N,i)=vitesse_min(i)+(vitesse_max(i)-vitesse_min(i))*rand(N,1);  // speed
    end

    // actual iteration number
    j=1;

    // First evaluation of the objective function
    for i=1:N
        y=x(i,:,j);
        F(i,1,j)=script(y);   // mono-objective result
    end

    // Search for the minimum of the swarm
    [C,I]=min(F(:,1,j));

    // The first minimum is the global minimum since this is the first iteration
    gbest(1,:,j)=x(I,:,j);
    gbestc(1,:,j)=x(I,:,j);
    v_gbestc(1,:,j)=v(I,:,j);
    for p=1:N
        G(p,:,j)=gbest(1,:,j);   // creating a matrix of gbest, used for the speed computation
    end

    // The first minimum is also the best result since this is the first iteration
    Fbest(1,1,j)=F(I,1,j);    // global best
    Fbestc(1,1,j)=F(I,1,j);   // carrier best
    Fb(1,1,j)=F(I,1,j);       // iteration best, used for comparison with the global best

    // Each particle is its own personal best at the first iteration
    for i=1:N
        pbest(i,:,j)=x(i,:,j);
    end

    // Speed and location computation for the next iteration
    v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j));   // speed
    x(:,:,j+1)=x(:,:,j)+v(:,:,j+1);                                                             // location

    // Entering the optimization loop

    while (j<itmax-1)
        j=j+1

        // BSG-Starcraft ability of the swarm - 1/10 chance to enable the capacity
        aleat=rand()
        if aleat>=0.9 then
            disp('Enabling Starcraft Protoss carrier at iteration '+string(j))

            // getting the leader of the previous iteration: this is the carrier
            x_carrier=gbestc(:,:,j-1);
            F_carrier=Fbestc(:,:,j-1);
            v_carrier=v_gbestc(:,:,j-1);

            // creating some raptors to explore the space
            // over a quite long range from the carrier
            speed_multiplicator=2;   // 2x faster than the carrier
            number_raptor=20;        // sending 20 raptors
            for i=1:number_raptor
                v_raptor=speed_multiplicator*norm(v_carrier);
                x_raptor(i,:)=x_carrier+(-1+2*rand(1,D))*v_raptor;
            end

            // evaluating the positions of the raptors
            disp('Sending Raptors')
            for i=1:number_raptor
                F_raptor(i)=script(x_raptor(i,:));
            end

            // Comparing the performance of the raptors to the carrier
            [C,I]=min(F_raptor);
            if F_raptor(I)<Fbest(:,:,j-1) then
                gain=-1*(F_carrier-F_raptor(I));
                disp('Gain = '+string(gain))
                disp('Enabling FTL jump')
                // jumping the swarm, conserving its geometry
                jump_vector=x_raptor(I,:)-x_carrier;
                for i=1:N
                    x(i,:,j)=x(i,:,j-1)+jump_vector;
                end
            else
                disp('FTL jump not required')
            end

            // evaluation of the jumped swarm
            for i=1:N
                y=x(i,:,j);
                F(i,1,j)=script(y);
            end
        end

        // Evaluation of the objective function
        for i=1:N
            y=x(i,:,j);
            F(i,1,j)=script(y);   // for the BSG-Starcraft ability
        end

        // Search for the minimum of the swarm
        [C,I]=min(F(:,:,j));

        // Searching for the global minimum
        gbest(1,:,j)=x(I,:,j);   // hypothesis: this iteration is better than the last one
        gbestc(1,:,j)=x(I,:,j);
        v_gbestc(1,:,j)=v(I,:,j);
        Fb(1,1,j)=F(I,1,j);      // the iteration best result
        Fbestc(1,1,j)=F(I,1,j);
        Fbest(1,:,j)=Fb(1,:,j);  // hypothesis: Fbest is the iteration best result
        if Fbest(1,1,j)<Fbest(1,1,j-1) then   // check if the actual Fbest is better than the previous one
            // a new global best has been found, keep it
        else
            gbest(1,:,j)=gbest(1,:,j-1);   // if not, restore the previous gbest
            Fbest(1,:,j)=Fbest(1,:,j-1);   // a new Fbest has not been found this time
        end
        for p=1:N
            G(p,:,j)=gbest(1,:,j);   // creating a matrix of gbest, used for the speed computation
        end

        // Computation of the new personal best
        for i=1:N
            [C,I]=min(F(i,1,:));
            if F(i,1,j)<C then
                pbest(i,:,j)=x(i,:,j);
            else
                pbest(i,:,j)=x(i,:,I(3));
            end
        end

        // Re-computation of Fbest for a reliability test
        y=gbest(1,:,j);
        Fbest(:,:,j)=script(y);

        // Speed and location computation for the next iteration

        v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j));   // speed
        x(:,:,j+1)=x(:,:,j)+v(:,:,j+1);                                                             // location

        // Plotting the Fbest curve to monitor the optimization
        for count=1:j
            Fbest_draw(count)=Fbest(1,1,count);
        end
        if modulo(j,25)==0 then
            // resetting graphics
            clf(1)
            scf(1)
            gcf()
            xtitle("objective function value vs Iteration")
            axe_prop=gca()
            axe_prop.x_label.text="iteration number"
            axe_prop.y_label.text="objective function value"
            scf(1)
            plot(Fbest_draw)
            drawnow()
        end

        // Temporary save in case of crash, very useful for long optimizations
        save('PSO_temp')
    end

    // Out of the optimization loop
    disp('Fbest : '+string(Fbest(:,:,j)))
    disp('Gbest : '+string(gbest(:,:,j)))
    save('results_pso_bsg-starcraft')
endfunction

BSG-Starcraft radius PSO

// Created by Sebastien Salmon
// M3M - UTBM
// sebastien[.]salmon[@]utbm[.]fr
// 2010
// released under CC-BY-NC-SA
function PSO_bsg_starcraft_radius(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max,radius)

    lines(0)

    // Setting up graphics axes
    scf(1)
    gcf()
    xtitle("objective function value vs Iteration")
    axe_prop=gca()
    axe_prop.x_label.text="iteration number"
    axe_prop.y_label.text="objective function value"
    scf(2)
    gcf()
    xtitle("swarm radius vs Iteration")
    axe_prop=gca()
    axe_prop.x_label.text="iteration number"
    axe_prop.y_label.text="swarm radius (log10)"

    // Declaring the objective function
    var=objective+'.sce';
    exec(var)

    // PSO parameters definition
    // swarm limit radius
    //radius=1e-4;  // limit radius of the swarm; below it the process stops
    // using the inertia weight parameter improvement
    //wmax=0.9;   // initial weight parameter
    //wmin=0.4;   // final weight parameter
    // max iteration allowed
    //itmax=800;  // maximum iteration number
    // knowledge factors
    //c1=2;       // for personal best
    //c2=2;       // for global best
    // problem dimensions
    //N=20;       // number of particles
    //D=6;        // problem dimension

    // Allocation of memory and first computations
    // computation of the weight vector
    for i=1:itmax
        W(i)=wmax-((wmax-wmin)/itmax)*i;
    end

    // computation of location and speed of particles
    for i=1:D
        //borne_sup(i)=1;
        //borne_inf(i)=0;
        x(1:N,i)=borne_inf(i)+rand(N,1)*(borne_sup(i)-borne_inf(i));        // location
        //vitesse_min(i)=-0.3;
        //vitesse_max(i)=0.3;
        v(1:N,i)=vitesse_min(i)+(vitesse_max(i)-vitesse_min(i))*rand(N,1);  // speed
    end

    // actual iteration number
    j=1;

    // actual measurability status
    mesurability=1;
    counter=0;

    // First evaluation of the objective function
    for i=1:N
        y=x(i,:,j);
        F(i,1,j)=script(y);   // mono-objective result
    end

    // Search for the minimum of the swarm
    [C,I]=min(F(:,1,j));

    // The first minimum is the global minimum since this is the first iteration
    gbest(1,:,j)=x(I,:,j);
    gbestc(1,:,j)=x(I,:,j);
    v_gbestc(1,:,j)=v(I,:,j);
    for p=1:N
        G(p,:,j)=gbest(1,:,j);   // creating a matrix of gbest, used for the speed computation
    end

    // The first minimum is also the best result since this is the first iteration
    Fbest(1,1,j)=F(I,1,j);    // global best
    Fbestc(1,1,j)=F(I,1,j);   // carrier best
    Fb(1,1,j)=F(I,1,j);       // iteration best, used for comparison with the global best

    // Each particle is its own personal best at the first iteration
    for i=1:N
        pbest(i,:,j)=x(i,:,j);
    end

    // Speed and location computation for the next iteration
    v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j));   // speed
    x(:,:,j+1)=x(:,:,j)+v(:,:,j+1);                                                             // location

    // Entering the optimization loop
    while (j<itmax-1 & mesurability==1)
        j=j+1

        // BSG-Starcraft ability of the swarm - 1/10 chance to enable the capacity
        aleat=rand()
        if aleat>=0.9 then
            disp('Enabling Starcraft Protoss carrier at iteration '+string(j))

            // getting the leader of the previous iteration: this is the carrier
            x_carrier=gbestc(:,:,j-1);
            F_carrier=Fbestc(:,:,j-1);
            v_carrier=v_gbestc(:,:,j-1);

            // creating some raptors to explore the space
            // over a quite long range from the carrier
            speed_multiplicator=2;   // 2x faster than the carrier
            number_raptor=20;        // sending 20 raptors
            for i=1:number_raptor
                v_raptor=speed_multiplicator*norm(v_carrier);
                x_raptor(i,:)=x_carrier+(-1+2*rand(1,D))*v_raptor;
            end

            // evaluating the positions of the raptors
            disp('Sending Raptors')
            for i=1:number_raptor
                F_raptor(i)=script(x_raptor(i,:));
            end

            // Comparing the performance of the raptors to the carrier
            [C,I]=min(F_raptor);
            if F_raptor(I)<Fbest(:,:,j-1) then
                gain=-1*(F_carrier-F_raptor(I));
                disp('Gain = '+string(gain))
                disp('Enabling FTL jump')
                // jumping the swarm, conserving its geometry
                jump_vector=x_raptor(I,:)-x_carrier;
                for i=1:N
                    x(i,:,j)=x(i,:,j-1)+jump_vector;
                end
            else
                disp('FTL jump not required')
            end

            // evaluation of the jumped swarm
            for i=1:N
                y=x(i,:,j);
                F(i,1,j)=script(y);
            end
        end

        // Evaluation of the objective function
        for i=1:N
            y=x(i,:,j);
            F(i,1,j)=script(y);   // for the BSG-Starcraft ability
        end

        // Search for the minimum of the swarm
        [C,I]=min(F(:,:,j));

        // Searching for the global minimum
        gbest(1,:,j)=x(I,:,j);   // hypothesis: this iteration is better than the last one
        gbestc(1,:,j)=x(I,:,j);
        v_gbestc(1,:,j)=v(I,:,j);
        Fb(1,1,j)=F(I,1,j);      // the iteration best result
        Fbestc(1,1,j)=F(I,1,j);
        Fbest(1,:,j)=Fb(1,:,j);  // hypothesis: Fbest is the iteration best result
        if Fbest(1,1,j)<Fbest(1,1,j-1) then   // check if the actual Fbest is better than the previous one
            // a new global best has been found, keep it
        else
            gbest(1,:,j)=gbest(1,:,j-1);   // if not, restore the previous gbest
            Fbest(1,:,j)=Fbest(1,:,j-1);   // a new Fbest has not been found this time
        end
        for p=1:N
            G(p,:,j)=gbest(1,:,j);   // creating a matrix of gbest, used for the speed computation
        end

        // Computation of the new personal best
        for i=1:N
            [C,I]=min(F(i,1,:));
            if F(i,1,j)<C then
                pbest(i,:,j)=x(i,:,j);
            else
                pbest(i,:,j)=x(i,:,I(3));
            end
        end

        // Re-computation of Fbest for a reliability test
        y=gbest(1,:,j);
        Fbest(:,:,j)=script(y);

        // Speed and location computation for the next iteration

        v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j));   // speed
        x(:,:,j+1)=x(:,:,j)+v(:,:,j+1);                                                             // location

        // Computing measurability - generating capacity
        [C,I]=min(F(:,:,j));
        leader_temp(1,:,j)=x(I,:,j);   // getting the swarm leader
        for t=1:N
            dist_temp(:,t,j)=abs(x(t,:,j)-leader_temp(:,:,j));   // distance to the leader
        end
        for t=1:N
            dist(t,j)=max(dist_temp(:,t,j));   // infinity norm to the leader
        end
        max_dist(j)=max(dist(:,j));   // swarm radius
        if max_dist(j)>=radius & counter<10 then
            mesurability=1;
            counter=0;
        else
            counter=counter+1;
            if counter==10 then
                mesurability=0;
            end
        end
        rad_plot(j)=radius;
        if modulo(j,25)==0 then
            // resetting graphics
            clf(2)
            scf(2)
            gcf()
            xtitle("swarm radius vs Iteration")
            axe_prop=gca()
            axe_prop.x_label.text="iteration number"
            axe_prop.y_label.text="swarm radius (log10)"
            scf(2)
            plot(log10(max_dist(2:j)))
            plot(log10(rad_plot(2:j)),'r')
            drawnow()
        end

        // Plotting the Fbest curve to monitor the optimization
        for count=1:j
            Fbest_draw(count)=Fbest(1,1,count);
        end
        if modulo(j,25)==0 then
            // resetting graphics
            clf(1)
            scf(1)

            gcf()
            xtitle("objective function value vs Iteration")
            axe_prop=gca()
            axe_prop.x_label.text="iteration number"
            axe_prop.y_label.text="objective function value"
            scf(1)
            plot(Fbest_draw)
            drawnow()
        end

        // Temporary save in case of crash, very useful for long optimizations
        save('PSO_temp')
    end

    // Out of the optimization loop
    disp('Fbest : '+string(Fbest(:,:,j)))
    disp('Gbest : '+string(gbest(:,:,j)))
    save('results_pso_bsg_starcraft_radius')
endfunction

References

[1] F. van den Bergh and A.P. Engelbrecht. A study of particle swarm optimization particle trajectories. Information Sciences.
[2] I.C. Trelea. The particle swarm optimization algorithm: convergence analysis and parameter selection. Information Processing Letters, vol. 85.
[3] S. Janson and M. Middendorf. On trajectories of particles in PSO. In Proceedings of the 2007 IEEE Swarm Intelligence Symposium.
[4] J. Kennedy and R.C. Eberhart. Particle swarm optimisation. In Proceedings of the IEEE International Conference on Neural Networks, 1995.
[5] M. Molga and C. Smutnicki. Test functions for optimization needs.
[6] F. van den Bergh and A.P. Engelbrecht. A study of particle swarm optimization particle trajectories. Information Sciences, 176.
[7] Y. Shi and R.C. Eberhart. Parameter selection in particle swarm optimization. In Annual Conference on Evolutionary Programming, 1998.


More information

Binary Differential Evolution Strategies

Binary Differential Evolution Strategies Binary Differential Evolution Strategies A.P. Engelbrecht, Member, IEEE G. Pampará Abstract Differential evolution has shown to be a very powerful, yet simple, population-based optimization approach. The

More information

Inertia Weight. v i = ωv i +φ 1 R(0,1)(p i x i )+φ 2 R(0,1)(p g x i ) The new velocity update equation:

Inertia Weight. v i = ωv i +φ 1 R(0,1)(p i x i )+φ 2 R(0,1)(p g x i ) The new velocity update equation: Convergence of PSO The velocity update equation: v i = v i +φ 1 R(0,1)(p i x i )+φ 2 R(0,1)(p g x i ) for some values of φ 1 and φ 2 the velocity grows without bound can bound velocity to range [ V max,v

More information

IMPROVING THE PARTICLE SWARM OPTIMIZATION ALGORITHM USING THE SIMPLEX METHOD AT LATE STAGE

IMPROVING THE PARTICLE SWARM OPTIMIZATION ALGORITHM USING THE SIMPLEX METHOD AT LATE STAGE IMPROVING THE PARTICLE SWARM OPTIMIZATION ALGORITHM USING THE SIMPLEX METHOD AT LATE STAGE Fang Wang, and Yuhui Qiu Intelligent Software and Software Engineering Laboratory, Southwest-China Normal University,

More information

LECTURE 16: SWARM INTELLIGENCE 2 / PARTICLE SWARM OPTIMIZATION 2

LECTURE 16: SWARM INTELLIGENCE 2 / PARTICLE SWARM OPTIMIZATION 2 15-382 COLLECTIVE INTELLIGENCE - S18 LECTURE 16: SWARM INTELLIGENCE 2 / PARTICLE SWARM OPTIMIZATION 2 INSTRUCTOR: GIANNI A. DI CARO BACKGROUND: REYNOLDS BOIDS Reynolds created a model of coordinated animal

More information

Particle Swarm Optimization

Particle Swarm Optimization Particle Swarm Optimization Gonçalo Pereira INESC-ID and Instituto Superior Técnico Porto Salvo, Portugal gpereira@gaips.inesc-id.pt April 15, 2011 1 What is it? Particle Swarm Optimization is an algorithm

More information

PARTICLE SWARM OPTIMIZATION (PSO) [1] is an

PARTICLE SWARM OPTIMIZATION (PSO) [1] is an Proceedings of International Joint Conference on Neural Netorks, Atlanta, Georgia, USA, June -9, 9 Netork-Structured Particle Sarm Optimizer Considering Neighborhood Relationships Haruna Matsushita and

More information

A *69>H>N6 #DJGC6A DG C<>C::G>C<,8>:C8:H /DA 'D 2:6G, ()-"&"3 -"(' ( +-" " " % '.+ % ' -0(+$,

A *69>H>N6 #DJGC6A DG C<>C::G>C<,8>:C8:H /DA 'D 2:6G, ()-&3 -(' ( +-   % '.+ % ' -0(+$, The structure is a very important aspect in neural network design, it is not only impossible to determine an optimal structure for a given problem, it is even impossible to prove that a given structure

More information

Argha Roy* Dept. of CSE Netaji Subhash Engg. College West Bengal, India.

Argha Roy* Dept. of CSE Netaji Subhash Engg. College West Bengal, India. Volume 3, Issue 3, March 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Training Artificial

More information

Small World Network Based Dynamic Topology for Particle Swarm Optimization

Small World Network Based Dynamic Topology for Particle Swarm Optimization Small World Network Based Dynamic Topology for Particle Swarm Optimization Qingxue Liu 1,2, Barend Jacobus van Wyk 1 1 Department of Electrical Engineering Tshwane University of Technology Pretoria, South

More information

Optimization Using Particle Swarms with Near Neighbor Interactions

Optimization Using Particle Swarms with Near Neighbor Interactions Optimization Using Particle Swarms with Near Neighbor Interactions Kalyan Veeramachaneni, Thanmaya Peram, Chilukuri Mohan, and Lisa Ann Osadciw Department of Electrical Engineering and Computer Science

More information

A Hybrid Fireworks Optimization Method with Differential Evolution Operators

A Hybrid Fireworks Optimization Method with Differential Evolution Operators A Fireworks Optimization Method with Differential Evolution Operators YuJun Zheng a,, XinLi Xu a, HaiFeng Ling b a College of Computer Science & Technology, Zhejiang University of Technology, Hangzhou,

More information

Fast Hybrid PSO and Tabu Search Approach for Optimization of a Fuzzy Controller

Fast Hybrid PSO and Tabu Search Approach for Optimization of a Fuzzy Controller IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 5, No, September ISSN (Online): 694-84 www.ijcsi.org 5 Fast Hybrid PSO and Tabu Search Approach for Optimization of a Fuzzy Controller

More information

Meta- Heuristic based Optimization Algorithms: A Comparative Study of Genetic Algorithm and Particle Swarm Optimization

Meta- Heuristic based Optimization Algorithms: A Comparative Study of Genetic Algorithm and Particle Swarm Optimization 2017 2 nd International Electrical Engineering Conference (IEEC 2017) May. 19 th -20 th, 2017 at IEP Centre, Karachi, Pakistan Meta- Heuristic based Optimization Algorithms: A Comparative Study of Genetic

More information

International Journal of Digital Application & Contemporary research Website: (Volume 1, Issue 7, February 2013)

International Journal of Digital Application & Contemporary research Website:   (Volume 1, Issue 7, February 2013) Performance Analysis of GA and PSO over Economic Load Dispatch Problem Sakshi Rajpoot sakshirajpoot1988@gmail.com Dr. Sandeep Bhongade sandeepbhongade@rediffmail.com Abstract Economic Load dispatch problem

More information

A MULTI-SWARM PARTICLE SWARM OPTIMIZATION WITH LOCAL SEARCH ON MULTI-ROBOT SEARCH SYSTEM

A MULTI-SWARM PARTICLE SWARM OPTIMIZATION WITH LOCAL SEARCH ON MULTI-ROBOT SEARCH SYSTEM A MULTI-SWARM PARTICLE SWARM OPTIMIZATION WITH LOCAL SEARCH ON MULTI-ROBOT SEARCH SYSTEM BAHAREH NAKISA, MOHAMMAD NAIM RASTGOO, MOHAMMAD FAIDZUL NASRUDIN, MOHD ZAKREE AHMAD NAZRI Department of Computer

More information

THREE PHASE FAULT DIAGNOSIS BASED ON RBF NEURAL NETWORK OPTIMIZED BY PSO ALGORITHM

THREE PHASE FAULT DIAGNOSIS BASED ON RBF NEURAL NETWORK OPTIMIZED BY PSO ALGORITHM THREE PHASE FAULT DIAGNOSIS BASED ON RBF NEURAL NETWORK OPTIMIZED BY PSO ALGORITHM M. Sivakumar 1 and R. M. S. Parvathi 2 1 Anna University, Tamilnadu, India 2 Sengunthar College of Engineering, Tamilnadu,

More information

GREEN-PSO: Conserving Function Evaluations in Particle Swarm Optimization

GREEN-PSO: Conserving Function Evaluations in Particle Swarm Optimization GREEN-PSO: Conserving Function Evaluations in Particle Swarm Optimization Stephen M. Majercik 1 1 Department of Computer Science, Bowdoin College, Brunswick, Maine, USA smajerci@bowdoin.edu Keywords: Abstract:

More information

Particle swarm algorithms for multi-local optimization A. Ismael F. Vaz 1, Edite M.G.P. Fernandes 1

Particle swarm algorithms for multi-local optimization A. Ismael F. Vaz 1, Edite M.G.P. Fernandes 1 I Congresso de Estatística e Investigação Operacional da Galiza e Norte de Portugal VII Congreso Galego de Estatística e Investigación de Operacións Guimarães 26, 27 e 28 de Outubro de 2005 Particle swarm

More information

Model Parameter Estimation

Model Parameter Estimation Model Parameter Estimation Shan He School for Computational Science University of Birmingham Module 06-23836: Computational Modelling with MATLAB Outline Outline of Topics Concepts about model parameter

More information

Mobile Robot Path Planning in Static Environments using Particle Swarm Optimization

Mobile Robot Path Planning in Static Environments using Particle Swarm Optimization Mobile Robot Path Planning in Static Environments using Particle Swarm Optimization M. Shahab Alam, M. Usman Rafique, and M. Umer Khan Abstract Motion planning is a key element of robotics since it empowers

More information

A Multiobjective Memetic Algorithm Based on Particle Swarm Optimization

A Multiobjective Memetic Algorithm Based on Particle Swarm Optimization A Multiobjective Memetic Algorithm Based on Particle Swarm Optimization Dr. Liu Dasheng James Cook University, Singapore / 48 Outline of Talk. Particle Swam Optimization 2. Multiobjective Particle Swarm

More information

A Comparative Analysis on the Performance of Particle Swarm Optimization and Artificial Immune Systems for Mathematical Test Functions.

A Comparative Analysis on the Performance of Particle Swarm Optimization and Artificial Immune Systems for Mathematical Test Functions. Australian Journal of Basic and Applied Sciences 3(4): 4344-4350 2009 ISSN 1991-8178 A Comparative Analysis on the Performance of Particle Swarm Optimization and Artificial Immune Systems for Mathematical

More information

THE DEVELOPMENT OF THE POTENTIAL AND ACADMIC PROGRAMMES OF WROCLAW UNIVERISTY OF TECHNOLOGY METAHEURISTICS

THE DEVELOPMENT OF THE POTENTIAL AND ACADMIC PROGRAMMES OF WROCLAW UNIVERISTY OF TECHNOLOGY METAHEURISTICS METAHEURISTICS 1. Objectives The goals of the laboratory workshop are as follows: to learn basic properties of evolutionary computation techniques and other metaheuristics for solving various global optimization

More information

Handling Multi Objectives of with Multi Objective Dynamic Particle Swarm Optimization

Handling Multi Objectives of with Multi Objective Dynamic Particle Swarm Optimization Handling Multi Objectives of with Multi Objective Dynamic Particle Swarm Optimization Richa Agnihotri #1, Dr. Shikha Agrawal #1, Dr. Rajeev Pandey #1 # Department of Computer Science Engineering, UIT,

More information

Effectual Multiprocessor Scheduling Based on Stochastic Optimization Technique

Effectual Multiprocessor Scheduling Based on Stochastic Optimization Technique Effectual Multiprocessor Scheduling Based on Stochastic Optimization Technique A.Gowthaman 1.Nithiyanandham 2 G Student [VLSI], Dept. of ECE, Sathyamabama University,Chennai, Tamil Nadu, India 1 G Student

More information

A NEW APPROACH TO SOLVE ECONOMIC LOAD DISPATCH USING PARTICLE SWARM OPTIMIZATION

A NEW APPROACH TO SOLVE ECONOMIC LOAD DISPATCH USING PARTICLE SWARM OPTIMIZATION A NEW APPROACH TO SOLVE ECONOMIC LOAD DISPATCH USING PARTICLE SWARM OPTIMIZATION Manjeet Singh 1, Divesh Thareja 2 1 Department of Electrical and Electronics Engineering, Assistant Professor, HCTM Technical

More information

Research on Improved Particle Swarm Optimization based on Membrane System in Cloud Resource Scheduling

Research on Improved Particle Swarm Optimization based on Membrane System in Cloud Resource Scheduling Research on Improved Particle Swarm Optimization based on Membrane System in Cloud Resource Scheduling n a1 ; Wenjie Gong b ; Tingyu Liang c University of Electronic Science and Technology of China Chendu,

More information

Feeder Reconfiguration Using Binary Coding Particle Swarm Optimization

Feeder Reconfiguration Using Binary Coding Particle Swarm Optimization 488 International Journal Wu-Chang of Control, Wu Automation, and Men-Shen and Systems, Tsai vol. 6, no. 4, pp. 488-494, August 2008 Feeder Reconfiguration Using Binary Coding Particle Swarm Optimization

More information

ACONM: A hybrid of Ant Colony Optimization and Nelder-Mead Simplex Search

ACONM: A hybrid of Ant Colony Optimization and Nelder-Mead Simplex Search ACONM: A hybrid of Ant Colony Optimization and Nelder-Mead Simplex Search N. Arun & V.Ravi* Assistant Professor Institute for Development and Research in Banking Technology (IDRBT), Castle Hills Road #1,

More information

Overlapping Swarm Intelligence for Training Artificial Neural Networks

Overlapping Swarm Intelligence for Training Artificial Neural Networks Overlapping Swarm Intelligence for Training Artificial Neural Networks Karthik Ganesan Pillai Department of Computer Science Montana State University EPS 357, PO Box 173880 Bozeman, MT 59717-3880 k.ganesanpillai@cs.montana.edu

More information

Application of Improved Discrete Particle Swarm Optimization in Logistics Distribution Routing Problem

Application of Improved Discrete Particle Swarm Optimization in Logistics Distribution Routing Problem Available online at www.sciencedirect.com Procedia Engineering 15 (2011) 3673 3677 Advanced in Control Engineeringand Information Science Application of Improved Discrete Particle Swarm Optimization in

More information

GRID SCHEDULING USING ENHANCED PSO ALGORITHM

GRID SCHEDULING USING ENHANCED PSO ALGORITHM GRID SCHEDULING USING ENHANCED PSO ALGORITHM Mr. P.Mathiyalagan 1 U.R.Dhepthie 2 Dr. S.N.Sivanandam 3 1 Lecturer 2 Post Graduate Student 3 Professor and Head Department of Computer Science and Engineering

More information

Global optimization using Lévy flights

Global optimization using Lévy flights Global optimization using Lévy flights Truyen Tran, Trung Thanh Nguyen, Hoang Linh Nguyen September 2004 arxiv:1407.5739v1 [cs.ne] 22 Jul 2014 Abstract This paper studies a class of enhanced diffusion

More information

Improving local and regional earthquake locations using an advance inversion Technique: Particle swarm optimization

Improving local and regional earthquake locations using an advance inversion Technique: Particle swarm optimization ISSN 1 746-7233, England, UK World Journal of Modelling and Simulation Vol. 8 (2012) No. 2, pp. 135-141 Improving local and regional earthquake locations using an advance inversion Technique: Particle

More information

MATH 209, Lab 5. Richard M. Slevinsky

MATH 209, Lab 5. Richard M. Slevinsky MATH 209, Lab 5 Richard M. Slevinsky Problems 1. Say the temperature T at any point (x, y, z) in space is given by T = 4 x y z 2. Find the hottest point on the sphere F = x 2 + y 2 + z 2 100 = 0; We equate

More information

Artificial bee colony algorithm with multiple onlookers for constrained optimization problems

Artificial bee colony algorithm with multiple onlookers for constrained optimization problems Artificial bee colony algorithm with multiple onlookers for constrained optimization problems Milos Subotic Faculty of Computer Science University Megatrend Belgrade Bulevar umetnosti 29 SERBIA milos.subotic@gmail.com

More information

TASK SCHEDULING USING HAMMING PARTICLE SWARM OPTIMIZATION IN DISTRIBUTED SYSTEMS

TASK SCHEDULING USING HAMMING PARTICLE SWARM OPTIMIZATION IN DISTRIBUTED SYSTEMS Computing and Informatics, Vol. 36, 2017, 950 970, doi: 10.4149/cai 2017 4 950 TASK SCHEDULING USING HAMMING PARTICLE SWARM OPTIMIZATION IN DISTRIBUTED SYSTEMS Subramaniam Sarathambekai, Kandaswamy Umamaheswari

More information

Particle Swarm Optimization Based Approach for Location Area Planning in Cellular Networks

Particle Swarm Optimization Based Approach for Location Area Planning in Cellular Networks International Journal of Intelligent Systems and Applications in Engineering Advanced Technology and Science ISSN:2147-67992147-6799 www.atscience.org/ijisae Original Research Paper Particle Swarm Optimization

More information

Cooperative Coevolution using The Brain Storm Optimization Algorithm

Cooperative Coevolution using The Brain Storm Optimization Algorithm Cooperative Coevolution using The Brain Storm Optimization Algorithm Mohammed El-Abd Electrical and Computer Engineering Department American University of Kuwait Email: melabd@auk.edu.kw Abstract The Brain

More information

Three-Dimensional Off-Line Path Planning for Unmanned Aerial Vehicle Using Modified Particle Swarm Optimization

Three-Dimensional Off-Line Path Planning for Unmanned Aerial Vehicle Using Modified Particle Swarm Optimization Three-Dimensional Off-Line Path Planning for Unmanned Aerial Vehicle Using Modified Particle Swarm Optimization Lana Dalawr Jalal Abstract This paper addresses the problem of offline path planning for

More information

Artificial Bee Colony (ABC) Optimization Algorithm for Solving Constrained Optimization Problems

Artificial Bee Colony (ABC) Optimization Algorithm for Solving Constrained Optimization Problems Artificial Bee Colony (ABC) Optimization Algorithm for Solving Constrained Optimization Problems Dervis Karaboga and Bahriye Basturk Erciyes University, Engineering Faculty, The Department of Computer

More information

Opportunistic Self Organizing Migrating Algorithm for Real-Time Dynamic Traveling Salesman Problem

Opportunistic Self Organizing Migrating Algorithm for Real-Time Dynamic Traveling Salesman Problem Opportunistic Self Organizing Migrating Algorithm for Real-Time Dynamic Traveling Salesman Problem arxiv:1709.03793v1 [cs.ne] 12 Sep 2017 Shubham Dokania, Sunyam Bagga, and Rohit Sharma shubham.k.dokania@gmail.com,

More information

SwarmOps for Matlab. Numeric & Heuristic Optimization Source-Code Library for Matlab The Manual Revision 1.0

SwarmOps for Matlab. Numeric & Heuristic Optimization Source-Code Library for Matlab The Manual Revision 1.0 Numeric & Heuristic Optimization Source-Code Library for Matlab The Manual Revision 1.0 By Magnus Erik Hvass Pedersen November 2010 Copyright 2009-2010, all rights reserved by the author. Please see page

More information

Solving Travelling Salesman Problem Using Variants of ABC Algorithm

Solving Travelling Salesman Problem Using Variants of ABC Algorithm Volume 2, No. 01, March 2013 ISSN 2278-1080 The International Journal of Computer Science & Applications (TIJCSA) RESEARCH PAPER Available Online at http://www.journalofcomputerscience.com/ Solving Travelling

More information

A RANDOM SYNCHRONOUS-ASYNCHRONOUS PARTICLE SWARM OPTIMIZATION ALGORITHM WITH A NEW ITERATION STRATEGY

A RANDOM SYNCHRONOUS-ASYNCHRONOUS PARTICLE SWARM OPTIMIZATION ALGORITHM WITH A NEW ITERATION STRATEGY A RANDOM SYNCHRONOUS-ASYNCHRONOUS PARTICLE SWARM OPTIMIZATION ALORITHM WITH A NEW ITERATION STRATEY Nor Azlina Ab Aziz 1,2, Shahdan Sudin 3, Marizan Mubin 1, Sophan Wahyudi Nawawi 3 and Zuwairie Ibrahim

More information

A Novel Hybrid Self Organizing Migrating Algorithm with Mutation for Global Optimization

A Novel Hybrid Self Organizing Migrating Algorithm with Mutation for Global Optimization International Journal of Soft Computing and Engineering (IJSCE) ISSN: 2231-2307, Volume-3, Issue-6, January 2014 A Novel Hybrid Self Organizing Migrating Algorithm with Mutation for Global Optimization

More information

A Combinatorial Algorithm for The Cardinality Constrained Portfolio Optimization Problem

A Combinatorial Algorithm for The Cardinality Constrained Portfolio Optimization Problem 0 IEEE Congress on Evolutionary Computation (CEC) July -, 0, Beijing, China A Combinatorial Algorithm for The Cardinality Constrained Portfolio Optimization Problem Tianxiang Cui, Shi Cheng, and Ruibin

More information

Scheduling Meta-tasks in Distributed Heterogeneous Computing Systems: A Meta-Heuristic Particle Swarm Optimization Approach

Scheduling Meta-tasks in Distributed Heterogeneous Computing Systems: A Meta-Heuristic Particle Swarm Optimization Approach Scheduling Meta-tasks in Distributed Heterogeneous Computing Systems: A Meta-Heuristic Particle Swarm Optimization Approach Hesam Izakian¹, Ajith Abraham², Václav Snášel³ ¹Department of Computer Engineering,

More information

Feeding the Fish Weight Update Strategies for the Fish School Search Algorithm

Feeding the Fish Weight Update Strategies for the Fish School Search Algorithm Feeding the Fish Weight Update Strategies for the Fish School Search Algorithm Andreas Janecek and Ying Tan Key Laboratory of Machine Perception (MOE), Peking University Department of Machine Intelligence,

More information

An evolutionary annealing-simplex algorithm for global optimisation of water resource systems

An evolutionary annealing-simplex algorithm for global optimisation of water resource systems FIFTH INTERNATIONAL CONFERENCE ON HYDROINFORMATICS 1-5 July 2002, Cardiff, UK C05 - Evolutionary algorithms in hydroinformatics An evolutionary annealing-simplex algorithm for global optimisation of water

More information

Cell-to-switch assignment in. cellular networks. barebones particle swarm optimization

Cell-to-switch assignment in. cellular networks. barebones particle swarm optimization Cell-to-switch assignment in cellular networks using barebones particle swarm optimization Sotirios K. Goudos a), Konstantinos B. Baltzis, Christos Bachtsevanidis, and John N. Sahalos RadioCommunications

More information

GA is the most popular population based heuristic algorithm since it was developed by Holland in 1975 [1]. This algorithm runs faster and requires les

GA is the most popular population based heuristic algorithm since it was developed by Holland in 1975 [1]. This algorithm runs faster and requires les Chaotic Crossover Operator on Genetic Algorithm Hüseyin Demirci Computer Engineering, Sakarya University, Sakarya, 54187, Turkey Ahmet Turan Özcerit Computer Engineering, Sakarya University, Sakarya, 54187,

More information

PARTICLE SWARM OPTIMIZATION APPLICATION IN OPTIMIZATION

PARTICLE SWARM OPTIMIZATION APPLICATION IN OPTIMIZATION 131 4 Dkiember 2008 PARTCLE SWARM OPTMZATON APPLCATON N OPTMZATON Abdul Talib Bon, PhD Deputy Dean (Research & Development) Faculty of Technology Management Universiti Tun Hussein Onn Malaysia 86400 Parit

More information

Adaptively Choosing Neighbourhood Bests Using Species in a Particle Swarm Optimizer for Multimodal Function Optimization

Adaptively Choosing Neighbourhood Bests Using Species in a Particle Swarm Optimizer for Multimodal Function Optimization Adaptively Choosing Neighbourhood Bests Using Species in a Particle Swarm Optimizer for Multimodal Function Optimization Xiaodong Li School of Computer Science and Information Technology RMIT University,

More information

Water cycle algorithm with fuzzy logic for dynamic adaptation of parameters

Water cycle algorithm with fuzzy logic for dynamic adaptation of parameters Water cycle algorithm with fuzzy logic for dynamic adaptation of parameters Eduardo Méndez 1, Oscar Castillo 1 *, José Soria 1, Patricia Melin 1 and Ali Sadollah 2 Tijuana Institute of Technology, Calzada

More information

Dr. Ramesh Kumar, Nayan Kumar (Department of Electrical Engineering,NIT Patna, India, (Department of Electrical Engineering,NIT Uttarakhand, India,

Dr. Ramesh Kumar, Nayan Kumar (Department of Electrical Engineering,NIT Patna, India, (Department of Electrical Engineering,NIT Uttarakhand, India, Dr Ramesh Kumar, Nayan Kumar/ International Journal of Engineering Research and An Efficent Particle Swarm Optimisation (Epso) For Solving Economic Load Dispatch (Eld) Problems Dr Ramesh Kumar, Nayan Kumar

More information

FDR PSO-Based Optimization for Non-smooth Functions

FDR PSO-Based Optimization for Non-smooth Functions M. Anitha, et al. / International Energy Journal 0 (009) 37-44 37 www.serd.ait.ac.th/reric FDR PSO-Based Optimization for n-smooth Functions M. Anitha*, S. Subramanian* and R. Gnanadass + Abstract In this

More information

A Gaussian Firefly Algorithm

A Gaussian Firefly Algorithm A Gaussian Firefly Algorithm Sh. M. Farahani, A. A. Abshouri, B. Nasiri and M. R. Meybodi Abstract Firefly algorithm is one of the evolutionary optimization algorithms, and is inspired by fireflies behavior

More information

KEYWORDS: Mobile Ad hoc Networks (MANETs), Swarm Intelligence, Particle Swarm Optimization (PSO), Multi Point Relay (MPR), Throughput.

KEYWORDS: Mobile Ad hoc Networks (MANETs), Swarm Intelligence, Particle Swarm Optimization (PSO), Multi Point Relay (MPR), Throughput. IJESRT INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY APPLICATION OF SWARM INTELLIGENCE PSO TECHNIQUE FOR ANALYSIS OF MULTIMEDIA TRAFFIC AND QOS PARAMETERS USING OPTIMIZED LINK STATE

More information

International Conference on Modeling and SimulationCoimbatore, August 2007

International Conference on Modeling and SimulationCoimbatore, August 2007 International Conference on Modeling and SimulationCoimbatore, 27-29 August 2007 OPTIMIZATION OF FLOWSHOP SCHEDULING WITH FUZZY DUE DATES USING A HYBRID EVOLUTIONARY ALGORITHM M.S.N.Kiran Kumara, B.B.Biswalb,

More information

Non-deterministic Search techniques. Emma Hart

Non-deterministic Search techniques. Emma Hart Non-deterministic Search techniques Emma Hart Why do local search? Many real problems are too hard to solve with exact (deterministic) techniques Modern, non-deterministic techniques offer ways of getting

More information

A Polar Coordinate Particle Swarm Optimiser

A Polar Coordinate Particle Swarm Optimiser A Polar Coordinate Particle Swarm Optimiser Wiehann Matthysen and Andries P. Engelbrecht Department of Computer Science, University of Pretoria, South Africa engel@cs.up.ac.za Abstract The Particle Swarm

More information

Effect of the PSO Topologies on the Performance of the PSO-ELM

Effect of the PSO Topologies on the Performance of the PSO-ELM 2012 Brazilian Symposium on Neural Networks Effect of the PSO Topologies on the Performance of the PSO-ELM Elliackin M. N. Figueiredo and Teresa B. Ludermir Center of Informatics Federal University of

More information

The Pennsylvania State University. The Graduate School. Department of Electrical Engineering COMPARISON OF CAT SWARM OPTIMIZATION WITH PARTICLE SWARM

The Pennsylvania State University. The Graduate School. Department of Electrical Engineering COMPARISON OF CAT SWARM OPTIMIZATION WITH PARTICLE SWARM The Pennsylvania State University The Graduate School Department of Electrical Engineering COMPARISON OF CAT SWARM OPTIMIZATION WITH PARTICLE SWARM OPTIMIZATION FOR IIR SYSTEM IDENTIFICATION A Thesis in

More information

Null Steering and Multi-beams Design by Complex Weight of antennas Array with the use of APSO-GA

Null Steering and Multi-beams Design by Complex Weight of antennas Array with the use of APSO-GA Null Steering and Multi-beams Design by Complex Weight of antennas Array with the use of APSO-GA HICHEM CHAKER Department of Telecommunication University of TLEMCEN BP 2, 34 TLEMCEN, ALGERIA ALGERIA mh_chaker25@yahoo.fr

More information

Clustering of datasets using PSO-K-Means and PCA-K-means

Clustering of datasets using PSO-K-Means and PCA-K-means Clustering of datasets using PSO-K-Means and PCA-K-means Anusuya Venkatesan Manonmaniam Sundaranar University Tirunelveli- 60501, India anusuya_s@yahoo.com Latha Parthiban Computer Science Engineering

More information

Reliability Growth Modeling for Software Fault Detection Using Particle Swarm Optimization Alaa Sheta

Reliability Growth Modeling for Software Fault Detection Using Particle Swarm Optimization Alaa Sheta Reliability Growth Modeling for Software Fault Detection Using Particle Swarm Optimization Alaa Sheta Abstract Modeling the software testing process to obtain the predicted faults (failures) depends mainly

More information

An Improved Tree Seed Algorithm for Optimization Problems

An Improved Tree Seed Algorithm for Optimization Problems International Journal of Machine Learning and Computing, Vol. 8, o. 1, February 2018 An Improved Tree Seed Algorithm for Optimization Problems Murat Aslan, Mehmet Beskirli, Halife Kodaz, and Mustafa Servet

More information

Applying Particle Swarm Optimization for Solving Team Orienteering Problem with Time Windows

Applying Particle Swarm Optimization for Solving Team Orienteering Problem with Time Windows Jurnal Teknik Industri, Vol. 16, No. 1, Juni 2014, 9-16 ISSN 1411-2485 print / ISSN 2087-7439 online DOI: 10.9744/jti.16.1.9-16 Applying Particle Swarm Optimization for Solving Team Orienteering Problem

More information

Surrogate-assisted Self-accelerated Particle Swarm Optimization

Surrogate-assisted Self-accelerated Particle Swarm Optimization Surrogate-assisted Self-accelerated Particle Swarm Optimization Kambiz Haji Hajikolaei 1, Amir Safari, G. Gary Wang ±, Hirpa G. Lemu, ± School of Mechatronic Systems Engineering, Simon Fraser University,

More information

The Design of Pole Placement With Integral Controllers for Gryphon Robot Using Three Evolutionary Algorithms

The Design of Pole Placement With Integral Controllers for Gryphon Robot Using Three Evolutionary Algorithms The Design of Pole Placement With Integral Controllers for Gryphon Robot Using Three Evolutionary Algorithms Somayyeh Nalan-Ahmadabad and Sehraneh Ghaemi Abstract In this paper, pole placement with integral

More information
