Cluster-based Particle Swarm Algorithm for Solving the Mastermind Problem

Dan Partynski

Abstract: In this paper we present a metaheuristic algorithm that is inspired by Particle Swarm Optimization. The algorithm maintains a set of intercommunicating particle clusters and equips each particle with a specialized local search function. To demonstrate the effectiveness of the algorithm, we analyze its ability to solve Mastermind codes and compare its performance with other algorithms found in the literature. For the Mastermind problem, we have found that our algorithm is comparable to other algorithms for small problem sizes but has much more efficient scaling behavior.

Index Terms: particle swarm optimization (PSO), cluster, mastermind.

Manuscript received December 07, 2013; revised January 10, 2014. Dan Partynski is a student of the Department of Mathematics and Computer Science, University of San Diego, San Diego, CA, USA (dpartynski@sandiego.edu).

I. INTRODUCTION

Particle Swarm Optimization (PSO), an algorithm by Kennedy and Eberhart described in [1], is a popular method for optimizing continuous nonlinear functions. The algorithm is inspired by the interactions between agents in a flock of birds or a school of fish. Each agent initially begins in a random region of the search space, and more favorable regions are found iteratively as the agents begin to follow the best particle. While the algorithm was designed for continuous problems, researchers have adapted it to a variety of discrete problems, such as the Travelling Salesman Problem [2].

Mastermind is a code-breaking game for two players: an encoder and a decoder. The encoder selects a secret permutation of P digits drawn from a list of N digits. There are no limits on the values of P and N, but during a single game these values are fixed. The decoder makes a series of guesses, and for each guess gains information about how close the guess is to the secret code. When a guess is presented, the decoder receives a number of dark pegs, which correspond to digits that have the correct value in the correct position. The decoder also receives a number of light pegs, which correspond to the remaining digits that have the correct value but are in the incorrect position. Note that the dark pegs take precedence over the light pegs. For example, if the secret code is [1 2 3 4] and the guess is [1 3 2 5], then the decoder will receive one dark peg and two light pegs. We shall represent this response as (1, 2).

Mastermind can be viewed as a dynamic-constraint optimization problem. The decoder continues to make guesses and receives more information about what the secret code can and cannot be. Every (guess, response) pair imposes an additional constraint on the decoder. This information can be used to form an optimization function, and as a result various researchers have attempted to solve Mastermind codes using metaheuristics such as genetic algorithms. As the game progresses, it is increasingly difficult to identify a guess that could still be the secret code. A potential guess is considered consistent if it could still be the secret code given the current list of constraints, and inconsistent otherwise.

The general goal of Mastermind is to break the secret code as quickly as possible. Thus there are two quantities that a Mastermind algorithm could minimize:

1) Number of guesses: An algorithm's performance may be measured by the average number of guesses needed to break the secret code. A guess is one submission of a code to the encoder.
2) Number of evaluations: An algorithm will analyze several potential codes before making a guess. We define an evaluation as one call of the fitness function that the algorithm uses to analyze a single code.

Minimizing the number of guesses presents an interesting challenge for large problem sizes, but it is not our main focus. Since we are interested in optimization, our algorithm will attempt to minimize the number of evaluations needed to break the secret code.

In this paper we describe a cluster-based particle swarm algorithm for solving the game of Mastermind. Our algorithm is not a direct application of PSO; rather, it uses the underlying idea of communicating particles as inspiration. The algorithm is similar to PSO in that it maintains a group of communicating agents, but it organizes them into clusters. It also introduces a local search function that greedily moves each agent across neighbor states to more promising regions. This combination of local and global search functions allows for more efficient exploration of the search space. Later in this paper we demonstrate that the algorithm is able to solve high-dimensional instances of the Mastermind problem.

Section II discusses the various Mastermind algorithms found in the literature. Section III describes the algorithm we used to solve the problem. Section IV discusses the experiments that we ran to test the algorithm's performance, and Section V concludes this paper.

II. LITERATURE REVIEW

The first Mastermind algorithm to appear in the literature was by Knuth [3]. This algorithm chooses the next guess that will minimize the maximum number of remaining possibilities. If there is a tie among potential guesses, the algorithm chooses a guess that is still consistent. Knuth tested this algorithm with P = 4 and N = 6, and was able to break the secret code in fewer than five guesses on average (five in the worst case). Kooi describes the Most Parts Strategy in [4]. This algorithm selects the guess that yields the highest number of possible responses. In the worst case this strategy takes six guesses to break the secret code, but it has a lower average number of guesses.

These averages were obtained when P = 4 and N = 6. Temporel and Kovacs describe an algorithm in [5] which attempts to minimize the number of evaluations required when P = 4 and N = 6. The algorithm requires 4.64 guesses on average, which is slightly higher than the previous algorithms, but it evaluates only 41.2 codes on average. However, no data is given for larger problem sizes.

These algorithms work well for small instances of Mastermind, but because they require either searching or storing the entire search space, they are computationally infeasible for large values of P and N. Bernier et al. describe the use of genetic algorithms and simulated annealing to solve larger instances of Mastermind [6]. Simulated annealing was better in terms of minimizing the number of evaluations, but the genetic algorithm resulted in a lower number of guesses on average. Berghman et al. describe an improvement to the genetic algorithm in [7]. They use genetic algorithms to create a set of consistent guesses, and use an exhaustive strategy to select the best guess from the set. This algorithm is comparable to the full-enumeration strategies in terms of the average number of guesses, but outperforms them in terms of speed. When P = 8 and N = 12, the algorithm can find the secret code in a matter of seconds on average. Merelo et al. also use this approach in [8]. They use genetic algorithms to create a set of consistent guesses and use the Most Parts strategy to select one to play. This algorithm was tested with P = 5 and N = 9 and yielded 5.95 guesses and 38,485 evaluations on average.

III. ALGORITHM

Our algorithm always selects a guess that is consistent with the previous guesses. The general outline is as follows:

    while the secret code has not been found do
        find a consistent guess g;
        submit g and receive response r = (n_dark, n_light);
        if n_dark = P then
            the secret code has been found;

Fig. 1. Mastermind Algorithm

We now define a formula for a consistent guess so that we can phrase the Mastermind problem as an optimization problem. We first define a rule as a guess submitted by the decoder together with the corresponding response from the encoder. More formally, we define a rule as a (guess, response) pair where guess is a vector of size P and response is a pair of integers (n_dark, n_light) representing dark pegs and light pegs. Now, given two guesses g_i and g_j, we define a new function:

    h(g_i, g_j) = (n_dark, n_light)    (1)

where n_dark is the number of dark pegs and n_light is the number of light pegs the decoder would receive if g_i were the secret code and g_j were submitted as a guess. We now define a measure of the similarity of two responses r_i and r_j as follows:

    d(r_i, r_j) = abs(r_i.dark - r_j.dark) + abs(r_i.light - r_j.light)    (2)

Informally, the distance between two responses is 0 if they have identical dark and light peg counts. Otherwise the function returns the Manhattan distance between the two response vectors. Now assume the decoder has made n guesses and has received n responses, so we have (g_1, r_1), (g_2, r_2), ..., (g_n, r_n) as our rule list. We now define the fitness of some potential guess g as follows:

    f(g) = Σ_{i=1}^{n} d(h(g, g_i), r_i)    (3)

We know that the secret code must match each of the given responses when compared to the given guesses. Thus, for a guess to be consistent, its fitness value must be 0. The higher the fitness value, the farther away a guess is from being consistent. Finding a consistent guess can therefore be reduced to minimizing the above function. The following sections describe the algorithm used to achieve this minimization.
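As a concrete illustration, the functions h, d, and f of equations (1)-(3) might be implemented in C++ as follows. This is a minimal sketch, not our production code; the helper types Guess and Response are introduced here for illustration and are reused by the later sketches.

    #include <cstdlib>
    #include <utility>
    #include <vector>

    // A guess is a vector of P digits; a response is a (dark, light) pair.
    using Guess = std::vector<int>;
    struct Response { int dark; int light; };

    // h(gi, gj): the pegs the decoder would receive if gi were the secret
    // code and gj were the submitted guess (equation 1).
    Response h(const Guess& gi, const Guess& gj) {
        const int P = static_cast<int>(gi.size());
        Response r{0, 0};
        std::vector<bool> usedSecret(P, false), usedGuess(P, false);
        for (int k = 0; k < P; ++k)           // dark pegs: exact matches
            if (gi[k] == gj[k]) { r.dark++; usedSecret[k] = usedGuess[k] = true; }
        for (int k = 0; k < P; ++k) {         // light pegs: right digit, wrong slot
            if (usedGuess[k]) continue;
            for (int m = 0; m < P; ++m)
                if (!usedSecret[m] && gi[m] == gj[k]) {
                    r.light++; usedSecret[m] = true; break;
                }
        }
        return r;
    }

    // d(ri, rj): Manhattan distance between two responses (equation 2).
    int d(const Response& ri, const Response& rj) {
        return std::abs(ri.dark - rj.dark) + std::abs(ri.light - rj.light);
    }

    // f(g): sum of distances over the current rule list (equation 3).
    // A guess g is consistent iff fitness(g, rules) == 0.
    int fitness(const Guess& g,
                const std::vector<std::pair<Guess, Response>>& rules) {
        int total = 0;
        for (const auto& rule : rules)
            total += d(h(g, rule.first), rule.second);
        return total;
    }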
A. Local Function

Our algorithm maintains a collection of potential guesses, called agents, each equipped with a function that allows it to explore neighboring states. If the function to minimize were differentiable, the agents could use gradient descent or some other fast optimization strategy as their local search function. Mastermind is a discrete problem, so we created a custom search function to move an agent across the search space. Given some guess g = [d_1 d_2 ... d_P], we define three possible ways to move to a neighboring state:

1) Replace some digit d_i with a different value.

2) Take two digits d_i and d_j and swap them.

3) Take three digits d_i, d_j, and d_k, where i < j < k. Here there are two possible neighbor states: reorder the three digits as d_k d_i d_j or as d_j d_k d_i. Notice that in each of these new states, every digit is in a new location; if this were not the case, the change could be made by swapping only two elements.

Based on these possible neighbor states, we define three different local functions. Local function (1) iteratively looks at each digit and finds the replacement digit that reduces the fitness value the most. If no replacement digit reduces the fitness, the function does not change the guess vector. Local function (2) iteratively looks at each pair of digits and swaps them. If the fitness after the swap is less than or equal to the old fitness, the change to the vector is kept. Notice that unlike local function (1), we keep the resultant vector if the new fitness is equal to the old fitness rather than strictly less; this was found to be beneficial through experimentation. Local function (3) iterates over each triplet of digits in a code and tests the fitness of both unique swaps. If at least one of the swaps results in a fitness that is less than or equal to the old fitness, the change is kept. If both swaps lead to a reduction in fitness, the function chooses the swap that leads to the larger reduction. If the two reductions are equal, one of the swaps is chosen at random.

Note that all three of these functions exhaustively test each possible neighbor state. This was feasible for the Mastermind problem, but for some problems it may be computationally infeasible. A possible solution in these cases is to test the fitness of only k random neighbors.

The agents in the search space need a way to use these functions to move toward more favorable regions. We decided to simply run these functions sequentially, moving from function (i) to function (i+1) only if function (i) failed to improve the fitness of g. Once a function improves the fitness of g, the algorithm moves back to function (1). If function (3) fails to improve the fitness of g, then the algorithm stops, as it has converged on a local minimum. Here is a more formal description of the local search algorithm, followed by a code sketch:

    Given a guess g;
    while not at a local minimum do
        run function (1) to form g';
        if function (1) did not improve f(g) then run function (2) to form g';
        if function (2) did not improve f(g) then run function (3) to form g';
        if function (3) did not improve f(g) then a local minimum has been reached;

Fig. 2. Local Search Function
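To make the sequencing concrete, the following illustrative C++ sketch shows one way local functions (1) and (2) and the driver of Fig. 2 could look. It reuses the Guess, Response, and fitness definitions from the previous sketch; local function (3), the two three-digit rotations, follows the same pattern and is omitted here for brevity.

    // Local function (1): for each position, try every replacement digit
    // and keep the best strictly improving one. Returns true on improvement.
    bool localReplace(Guess& g, int N,
                      const std::vector<std::pair<Guess, Response>>& rules) {
        bool improved = false;
        for (std::size_t i = 0; i < g.size(); ++i) {
            int bestDigit = g[i];
            int bestFit = fitness(g, rules);
            for (int v = 0; v < N; ++v) {
                if (v == g[i]) continue;
                int old = g[i];
                g[i] = v;                      // try the replacement
                int fit = fitness(g, rules);
                if (fit < bestFit) { bestFit = fit; bestDigit = v; improved = true; }
                g[i] = old;                    // undo before trying the next value
            }
            g[i] = bestDigit;                  // commit the best replacement
        }
        return improved;
    }

    // Local function (2): swap each pair; keep swaps with fitness <= old,
    // but report improvement only on a strict decrease.
    bool localSwapPairs(Guess& g,
                        const std::vector<std::pair<Guess, Response>>& rules) {
        bool improved = false;
        int cur = fitness(g, rules);
        for (std::size_t i = 0; i + 1 < g.size(); ++i)
            for (std::size_t j = i + 1; j < g.size(); ++j) {
                std::swap(g[i], g[j]);
                int fit = fitness(g, rules);
                if (fit <= cur) { if (fit < cur) improved = true; cur = fit; }
                else std::swap(g[i], g[j]);    // revert a worsening swap
            }
        return improved;
    }

    // Driver of Fig. 2: restart at function (1) whenever a function improves
    // the fitness; stop when every function fails to improve.
    void localSearch(Guess& g, int N,
                     const std::vector<std::pair<Guess, Response>>& rules) {
        for (;;) {
            if (localReplace(g, N, rules)) continue;    // function (1)
            if (localSwapPairs(g, rules)) continue;     // function (2)
            // local function (3), the triple rotations, would slot in here.
            break;                                      // converged: local minimum
        }
    }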

B. Metaheuristic

The function described in the previous section often leads agents to a local minimum. Thus we introduce a metaheuristic that uses the local search function to help locate the global minimum. Intuitively the idea is simple. We define k agent clusters {c_1, c_2, ..., c_k}, where each cluster contains l agents. Each cluster is initially given a cluster head, which is placed randomly in the search space. Each cluster head then uses a disperse function to distribute (l - 1) agents around it, up to some predefined maximum distance. Each agent in each cluster uses the local search function until all agents arrive at a local minimum. When this happens, we identify the fittest agent in each cluster and redistribute the remaining agents in that cluster around it. We then locate the fittest individual among all the clusters and use a pull function to move each agent in its direction. This allows communication between the clusters and moves large numbers of agents to favorable regions of the search space. Our final step is to take the fittest individual in each cluster and set it equal to a random vector, leaving the other agents in the cluster to explore the region that had been discovered. This was introduced primarily as a measure to increase diversity, and it may allow an agent to move to a favorable region of the search space that no cluster is exploring.

Before giving a formal description of the algorithm, we need to define the disperse and pull functions that it requires. Like the local search function, these functions are dependent on the optimization problem, so we will explain how they were defined for the Mastermind problem. The function disperse(agent), when given some agent, places a new agent in a nearby region of the search space. The function pull(agent_i, agent_j) moves agent_i closer to agent_j.

We first give our definition of the disperse function. Given some agent a = [d_1 d_2 ... d_P], we would like to produce a' by dispersing slightly from a. To do this, we simply move a across some small number of neighbor states at random. The number of neighbor states to move across is a random integer between two bounds, dist_min and dist_max.
A more formal description of the disperse function is as follows:

    Given agent a;
    dist := random integer in [dist_min, dist_max];
    for i := 1 to dist do
        choose a random value r in [0, 1);
        if r < 0.5 then
            randomly swap two digits in a;
        else
            change a random digit in a;

Fig. 3. Disperse Function

This produces a new agent that lies a small distance away from the central agent. Note that, for simplicity, our disperse function does not move to neighbors that require a triple swap. As an illustration, suppose we have the guess vector [1 2 3 4 6] and we wish to use the disperse function to create a new vector. If the random distance is 2, then the function may replace the second digit with the value 5 and swap the fourth and fifth digits. The resulting guess vector is [1 5 3 6 4], which is 2 neighbor states away from the original.

We now define our pull function. For this, we use an extra parameter called dist_pull.

    Given agent_i and agent_j;
    for t := 1 to dist_pull do
        k := random integer in {1, 2, ..., P};
        k-th digit of agent_i := k-th digit of agent_j;

Fig. 4. Pull Function

This function takes digits from the best agent and places them into the corresponding digits of the agent being moved. The value k can be selected with or without replacement. In our implementation, we select k with replacement because we found through experimentation that this strategy produces slightly better results. As an example, suppose we wish to use the pull function to move the vector [2 4 1 3 5] in the direction of the vector [5 1 6 2 4]. Depending on the random values, the function may replace the third digit of the first vector with the third digit of the second vector, resulting in [2 4 6 3 5]. A code sketch of both functions follows.
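The following C++ sketch is one possible rendering of Figs. 3 and 4, reusing the Guess type from the earlier sketches; the random-number plumbing and parameter names are illustrative choices, not fixed by the algorithm.

    #include <random>

    // Disperse (Fig. 3): produce a copy of the given agent moved across a
    // random number of neighbor states, between distMin and distMax of them.
    Guess disperse(const Guess& head, int N, int distMin, int distMax,
                   std::mt19937& rng) {
        Guess a = head;
        const int P = static_cast<int>(a.size());
        std::uniform_int_distribution<int> distPick(distMin, distMax);
        std::uniform_int_distribution<int> pos(0, P - 1);
        std::uniform_int_distribution<int> digit(0, N - 1);
        std::uniform_real_distribution<double> coin(0.0, 1.0);
        int dist = distPick(rng);
        for (int i = 0; i < dist; ++i) {
            if (coin(rng) < 0.5) {
                int i1 = pos(rng), i2 = pos(rng);   // swap two random digits
                std::swap(a[i1], a[i2]);
            } else {
                a[pos(rng)] = digit(rng);           // change a random digit
            }
        }
        return a;
    }

    // Pull (Fig. 4): copy dist_pull randomly chosen digits, selected with
    // replacement, from the best agent into the agent being moved.
    void pull(Guess& agent, const Guess& best, int distPull, std::mt19937& rng) {
        std::uniform_int_distribution<int> pos(
            0, static_cast<int>(agent.size()) - 1);
        for (int t = 0; t < distPull; ++t) {
            int k = pos(rng);
            agent[k] = best[k];
        }
    }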

Now that we have defined these two functions, we give a more detailed description of the whole algorithm for finding a consistent guess:

    for each cluster c_i do
        a_head := random agent;
        add a_head to c_i;
        for j := 1 to (l - 1) do
            a_new := disperse(a_head);
            add a_new to c_i;
    while a sufficiently fit agent has not been found do
        for each agent a_ij do
            run the local search function on a_ij;
            if f(a_ij) = 0 then return a_ij;
            if f(a_ij) is the lowest fitness found so far then a_best := a_ij;
        for each cluster c_i do
            a_best_i := the best agent in c_i;
            for each other agent a_ij in c_i do
                a_ij := disperse(a_best_i);
        for each agent a_ij do
            pull(a_ij, a_best);
        for each cluster c_i do
            a_best_i := random agent;

Fig. 5. Particle Cluster Search Algorithm

We will now summarize the parameters used by this algorithm:

1) k: The number of clusters.

2) l: The number of agents in a cluster.

3) dist_min: The minimum number of neighbor states to move across in the disperse function.

4) dist_max: The maximum number of neighbor states to move across in the disperse function.

5) dist_pull: The maximum number of neighbor states that agent_i will move across toward agent_j in the pull function.

These parameters have a strong impact on the performance of the algorithm. Higher values of k and l are often better in terms of avoiding local minima, but they lead to larger amounts of computation. The distance parameters are highly dependent on the size of the problem. As we describe in our experiments section, we do not perform extensive analysis to determine good values for these parameters, but rather use our judgment depending on the size of the Mastermind problem being solved.
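As a condensed, illustrative C++ sketch, the loop of Fig. 5 could be arranged as follows. It builds on the disperse, pull, fitness, and localSearch sketches above; the helper randomGuess and all parameter names are ours for illustration.

    #include <algorithm>
    #include <climits>

    // Generate a uniformly random agent with P digits drawn from 0..N-1.
    Guess randomGuess(int P, int N, std::mt19937& rng) {
        std::uniform_int_distribution<int> digit(0, N - 1);
        Guess g(P);
        for (int& d : g) d = digit(rng);
        return g;
    }

    Guess clusterSearch(const std::vector<std::pair<Guess, Response>>& rules,
                        int P, int N, int k, int l,
                        int distMin, int distMax, int distPull,
                        std::mt19937& rng) {
        // Initialization: k cluster heads, each with l-1 dispersed agents.
        std::vector<std::vector<Guess>> clusters(k);
        for (auto& c : clusters) {
            c.push_back(randomGuess(P, N, rng));          // cluster head
            for (int j = 1; j < l; ++j)
                c.push_back(disperse(c.front(), N, distMin, distMax, rng));
        }
        Guess best;
        int bestFit = INT_MAX;
        for (;;) {
            // Every agent descends to a local minimum; track the global best.
            for (auto& c : clusters)
                for (auto& a : c) {
                    localSearch(a, N, rules);
                    int fit = fitness(a, rules);
                    if (fit == 0) return a;               // consistent guess found
                    if (fit < bestFit) { bestFit = fit; best = a; }
                }
            for (auto& c : clusters) {
                // Redistribute the cluster around its fittest agent.
                auto fittest = std::min_element(c.begin(), c.end(),
                    [&](const Guess& x, const Guess& y) {
                        return fitness(x, rules) < fitness(y, rules); });
                Guess head = *fittest;
                for (auto& a : c)
                    if (&a != &*fittest)
                        a = disperse(head, N, distMin, distMax, rng);
                // Pull every agent toward the global best, then reset the
                // cluster's fittest agent to a random vector for diversity.
                for (auto& a : c) pull(a, best, distPull, rng);
                *fittest = randomGuess(P, N, rng);
            }
        }
    }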
C. Domain Heuristics

Domain-specific heuristics can greatly improve the convergence speed of the algorithm. Based on the responses received by the decoder, we are able to eliminate large portions of the search space. We do this by maintaining a two-dimensional array called valid[][], which has P rows and N columns. We say that valid[i][j] is true iff it is possible for a consistent guess to have the value j for digit d_i; if it is impossible, then valid[i][j] is false. Initially, all entries in the array are set to true. The disperse function and all local search functions avoid setting a digit d_i to a value j if valid[i][j] is false. Thus, the more entries that are set to false, the more inconsistent guesses are avoided by the algorithm.

The question then is how to determine whether an array entry can be set to false. Our algorithm identifies three general categories of a response r = (n_dark, n_light) that allow us to alter this array (a code sketch appears at the end of this subsection):

1) Suppose n_dark = n_light = 0. This is the most informative case, since no digit in the guess g can appear in a consistent guess. Thus for all i in {1, 2, ..., P} and all d_j in g, valid[i][d_j] = false.

2) Suppose n_dark = 0 but n_light > 0. In this case, a consistent guess cannot have any digit d_j in slot j, because this would produce a dark peg. Thus for all d_j in g, valid[j][d_j] = false.

3) Suppose n_dark + n_light = P. In this case, the decoder has identified all of the digits that appear in the secret code, but not their order. The secret code is therefore a permutation of the digits in g, so all digits not in g can be eliminated. More formally, for all i in {1, 2, ..., P} and all j in {1, 2, ..., N}, valid[i][j] = false iff j does not appear in g.

After condition (3) is met, we also set a special endgame flag. The endgame occurs after all of the digits have been identified and the algorithm simply needs to find a permutation of them. When this happens, the algorithm first runs local functions (2) and (3) on the most recent guess played, in the hope that only a few swaps are needed to produce a consistent guess. If this fails, then the algorithm proceeds as normal. We have found experimentally that this enhancement often does lead to the quick discovery of a consistent guess.
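A minimal C++ sketch of the valid[][] table and its three update rules follows, reusing the Guess and Response types from the earlier sketches and assuming digit values 0..N-1; the local search and disperse sketches above would additionally consult this table when choosing candidate digits.

    // valid[i][j] stays true while digit value j is still possible in
    // position i of a consistent guess.
    struct ValidTable {
        int P, N;
        std::vector<std::vector<bool>> valid;
        ValidTable(int P, int N)
            : P(P), N(N), valid(P, std::vector<bool>(N, true)) {}

        // Update the table from one (guess, response) rule.
        void update(const Guess& g, const Response& r) {
            if (r.dark == 0 && r.light == 0) {
                // Case 1: no digit of g can appear anywhere.
                for (int i = 0; i < P; ++i)
                    for (int d : g) valid[i][d] = false;
            } else if (r.dark == 0) {
                // Case 2 (n_light > 0): no digit of g can stay in its slot.
                for (int j = 0; j < P; ++j) valid[j][g[j]] = false;
            }
            if (r.dark + r.light == P) {
                // Case 3: the secret is a permutation of g's digits, so every
                // value absent from g is eliminated from every position.
                std::vector<bool> inG(N, false);
                for (int d : g) inG[d] = true;
                for (int i = 0; i < P; ++i)
                    for (int j = 0; j < N; ++j)
                        if (!inG[j]) valid[i][j] = false;
            }
        }
    };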

IV. EXPERIMENTS AND RESULTS

To test the performance of the proposed algorithm, we ran several trials for several values of P and N. We used an Inspiron N4010 computer with a 2.43 GHz Intel i3 processor and 4 GB of RAM, running the Windows 7 operating system. The programming language we chose was C++, for purposes of speed. Each agent is represented as a 64-bit integer, with each digit given 4 bits of information; thus each digit can have a maximum of 16 possible values. The algorithm has many parameters. Each trial uses 5 clusters and 8 particles per cluster; all other parameters varied depending on the problem size, and we used our best judgment to select appropriate values. We compare our results to the results of Berghman et al. in [7] and Merelo et al. in [8] by comparing the average number of guesses, the average number of evaluations, and the average time needed to break the secret code. We then demonstrate our algorithm's scaling behavior on problem sizes not yet found in the literature.

A. P = 4 and N = 6

For this configuration, we used [ ] as our first guess. We ran 10 trials for each possible secret code and averaged the results. As expected, our algorithm has worse performance in terms of the average number of guesses. However, it is efficient in terms of the number of evaluations required to find the secret code.

TABLE I
COMPARISON OF ALGORITHMS FOR P = 4 AND N = 6

    Algorithm          Guesses    Evaluations    Time
    Berghman et al.               N/A            .614 sec
    Merelo et al.                                N/A
    New Algorithm                                sec

B. P = 5 and N = 8

For this configuration we use [ ] as our first guess and run a series of trials, each with a randomly generated secret code.

TABLE II
COMPARISON OF ALGORITHMS FOR P = 5 AND N = 8

    Algorithm          Guesses    Evaluations    Time
    Berghman et al.               N/A            N/A
    Merelo et al.                                N/A
    New Algorithm                                sec

C. P = 8 and N = 12

This is the largest configuration of Mastermind that we were able to find in the literature. We chose [ ] as our first guess and ran 3000 trials with randomly generated secret codes. We compare our results to Berghman et al. We see that our algorithm exhibits a higher guess average, but the time needed to break the secret code is significantly reduced.

TABLE III
COMPARISON OF ALGORITHMS FOR P = 8 AND N = 12

    Algorithm          Guesses    Evaluations    Time
    Berghman et al.               N/A            sec
    New Algorithm                                sec

D. Higher Dimensions

To demonstrate that our algorithm can scale to higher dimensions, we present data on higher values of P and N. Due to our representation of Mastermind codes as 64-bit integers, we limited P and N to a maximum of 16.

TABLE IV
PERFORMANCE FOR HIGH DIMENSIONS

    (P, N)     Guesses    Evaluations    Time
    (10, 12)                             sec
    (10, 16)                             sec
    (11, 16)                             sec
    (12, 16)                             sec
    (13, 16)                             sec
    (14, 16)                             sec
    (15, 16)                             sec
    (16, 16)                             sec

As we can see, the algorithm scales very well to higher dimensions of Mastermind. Observe that when P = N = 16, there are 16^16 possible secret codes, which is approximately 1.8 x 10^19. This means that, on average, the algorithm evaluates only a minuscule fraction of the search space.

V. CONCLUSION

In this paper we presented a new algorithm inspired by PSO. The algorithm is similar to PSO in that it uses a swarm of agents to explore a search space, but it introduces particle clusters and a specialized local search function. We have shown that the algorithm works well for solving Mastermind codes compared to other algorithms found in the literature.

We plan to further analyze the parameters of the algorithm to determine good rules of thumb for parameter selection. We would also like to test the algorithm's performance on various other discrete optimization problems that are typically solved with genetic algorithms or other metaheuristics; this would require modifying the disperse, pull, and local search functions to suit the given optimization problem. We further plan to enhance the algorithm through parallelism by assigning each particle cluster its own processing core.

REFERENCES

[1] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning. Springer, 2010.
[2] X. H. Shi, Y. C. Liang, H. P. Lee, C. Lu, and Q. Wang, "Particle swarm optimization-based algorithms for TSP and generalized TSP," Information Processing Letters, vol. 103, no. 5.
[3] D. E. Knuth, "The computer as master mind," Journal of Recreational Mathematics, vol. 9, no. 9, pp. 1-6.
[4] B. P. Kooi, "Yet another mastermind strategy," ICGA Journal, vol. 28, no. 1.
[5] A. Temporel and T. Kovacs, "A heuristic hill climbing algorithm for mastermind," in Proceedings of the 2003 UK Workshop on Computational Intelligence (UKCI-03), 2003.
[6] J. L. Bernier, C. I. Herráiz, J. Merelo, S. Olmeda, and A. Prieto, "Solving mastermind using GAs and simulated annealing: a case of dynamic constraint optimization," in Parallel Problem Solving from Nature - PPSN IV. Springer, 1996.
[7] L. Berghman, D. Goossens, and R. Leus, "Efficient solutions for mastermind using genetic algorithms," Computers & Operations Research, vol. 36, no. 6.
[8] J.-J. Merelo, A. M. Mora, and C. Cotta, "Optimizing worst-case scenario in evolutionary solutions to the mastermind puzzle," in Evolutionary Computation (CEC), 2011 IEEE Congress on. IEEE, 2011.
