A Hybrid Ant Colony Optimization Algorithm for Graph Bisection


The Pennsylvania State University
The Graduate School
Capital College

A Hybrid Ant Colony Optimization Algorithm for Graph Bisection

A Master's Paper in Computer Science
by Lisa Strite

© 2001 Lisa Strite

Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science

October 2001

Abstract

This paper gives an algorithm for the graph bisection problem using the Ant Colony Optimization (ACO) technique. Among the novel ideas of this algorithm that distinguish it from ACO-type algorithms for other problems is the incorporation of local optimization algorithms to speed up the convergence rate and to improve the quality of the solutions. The results achieved by this algorithm on several classes of graphs are close to the best known results for most cases and in some cases equal the best known results.

Table of Contents

Abstract
Acknowledgement
List of Figures
List of Tables
1 Introduction
2 The Graph Bisection Problem
  2.1 Definitions
  2.2 Applications
  2.3 Algorithms
    2.3.1 Exact Algorithms
    2.3.2 Approximation Algorithms
    2.3.3 Heuristic Algorithms
3 Ant Colony Optimization
  3.1 Problems Solved with ACO
4 Using ACO To Solve Graph Bisection
5 Algorithm
  5.1 Iteration
  5.2 Activation of an Animat
    5.2.1 Pheromone
    5.2.2 Death
    5.2.3 Reproduction
    5.2.4 Movement
  5.3 Between Sets
6 Results
  6.1 Graph Types
  6.2 Results for different classes of graphs
  6.3 Graph Preprocessing
7 Comparison with other heuristic algorithms
8 Conclusion
References

Acknowledgement

I would like to thank Dr. T. Bui for his input and guidance. He provided an immense amount of knowledge and insight which helped me greatly in achieving the goals of this project. I am grateful to the committee members: Dr. S. Kingan, Dr. P. Naumov, and Dr. L. Null for reviewing this paper. The financial support of the IBM Corporation is gratefully acknowledged. I also thank my husband for his support.

List of Figures

1 HACO algorithm for graph bisection
2 A bisection of U500.40 at various stages in the algorithm
3 A bisection of U500.40 with cut size 512
4 U1000 with two d values
5 Change in cut size over 25 sets for Breg5000
6 Two bisections of a caterpillar graph

List of Tables

1 Parameter values
2 Results of HACO algorithm for 100 trials
3 HACO results for selected Un.d graphs with 10 and 25 sets
4 HACO results with preprocessing for Bregn.b
5 HACO results with preprocessing for caterpillar graphs
6 Comparison of HACO results with other algorithms

1 Introduction

The graph bisection problem is a well known NP-hard optimization problem. It is very difficult to solve with conventional techniques such as exact or approximation algorithms. Because of its complexity, many heuristic algorithms have been developed to solve it with varying degrees of success. It has several important real world applications including Very Large Scale Integrated Circuit (VLSI) placement, sparse matrix computation and processor allocation. For these reasons it is an interesting problem to attempt to solve using new heuristic techniques.

Ant colony optimization (ACO) is a type of algorithm that seeks to model the emergent behavior observed in ant colonies and utilize this behavior to solve problems. This technique has been applied to several problems, most of which are graph related because the ant colony metaphor can be most easily applied to these types of problems.

In this paper, a heuristic algorithm is given to solve the graph bisection problem. The algorithm incorporates several ACO features as well as local optimization techniques and graph preprocessing. The algorithm was tested on five classes of graphs ranging in size from 500 to 5,252 vertices with average degrees from 2 to 36. The results were compared with the best known results for each graph as well as results from several other heuristic algorithms. The algorithm produced very good results and in some cases equaled the best known results.

This paper consists of an introduction to the graph bisection problem in Section 2, including a definition of the problem, applications of the problem and previous approaches that have been used to solve the problem. Section 3 introduces the ACO technique and problems to which it has been applied. Section 4 describes how the ACO concept can be applied to graph bisection. In Section 5, we give a Hybrid ACO algorithm for solving the graph bisection problem. Section 6 contains the results this algorithm achieved. A comparison with the results of other algorithms is provided in Section 7. Conclusions are given in Section 8.

2 The Graph Bisection Problem

2.1 Definitions

Given an unweighted graph G = (V, E) on n vertices, a bisection of G consists of two disjoint sets A and B, where A ∪ B = V, |A| = |B| and A ∩ B = ∅. The cut size of a bisection (A, B) is the number of edges that have one endpoint in A and the other endpoint in B. The graph bisection problem is the problem of finding a bisection of minimum cut size, which is called the bisection width of G.

The graph bisection problem is a special case of the graph partition problem. In the more general graph partition problem, the sets A and B do not have to be of equal size, but must meet certain size criteria. An α-edge partition of a graph G on n vertices is a partitioning in which the sizes of A and B meet the criterion max{|A|, |B|} ≤ αn. As in the graph bisection problem, the goal is to minimize the cut size. Graph bisection is a special case of α-edge partitioning where α = 1/2. Another type of partition is an α-vertex partition, in which the graph is partitioned into three disjoint subsets A, B and C, with A and B meeting the previously stated size criterion and no edges having one endpoint in A and the other in B. The vertex set C is referred to as the vertex separator. The problem is to minimize the cardinality of C, which is called the size of the partition.
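As a concrete illustration of these definitions (not part of the original paper), the short Python sketch below checks the bisection property and computes the cut size; the adjacency-list representation and the function names are assumptions of the sketch.

```python
# Illustrative sketch only: checks the bisection property and computes the cut size
# for a graph given as an adjacency list {vertex: set_of_neighbors}.

def is_bisection(graph, A, B):
    V = set(graph)
    A, B = set(A), set(B)
    # Disjoint, covering all vertices, and of equal size.
    return A | B == V and not (A & B) and len(A) == len(B)

def cut_size(graph, A):
    A = set(A)
    # Count edges with exactly one endpoint in A (each undirected edge is seen once here).
    return sum(1 for u in A for v in graph[u] if v not in A)

if __name__ == "__main__":
    # A 4-cycle: the bisection {0, 1} / {2, 3} has cut size 2.
    g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
    A, B = {0, 1}, {2, 3}
    print(is_bisection(g, A, B), cut_size(g, A))  # True 2
```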

2.2 Applications

The graph partition problem and its variants such as graph bisection have many real world applications. Consider a distributed system in which processes must be assigned to two processors where there is communication between some processes. The problem is to create an assignment of processes to processors that minimizes communication between processors but equalizes the number of processes assigned to each processor. This problem is analogous to the graph bisection problem, where the vertices represent processes and the sets A and B represent the processors to which they are assigned. The edges of the graph represent communication between processes [15].

Another application of graph partitioning and graph bisection is found in VLSI placement. In designing VLSI layouts, the divide-and-conquer approach is often utilized. Typically, the circuit is split in half by removing wires that connect the halves. Each half is recursively laid out and then the wires connecting the two halves are put back. The quality of the final layout depends greatly on the number of wires that are removed. The problem of minimizing the number of wires that go between the two halves is equivalent to the graph bisection problem if one considers the components to be the vertices in a graph and the wires to be edges. By producing a bisection of the graph with a minimal cut size, we minimize the number of wires between the two halves [6].

Graph partitioning, specifically α-vertex partitioning, is used in sparse matrix factorization. In this problem, a system of linear equations must be solved. By rearranging the matrix representation to meet certain criteria, the problem becomes easier to solve. The matrix can be represented as a graph, and by finding a partition of minimum size, we produce a rearrangement of the matrix that makes the system of equations easier to solve [5][11].

2.3 Algorithms

2.3.1 Exact Algorithms

Most general types of graph partitioning problems are NP-hard. For a few special types of graphs, some partitioning problems can be solved in polynomial time, but in general, an exact polynomial time algorithm is not possible unless P = NP. In the case of 2/3-vertex partitioning of trees, there is a polynomial time algorithm that produces the exact answer, which is a vertex separator of size 1. The optimal solution can also be found for planar graphs with a bisection width of O(log n) [7].

2.3.2 Approximation Algorithms

Since exact algorithms for graph partitioning do not exist for most classes of graphs, approximation algorithms are the next logical step. Approximation algorithms usually run in polynomial time and produce a solution within a certain range of the optimal solution. Algorithms of this category have been produced for certain classes of graphs and for specific types of graph partitioning problems. One such algorithm produces an α-edge partition that is within O(log² n) of the optimal for 1/3 < α ≤ 2/3 [18]. However, as Bui and Jones showed, it is NP-hard to find an α-edge partition within n^(1/2−ε) of the optimal for any fixed ε > 0 and 1/2 ≤ α < 1 [4].

2.3.3 Heuristic Algorithms

Since exact and approximation algorithms that run in polynomial time do not exist for graph partitioning problems in general, it is necessary to attempt to solve the problem using heuristic algorithms. Heuristic algorithms provide no guarantee of the quality of the solution that is produced, but in practice they have been shown to produce very good solutions. Some examples of algorithms of this type are the Kernighan-Lin algorithm and genetic algorithms.

Kernighan-Lin is a local optimization algorithm specifically for the graph bisection problem [15]. The algorithm starts with a bisection, either created randomly or as the result of some other algorithm. The quality of the solution produced by the Kernighan-Lin algorithm depends to a large degree on the quality of the bisection used as input [6]. For this reason, it is often used in conjunction with other algorithms. Kernighan-Lin takes as input an initial bisection (A, B). The algorithm consists of repeatedly swapping subsets of A and B until no improvement can be made. For each pass through the algorithm, the vertices in A and B are arranged by selecting pairs of vertices a, b, where a ∈ A and b ∈ B, for which swapping the two vertices would produce the largest reduction in cut size. For each selection, it is assumed that all previously selected pairs have been swapped and cannot be considered for selection again. Once all vertices have been selected, an ordering of vertices a_1, ..., a_{n/2} and b_1, ..., b_{n/2} has been produced. Finally, the vertices a_1, ..., a_k and b_1, ..., b_k are actually swapped, where k is selected such that the total cut size reduction is maximized. This process is repeated until swapping a subset in this way produces no improvement.
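The pass just described can be sketched in Python as follows; this is an illustrative reconstruction rather than the paper's implementation, and for brevity the gain values are recomputed from scratch after every tentative swap instead of being updated incrementally as in the original algorithm.

```python
# Hedged sketch of one Kernighan-Lin pass on an unweighted graph {v: set_of_neighbors}.

def kl_pass(graph, A, B):
    A, B = set(A), set(B)

    def D(v, own, other):
        # External cost minus internal cost of vertex v.
        return sum(1 for u in graph[v] if u in other) - sum(1 for u in graph[v] if u in own)

    locked = set()
    pairs, gains = [], []
    for _ in range(len(A)):
        best = None
        for a in A - locked:
            for b in B - locked:
                g = D(a, A, B) + D(b, B, A) - 2 * (1 if b in graph[a] else 0)
                if best is None or g > best[0]:
                    best = (g, a, b)
        g, a, b = best
        pairs.append((a, b))
        gains.append(g)
        # Tentatively swap the pair and lock it for the rest of the pass.
        A.remove(a); B.add(a)
        B.remove(b); A.add(b)
        locked.update((a, b))

    # Keep only the prefix of swaps with the largest cumulative gain.
    prefix, best_k, best_sum = 0, 0, 0
    for k, g in enumerate(gains, start=1):
        prefix += g
        if prefix > best_sum:
            best_k, best_sum = k, prefix
    # Undo the swaps beyond the best prefix.
    for a, b in pairs[best_k:]:
        A.remove(b); B.add(b)
        B.remove(a); A.add(a)
    return A, B, best_sum
```

Repeating kl_pass until the returned gain is zero gives the full Kernighan-Lin local optimization.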

Genetic algorithms have also been used to solve graph partitioning problems. In this technique, a number of random partitionings are generated and encoded into chromosomes. This set of chromosomes forms an initial population. Members of the population are evaluated based on the quality of their solutions, so that good solutions can then be used to generate a new set of partitionings through recombination. The new chromosomes are then mutated and some or all of them are used to replace members of the initial population. This process continues for a fixed number of generations or until some criteria have been reached. The results of genetic algorithms vary a great deal depending on the encoding scheme and the genetic operators that are used [6][8][12].

These are just two of the many types of heuristic algorithms designed to solve graph partitioning problems. Others include greedy algorithms, simulated annealing, multi-level algorithms, spectral algorithms and flow based algorithms. The reason for the plethora of heuristic algorithms is that the problem is too computationally complex to solve using either exact or approximation algorithms [1][3][5][11][14][20].

3 Ant Colony Optimization

Ant Colony Optimization is a heuristic technique that seeks to imitate the behavior of a colony of ants and their ability to collectively solve a problem. For example, it has been observed that a colony of ants is able to find the shortest path to a food source. As an ant moves and searches for food, it lays down a chemical substance called pheromone along its path. As it decides where to move, it looks for pheromone trails and prefers to follow trails with higher levels of pheromone.

Suppose there are two possible paths to reach a food source. Regardless of the path chosen, the ant will lay the same amount of pheromone at each step. However, it will return to its starting point more quickly when it takes the shorter path. It is then able to return to the food source to collect more food. Thus, in an equal amount of time, the ant would lay a higher concentration of pheromone over its path if it takes the shorter path, since it would complete more trips in the given time. The pheromone is then used by other ants to determine the path to find food. Ants follow the path with the most accumulation of pheromone, which happens to be the shortest path. In addition, some pheromone evaporates over time, although not a significant amount [2][10].

3.1 Problems Solved with ACO

This idea is exploited in ACO algorithms that attempt to solve the Travelling Salesman Problem (TSP). Because of the similarity between finding the shortest path to a food source and finding the shortest tour through a set of vertices, the technique is easy to apply to this problem. The TSP consists of a complete weighted graph on n vertices in which the goal is to create a complete tour of all n vertices of minimum weight. In most variations of the algorithm, each edge has a capacity to hold pheromone. Initially, a small fixed amount of pheromone is placed on each edge. A number of ants create a tour of the vertices as follows. Each ant is started at a random vertex. The ant maintains a list of vertices already visited so as to construct a valid tour. For each move, it selects a vertex that has not yet been visited with probability proportional to the amount of pheromone on the edge between its current location and the other vertex and inversely proportional to the length of the edge. This continues until a tour has been constructed. Then, based on the quality of the solution obtained (the length of the tour), pheromone is laid down on the edges that comprise the solution. The better the solution (the shorter the tour), the more pheromone is laid down. The ants follow this process in parallel and then the pheromone levels are updated, which simulates many ants moving simultaneously.
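A minimal Python sketch of the next-city selection rule described above (an illustration, not code from any particular ACO implementation): the weight of an unvisited city is taken to be the pheromone on the connecting edge divided by the edge length, and the destination is drawn in proportion to these weights.

```python
import random

def choose_next_city(current, unvisited, pheromone, dist):
    # pheromone[(i, j)] and dist[(i, j)] are symmetric lookups for edge (i, j).
    weights = [pheromone[(current, j)] / dist[(current, j)] for j in unvisited]
    total = sum(weights)
    # Roulette-wheel selection proportional to the weights.
    r = random.uniform(0, total)
    acc = 0.0
    for city, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return city
    return unvisited[-1]  # Guard against floating-point round-off.
```

An ant builds a complete tour by calling this repeatedly until no unvisited cities remain, after which pheromone is deposited on the tour's edges in proportion to the quality (inverse length) of the tour.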

In addition, after each such iteration, a percentage of the pheromone on each edge is evaporated. This prevents the solution from converging to a local optimum if the shortest path is not found by any ants immediately. This process can be repeated for a fixed number of times or until a certain tour length is achieved [2].

This follows very closely the behavior observed in ant colonies except on one point. In this algorithm, the ants do not lay down pheromone until after they complete a tour, because the amount of pheromone placed is proportional to the quality of the solution. In addition, a higher amount of evaporation is used than what is found in nature. This is necessary to prevent premature convergence, since the problem is more difficult than finding the shortest path to a food source and the ants may not find the shortest path immediately.

TSP was the first problem to which the ACO technique was applied because of its obvious use of the ants' ability to find the shortest path. The TSP was first attempted in 1991 by Colorni, Dorigo and Maniezzo [9]. Other problems that have been the focus of ACO work include the quadratic assignment, network routing, vehicle routing and frequency assignment problems [19]. In addition, some work has been done to design ACO algorithms for graph coloring, shortest common supersequence, machine scheduling, multiple knapsack and sequential ordering [2].

Since the ACO technique has been successfully applied to graph related problems such as TSP, it seems likely that the technique could prove useful for the graph bisection problem. However, since the solution to the graph bisection problem is not a path as in TSP, another mechanism is needed. In addition to the idea of finding shortest paths, the ideas of territorial colonization and swarm intelligence can also be utilized in ACO algorithms. Kuntz and Snyers applied these concepts to a graph clustering problem [16]. Their algorithm combines features of the ACO technique with swarm intelligence to form a model, which is an artificial system designed to perform a certain task. The organisms are called animats, reflecting the fact that the system draws ideas from several sources, not just ant colonies. These ideas are important in the graph bisection problem, because the graph can be viewed as territory to be colonized.

By combining these two ideas of animats following paths and forming colonies, we develop an algorithm that solves the graph bisection problem.

4 Using ACO To Solve Graph Bisection

In this algorithm, animats follow a set of local rules, which collectively create a solution to the graph bisection problem. The individual animats are unaware of the collective task they are accomplishing or the global state of the system. Local optimization techniques are also employed to assist the animats in achieving a good solution.

The basic foundation of the algorithm is to consider each vertex in the graph as a location that can hold any number of animats. The animats can move around the graph by moving across edges to reach a new vertex. Each animat belongs to one of two species (called species A and B). However, animats of both species follow the same rules. To start the algorithm, α animats are placed on the graph. Their species and location are chosen randomly.

At any point throughout the algorithm, the configuration of animats on the graph constitutes a partitioning of the graph in the following way. Each vertex is considered to be colonized by one species. At a given time, it is said to be colonized by whichever species has the greater number of animats on it. Any ties are recorded, and after the colonies of all other vertices are calculated, the ties are broken in a random order by assigning the vertex to the species that results in a lower cut size. The set of all vertices colonized by species A constitutes A's colony, and likewise the vertices colonized by species B (which are all the remaining vertices) form B's colony. At a given point in time, including the initial configuration, this is not necessarily a bisection, since one colony may contain more vertices than the other. Thus, other techniques are used at certain points in the algorithm to ensure that the final solution is a bisection. In addition, each vertex can hold pheromone (as opposed to the edges in the TSP algorithm). The two species produce separate types of pheromone, so each vertex has an amount of A pheromone and B pheromone.
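To illustrate how a configuration of animats induces a partition, the following Python sketch (a reconstruction under assumed data structures, not the paper's code) assigns each vertex to the species with the majority of animats on it and breaks ties in random order in favor of the species that yields the lower cut size; the tie-break only needs to examine the tied vertex's neighborhood, since that is the only part of the cut its assignment affects.

```python
import random

def colonize(graph, count_A, count_B):
    """graph: {v: set_of_neighbors}; count_A/count_B: animats of each species per vertex."""
    colony = {}   # v -> 'A' or 'B'
    ties = []
    for v in graph:
        if count_A[v] > count_B[v]:
            colony[v] = 'A'
        elif count_B[v] > count_A[v]:
            colony[v] = 'B'
        else:
            ties.append(v)
    random.shuffle(ties)
    for v in ties:
        # Assign the tied vertex to whichever species leaves fewer cut edges around v,
        # given the assignments made so far (still-unresolved neighbors are ignored).
        a_cut = sum(1 for u in graph[v] if colony.get(u) == 'B')
        b_cut = sum(1 for u in graph[v] if colony.get(u) == 'A')
        colony[v] = 'A' if a_cut <= b_cut else 'B'
    return colony
```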

The idea of the algorithm is for each species of animats to form a colony consisting of a set of vertices that are highly connected to each other but highly disconnected from the other colony. The result should be two sets of vertices that are highly connected amongst themselves, but have few edges going between the two sets. For an individual animat, the goal will be to lay down pheromone when the current vertex is a good position for animats of its own species, and to move to new vertices that it wants to add to its species' colony. If each animat follows these goals, the result will be a partitioning of the vertices into two sets of similar size, with few edges going between the two sets (i.e., a very small cut size).

5 Algorithm

The Hybrid ACO (HACO) algorithm consists of a number of iterations in which a percentage of animats are activated. When an animat is activated, it adds an amount of pheromone to the vertex it currently occupies based on conditions at the vertex. It then may die with a certain probability or it may reproduce with a certain probability, and finally, it will move to a new vertex. These operations involve only local information known by the animat. The animat is assumed to know the current time (i.e., iteration number), information about the vertex it is located at (such as the number and species of other animats on the vertex) and information about the vertices adjacent to its location. In each iteration, these activations are performed in parallel. After each iteration, the graph is updated with the new information.

The algorithm is divided into σ sets, each consisting of γ iterations. After each set, the configuration of the graph is forced into a bisection using a greedy algorithm, and a local optimization algorithm is run to help speed up the convergence rate. During each set, the parameters, which include the probabilities for activation, death, reproduction and birth, are varied. The parameters are varied in such a way that at the beginning of the set the colonies change a great deal and by the end of the set the colonies have converged to a stable configuration.
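The exact schedules are given later: a sigmoid-like decay for the activation probability and linear changes for the birth rate, the move weights and the jolt length. As a hedged illustration of the general mechanism, per-iteration values could be produced by helpers such as these; the function names and the steepness constant are assumptions of this sketch, while the 0.8, 0.2 and 1000 in the example are values listed in Table 1.

```python
import math

def linear(start, end, t, t_max):
    # Linear interpolation from `start` at t = 0 to `end` at t = t_max.
    return start + (end - start) * (t / t_max)

def sigmoid_decay(p_max, p_min, t, t_max, steepness=10.0):
    # Sigmoid-like decay from roughly p_max early in the set to p_min near its end.
    frac = 1.0 / (1.0 + math.exp(steepness * (t / t_max - 0.5)))
    return p_min + (p_max - p_min) * frac

# Example: the activation probability over a set of gamma = 1000 iterations,
# falling from pi_a_max = 0.8 to pi_a_min = 0.2.
activation_schedule = [sigmoid_decay(0.8, 0.2, t, 1000) for t in range(1, 1001)]
```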

The next set begins at the state where the previous set ended. However, if the animats follow their usual rules immediately, they will not be able to move away from the local optimum that has been reached. So, for all but the initial set, a jolt is performed for a certain number of the first iterations to help move the configuration, or distribution of animats on the vertices, away from the solution to which it had converged. The jolt allows animats to select moves randomly instead of following the normal rules for movement. The length of the jolt changes during the algorithm. The first jolt lasts for ν iterations, and for subsequent jolts the length decreases linearly until the last set, where there is no jolt. The idea is that with each successive set, the bisection should come closer to the optimal bisection, and thus shorter and shorter jolts are needed.

This description of iterations and sets shows that the algorithm includes two mechanisms for converging on a solution. During each set of iterations, the animats start with a jolt to break out of a local optimum. Following the jolt, the changes in the configuration after each iteration are substantial. In the later iterations, the movement is much less and the colonies converge on a solution. In addition, in the larger context of the algorithm, each successive set utilizes a shorter jolt, and thus by the end of the sets the colonies have converged upon the best bisection. These two mechanisms prevent the colonies from prematurely converging to a local optimum in most cases. After σ sets have been completed, the solution is the best bisection that has been achieved. This is usually the bisection found by the last set; however, occasionally the best bisection is found earlier.

In the following sections we describe in detail what occurs in one iteration, what occurs when an animat is activated and what occurs between sets. Figure 1 provides pseudocode for the HACO algorithm.

Randomly add α animats to the graph
For set = 1 to σ
    For time = 1 to γ
        For each animat do (in parallel):
            Activate animat with probability a(time)
            If activated
                Add p(animat, time) pheromone to the animat's location
                Die with probability π_d
                If not dead
                    If animat meets reproduction criteria
                        Then reproduce with probability π_r
                    If time is in a jolt period
                        Then select move randomly
                        Else select move based on pheromone and connectivity
                End If
            End If
        Next animat
        Evaporate ɛ percent of the pheromone from each vertex
    Next time
    Convert configuration to a bisection with a greedy algorithm
    Run Kernighan-Lin local optimization
    Reduce total number of animats
    Equalize number of animats in each species
Next set
Return best bisection found

Figure 1: HACO algorithm for graph bisection

5.1 Iteration

An iteration of the algorithm consists of a percentage of the animats being activated and then performing the necessary operations in parallel. The probability of an animat being activated changes during the set. At the beginning of the set, more animats are activated during each iteration. By the end of the set, only a small percentage of the animats are activated in each iteration. The more animats that are activated during one iteration, the larger the possible change in the configuration will be. The actual probability of activation is a sigmoid-like function. The function starts at a maximum of π_a_max and ends at π_a_min.

After the activations of animats have been completed, a fraction ɛ of the pheromone on each vertex is evaporated. This prevents pheromone from building up too much and highly populated vertices from being overemphasized, which in turn prevents the algorithm from converging prematurely.

5.2 Activation of an Animat

When an animat is activated, it deposits pheromone on its current vertex, dies with a certain probability or reproduces with a certain probability, and then moves to another vertex. These operations are performed by the animat using local information to make decisions.

5.2.1 Pheromone

The purpose of pheromone is to allow the algorithm to retain a memory of good configurations that have been found in the past. Members of each species deposit their pheromone on a vertex to indicate that this is a good configuration and more animats of their species should come to this vertex. When an animat is activated, it determines the percentage of adjacent vertices that are colonized by its own species. If the vertex is highly connected to vertices colonized by the animat's species, then the animat knows that this vertex is a good candidate for being colonized by its own species (in fact it may already be colonized by that species). The animat then attempts to reinforce this vertex by depositing a larger amount of its pheromone. However, if the animat determines that the vertex is not highly connected to vertices colonized by its own species, it will lay down less pheromone because it does not want to encourage more of its species to come to this vertex.

In addition, the animats place lesser amounts of pheromone in early iterations and more pheromone in later iterations. This is because in early iterations more change is needed, and in later iterations animats should continue to converge on the solution that has been developing. In other words, this allows the animats to explore more of the search space in the beginning and to exploit more of their current configuration near the end. There is also a limit to the amount of pheromone of each species that can be stored on a vertex. The limit for a vertex is the product of the degree of that vertex and the pheromone limit parameter (λ). This allows densely connected vertices to accumulate more pheromone. The more highly connected a vertex is, the more essential it is that it is colonized by the right species. This is because a mistake on a highly connected vertex will mean a much greater cut size. The formula for the amount of pheromone to be deposited is:

    p(a, i) = (a_col / a_total) · (i / γ)

where a is the animat, i is the iteration number, a_col is the number of vertices adjacent to the animat's current location which are colonized by the animat's species, and a_total is the total number of vertices adjacent to the animat's current location (i.e., the degree of the vertex on which the animat is currently located).

5.2.2 Death

Next, the animat may be selected to die. The animat will die with probability π_d, which is fixed throughout the algorithm. However, the activation probability changes throughout the set, so that early in the set more animats are activated, and therefore more animats die early in the set. The purpose of this is to have shorter life spans in the beginning, which allows more turnover and change in the configuration. Later in the set, the animats live longer and thus there is less change and the solution is able to converge. If the animat is selected to die, it is removed from the population and the reproduction and movement operations are not performed.
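A hedged Python sketch of the deposit-and-death portion of an activation, combining the deposit formula above with the per-vertex cap of λ times the vertex degree and the fixed death probability π_d; the data structures and the function name are assumptions of the sketch.

```python
import random

def deposit_and_maybe_die(animat, i, gamma, graph, colony, pheromone, lam, pi_d):
    v, species = animat["vertex"], animat["species"]
    neighbors = graph[v]
    a_total = max(len(neighbors), 1)          # degree of the current vertex
    a_col = sum(1 for u in neighbors if colony[u] == species)
    amount = (a_col / a_total) * (i / gamma)  # p(a, i) from Section 5.2.1
    cap = lam * a_total                       # pheromone limit is λ times the degree
    pheromone[v][species] = min(pheromone[v][species] + amount, cap)
    # The animat dies with the fixed probability π_d; the caller removes it if True.
    return random.random() < pi_d
```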

5.2.3 Reproduction

If the animat is not selected for death, the algorithm proceeds to the reproduction step. The animat is selected for reproduction with fixed probability π_r. However, the number of new animats that are produced depends on time. In the first iteration of a set, the average number of animats born is β_init, and it decreases linearly over time to β_final in the last iteration. The changing birth rate serves to allow more change in earlier iterations, in which animats live for shorter lengths of time. In later iterations, fewer animats are born, but they live longer. The actual number of animats born is selected uniformly at random over a range centered on the average birth rate for the iteration. The number of animats born can be up to β_range more or less than the specified average.

When the new animats are created, they are the same species as the parent. If the vertex on which the parent is located is colonized by its own species, the new animats are all placed on that vertex. However, if the vertex is colonized by the opposite species, only ψ_stay percent of the offspring animats are placed there. The remaining new animats will be placed on the vertex to which the parent animat moves in the next step. The rationale is that if the parent animat is already in its own colony but moves to another vertex, it should leave its offspring behind to help maintain the majority on that vertex. However, if the parent's species is not in the majority, it should take most of its children to the new vertex where it is trying to create a colony. The parent leaves some of its offspring behind, however, so that some of its species remain at the vertex (in case that vertex really should be part of their colony).

There are two other constraints on reproduction. First, there is a limit to how many offspring an animat can produce during its lifetime (η). This value is fixed throughout the algorithm and is the same for each animat. Once the limit is reached, the animat can no longer reproduce. This serves to prevent one species from taking over the graph and forcing the other species into extinction.

For example, without this constraint, if species A happens to reproduce more in the first few iterations, then during the next iteration the population will be composed of a larger number of animats of species A. Since the selection of animats for activation is random, more animats of species A will be activated. Then, because only the activated animats get an opportunity to reproduce, more animats of species A will reproduce. This continues until one species dominates the graph.

Another problem is that when an animat reproduces and places all of its children on its current vertex, that species will continue to build up on that vertex. The reason is that once one of the children is activated, it will in turn reproduce and deposit more children on the same vertex before moving. This overemphasizes that vertex and does not allow the colonies to change much from their original starting configuration. Because of this, animats are not able to reproduce until they have made a set minimum number of moves (µ). This ensures that the graph is explored and that new configurations are created by the reproduction and movement rather than being inhibited by these operations.

5.2.4 Movement

Movement is by far the most important operation the animats perform. The animats' movement is the main mechanism by which the solution is produced. The animat can move to any vertex which is connected to its current location by an edge. There are two factors used to select a move from the set of possible moves. For each vertex to which the animat could move, the connectivity to other vertices is examined. The animat should move to a vertex that is highly connected to other vertices colonized by its own species. This factor gives an indication of the current configuration of the graph. In addition, the animat should learn from the past and take into account the pheromone that other animats have deposited. Throughout the course of a set, these two factors are weighted differently. Initially, the pheromone is weighted at ω_p_min, with the weight increasing linearly to ω_p_max. Conversely, the connectivity is weighted at ω_c_max to begin and decreases linearly to ω_c_min. In this way, the configuration of the colonies changes greatly in early iterations and over time learning is incorporated into the algorithm.

These basic factors drive the animats to create colonies of highly connected vertices which are highly disconnected from the vertices colonized by the opposing species. These factors are the basis of move selection. The probability of moving to an adjacent vertex is proportional to the two combined factors. Specifically, the factors are combined as follows to create a value pr(v) for moving to a specific vertex v:

    pr(v) = c · v_c + p · v_p + π_min

where v_c is the number of vertices adjacent to v that are colonized by the animat's own species, c is the connectivity weight with ω_c_min ≤ c ≤ ω_c_max, v_p is the amount of pheromone of the animat's species on vertex v, p is the pheromone weight with ω_p_min ≤ p ≤ ω_p_max, and π_min is a fixed amount added to prevent any probabilities from being zero.

In addition, one more factor is considered in selecting a move. During early sets, the animats should be forced to explore more of the graph rather than oscillating between two vertices. In order to encourage this, the probability of selecting the move which would result in the animat returning to its previous location is reduced. The factor by which it is reduced starts at ρ and decreases linearly after each set until it reaches zero in the final set. Then the probability of moving to a connected vertex is the resulting value divided by the sum of the values over all possible moves.
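A hedged Python sketch of move selection under these rules; the data structures are assumptions, the jolt case follows the description in Section 5, and interpreting the return-move penalty as multiplying that move's score by one minus the current reduction factor is an assumption of this sketch.

```python
import random

def select_move(animat, graph, colony, pheromone, c, p, pi_min, rho_factor, jolt):
    v, species = animat["vertex"], animat["species"]
    candidates = list(graph[v])
    if jolt:
        # During a jolt the move is chosen uniformly at random.
        return random.choice(candidates)
    weights = []
    for u in candidates:
        v_c = sum(1 for w in graph[u] if colony[w] == species)  # connectivity factor
        v_p = pheromone[u][species]                              # pheromone factor
        score = c * v_c + p * v_p + pi_min
        if u == animat.get("previous"):
            score *= (1.0 - rho_factor)  # discourage returning to the previous vertex
        weights.append(score)
    # Normalize over the neighbors and sample a destination in proportion to the scores.
    return random.choices(candidates, weights=weights, k=1)[0]
```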

5.3 Between Sets

After each set of iterations, several other operations are performed. It should be noted that all of these operations depend on what the animats have done so far. They help nudge the configuration into a bisection, improve the bisection through local optimization and then prepare the configuration for the next set.

First, the algorithm looks for mistakes the animats have made. Here the algorithm looks for vertices in which a very high percentage (ψ_swap) of the adjacent vertices are colonized by the opposite species. In these cases, the vertex is swapped to the other colony. This is achieved by changing the species of animats on the vertex until the new species attains ψ_maj percent of the animats. In most cases, few such vertices are found.

Next the colonies are manipulated to produce a bisection. As was discussed earlier, any given configuration of animats on the graph does not necessarily induce a bisection. Therefore, if one species is colonizing more vertices than the other, some vertices will have to be swapped to the other species. The vertices to be swapped are selected from the set of fringe vertices, that is, vertices that are adjacent to a vertex of the opposite colony. By changing the colony of only fringe vertices, the algorithm continues in the direction the animats were heading rather than selecting vertices in a region that is completely dominated by one species and creating an anomaly. Vertices are selected to be swapped by making the greedy choice from amongst the fringe vertices. After each vertex is selected, the swap is performed and subsequent choices are made based on the new configuration of the colonies.

Using the bisection produced by this greedy optimization, the Kernighan-Lin algorithm is run. Since the quality of the result produced by the Kernighan-Lin algorithm depends largely on the quality of the bisection used as input, it produces little if any improvement in early sets. However, in later sets, after the animats have begun to converge upon a good solution, it usually improves the solution slightly. At this point, the two colonies still form a bisection and this data is recorded (in addition, during previous iterations in which the colonies naturally formed a bisection, the configuration was recorded if it had the smallest cut size thus far).

Now, if this was not the final set, we prepare the graph and population to start a new set by performing two more manipulations. Even though we now have a bisection, the number of animats on the graph may differ from the initial number of animats of both species. Usually after a set, the number of animats is higher than the initial number. The problem with this is that if it continues, the number of animats grows so large that the computations of each iteration become prohibitively slow (since a percentage of animats are activated in each iteration).

Parameter    Value   Description
γ            1000    Number of iterations per set
σ            10      Number of sets
ν            50      Maximum jolt length
α                    Initial number of animats
π_a_max      0.8     Maximum activation probability
π_a_min      0.2     Minimum activation probability
π_d                  Death probability
β_init       4       Expected number of animats born in first iteration
β_final      2       Expected number of animats born in final iteration
β_range      50%     Percentage range from average number of animats born
π_r          0.01    Reproduction probability
η            10      Maximum number of offspring per animat
µ            5       Number of moves needed before animat can reproduce
ψ_stay       20%     Percentage of offspring that stay on old location when not colonized by animat's species
ω_p_min      0       Minimum pheromone weight
ω_p_max      1       Maximum pheromone weight
ω_c_min      250     Minimum connection weight
ω_c_max      500     Maximum connection weight
π_min        0.1     Minimum probability for moving to a vertex
ρ            0.9     Reduction factor for returning to previous location
ψ_swap       75%     Percentage of animats needed for swap
ψ_maj        90%     Percentage of animats needed for majority
ɛ            0.2     Evaporation rate
λ            1000    Pheromone limit
τ            50      Free edge factor

Table 1: Parameter values

To correct this, the number of animats is reduced to the initial number. This is done by randomly removing animats until the correct population size is reached. This may disrupt the colonies; however, this is not a problem since each new set begins with a jolt anyway.

Finally, the number of animats in the two species is equalized. Normally the number of animats in each species is quite close, since the colonies have been forced into a bisection. However, this is not always the case. A bisection does not guarantee that the two species have the same number of animats. For example, consider a vertex that is inhabited by 10 animats of species A and 11 animats of species B. Species B is considered to have colonized the vertex. However, in another case, a vertex may have 10 animats of species A and 1 animat of species B, in which case species A has colonized the vertex. If we consider these two vertices as a graph that is bisected, we have overall 20 animats of species A and 11 animats of species B. In practice, the results are not this unbalanced. However, in some cases, the dominance of one species can perpetuate through each set and prevent the other species from exploring the graph. To alleviate this possible problem, animats are added to equalize the number of animats in each species. Usually this is a very small number and thus is not problematic in consideration of the previous operation (reducing the number of animats to the initial number). The new animats are added only to vertices where their own species is already in the majority. Thus, this operation does not significantly alter the configuration of the colonies; it merely gives added strength to the colonies in which animats are added.

Following this operation, a new set is begun. Again the time is initialized to 0 and all probabilities relating to time are reset. Thus, once the animats have converged on a possible solution, starting a new set allows them to move away from that solution in the expectation of finding a better one, in case the current solution was a local optimum. After σ sets have been completed, the solution is the bisection with minimum cut size.
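A hedged Python sketch of the greedy balancing step from Section 5.3: while one colony is larger, a fringe vertex of the larger colony is flipped, choosing the vertex whose flip increases the cut size the least. The specific greedy criterion and the data structures are assumptions of this sketch.

```python
def greedy_bisect(graph, colony):
    """Force a colony assignment (v -> 'A' or 'B') into a bisection, in place."""
    while True:
        size_A = sum(1 for v in graph if colony[v] == 'A')
        size_B = len(graph) - size_A
        if abs(size_A - size_B) <= 1:
            break
        big, small = ('A', 'B') if size_A > size_B else ('B', 'A')
        # Fringe vertices of the larger colony: adjacent to the opposite colony.
        fringe = [v for v in graph if colony[v] == big
                  and any(colony[u] == small for u in graph[v])]
        if not fringe:  # disconnected graph: fall back to any vertex of the larger colony
            fringe = [v for v in graph if colony[v] == big]
        # Flipping v changes the cut size by (same-side neighbors - other-side neighbors);
        # greedily pick the vertex whose flip increases the cut size the least.
        def delta(v):
            same = sum(1 for u in graph[v] if colony[u] == big)
            return same - (len(graph[v]) - same)
        colony[min(fringe, key=delta)] = small
    return colony
```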

6 Results

Using the parameter values listed in Table 1, the algorithm was tested on five types of graphs to determine its behavior on a wide selection of inputs. These graphs are used as a benchmark, as they have been used to test a number of different graph bisection algorithms. Thus the results can be compared with other algorithms. The graphs range in size from 500 to 5,252 vertices and have average degrees from 2 to 36. The algorithm was implemented in C++ and run on a Pentium III 800 MHz with 256 MB RAM. For each graph, the algorithm was run for 100 trials. These results are given in Table 2, which also gives the average running time in seconds for one trial of each graph. In this section, the five graph types are described, the results for different graph types are discussed and graph preprocessing is described along with the improved results it yields.

6.1 Graph Types

In [13], Johnson et al. described two classes of graphs that we use to test our algorithm. The first type, Gn.p, is a random graph on n vertices where an edge is placed between two vertices with probability p, independently of all other edges. The expected vertex degree is then p(n − 1). These graphs are a good test case as they have large optimal bisections. The second type, Un.d, is a random geometric graph on n vertices with expected vertex degree d. It is generated by selecting n points within the unit square which represent the vertices. An edge is placed between two vertices if their Euclidean distance does not exceed t. It can be determined that d = nπt² is the expected vertex degree. This type of graph is highly clustered, so it provides a very different test case than the previous class of graphs.

Three other graph types were proposed by Bui et al. in [3]. They define a random regular graph, Bregn.b, on n vertices with degree 3 and optimal cut size b with probability 1 − o(1). These graphs provide an interesting test case because they are sparse and have a provably unique optimal bisection with high probability.
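As an illustration of the two classes of Johnson et al. just described (not code from the paper), the following Python sketch generates a Gn.p random graph and a Un.d random geometric graph; the threshold t = sqrt(d / (nπ)) follows from the stated relation d = nπt².

```python
import math
import random

def random_graph_gnp(n, p):
    # Gn.p: each of the n(n-1)/2 possible edges appears independently with probability p.
    g = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p:
                g[u].add(v)
                g[v].add(u)
    return g

def random_geometric_und(n, d):
    # Un.d: n points in the unit square, joined when within distance t, where d = n*pi*t^2.
    t = math.sqrt(d / (n * math.pi))
    pts = [(random.random(), random.random()) for _ in range(n)]
    g = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if math.dist(pts[u], pts[v]) <= t:
                g[u].add(v)
                g[v].add(u)
    return g
```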

Table 2 reports, for each test graph, the best known cut size, the minimum and average (with standard deviation) cut sizes found by HACO over 100 trials, and the average running time in seconds for one trial. The test graphs are instances of the Gn.p, Un.d, Bregn.b, Gridn.b, W-Gridn.b, Cat.n and RCat.n classes described in Section 6.1. The graphs marked in the table were run for 25 sets; all others were run for only 10 sets because after 10 sets they had converged.

Table 2: Results of HACO algorithm for 100 trials

A grid graph, Gridn.b, on n vertices is a grid with known optimal cut size b. A variation of this type is W-Gridn.b, in which the grid boundaries are wrapped around. This class of graphs is highly structured with good connectivity. The last class of graphs used is the caterpillar graph, Cat.n, on n vertices with an optimal cut size of 1. It is constructed by starting with a spine, which is a straight line in which all vertices except the two ends have degree 2. Then to each vertex on the spine, called a node, we add six legs, each of which consists of adding a vertex and connecting it to the node. If the number of nodes on the spine is even, the optimal cut size of 1 is found by dividing the spine in half. In addition, RCat.n is a caterpillar graph in which each node on the spine has degree √n. Caterpillar graphs seem simple but are difficult for local bisection algorithms.

6.2 Results for different classes of graphs

The results for the Gn.p graphs showed that the HACO algorithm achieves a cut size very close to the best known and in some cases reaches the best known. The tests showed a low average and standard deviation in comparison to the other types of graphs. Due to the way these graphs were created, there are likely to be many different bisections that are close to the best known. For this reason, it was easy for the algorithm to find a good bisection.

On the other hand, the Un.d graphs were more difficult for the algorithm to solve. In most cases, the minimum cut sizes were close to or equal to the best known. However, the averages and standard deviations were higher than for most other types of graphs. Rather than having many good bisections, these graphs seemed to have only a few good bisections which were also very different from each other. Once the population begins to converge to one of these configurations, it is very difficult to reach the best known bisection. For example, in U500.40, the best known cut size is 412. Because these graphs were created by choosing points in the unit square, we can plot the coordinates of the vertices and view the bisection. One example of this is shown in Figure 2, which displays the initial configuration of the partitions followed by the bisection after one, four and seven sets. Let (A, B) denote a bisection; then the edges which connect vertices in A are colored blue and the edges that connect vertices in B are red. Green edges have one endpoint in A and one in B, thus the cut size of the bisection is the number of green edges. After seven sets, the optimal cut size has been reached. In the tests, the algorithm found that bisection 40% of the time. However, 51% of the time it found a bisection with cut size 512. This cut size is actually quite close to the best known when we consider that a random bisection of U500.40 produces a cut size of approximately 4,400. Figure 2 shows the bisection which produces a cut size of 412, while Figure 3 shows the bisection with cut size 512. It is clear that to move from the configuration of colonies in Figure 3 to the configuration in Figure 2 would require substantial changes. Since the good bisections for this class of graphs are so different, it was difficult for the population to find the correct bisection. This accounts for the fact that the best known bisection was found for most of these graphs but the average results and standard deviations were quite high.

Figure 2: A bisection of U500.40 at various stages in the algorithm (initial configuration, cut size = 4,331; after set 1, cut size = 772; after set 4, cut size = 586; after set 7, cut size = 412)

Figure 3: A bisection of U500.40 with cut size = 512

The results for the Un.d graphs also showed that for a fixed number of vertices, the higher the average degree, the easier the problem was to solve. This is due in part to the fact that a higher degree gives the animats more structure to use.

Also, if we consider the way this type of graph is constructed, d is the expected degree and t is the maximum edge length, where d = nπt². If d is higher, vertices that are further away from each other are connected. This means that an animat must make fewer individual moves to get from one side of the graph to the other. So in higher degree graphs, more change in configuration is possible in one set, because the animats have the ability to move further simply because of the way in which the graph is constructed. Figure 4 shows two U1000 graphs, one with d = 10 and the other with d = 20. The second graph therefore has twice the expected degree of the first, and the edges connecting two vertices can be longer. Because of this factor, several of the lower degree Un.d graphs were also tested with a higher number of sets. Increasing the number of sets improved the results for graphs where d ≤ 10. For graphs where d > 10, the best known results were already found in the 100 trials of 10 sets. The effect of an increased number of sets is shown in Table 3.

Table 3 compares, for selected Un.d graphs (U500.05, U500.10, U1000.05 and U1000.10), the best known cut size with the minimum and average (with standard deviation) cut sizes found by HACO when run for 10 sets and when run for 25 sets.

Table 3: HACO results for selected Un.d graphs with 10 and 25 sets

The class of Bregn.b graphs is structured in such a way that with high probability there is only one good bisection, the optimal bisection. Unlike other random graphs such as Gn.p, which may have two completely different bisections that are very close in cut size, Breg graphs have no other significantly different bisections that approach the optimal cut size. This makes it easier for this algorithm to find the optimal solution because there are no local optima of good size. However, despite this, the Breg graphs have a very low degree (3), which means that there is less structure for the animats to use in their movement, thus more iterations are needed to converge to a solution. Because of this, the larger Breg graphs needed to be run for more sets in order to produce the optimal bisection.

Figure 4: U1000 with two d values (d = 10 and d = 20)

The graphs with 5,000 vertices were run for 25 sets instead of 10. In these graphs, the cut size was still decreasing steadily after 10 sets, unlike most other graphs. In Figure 5, we see that the optimal bisection is converged upon at set 18. This is a typical result, but the set in which the configuration converges to the solution varies. As the figure shows, the colonies very quickly converge to the solution in sets 11 through 17. In some trials, the optimal solution was not reached. When the optimal cut size was not achieved, the cut size was very large. This accounts for the high average and standard deviation. The algorithm was also tested for up to 50 sets, which did produce better results for the Breg5000 graphs; however, running this many sets takes much longer. In addition, the algorithm does not handle Bregn.0 graphs well because they contain two disconnected subgraphs. The optimal bisection would have one subgraph completely colonized by species A and the other subgraph completely colonized by species B. However, the algorithm usually bisects each subgraph individually. This is due to the fact that animats can only move to a vertex that is connected to their current location by an edge. The greedy algorithm and Kernighan-Lin optimization do change the species of


More information

A GENETIC ALGORITHM FOR CLUSTERING ON VERY LARGE DATA SETS

A GENETIC ALGORITHM FOR CLUSTERING ON VERY LARGE DATA SETS A GENETIC ALGORITHM FOR CLUSTERING ON VERY LARGE DATA SETS Jim Gasvoda and Qin Ding Department of Computer Science, Pennsylvania State University at Harrisburg, Middletown, PA 17057, USA {jmg289, qding}@psu.edu

More information

IMPLEMENTATION OF ACO ALGORITHM FOR EDGE DETECTION AND SORTING SALESMAN PROBLEM

IMPLEMENTATION OF ACO ALGORITHM FOR EDGE DETECTION AND SORTING SALESMAN PROBLEM IMPLEMENTATION OF ACO ALGORITHM FOR EDGE DETECTION AND SORTING SALESMAN PROBLEM Er. Priya Darshni Assiociate Prof. ECE Deptt. Ludhiana Chandigarh highway Ludhiana College Of Engg. And Technology Katani

More information

Solving the Shortest Path Problem in Vehicle Navigation System by Ant Colony Algorithm

Solving the Shortest Path Problem in Vehicle Navigation System by Ant Colony Algorithm Proceedings of the 7th WSEAS Int. Conf. on Signal Processing, Computational Geometry & Artificial Vision, Athens, Greece, August 24-26, 2007 88 Solving the Shortest Path Problem in Vehicle Navigation System

More information

Homework 2: Search and Optimization

Homework 2: Search and Optimization Scott Chow ROB 537: Learning Based Control October 16, 2017 Homework 2: Search and Optimization 1 Introduction The Traveling Salesman Problem is a well-explored problem that has been shown to be NP-Complete.

More information

We approve the thesis of Gnanasekaran Sundarraj.

We approve the thesis of Gnanasekaran Sundarraj. We approve the thesis of Gnanasekaran Sundarraj. Date of Signature Thang N. Bui Associate Professor of Computer Science Chair, Mathematics and Computer Science Programs Thesis Advisor Sukmoon Chang Assistant

More information

DIPARTIMENTO DI ELETTRONICA - POLITECNICO DI MILANO

DIPARTIMENTO DI ELETTRONICA - POLITECNICO DI MILANO DIPARTIMENTO DI ELETTRONICA - POLITECNICO DI MILANO Positive feedback as a search strategy Marco Dorigo Vittorio Maniezzo Alberto Colorni Report n. 91-016 June 1991 2 Title: Positive feedback as a search

More information

Optimization of Makespan and Mean Flow Time for Job Shop Scheduling Problem FT06 Using ACO

Optimization of Makespan and Mean Flow Time for Job Shop Scheduling Problem FT06 Using ACO Optimization of Makespan and Mean Flow Time for Job Shop Scheduling Problem FT06 Using ACO Nasir Mehmood1, Muhammad Umer2, Dr. Riaz Ahmad3, Dr. Amer Farhan Rafique4 F. Author, Nasir Mehmood is with National

More information

Seminar on. A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm

Seminar on. A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm Seminar on A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm Mohammad Iftakher Uddin & Mohammad Mahfuzur Rahman Matrikel Nr: 9003357 Matrikel Nr : 9003358 Masters of

More information

Crew Scheduling Problem: A Column Generation Approach Improved by a Genetic Algorithm. Santos and Mateus (2007)

Crew Scheduling Problem: A Column Generation Approach Improved by a Genetic Algorithm. Santos and Mateus (2007) In the name of God Crew Scheduling Problem: A Column Generation Approach Improved by a Genetic Algorithm Spring 2009 Instructor: Dr. Masoud Yaghini Outlines Problem Definition Modeling As A Set Partitioning

More information

LECTURE 20: SWARM INTELLIGENCE 6 / ANT COLONY OPTIMIZATION 2

LECTURE 20: SWARM INTELLIGENCE 6 / ANT COLONY OPTIMIZATION 2 15-382 COLLECTIVE INTELLIGENCE - S18 LECTURE 20: SWARM INTELLIGENCE 6 / ANT COLONY OPTIMIZATION 2 INSTRUCTOR: GIANNI A. DI CARO ANT-ROUTING TABLE: COMBINING PHEROMONE AND HEURISTIC 2 STATE-TRANSITION:

More information

Modified Greedy Methodology to Solve Travelling Salesperson Problem Using Ant Colony Optimization and Comfort Factor

Modified Greedy Methodology to Solve Travelling Salesperson Problem Using Ant Colony Optimization and Comfort Factor International Journal of Scientific and Research Publications, Volume 4, Issue 10, October 2014 1 Modified Greedy Methodology to Solve Travelling Salesperson Problem Using Ant Colony Optimization and Comfort

More information

CAD Algorithms. Circuit Partitioning

CAD Algorithms. Circuit Partitioning CAD Algorithms Partitioning Mohammad Tehranipoor ECE Department 13 October 2008 1 Circuit Partitioning Partitioning: The process of decomposing a circuit/system into smaller subcircuits/subsystems, which

More information

Parallel Implementation of the Max_Min Ant System for the Travelling Salesman Problem on GPU

Parallel Implementation of the Max_Min Ant System for the Travelling Salesman Problem on GPU Parallel Implementation of the Max_Min Ant System for the Travelling Salesman Problem on GPU Gaurav Bhardwaj Department of Computer Science and Engineering Maulana Azad National Institute of Technology

More information

Non-deterministic Search techniques. Emma Hart

Non-deterministic Search techniques. Emma Hart Non-deterministic Search techniques Emma Hart Why do local search? Many real problems are too hard to solve with exact (deterministic) techniques Modern, non-deterministic techniques offer ways of getting

More information

Solving Travelling Salesmen Problem using Ant Colony Optimization Algorithm

Solving Travelling Salesmen Problem using Ant Colony Optimization Algorithm SCITECH Volume 3, Issue 1 RESEARCH ORGANISATION March 30, 2015 Journal of Information Sciences and Computing Technologies www.scitecresearch.com Solving Travelling Salesmen Problem using Ant Colony Optimization

More information

Improvement of a car racing controller by means of Ant Colony Optimization algorithms

Improvement of a car racing controller by means of Ant Colony Optimization algorithms Improvement of a car racing controller by means of Ant Colony Optimization algorithms Luis delaossa, José A. Gámez and Verónica López Abstract The performance of a car racing controller depends on many

More information

Optimizing the Sailing Route for Fixed Groundfish Survey Stations

Optimizing the Sailing Route for Fixed Groundfish Survey Stations International Council for the Exploration of the Sea CM 1996/D:17 Optimizing the Sailing Route for Fixed Groundfish Survey Stations Magnus Thor Jonsson Thomas Philip Runarsson Björn Ævar Steinarsson Presented

More information

Combinatorial Optimization - Lecture 14 - TSP EPFL

Combinatorial Optimization - Lecture 14 - TSP EPFL Combinatorial Optimization - Lecture 14 - TSP EPFL 2012 Plan Simple heuristics Alternative approaches Best heuristics: local search Lower bounds from LP Moats Simple Heuristics Nearest Neighbor (NN) Greedy

More information

Vertex Cover Approximations

Vertex Cover Approximations CS124 Lecture 20 Heuristics can be useful in practice, but sometimes we would like to have guarantees. Approximation algorithms give guarantees. It is worth keeping in mind that sometimes approximation

More information

An Ant Colony Optimization Algorithm for Solving Travelling Salesman Problem

An Ant Colony Optimization Algorithm for Solving Travelling Salesman Problem 1 An Ant Colony Optimization Algorithm for Solving Travelling Salesman Problem Krishna H. Hingrajiya, Ravindra Kumar Gupta, Gajendra Singh Chandel University of Rajiv Gandhi Proudyogiki Vishwavidyalaya,

More information

Mutations for Permutations

Mutations for Permutations Mutations for Permutations Insert mutation: Pick two allele values at random Move the second to follow the first, shifting the rest along to accommodate Note: this preserves most of the order and adjacency

More information

An Ant Approach to the Flow Shop Problem

An Ant Approach to the Flow Shop Problem An Ant Approach to the Flow Shop Problem Thomas Stützle TU Darmstadt, Computer Science Department Alexanderstr. 10, 64283 Darmstadt Phone: +49-6151-166651, Fax +49-6151-165326 email: stuetzle@informatik.tu-darmstadt.de

More information

Escaping Local Optima: Genetic Algorithm

Escaping Local Optima: Genetic Algorithm Artificial Intelligence Escaping Local Optima: Genetic Algorithm Dae-Won Kim School of Computer Science & Engineering Chung-Ang University We re trying to escape local optima To achieve this, we have learned

More information

Theorem 2.9: nearest addition algorithm

Theorem 2.9: nearest addition algorithm There are severe limits on our ability to compute near-optimal tours It is NP-complete to decide whether a given undirected =(,)has a Hamiltonian cycle An approximation algorithm for the TSP can be used

More information

Evolutionary Computation Algorithms for Cryptanalysis: A Study

Evolutionary Computation Algorithms for Cryptanalysis: A Study Evolutionary Computation Algorithms for Cryptanalysis: A Study Poonam Garg Information Technology and Management Dept. Institute of Management Technology Ghaziabad, India pgarg@imt.edu Abstract The cryptanalysis

More information

International Journal of Current Trends in Engineering & Technology Volume: 02, Issue: 01 (JAN-FAB 2016)

International Journal of Current Trends in Engineering & Technology Volume: 02, Issue: 01 (JAN-FAB 2016) Survey on Ant Colony Optimization Shweta Teckchandani, Prof. Kailash Patidar, Prof. Gajendra Singh Sri Satya Sai Institute of Science & Technology, Sehore Madhya Pradesh, India Abstract Although ant is

More information

Using a Divide and Conquer Method for Routing in a PC Vehicle Routing Application. Abstract

Using a Divide and Conquer Method for Routing in a PC Vehicle Routing Application. Abstract Using a Divide and Conquer Method for Routing in a PC Vehicle Routing Application Brenda Cheang Department of Management Information Systems University College Dublin Belfield, Dublin 4, Ireland. Sherlyn

More information

A Steady-State Genetic Algorithm for Traveling Salesman Problem with Pickup and Delivery

A Steady-State Genetic Algorithm for Traveling Salesman Problem with Pickup and Delivery A Steady-State Genetic Algorithm for Traveling Salesman Problem with Pickup and Delivery Monika Sharma 1, Deepak Sharma 2 1 Research Scholar Department of Computer Science and Engineering, NNSS SGI Samalkha,

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Informed Search and Exploration Chapter 4 (4.3 4.6) Searching: So Far We ve discussed how to build goal-based and utility-based agents that search to solve problems We ve also presented

More information

A Genetic Algorithm Applied to Graph Problems Involving Subsets of Vertices

A Genetic Algorithm Applied to Graph Problems Involving Subsets of Vertices A Genetic Algorithm Applied to Graph Problems Involving Subsets of Vertices Yaser Alkhalifah Roger L. Wainwright Department of Mathematical Department of Mathematical and Computer Sciences and Computer

More information

An Ant System with Direct Communication for the Capacitated Vehicle Routing Problem

An Ant System with Direct Communication for the Capacitated Vehicle Routing Problem An Ant System with Direct Communication for the Capacitated Vehicle Routing Problem Michalis Mavrovouniotis and Shengxiang Yang Abstract Ant colony optimization (ACO) algorithms are population-based algorithms

More information

Searching for Maximum Cliques with Ant Colony Optimization

Searching for Maximum Cliques with Ant Colony Optimization Searching for Maximum Cliques with Ant Colony Optimization Serge Fenet and Christine Solnon LIRIS, Nautibus, University Lyon I 43 Bd du 11 novembre, 69622 Villeurbanne cedex, France {sfenet,csolnon}@bat710.univ-lyon1.fr

More information

Review: Final Exam CPSC Artificial Intelligence Michael M. Richter

Review: Final Exam CPSC Artificial Intelligence Michael M. Richter Review: Final Exam Model for a Learning Step Learner initially Environm ent Teacher Compare s pe c ia l Information Control Correct Learning criteria Feedback changed Learner after Learning Learning by

More information

Genetic Algorithms with Oracle for the Traveling Salesman Problem

Genetic Algorithms with Oracle for the Traveling Salesman Problem PROCEEDINGS OF WORLD ACADEMY OF SCIENCE, ENGINEERING AND TECHNOLOGY VOLUME 7 AUGUST 25 ISSN 17-884 Genetic Algorithms with Oracle for the Traveling Salesman Problem Robin Gremlich, Andreas Hamfelt, Héctor

More information

Coloring 3-Colorable Graphs

Coloring 3-Colorable Graphs Coloring -Colorable Graphs Charles Jin April, 015 1 Introduction Graph coloring in general is an etremely easy-to-understand yet powerful tool. It has wide-ranging applications from register allocation

More information

CMPSCI611: The SUBSET-SUM Problem Lecture 18

CMPSCI611: The SUBSET-SUM Problem Lecture 18 CMPSCI611: The SUBSET-SUM Problem Lecture 18 We begin today with the problem we didn t get to at the end of last lecture the SUBSET-SUM problem, which we also saw back in Lecture 8. The input to SUBSET-

More information

SLS Methods: An Overview

SLS Methods: An Overview HEURSTC OPTMZATON SLS Methods: An Overview adapted from slides for SLS:FA, Chapter 2 Outline 1. Constructive Heuristics (Revisited) 2. terative mprovement (Revisited) 3. Simple SLS Methods 4. Hybrid SLS

More information

Metaheuristic Development Methodology. Fall 2009 Instructor: Dr. Masoud Yaghini

Metaheuristic Development Methodology. Fall 2009 Instructor: Dr. Masoud Yaghini Metaheuristic Development Methodology Fall 2009 Instructor: Dr. Masoud Yaghini Phases and Steps Phases and Steps Phase 1: Understanding Problem Step 1: State the Problem Step 2: Review of Existing Solution

More information

Jednociljna i višeciljna optimizacija korištenjem HUMANT algoritma

Jednociljna i višeciljna optimizacija korištenjem HUMANT algoritma Seminar doktoranada i poslijedoktoranada 2015. Dani FESB-a 2015., Split, 25. - 31. svibnja 2015. Jednociljna i višeciljna optimizacija korištenjem HUMANT algoritma (Single-Objective and Multi-Objective

More information

SWARM INTELLIGENCE -I

SWARM INTELLIGENCE -I SWARM INTELLIGENCE -I Swarm Intelligence Any attempt to design algorithms or distributed problem solving devices inspired by the collective behaviourof social insect colonies and other animal societies

More information

CHAPTER 4 GENETIC ALGORITHM

CHAPTER 4 GENETIC ALGORITHM 69 CHAPTER 4 GENETIC ALGORITHM 4.1 INTRODUCTION Genetic Algorithms (GAs) were first proposed by John Holland (Holland 1975) whose ideas were applied and expanded on by Goldberg (Goldberg 1989). GAs is

More information

TELCOM2125: Network Science and Analysis

TELCOM2125: Network Science and Analysis School of Information Sciences University of Pittsburgh TELCOM2125: Network Science and Analysis Konstantinos Pelechrinis Spring 2015 2 Part 4: Dividing Networks into Clusters The problem l Graph partitioning

More information

Learning Fuzzy Rules Using Ant Colony Optimization Algorithms 1

Learning Fuzzy Rules Using Ant Colony Optimization Algorithms 1 Learning Fuzzy Rules Using Ant Colony Optimization Algorithms 1 Jorge Casillas, Oscar Cordón, Francisco Herrera Department of Computer Science and Artificial Intelligence, University of Granada, E-18071

More information

A Genetic Algorithm for Graph Matching using Graph Node Characteristics 1 2

A Genetic Algorithm for Graph Matching using Graph Node Characteristics 1 2 Chapter 5 A Genetic Algorithm for Graph Matching using Graph Node Characteristics 1 2 Graph Matching has attracted the exploration of applying new computing paradigms because of the large number of applications

More information

A Review: Optimization of Energy in Wireless Sensor Networks

A Review: Optimization of Energy in Wireless Sensor Networks A Review: Optimization of Energy in Wireless Sensor Networks Anjali 1, Navpreet Kaur 2 1 Department of Electronics & Communication, M.Tech Scholar, Lovely Professional University, Punjab, India 2Department

More information

Computational problems. Lecture 2: Combinatorial search and optimisation problems. Computational problems. Examples. Example

Computational problems. Lecture 2: Combinatorial search and optimisation problems. Computational problems. Examples. Example Lecture 2: Combinatorial search and optimisation problems Different types of computational problems Examples of computational problems Relationships between problems Computational properties of different

More information

Introduction to Approximation Algorithms

Introduction to Approximation Algorithms Introduction to Approximation Algorithms Dr. Gautam K. Das Departmet of Mathematics Indian Institute of Technology Guwahati, India gkd@iitg.ernet.in February 19, 2016 Outline of the lecture Background

More information

An Ant Colony Optimization Meta-Heuristic for Subset Selection Problems

An Ant Colony Optimization Meta-Heuristic for Subset Selection Problems Chapter I An Ant Colony Optimization Meta-Heuristic for Subset Selection Problems Christine Solnon I.1 Derek Bridge I.2 Subset selection problems involve finding an optimal feasible subset of an initial

More information

Lecture 13: Minimum Spanning Trees Steven Skiena

Lecture 13: Minimum Spanning Trees Steven Skiena Lecture 13: Minimum Spanning Trees Steven Skiena Department of Computer Science State University of New York Stony Brook, NY 11794 4400 http://www.cs.stonybrook.edu/ skiena Problem of the Day Your job

More information

Ant Colony Optimisation and Local Search for Bin Packing and Cutting Stock Problems

Ant Colony Optimisation and Local Search for Bin Packing and Cutting Stock Problems Ant Colony Optimisation and Local Search for Bin Packing and Cutting Stock Problems John Levine and Frederick Ducatelle Centre for Intelligent Systems and their Applications School of Informatics, University

More information

GENETIC ALGORITHM with Hands-On exercise

GENETIC ALGORITHM with Hands-On exercise GENETIC ALGORITHM with Hands-On exercise Adopted From Lecture by Michael Negnevitsky, Electrical Engineering & Computer Science University of Tasmania 1 Objective To understand the processes ie. GAs Basic

More information

MULTILEVEL OPTIMIZATION OF GRAPH BISECTION WITH PHEROMONES

MULTILEVEL OPTIMIZATION OF GRAPH BISECTION WITH PHEROMONES MULTILEVEL OPTIMIZATION OF GRAPH BISECTION WITH PHEROMONES Peter Korošec Computer Systems Department Jožef Stefan Institute, Ljubljana, Slovenia peter.korosec@ijs.si Jurij Šilc Computer Systems Department

More information

6. Algorithm Design Techniques

6. Algorithm Design Techniques 6. Algorithm Design Techniques 6. Algorithm Design Techniques 6.1 Greedy algorithms 6.2 Divide and conquer 6.3 Dynamic Programming 6.4 Randomized Algorithms 6.5 Backtracking Algorithms Malek Mouhoub, CS340

More information

Evolutionary Algorithms. CS Evolutionary Algorithms 1

Evolutionary Algorithms. CS Evolutionary Algorithms 1 Evolutionary Algorithms CS 478 - Evolutionary Algorithms 1 Evolutionary Computation/Algorithms Genetic Algorithms l Simulate natural evolution of structures via selection and reproduction, based on performance

More information

Comparison of TSP Algorithms

Comparison of TSP Algorithms Comparison of TSP Algorithms Project for Models in Facilities Planning and Materials Handling December 1998 Participants: Byung-In Kim Jae-Ik Shim Min Zhang Executive Summary Our purpose in this term project

More information

On Covering a Graph Optimally with Induced Subgraphs

On Covering a Graph Optimally with Induced Subgraphs On Covering a Graph Optimally with Induced Subgraphs Shripad Thite April 1, 006 Abstract We consider the problem of covering a graph with a given number of induced subgraphs so that the maximum number

More information

A New Selection Operator - CSM in Genetic Algorithms for Solving the TSP

A New Selection Operator - CSM in Genetic Algorithms for Solving the TSP A New Selection Operator - CSM in Genetic Algorithms for Solving the TSP Wael Raef Alkhayri Fahed Al duwairi High School Aljabereyah, Kuwait Suhail Sami Owais Applied Science Private University Amman,

More information

Accelerating Ant Colony Optimization for the Vertex Coloring Problem on the GPU

Accelerating Ant Colony Optimization for the Vertex Coloring Problem on the GPU Accelerating Ant Colony Optimization for the Vertex Coloring Problem on the GPU Ryouhei Murooka, Yasuaki Ito, and Koji Nakano Department of Information Engineering, Hiroshima University Kagamiyama 1-4-1,

More information

Solving Traveling Salesman Problem for Large Spaces using Modified Meta- Optimization Genetic Algorithm

Solving Traveling Salesman Problem for Large Spaces using Modified Meta- Optimization Genetic Algorithm Solving Traveling Salesman Problem for Large Spaces using Modified Meta- Optimization Genetic Algorithm Maad M. Mijwel Computer science, college of science, Baghdad University Baghdad, Iraq maadalnaimiy@yahoo.com

More information

Intuitionistic Fuzzy Estimations of the Ant Colony Optimization

Intuitionistic Fuzzy Estimations of the Ant Colony Optimization Intuitionistic Fuzzy Estimations of the Ant Colony Optimization Stefka Fidanova, Krasimir Atanasov and Pencho Marinov IPP BAS, Acad. G. Bonchev str. bl.25a, 1113 Sofia, Bulgaria {stefka,pencho}@parallel.bas.bg

More information

val(y, I) α (9.0.2) α (9.0.3)

val(y, I) α (9.0.2) α (9.0.3) CS787: Advanced Algorithms Lecture 9: Approximation Algorithms In this lecture we will discuss some NP-complete optimization problems and give algorithms for solving them that produce a nearly optimal,

More information

Notes for Lecture 24

Notes for Lecture 24 U.C. Berkeley CS170: Intro to CS Theory Handout N24 Professor Luca Trevisan December 4, 2001 Notes for Lecture 24 1 Some NP-complete Numerical Problems 1.1 Subset Sum The Subset Sum problem is defined

More information

RESEARCH OF COMBINATORIAL OPTIMIZATION PROBLEM BASED ON GENETIC ANT COLONY ALGORITHM

RESEARCH OF COMBINATORIAL OPTIMIZATION PROBLEM BASED ON GENETIC ANT COLONY ALGORITHM RESEARCH OF COMBINATORIAL OPTIMIZATION PROBLEM BASED ON GENETIC ANT COLONY ALGORITHM ZEYU SUN, 2 ZHENPING LI Computer and Information engineering department, Luoyang Institute of Science and Technology,

More information

Scalability of a parallel implementation of ant colony optimization

Scalability of a parallel implementation of ant colony optimization SEMINAR PAPER at the University of Applied Sciences Technikum Wien Game Engineering and Simulation Scalability of a parallel implementation of ant colony optimization by Emanuel Plochberger,BSc 3481, Fels

More information

Memory-Based Immigrants for Ant Colony Optimization in Changing Environments

Memory-Based Immigrants for Ant Colony Optimization in Changing Environments Memory-Based Immigrants for Ant Colony Optimization in Changing Environments Michalis Mavrovouniotis 1 and Shengxiang Yang 2 1 Department of Computer Science, University of Leicester University Road, Leicester

More information

Chapter 9 Graph Algorithms

Chapter 9 Graph Algorithms Chapter 9 Graph Algorithms 2 Introduction graph theory useful in practice represent many real-life problems can be slow if not careful with data structures 3 Definitions an undirected graph G = (V, E)

More information

A CLUSTERING APPROACH TO THE BOUNDED DIAMETER MINIMUM SPANNING TREE PROBLEM USING ANTS

A CLUSTERING APPROACH TO THE BOUNDED DIAMETER MINIMUM SPANNING TREE PROBLEM USING ANTS The Pennsylvania State University The Graduate School A CLUSTERING APPROACH TO THE BOUNDED DIAMETER MINIMUM SPANNING TREE PROBLEM USING ANTS A Thesis in Computer Science by Tyler Derr 2015 Tyler Derr Submitted

More information

1. Introduction. 2. Motivation and Problem Definition. Volume 8 Issue 2, February Susmita Mohapatra

1. Introduction. 2. Motivation and Problem Definition. Volume 8 Issue 2, February Susmita Mohapatra Pattern Recall Analysis of the Hopfield Neural Network with a Genetic Algorithm Susmita Mohapatra Department of Computer Science, Utkal University, India Abstract: This paper is focused on the implementation

More information

11/22/2016. Chapter 9 Graph Algorithms. Introduction. Definitions. Definitions. Definitions. Definitions

11/22/2016. Chapter 9 Graph Algorithms. Introduction. Definitions. Definitions. Definitions. Definitions Introduction Chapter 9 Graph Algorithms graph theory useful in practice represent many real-life problems can be slow if not careful with data structures 2 Definitions an undirected graph G = (V, E) is

More information

1 Non greedy algorithms (which we should have covered

1 Non greedy algorithms (which we should have covered 1 Non greedy algorithms (which we should have covered earlier) 1.1 Floyd Warshall algorithm This algorithm solves the all-pairs shortest paths problem, which is a problem where we want to find the shortest

More information

What Secret the Bisection Method Hides? by Namir Clement Shammas

What Secret the Bisection Method Hides? by Namir Clement Shammas What Secret the Bisection Method Hides? 1 What Secret the Bisection Method Hides? by Namir Clement Shammas Introduction Over the past few years I have modified the simple root-seeking Bisection Method

More information

We have already seen the transportation problem and the assignment problem. Let us take the transportation problem, first.

We have already seen the transportation problem and the assignment problem. Let us take the transportation problem, first. Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture 19 Network Models In this lecture, we will discuss network models. (Refer

More information

CHAPTER 6 HYBRID AI BASED IMAGE CLASSIFICATION TECHNIQUES

CHAPTER 6 HYBRID AI BASED IMAGE CLASSIFICATION TECHNIQUES CHAPTER 6 HYBRID AI BASED IMAGE CLASSIFICATION TECHNIQUES 6.1 INTRODUCTION The exploration of applications of ANN for image classification has yielded satisfactory results. But, the scope for improving

More information