CHAPTER 6 DEVELOPMENT OF PARTICLE SWARM OPTIMIZATION BASED ALGORITHM FOR GRAPH PARTITIONING


6.1 Introduction

From the review, it is observed that the min-cut k-partitioning problem is a fundamental partitioning problem and is also NP-hard. Most of the existing partitioning algorithms are heuristic in nature and try to find a reasonably good solution. These algorithms fall into the move-based category, in which a solution is generated iteratively from an initial solution by applying moves to the current solution. Most frequently, these move-based approaches are combined with stochastic algorithms. In this chapter, we have developed a Multilevel Recursive Discrete Particle Swarm Optimization (MRDPSO) technique which integrates a new DPSO-based refinement approach and an efficient matching-based coarsening scheme for solving the GPP.

6.2 Discrete Particle Swarm Optimization

The original PSO algorithm optimizes problems in which the elements of the solution space are continuous. Since many real-life applications require optimization over a discrete-valued space, Kennedy et al. [163] developed discrete PSO (DPSO) to solve discrete optimization problems. The PSO algorithm regulates the trajectories of the population's particles through a problem space using information about the previous best performance of each particle and its neighbours.

In PSO, trajectories represent changes in positions over some number of dimensions, whereas in DPSO particles operate on a discrete search space and trajectories represent variations in the probability that a coordinate takes a value from the admissible discrete values. In this method, each particle specifies a potential solution composed of k elements. The fitness function is used to evaluate the quality of the solution. Each particle is considered as a position in k-dimensional space, and each element of a particle is restricted to 0 and 1, one state representing inclusion and the other exclusion. Each element can change from 0 to 1 and vice versa. Additionally, each particle has a k-dimensional velocity whose components lie in the range [-Vmax, Vmax]. Velocities are interpreted as probabilities that a bit will be in one state or the other. At the initial stage of the algorithm, the particles and their velocity vectors are generated randomly. Then, over a certain number of iterations, the algorithm aims to find optimal or near-optimal solutions with respect to a predefined fitness function. At each iteration, the velocity vector is updated using the best positions p_best and n_best, and then the position of each particle is updated using its velocity vector. p_best and n_best are k-dimensional vectors composed of 0s and 1s which act as the memory of the algorithm. Since the initial step of the algorithm, the best position that a particle has visited is p_best, and the best position that the particle and its neighbours have visited is n_best. Depending on the size of the neighbourhood, two distinct PSO algorithms can be developed. If the entire swarm is considered as the neighbourhood of a particle, then n_best is called the global best (g_best), whereas if a smaller neighbourhood is defined for each particle, then n_best is called the local best (l_best).

The star neighbourhood and the ring neighbourhood are the topologies used by g_best and l_best, respectively. PSO based on g_best converges faster than l_best-based PSO due to its larger particle connectivity, but l_best-based PSO is less vulnerable to being trapped in local minima. The velocity and position of a particle are updated using the following equations:

    v_mn(t+1) = v_mn(t) + c1 r1 (p_best,mn(t) - x_mn(t)) + c2 r2 (n_best,mn(t) - x_mn(t))   (6.1)

    x_mn(t+1) = 1 if Sig(v_mn(t+1)) > ρ, and 0 otherwise   (6.2)

The sigmoid function is given by the relation

    Sig(v_mn(t)) = 1 / (1 + e^(-v_mn(t)))   (6.3)

where x_mn(t) is the n-th element of the m-th particle in the t-th iteration of the algorithm, v_mn(t) is the n-th element of the velocity vector of the m-th particle in the t-th iteration, c1 and c2 are positive acceleration constants that control the influence of p_best and n_best on the search process, r1, r2 ∈ [0, 1] are random values sampled from a uniform distribution, and ρ ∈ [0, 1] is a random number. The stopping criterion for DPSO can be a maximum number of iterations, reaching an acceptable solution, or no further improvement over a number of iterations.
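
As an illustration of equations (6.1)-(6.3), the following is a minimal Python sketch of one DPSO update step for a single particle; the function names, the velocity clamp and the default constants are illustrative assumptions rather than values taken from the text.

    import math
    import random

    def sigmoid(v):
        # Equation (6.3): maps a velocity component to a probability in (0, 1).
        return 1.0 / (1.0 + math.exp(-v))

    def dpso_step(x, v, p_best, n_best, c1=0.5, c2=0.5, v_max=4.0):
        """One DPSO update of a single particle.
        x, p_best, n_best are bit lists of length k; v is the real-valued velocity vector."""
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            # Equation (6.1): velocity pulled towards the personal and neighbourhood bests.
            v[d] = v[d] + c1 * r1 * (p_best[d] - x[d]) + c2 * r2 * (n_best[d] - x[d])
            # Keep the velocity inside [-v_max, v_max].
            v[d] = max(-v_max, min(v_max, v[d]))
            # Equation (6.2): the bit becomes 1 with probability Sig(v[d]).
            x[d] = 1 if sigmoid(v[d]) > random.random() else 0
        return x, v

Repeating this step for every particle, and refreshing p_best and n_best from the fitness values in between, gives the loop structure listed next.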

The Discrete Particle Swarm Optimization algorithm is:

    Create and initialize a k-dimensional swarm with N particles
    repeat
        for each particle i = 1, 2, ..., N do
            if f(x_i) > f(p_best,i) then      // f represents the fitness function
                p_best,i = x_i;
            end if
            if f(p_best,i) > f(n_best,i) then
                n_best,i = p_best,i;
            end if
        end for
        for each particle i = 1, 2, ..., N do
            update the velocity vector using equation (6.1);
            update the position vector using equation (6.2);
        end for
    until the stopping criterion is satisfied

6.3 DPSO-based Mathematical Model of the Graph Partitioning Problem

Let G = (V, E) be a weighted undirected graph with vertex set V and edge set E, let w(e) denote the weight of an edge e ∈ E, and let k be a natural number greater than 1. The graph partitioning problem (GPP) is to partition the vertex set V into k blocks V_1, V_2, ..., V_k such that V_1 ∪ V_2 ∪ ... ∪ V_k = V and V_i ∩ V_j = ∅ for i ≠ j. If all blocks have the same weight, then the partition is balanced. For balanced partitioning,

    w(V_i) ≤ (1 + ε) ⌈w(V)/k⌉   for i ∈ {1, 2, ..., k}.

If ε = 0, then the partitioning is perfectly balanced. If w(V_i) > (1 + ε) ⌈w(V)/k⌉, then the block V_i is overloaded.
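
As a concrete reading of this balance constraint, here is a minimal Python sketch that checks whether a k-way partition is balanced; unit vertex weights are assumed unless a weight map is supplied, and all names are illustrative.

    import math

    def is_balanced(blocks, eps=0.0, weight=None):
        """blocks: list of k collections of vertices forming a partition of V.
        Returns True if every block satisfies w(V_i) <= (1 + eps) * ceil(w(V) / k)."""
        w = (lambda v: 1) if weight is None else (lambda v: weight[v])
        block_weights = [sum(w(v) for v in block) for block in blocks]
        l_max = (1 + eps) * math.ceil(sum(block_weights) / len(blocks))
        return all(bw <= l_max for bw in block_weights)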

The objective of the graph partitioning problem is to minimize the cut of the partition. The cut between two disjoint vertex sets A and B, both subsets of V, is

    cut(A, B) = Σ_{(u,v) ∈ E, u ∈ A, v ∈ B} w(u, v)   (6.4)

Selection of Discrete PSO for GPP

We have developed a technique based on the discrete particle swarm optimization (DPSO) algorithm to explore good-quality approximate solutions of the min-cut partitioning problem for a graph G = (V, E) with p vertices and q edges. In this problem, each particle of the swarm is considered as a partitioning vector, so the solution space is p-dimensional. Obviously, cut(A, B) is the fitness function to be minimized. If the size of the swarm is N, then the position of the n-th particle is represented by a p-dimensional vector X_n = (x_n1, x_n2, ..., x_np), n ∈ {1, 2, ..., N}, where each element x_nd represents the d-th dimension of the position vector and is restricted to the values zero and one. The velocity of this particle is denoted by another p-dimensional vector V_n = (v_n1, v_n2, ..., v_np), in which each element indicates the probability of the corresponding bit taking the value one. The best formerly visited position of the n-th particle is represented by the vector P_n = (p_n1, p_n2, ..., p_np). The best formerly visited position of the swarm is represented by the vector P_g = (p_g1, p_g2, ..., p_gp), where g is the index of the best particle in the swarm. Let k be the iteration number; then the velocity and position updates used in our DPSO are defined by

    V_n^(k+1) = ω · V_n^(k) + U[0, c1] ⊗ (P_n^(k) - X_n^(k)) + U[0, c2] ⊗ (P_g^(k) - X_n^(k))   (6.5)

    x_nd^(k+1) = 1 if Sig(v_nd^(k+1)) > ρ, and 0 otherwise   (6.6)

The sigmoid function is

    Sig(v) = 1 / (1 + e^(-v))   (6.7)

Here U[0, c] is a function following the uniform distribution which returns vectors whose components are randomly selected from [0, c], ⊗ represents the point-wise multiplication of vectors, ω is the inertia weight, and c1 and c2 are positive constants called the cognitive and social parameters, respectively. ρ ∈ [0, 1] is a random number drawn from a uniform distribution. The standard discrete PSO is not a very effective approach on its own, since the space of feasible solutions of the min-cut partitioning problem is excessively large, particularly when the number of vertices runs into the thousands. Hence, we have combined the multilevel partitioning method with discrete PSO and developed the multilevel recursive discrete particle swarm optimization algorithm (MRDPSO) for min-cut partitioning. The MRDPSO algorithm is applied during the refinement phase of the multilevel method; it improves the quality of the level-m graph G_m using a boundary refinement policy for the partitioning P_Gm. In addition to this, discrete PSO is coupled with local search optimization: a hybrid PSO local search is developed which applies the Fiduccia-Mattheyses algorithm [55] to each particle to reach a local optimum and maintain the balance constraint of the partitioning. This hybrid approach permits the search process to escape from local minima, and at the same time it reinforces efficiency and achieves a noteworthy speedup for balanced partitioning.
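
The fitness minimized throughout is the cut of equation (6.4). A minimal sketch of its evaluation for a bisection encoded as a 0/1 position vector follows; the edge-list data layout is an assumption of the sketch.

    def cut_value(edges, part):
        """edges: iterable of (u, v, w) triples; part maps each vertex to 0 or 1."""
        return sum(w for u, v, w in edges if part[u] != part[v])

    # Example: a triangle with one heavy edge; the bisection cuts the two unit edges.
    print(cut_value([(0, 1, 5), (1, 2, 1), (0, 2, 1)], [0, 0, 1]))  # prints 2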

6.4 Core Sort Heavy Edge Matching

Sorted heavy edge matching (SHEM) sorts the vertices of the graph in ascending order of their degrees to decide the order in which they are visited for matching. We match a vertex u with a vertex v if there is an edge of maximum weight from u to the unmatched vertex v. Highly connected groups of vertices are identified and collapsed together in SHEM; this method has been discussed in detail in an earlier section.

The core number of a vertex is the maximum order of a core that contains that vertex [164]. To find the core numbers, the vertices of the graph are arranged in ascending order of degree; then, for each vertex v, the vertices adjacent to v whose degree is greater than that of v are identified, and the degree of each such vertex is reduced by 1. The process continues until every vertex has received a core number. The concept of graph cores for coarsening power-law graphs was introduced in [165].

An algorithm for finding the Core Number:

    Input: Graph G = (V, E)
    Output: Table core[] giving the core number of each vertex
    begin
        compute the degrees of all vertices;
        arrange the vertices of V in ascending order of their degrees;
        for each v in V, in this order, do
            core[v] := degree[v];
            for each u in Adj(v) do
                if degree[u] > degree[v] then
                    degree[u] := degree[u] - 1;
                    reorder V accordingly;
                end if
            end for
        end for
        return the table core[] with the core number of each vertex
    end
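
A compact Python sketch of the core-number computation just described (a simple re-sorting variant rather than the linear-time bucket version; the dictionary-based representation is an assumption of the sketch):

    def core_numbers(adj):
        """adj: dict mapping each vertex to the set of its neighbours (undirected graph).
        Returns a dict with the core number of every vertex."""
        degree = {v: len(adj[v]) for v in adj}
        core = {}
        remaining = set(adj)
        while remaining:
            v = min(remaining, key=lambda u: degree[u])  # next vertex in ascending degree order
            core[v] = degree[v]
            remaining.remove(v)
            for u in adj[v]:
                # Lowering the degree simulates removing v from the graph.
                if u in remaining and degree[u] > degree[v]:
                    degree[u] -= 1
        return core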

The core sort heavy edge matching algorithm (CSHEM) combines SHEM with the concept of core sorting. Initially, to decide the order in which vertices are visited for matching, CSHEM sorts the vertices of the graph in descending order of their core numbers. During the matching process, if a tie occurs according to edge weights, then the vertex with the highest core number is selected.

An algorithm for Core Sort Heavy Edge Matching:

    Input: Graph G = (V, E)
    Output: Coarsened graph G' = (V', E')
    procedure coarsening(G = (V, E))
        while V ≠ ∅ do
            select the next vertex u according to the visiting order
            select the edge (u, v) of maximal weight to an unmatched vertex v
            remove u, v from V
            add the new vertex z = {u, v}
            for each edge (v, x) in E do
                if (u, x) ∈ E then
                    w(u, x) := w(u, x) + w(v, x)
                    remove (v, x) from E
                end if
            end for
        end while
        return G' = (V', E')
    end procedure
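
A minimal sketch of the matching pass of CSHEM, visiting vertices in descending core-number order and pairing each one with its heaviest unmatched neighbour, ties being broken by the neighbour's core number; the dict-of-dicts layout and the function names are assumptions of the sketch.

    def core_sort_heavy_edge_matching(adj, core):
        """adj: dict of dicts with adj[u][v] = weight of edge (u, v); core: core numbers.
        Returns a list of matched pairs; unmatched vertices are simply copied
        to the coarser graph by the caller."""
        matched, pairs = set(), []
        for u in sorted(adj, key=lambda v: core[v], reverse=True):   # descending core order
            if u in matched:
                continue
            candidates = [v for v in adj[u] if v not in matched]
            if candidates:
                # Heaviest edge wins; ties go to the neighbour with the higher core number.
                v = max(candidates, key=lambda c: (adj[u][c], core[c]))
                pairs.append((u, v))
                matched.update((u, v))
        return pairs

Collapsing each matched pair into a single coarse vertex and summing the weights of parallel edges, as in the listing above, then produces the next-level graph.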

6.5 Greedy Graph Growing Partition (GGGP)

The graph growing partition algorithm (GGP) introduced by Karypis [17] iteratively generates a set A of vertices consisting of half of the graph, i.e., such that w(A) = w(V)/2. During the execution of the algorithm, the vertices of the graph are divided into three sets A, B and C, where B contains the vertices adjacent to vertices of set A (called the border of A) and C contains the remaining vertices. Set A is initialized by randomly selecting any vertex of the graph; at each iteration the vertices adjacent to A are added to A, so that A = A ∪ B. The process ends when A contains a set of vertices representing half the total weight of the graph. The partition generated by GGP is strongly dependent on a proper choice of the initial vertex.

The GGP algorithm has been improved by using the concept of the cut gain [18] of a vertex, giving the greedy graph growing partition (GGGP). In GGGP as well, the vertices of the graph are divided into three sets A, B and C as in the GGP algorithm. A is initialized by randomly selecting any vertex from V, and the sets B and C are then initialized accordingly. To grow the set A, the vertex of maximal gain in the border B (say u) is selected and added to A. After that, each vertex in set C which is adjacent to u is moved to the set B and its gain is calculated; similarly, the gain of each vertex in B that is adjacent to u is recalculated, and the next iteration starts. The process continues until the weight of set A reaches half of the total weight; the algorithm ends when w(A) = w(V)/2. GGGP generates good results for any choice of the initial vertex.
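
The formal listing of GGGP is given below; as a concrete illustration first, here is a minimal Python sketch of the gain-driven growth, assuming unit vertex weights and a dict-of-dicts weighted adjacency (both assumptions of the sketch, not of the algorithm as published).

    def gggp_bisect(adj, start):
        """Grow a block A from `start` until it holds half of the (unit) vertex weight.
        gain(v) = weight of edges from v into A minus weight of edges from v to the rest,
        i.e. the decrease of the cut if v joins A."""
        target = len(adj) / 2.0
        A = {start}
        gain = {v: adj[v][start] - sum(w for x, w in adj[v].items() if x != start)
                for v in adj[start]}                      # gains of the border set B
        while len(A) < target and gain:
            u = max(gain, key=gain.get)                   # border vertex of maximal gain
            del gain[u]
            A.add(u)
            for x in adj[u]:
                if x in A:
                    continue
                if x in gain:                             # already on the border: adjust its gain
                    gain[x] += 2 * adj[x][u]
                else:                                     # was in C: it becomes a border vertex
                    gain[x] = sum(w if y in A else -w for y, w in adj[x].items())
        return A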

An algorithm for Greedy Graph Growing Partition:

    Input: Graph G = (V, E)
    Output: Set A such that w(A) = w(V)/2
    randomly select a vertex v
    A := {v}
    B := {u | (u, v) ∈ E}
    C := V \ (A ∪ B)
    compute the gains of the vertices in B
    while w(A) < w(V)/2 do
        select the vertex u ∈ B of maximal gain
        move u from B to A
        for each edge (u, x) in E do
            if x ∈ B then
                update the gain of x
            else if x ∈ C then
                move x to B and compute the gain of x
            end if
        end for
    end while
    return A
    end procedure

6.6 Multilevel Recursive Discrete Particle Swarm Optimization (MRDPSO) Algorithm

To develop an algorithm that achieves superior-quality approximate solutions of the min-cut partitioning problem for graphs, we have used a hybrid approach in which the multilevel partitioning method is combined with DPSO.

Our multilevel recursive discrete particle swarm optimization algorithm (MRDPSO) works in three steps. The first step is the initial partitioning phase, in which MRDPSO initializes the population on the smallest graph. In the second step, i.e., during the refinement phase, it successively projects all the particles back to the next-level finer graph. In the third step, it recursively partitions the bisected graph into k parts. For the initial partitioning of the given graph G = (V, E), we have used the GGGP approach. An effective matching-based coarsening scheme is applied during the coarsening phase: the CSHEM algorithm is used on the original graph and SHEM is applied to the coarsened graphs. For graphs with a small number of vertices, the use of core numbers yields the same matching as SHEM; hence, for graphs with fewer than fifteen vertices, SHEM is applied directly to the original graph.

The position vector, velocity vector and personal best vector on the level-m graph G_m = (V_m, E_m) for the n-th particle are X_n, V_n and P_n, respectively. MRDPSO initializes the population on the coarsest graph in the initial partitioning phase and successively projects all the particles X_1, X_2, ..., X_N back to the next-level finer graph. In the next stage, the internal and external weights for each particle are determined. The internal weight of a vertex x for the n-th particle, denoted ID_n(x), is the sum of the weights of the edges incident on x whose other endpoint lies inside the block of x, and the external weight, denoted ED_n(x), is the sum of the weights of the edges incident on x whose other endpoint lies outside the block of x:

    ID_n(x) = Σ_{(x,y) ∈ E, y in the same block as x} w(x, y)   (6.8)

    ED_n(x) = Σ_{(x,y) ∈ E, y in a different block from x} w(x, y)   (6.9)

For each particle, the boundary vertices, i.e., those with positive external weight, are stored in a boundary hash table. The initialization phase starts at time t = 0, in which the internal weights, external weights and boundary hash table are computed. The structure of MRDPSO consists of a nested loop. The stopping criterion is controlled by the outer loop, which determines whether MRDPSO has run for the maximum number of cycles. The velocity of each particle is adjusted by the boundary refinement policy in the inner loop of MRDPSO. A maximum value is enforced on the velocity: if a velocity component exceeds the threshold value, it is set to that threshold. Vertices are then moved between the partitions and each particle's position is updated; this movement may break the balance constraint. To maintain the balance constraint, we prefer to select the vertex with the highest gain to move from the larger block. During these moves, it is very important to inspect and compare boundary vertices using a data structure that permits storing, recovering and updating the gains of vertices quickly; here we have used a last-in first-out scheme to achieve an efficient MRDPSO. The internal and external weights of a particle play an important role in computing gains and boundary vertices, which keeps the implementation of MRDPSO simple. At each iteration, the weights of all the neighbouring vertices of a moved vertex are updated to maintain the consistency of the internal and external weights. The boundary hash table is also updated to reflect the changes in the partitioning. In the third step, we apply a recursive algorithm for the k-way partitioning of the bisected graph generated in the first two steps of MRDPSO.
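
To make the refinement bookkeeping concrete, here is a minimal sketch of the internal and external weights of equations (6.8) and (6.9) and of the resulting move gain; the dict-of-dicts adjacency and the 0/1 partition map are assumptions of the sketch.

    def id_ed(adj, part, x):
        """Internal and external weight of vertex x, equations (6.8) and (6.9)."""
        internal = sum(w for y, w in adj[x].items() if part[y] == part[x])
        external = sum(w for y, w in adj[x].items() if part[y] != part[x])
        return internal, external

    def boundary_vertices(adj, part):
        """Vertices with positive external weight, i.e. the keys of the boundary hash table."""
        return {x for x in adj if id_ed(adj, part, x)[1] > 0}

    def move_gain(adj, part, x):
        """Reduction of the cut obtained by moving x to the other block."""
        internal, external = id_ed(adj, part, x)
        return external - internal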

6.7 Results and Discussion

To assess the performance of the developed multilevel recursive discrete particle swarm optimization (MRDPSO), we carried out experiments on ordinary graphs as well as hypergraphs. For the former, we have used the Walshaw Graph Partitioning Test Bench [82], characterized in Table 6.1.

Table 6.1: Walshaw Graph Partitioning Test Benchmark. For each graph the table lists its size and its maximum, minimum and average degree; the benchmark consists of the graphs add20, data, 3elt, uk, add32, bcsstk33, whitaker3, crack, wing-nodal, fe_4elt2, vibrobox, bcsstk29, 4elt, fe_sphere, cti, memplus, cs4, bcsstk30, bcsstk31, fe_pwt, bcsstk32, fe_body, t60k, wing, brack2, finan512, fe_tooth, fe_rotor, 598a and fe_ocean.

For hypergraphs, the ISPD98 benchmark suite [166] is used. The details of these hypergraphs are given in Table 6.2.

Table 6.2: ISPD98 Benchmark Suite. The table lists the number of vertices and the number of hyperedges of each of the eighteen circuits ibm01 through ibm18.

Performance Evaluation for Ordinary Graphs

We have compared the partitions obtained by our MRDPSO algorithm, within a short computing time limited to three minutes, with the state-of-the-art graph partitioning packages METIS [167] and CHACO [168] and with the multilevel iterated tabu search (MITS) [169].

Additionally, we have also compared our results with the best partitions ever reported in the partitioning archive. The benchmark graphs along with their characteristics are listed in Table 6.1. We have used the multilevel p-METIS algorithm for METIS, and for CHACO the multilevel KL algorithm with recursive bisection is chosen. For MITS, the parameter values are 0.1, 0.01 and 0.02, with the coarsening threshold being 100. Since METIS and CHACO do not permit randomized repetitions of the algorithm, we chose to run MRDPSO only once. Furthermore, the optimal set of parameter values chosen for MRDPSO is ω = 1, c1 = c2 = 0.5, Vmax = 4, with a swarm of 30 particles and 20 cycles. The cutoff limit varies with the size of the graph, from one second for graphs with up to 4000 vertices to three minutes for the largest graphs. In the results, METIS, CHACO, tabu search and our approach are labelled p-METIS, CHACO, MITS and MRDPSO, respectively.

The cut values for the partitions of each graph for different values of k are given in Tables 6.3, 6.4, 6.5 and 6.6. If a partition is perfectly balanced, then the balance constraint has the value 1. In situations where the partition is not perfectly balanced, the balance value is also given in parentheses along with the cut value. The number of times each algorithm produces the best partition over the benchmark graphs is given in the last row of each table. The CSHEM scheme used for coarsening in MRDPSO greatly reduces the size of the graph, which helps GGGP to generate a better balanced partition, while the use of DPSO at the most complex and time-consuming refinement stage helps to reduce the cut value even within a small cutoff limit. Hence, from the results we can observe that our MRDPSO approach performs very well in terms of cut value compared with the other three approaches. We can also observe that for k = 32 and 64 a few partitions produced by p-METIS, MITS and our approach are not perfectly balanced, but for our approach this imbalance occurs far less often than for the other two.

Table 6.3: Comparison of MRDPSO with p-METIS, CHACO and MITS for k = 2 and 4. For each benchmark graph the table reports the cut value obtained by p-METIS, CHACO, MITS and MRDPSO; the last row gives the number of graphs on which each method obtains the best partition.

Table 6.4: Comparison of MRDPSO with p-METIS, CHACO and MITS for k = 8 and 16 (same layout as Table 6.3).

Table 6.5: Comparison of MRDPSO with p-METIS, CHACO and MITS for k = 32 (same layout as Table 6.3; imbalanced partitions carry their balance value in parentheses).

Table 6.6: Comparison of MRDPSO with p-METIS, CHACO and MITS for k = 64 (same layout as Table 6.5).

As will be noticed from the second experiment, for some graphs more computational time, and hence more iterations, is required to establish a balanced partition. CHACO, on the other hand, generates perfectly balanced partitions for all values of k due to its use of recursive bisection.

We have also compared the performance of the developed algorithm with the best balanced partitions reported in the Graph Partitioning Archive [82]. Most of these results are generated by the algorithm developed by Schulz et al. [170], which combines an evolutionary approach with the JOSTLE multilevel method. This approach requires a large running time (around one week on a normal machine for the larger graphs), since each run consists of almost 50,000 calls. The other best results are reported by the approaches used in [171, 172]. For the second experiment we have increased the cutoff limits, ranging from one minute for the smallest graph to one hour for the largest graph, and run the MRDPSO algorithm ten times for each value of k to evaluate the change in cut value. Tables 6.7, 6.8 and 6.9 give the cut values of the best balanced partitions ever reported in the partitioning archive, the cut values generated by MRDPSO, and the standard deviations over the ten executions of our algorithm. From the results it can be observed that the imbalance is completely removed due to the increase in cutoff time and in the number of executions, but at the same time the cut values remain the same in most of the cases even though the time limit is increased. Hence the use of DPSO in partitioning gives an optimal or near-optimal solution in less time and with fewer executions. In several cases the cut values generated by MRDPSO are even better than the best reported partitions.

Table 6.7: Comparison of MRDPSO with the best reported cut for k = 2 and 4. For each graph the table lists the best cut from the partitioning archive, the cut obtained by MRDPSO, and the standard deviation (SD) over ten runs.

Table 6.8: Comparison of MRDPSO with the best reported cut for k = 8 and 16 (same layout as Table 6.7).

Table 6.9: Comparison of MRDPSO with the best reported cut for k = 32 and 64 (same layout as Table 6.7).

Figures 6.1 and 6.2 show the improvement in the partitioning cut obtained by MRDPSO relative to the best cut reported in the partitioning archive for k = 2, 4, 8 and for k = 16, 32, 64, respectively. Points lying below 1.0 in the curves indicate that MRDPSO performs better than the other approaches. It can be observed that our approach shows an improvement on 86%, 83%, 80%, 80%, 80% and 90% of the graphs for k = 2, 4, 8, 16, 32 and 64, respectively. The results show that the overall performance of the developed MRDPSO algorithm is noteworthy for generating balanced partitions in significantly less time.

Fig 6.1: Relative improvement in the partitioning cut (relative cut per benchmark graph) for k = 2, 4, 8

Fig 6.2: Relative improvement in the partitioning cut (relative cut per benchmark graph) for k = 16, 32, 64

Performance Evaluation for Hypergraphs

We have used 18 hypergraphs from the ISPD98 benchmark suite for the performance evaluation of our algorithm; the characteristics of these hypergraphs, including their numbers of vertices and hyperedges, are listed in Table 6.2. We compared the results produced by the MRDPSO algorithm with the results obtained by hMETIS recursive bisection (hMETIS-RB) [173] and hMETIS k-way partitioning (hMETIS k-way) [174]. We have used the CSHEM algorithm during the coarsening phase. The GGGP algorithm, which consistently detects smaller edge cuts than the other algorithms, is used during the initial

partitioning phase, boundary KL is used for refinement, and then MRDPSO is applied. To guarantee the statistical significance of the results, we run the algorithm fifteen times. Furthermore, the optimal set of parameter values chosen for MRDPSO is ω = 1, c1 = c2 = 0.5, Vmax = 4, with a swarm of 30 particles and 20 cycles, and the balance constraint is set to 1.0. Tables 6.10, 6.11 and 6.12 give the number of hyperedges that are cut by the MRDPSO algorithm, hMETIS recursive bisection (hMETIS-RB) and hMETIS k-way partitioning (hMETIS k-way) for k = 8, 16 and 32 partitions, respectively. In the last row of each table, the total time required by each method over all eighteen hypergraphs is given in minutes. From the results it can be observed that MRDPSO gives the best cut values among the compared methods in very little computing time. This is because the DPSO refinement heuristic does an excellent job of optimizing the objective function, as it is applied successively to the finer graphs. Furthermore, the results indicate that MRDPSO offers the additional benefit of producing high-quality partitionings while enforcing tight balancing constraints. Figures 6.3 and 6.4 show the improvement in the partitioning cut obtained by MRDPSO relative to hMETIS-RB and hMETIS k-way for k = 8, 16 and 32, respectively. Points below 1.0 indicate that MRDPSO performs better than hMETIS-RB and hMETIS k-way. The results illustrate that MRDPSO produces partitions whose cut is far better than the cut obtained by the other two methods.

Table 6.10: Comparison of MRDPSO with hMETIS-RB and hMETIS k-way for k = 8. For each of the eighteen ibm hypergraphs the table reports the number of hyperedges cut by each method; the last row gives the total run time of each method in minutes.

Table 6.11: Comparison of MRDPSO with hMETIS-RB and hMETIS k-way for k = 16 (same layout as Table 6.10).

Table 6.12: Comparison of MRDPSO with hMETIS-RB and hMETIS k-way for k = 32 (same layout as Table 6.10).

Fig 6.3: Improvement in the partitioning cut obtained by MRDPSO relative to hMETIS-RB for k = 8, 16, 32

Fig 6.4: Improvement in the partitioning cut obtained by MRDPSO relative to hMETIS k-way for k = 8, 16, 32

Table 6.13 gives the improvement percentage over hMETIS-RB and hMETIS k-way achieved by the MRDPSO algorithm for k = 8, 16 and 32 partitions. The average improvement over hMETIS-RB is 32.14%, 29.41% and 28.76% for k = 8, 16 and 32, respectively, and the average improvement over hMETIS k-way is 31.94%, 29.59% and 28.28% for k = 8, 16 and 32, respectively.

Table 6.13: Improvement percentage over hMETIS-RB and hMETIS k-way achieved by the MRDPSO algorithm for k = 8, 16 and 32 partitions, for each of the eighteen ibm hypergraphs. The last row gives the averages: 32.14%, 29.41% and 28.76% over hMETIS-RB and 31.94%, 29.59% and 28.28% over hMETIS k-way.

Swarm-intelligence-based Ant Colony Optimization has been used for bisecting graphs [20]. We have used the same set of hypergraphs to compare our MRDPSO-based approach with the bipartitioning results obtained by ACO. The quality measures used for comparison are the min-cut and the average cut, obtained over fifteen runs with a different random seed for each run. Table 6.14 shows the min-cut and average cut obtained by ACO and MRDPSO, their relative performance, and the improvement percentage with respect to ACO.

Table 6.14: Min-cut and average cut obtained by ACO and MRDPSO for each of the eighteen ibm hypergraphs, together with their relative performance and the improvement percentage with respect to ACO; the last row gives the average improvement (14.65% in average cut).

Figure 6.5 shows the improvement percentage achieved by MRDPSO in min-cut and average cut with respect to ACO. Our approach shows an improvement ranging from -8.27% to 71.64%, with an average of 58%, in min-cut, and ranging from -5.94% to 56.68%, with an average of 14.65%, in average cut, compared with the bisections obtained by the ACO-based algorithm. The complete evaluation on all 18 graphs using the ACO-based algorithm took two hours, whereas MRDPSO takes one hour and eighteen minutes.

Fig 6.5: Improvement percentage in min-cut and average cut for bipartitioning

Conclusion

In this chapter we have presented MRDPSO, a multilevel recursive discrete PSO approach for balanced graph k-partitioning. The developed algorithm follows the fundamental concept of the multilevel graph partitioning method and integrates a powerful DPSO refinement procedure. We comprehensively evaluated the performance of the algorithm on a collection from the Graph Partitioning Archive

for ordinary graphs as well as hypergraphs, for different values of k set to 2, 4, 8, 16, 32 and 64. From the results and comparisons, we can observe that the overall performance of the developed MRDPSO algorithm is remarkable, generating balanced partitions in significantly less computing time than METIS, CHACO and tabu search. The MRDPSO k-way partitioning approach substantially outperforms hMETIS-RB and hMETIS k-way, both in minimizing the hyperedge cut and in minimizing the computation time. Furthermore, our experimental study shows that the MRDPSO algorithm is more effective than the ACO-based algorithm for bipartitioning.


Cell-to-switch assignment in. cellular networks. barebones particle swarm optimization Cell-to-switch assignment in cellular networks using barebones particle swarm optimization Sotirios K. Goudos a), Konstantinos B. Baltzis, Christos Bachtsevanidis, and John N. Sahalos RadioCommunications

More information

Dynamic Robot Path Planning Using Improved Max-Min Ant Colony Optimization

Dynamic Robot Path Planning Using Improved Max-Min Ant Colony Optimization Proceedings of the International Conference of Control, Dynamic Systems, and Robotics Ottawa, Ontario, Canada, May 15-16 2014 Paper No. 49 Dynamic Robot Path Planning Using Improved Max-Min Ant Colony

More information

Argha Roy* Dept. of CSE Netaji Subhash Engg. College West Bengal, India.

Argha Roy* Dept. of CSE Netaji Subhash Engg. College West Bengal, India. Volume 3, Issue 3, March 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Training Artificial

More information

VLSI Physical Design: From Graph Partitioning to Timing Closure

VLSI Physical Design: From Graph Partitioning to Timing Closure Chapter Netlist and System Partitioning Original Authors: Andrew B. Kahng, Jens, Igor L. Markov, Jin Hu Chapter Netlist and System Partitioning. Introduction. Terminology. Optimization Goals. Partitioning

More information

Meta- Heuristic based Optimization Algorithms: A Comparative Study of Genetic Algorithm and Particle Swarm Optimization

Meta- Heuristic based Optimization Algorithms: A Comparative Study of Genetic Algorithm and Particle Swarm Optimization 2017 2 nd International Electrical Engineering Conference (IEEC 2017) May. 19 th -20 th, 2017 at IEP Centre, Karachi, Pakistan Meta- Heuristic based Optimization Algorithms: A Comparative Study of Genetic

More information

CHAPTER 5 OPTIMAL CLUSTER-BASED RETRIEVAL

CHAPTER 5 OPTIMAL CLUSTER-BASED RETRIEVAL 85 CHAPTER 5 OPTIMAL CLUSTER-BASED RETRIEVAL 5.1 INTRODUCTION Document clustering can be applied to improve the retrieval process. Fast and high quality document clustering algorithms play an important

More information

PARALLEL DECOMPOSITION OF 100-MILLION DOF MESHES INTO HIERARCHICAL SUBDOMAINS

PARALLEL DECOMPOSITION OF 100-MILLION DOF MESHES INTO HIERARCHICAL SUBDOMAINS Technical Report of ADVENTURE Project ADV-99-1 (1999) PARALLEL DECOMPOSITION OF 100-MILLION DOF MESHES INTO HIERARCHICAL SUBDOMAINS Hiroyuki TAKUBO and Shinobu YOSHIMURA School of Engineering University

More information

Research Article Accounting for Recent Changes of Gain in Dealing with Ties in Iterative Methods for Circuit Partitioning

Research Article Accounting for Recent Changes of Gain in Dealing with Ties in Iterative Methods for Circuit Partitioning Discrete Dynamics in Nature and Society Volume 25, Article ID 625, 8 pages http://dxdoiorg/55/25/625 Research Article Accounting for Recent Changes of Gain in Dealing with Ties in Iterative Methods for

More information

OPTIMIZED TASK ALLOCATION IN SENSOR NETWORKS

OPTIMIZED TASK ALLOCATION IN SENSOR NETWORKS OPTIMIZED TASK ALLOCATION IN SENSOR NETWORKS Ali Bagherinia 1 1 Department of Computer Engineering, Islamic Azad University-Dehdasht Branch, Dehdasht, Iran ali.bagherinia@gmail.com ABSTRACT In this paper

More information

CAD Algorithms. Circuit Partitioning

CAD Algorithms. Circuit Partitioning CAD Algorithms Partitioning Mohammad Tehranipoor ECE Department 13 October 2008 1 Circuit Partitioning Partitioning: The process of decomposing a circuit/system into smaller subcircuits/subsystems, which

More information

Optimizing Parallel Sparse Matrix-Vector Multiplication by Corner Partitioning

Optimizing Parallel Sparse Matrix-Vector Multiplication by Corner Partitioning Optimizing Parallel Sparse Matrix-Vector Multiplication by Corner Partitioning Michael M. Wolf 1,2, Erik G. Boman 2, and Bruce A. Hendrickson 3 1 Dept. of Computer Science, University of Illinois at Urbana-Champaign,

More information

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

More information

On Covering a Graph Optimally with Induced Subgraphs

On Covering a Graph Optimally with Induced Subgraphs On Covering a Graph Optimally with Induced Subgraphs Shripad Thite April 1, 006 Abstract We consider the problem of covering a graph with a given number of induced subgraphs so that the maximum number

More information

Massively Parallel Approximation Algorithms for the Traveling Salesman Problem

Massively Parallel Approximation Algorithms for the Traveling Salesman Problem Massively Parallel Approximation Algorithms for the Traveling Salesman Problem Vaibhav Gandhi May 14, 2015 Abstract This paper introduces the reader to massively parallel approximation algorithms which

More information

E-Companion: On Styles in Product Design: An Analysis of US. Design Patents

E-Companion: On Styles in Product Design: An Analysis of US. Design Patents E-Companion: On Styles in Product Design: An Analysis of US Design Patents 1 PART A: FORMALIZING THE DEFINITION OF STYLES A.1 Styles as categories of designs of similar form Our task involves categorizing

More information

Reconfiguration Optimization for Loss Reduction in Distribution Networks using Hybrid PSO algorithm and Fuzzy logic

Reconfiguration Optimization for Loss Reduction in Distribution Networks using Hybrid PSO algorithm and Fuzzy logic Bulletin of Environment, Pharmacology and Life Sciences Bull. Env. Pharmacol. Life Sci., Vol 4 [9] August 2015: 115-120 2015 Academy for Environment and Life Sciences, India Online ISSN 2277-1808 Journal

More information

Multigrid Pattern. I. Problem. II. Driving Forces. III. Solution

Multigrid Pattern. I. Problem. II. Driving Forces. III. Solution Multigrid Pattern I. Problem Problem domain is decomposed into a set of geometric grids, where each element participates in a local computation followed by data exchanges with adjacent neighbors. The grids

More information

CT79 SOFT COMPUTING ALCCS-FEB 2014

CT79 SOFT COMPUTING ALCCS-FEB 2014 Q.1 a. Define Union, Intersection and complement operations of Fuzzy sets. For fuzzy sets A and B Figure Fuzzy sets A & B The union of two fuzzy sets A and B is a fuzzy set C, written as C=AUB or C=A OR

More information

Shape Optimizing Load Balancing for Parallel Adaptive Numerical Simulations Using MPI

Shape Optimizing Load Balancing for Parallel Adaptive Numerical Simulations Using MPI Parallel Adaptive Institute of Theoretical Informatics Karlsruhe Institute of Technology (KIT) 10th DIMACS Challenge Workshop, Feb 13-14, 2012, Atlanta 1 Load Balancing by Repartitioning Application: Large

More information