CHAPTER 6

ORTHOGONAL PARTICLE SWARM OPTIMIZATION

6.1 INTRODUCTION

Orthogonal arrays are helpful in guiding heuristic algorithms towards good solutions when applied to NP-hard problems. This chapter deals with a new variant of PSO named Orthogonal PSO (OPSO) for solving the multiprocessor scheduling problem. The objective of applying the orthogonal concept in the basic PSO algorithm is to enhance its performance when applied to the scheduling problem. The orthogonal concept is used in PSO to generate an initial population of points that are scattered uniformly over the feasible solution space, so that the algorithm can evenly scan the feasible solution space to locate good points for further exploration in subsequent iterations.

6.2 ORTHOGONAL DESIGN

An orthogonal array is a fractional factorial array that assures a balanced comparison of the levels of any factor or interaction of factors. In the context of experimental arrays, orthogonal means statistically independent. The array is called orthogonal because all columns can be evaluated independently of one another, and the main effect of one factor does not interfere with the estimation of the main effect of another factor. Orthogonal design is applicable to discrete variables, but not to continuous variables (Yiu Wing Leung and Yuping Wang 2001).

Before solving an optimization problem, no information about the location of the global optima is known. It is therefore desirable that an algorithm starts by exploring points that are scattered evenly in the feasible solution space. In this manner, the algorithm can evenly scan the feasible solution space once to locate the best points for further exploration in subsequent iterations. As the algorithm iterates and improves the quality of the solution, some points may move closer to the global optima. Hence the orthogonal design technique is used to generate the initial population.

Yiu Wing Leung and Yuping Wang (2001) designed a genetic algorithm called the orthogonal genetic algorithm with quantization for global numerical optimization with continuous variables. A quantization technique is proposed to complement an experimental design method called orthogonal design. The resulting methodology is applied to generate an initial population of points that are uniformly scattered over the feasible solution space. In addition, the quantization technique and the orthogonal design are applied to tailor a new crossover operator, such that this crossover operator can generate a small but representative sample of points as the potential offspring. The proposed algorithm is tested on 15 benchmark problems with 30 or 100 dimensions, and the results show that it finds optimal or near-optimal solutions.

Shinn-Ying Ho et al (2004) proposed two intelligent evolutionary algorithms, the Intelligent Evolutionary Algorithm (IEA) and the Intelligent Multiobjective Evolutionary Algorithm (IMOEA), using a novel Intelligent Gene Collector (IGC) to solve single and multiobjective large parameter optimization problems. IGC is the main phase in an intelligent recombination operator of IEA and IMOEA. Based on orthogonal experimental design, IGC uses a divide-and-conquer approach.
IMOEA utilizes a novel generalized Pareto-based scale-independent fitness function for efficiently finding a set of
Pareto-optimal solutions to a multiobjective optimization problem. The IEA and IMOEA algorithms achieve high performance in solving benchmark functions comprising many parameters, as compared with existing evolutionary algorithms.

Li-Sun Shu et al (2004) proposed a novel Orthogonal Simulated Annealing algorithm (OSA) for the optimization of electromagnetic problems. The algorithm performs best when it employs an intelligent generation mechanism based on Orthogonal Experimental Design (OED). The OED-based intelligent generation mechanism can efficiently generate a good candidate solution for the next step by using a systematic reasoning method instead of the conventional method of random perturbation. The authors claim that the OSA is more efficient in solving parametric optimization problems and in designing optimal electromagnetic devices than some existing optimization methods based on simulated annealing and genetic algorithms.

Jenn Long Liu and Chao Chun Chang (2008) proposed an orthogonal momentum-type particle swarm optimization (PSO) that finds good solutions to global optimization problems using a delta momentum rule to update the flying velocity of particles and incorporating a Fractional Factorial Design (FFD) via several factorial experiments to determine the best position of particles. The novel combination is termed the momentum-type PSO with FFD. The momentum-type PSO modifies the velocity-updating equation of the original Kennedy and Eberhart PSO, and the FFD incorporates classical orthogonal arrays into the velocity-updating equation for analyzing the best factor associated with the cognitive and social learning terms. Twelve widely used large parameter optimization problems are used to compare the performance of the proposed PSO with the original PSO, the momentum-type PSO, and the original PSO with FFD. Experimental results reveal that the proposed momentum-type PSO
with an FFD algorithm efficiently solves large parameter optimization problems.

Shinn-Ying Ho et al (2008) proposed a novel variant of particle swarm optimization named Orthogonal Particle Swarm Optimization (OPSO) for solving intractable large parameter optimization problems. The standard version of PSO lacks a mechanism for handling high-dimensional vector spaces. The high performance of OPSO arises mainly from a novel move behavior using an Intelligent Move Mechanism (IMM), which applies orthogonal experimental design to adjust the velocity of each particle by a systematic reasoning method instead of the conventional generate-and-go method. The IMM uses a divide-and-conquer approach to cope with the curse of dimensionality in determining the next move of the particles. The OPSO with IMM is more specialized than the PSO and performs well on large-scale parameter optimization problems with few interactions between variables. Further, the OPSO with IMM technique is also tested on the Task Assignment Problem with up to 300 nodes, and the results show that the proposed technique performs well when compared to the normal PSO and GA methods.

6.2.1 Construction of an Orthogonal Array

Different orthogonal arrays are needed for different optimization problems, as mentioned in the literature. In general, when there are N factors and Q levels per factor, there are Q^N combinations. When N and Q are large, it may not be possible to do all Q^N experiments. Therefore, it is desirable to sample a small but representative set of combinations for experimentation. Orthogonal design provides a series of orthogonal arrays for different N and Q. Let L_M(Q^N) be an orthogonal array for N factors and Q levels, where L denotes a Latin square and M is the number of combinations of levels. It has M rows, where every row represents a combination of levels. For
convenience, denote L_M(Q^N) = [a_{i,j}]_{M×N}, where the j-th factor in the i-th combination has level a_{i,j} and a_{i,j} ∈ {1, 2, ..., Q}. A special case of orthogonal arrays L_M(Q^N) is used, where Q is odd and M = Q^J, with J a positive integer fulfilling equation (6.1):

    N = (Q^J − 1) / (Q − 1)                                      (6.1)

A simple permutation method is used to construct orthogonal arrays of this class (Yiu Wing Leung and Yuping Wang 2001). The j-th column of the orthogonal array [a_{i,j}]_{M×N} is denoted as a_j. Columns a_j for j = 1, 2, (Q^2 − 1)/(Q − 1) + 1, (Q^3 − 1)/(Q − 1) + 1, ..., (Q^{J−1} − 1)/(Q − 1) + 1 are called the basic columns and the others are called the non-basic columns. The basic columns are constructed first and then the non-basic columns are constructed. The details are as follows.

Step 1: Construct the basic columns as follows:
    for k = 1 to J do
    begin
        j = (Q^{k−1} − 1)/(Q − 1) + 1;
        for i = 1 to Q^J do
            a_{i,j} = floor( (i − 1) / Q^{J−k} ) mod Q;
    end

Step 2: Construct the non-basic columns as follows:
    for k = 2 to J do
    begin
        j = (Q^{k−1} − 1)/(Q − 1) + 1;
        for s = 1 to j − 1 do
            for t = 1 to Q − 1 do
                a_{j + (s−1)(Q−1) + t} = (a_s × t + a_j) mod Q;
    end

Step 3: Increment a_{i,j} by one for all 1 ≤ i ≤ M and 1 ≤ j ≤ N. Concatenating all the columns a_j gives the orthogonal array.

In general, the orthogonal array L_M(Q^N) has the following properties:

1) For the factor in any column, every level occurs M/Q times.
2) For the two factors in any two columns, every combination of two levels occurs M/Q^2 times.
3) For the two factors in any two columns, the M combinations contain the following combinations of levels: (1, 1), (1, 2), ..., (1, Q), (2, 1), (2, 2), ..., (2, Q), ..., (Q, 1), (Q, 2), ..., (Q, Q).
4) If any two columns of an orthogonal array are swapped, the resulting array is still an orthogonal array.
5) If some columns are taken away from an orthogonal array, the resulting array is still an orthogonal array with a smaller number of factors.

Consequently, the selected combinations are scattered uniformly over the space of all possible combinations. Orthogonal design is proven to be optimal for additive and quadratic models (Yiu Wing Leung and Yuping Wang 2001) and
the selected combinations are good representatives of all the possible combinations.

6.3 PROPOSED OPSO ALGORITHM

The first method proposed in this chapter for task scheduling is the Orthogonal Particle Swarm Optimization technique. The procedure for the Orthogonal PSO is as follows:

1. Generate the initial swarm randomly.
2. Construct the orthogonal array for the initial swarm as described in Section 6.2.1.
3. Initialize the personal best of each particle and the global best of the entire swarm.
4. Evaluate the initial swarm using the fitness function.
5. Select the personal best and global best of the swarm.
6. Update the velocity and the position of each particle using equations (2.1) and (2.2).
7. Obtain the optimal solution in the initial stage.
8. Repeat steps 2 to 7 until the maximum number of iterations is reached.
9. Obtain the optimal solution at the end of the specified iterations.

6.4 PROPOSED PARALLEL OPSO ALGORITHM

The second method proposed is the solution for multiprocessor scheduling using the Asynchronous Orthogonal Particle Swarm Optimization
technique. The Asynchronous PSO is better than the Synchronous PSO, as justified by the results in Chapter 5. The POPSO is implemented using a master-slave approach. Initially, the orthogonal array particles are generated by the master as described in Section 6.2.1. The master processor holds the queue of feasible particles to be sent to the slave processors. The master performs all decision-making processes such as velocity updates, position updates and convergence checks. The slaves perform the function evaluations for the particles sent to them. The tasks performed by the master and slave processors are as follows.

Master processor
1. Initializes all optimization parameters and particle positions and velocities.
2. Holds a queue of orthogonal array particles for the slave processors to evaluate.
3. Updates the particle positions and velocities based on the currently available information.
4. Sends the next particle in the queue to an available slave processor.
5. Receives cost function values from the slave processors.
6. Checks convergence.

Slave processor
1. Receives a particle from the master processor.
2. Evaluates the objective function for the particle.
3. Sends the cost function value to the master processor.
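The orthogonal-array particles that the master generates are built with the permutation method of Section 6.2.1. A minimal Java sketch of that construction follows; the class and method names are illustrative and not taken from the thesis implementation.

```java
public class OrthogonalArray {
    /** Builds the M x N orthogonal array L_M(Q^N) with levels 1..Q,
     *  where M = Q^J and N = (Q^J - 1)/(Q - 1), following the permutation
     *  method of Section 6.2.1 (Yiu Wing Leung and Yuping Wang 2001).
     *  Indices are 1-based to match the text; row 0 and column 0 are unused. */
    public static int[][] build(int q, int j) {
        int m = (int) Math.pow(q, j);       // number of rows (combinations)
        int n = (m - 1) / (q - 1);          // number of columns (factors)
        int[][] a = new int[m + 1][n + 1];

        // Step 1: basic columns j_k = (Q^{k-1} - 1)/(Q - 1) + 1
        for (int k = 1; k <= j; k++) {
            int col = ((int) Math.pow(q, k - 1) - 1) / (q - 1) + 1;
            for (int i = 1; i <= m; i++) {
                a[i][col] = (int) Math.floor((i - 1) / Math.pow(q, j - k)) % q;
            }
        }
        // Step 2: non-basic columns combine a basic column with earlier columns
        for (int k = 2; k <= j; k++) {
            int col = ((int) Math.pow(q, k - 1) - 1) / (q - 1) + 1;
            for (int s = 1; s <= col - 1; s++) {
                for (int t = 1; t <= q - 1; t++) {
                    int target = col + (s - 1) * (q - 1) + t;
                    for (int i = 1; i <= m; i++) {
                        a[i][target] = (a[i][s] * t + a[i][col]) % q;
                    }
                }
            }
        }
        // Step 3: shift levels from 0..Q-1 to 1..Q
        for (int i = 1; i <= m; i++)
            for (int c = 1; c <= n; c++)
                a[i][c]++;
        return a;
    }
}
```

With Q = 3 and J = 2 this yields the classical L9(3^4) array: nine rows, four columns, and every level occurring M/Q = 3 times in each column, as property 1 of Section 6.2.1 requires.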
The proposed OPSO and Parallel OPSO techniques are tested on the multiprocessor scheduling problem. Static task scheduling (independent and dependent tasks) and dynamic task scheduling (with and without load balancing) problems are simulated in a Java environment. Benchmark datasets for independent and dynamic task scheduling are taken from Eric Taillard's site. The data for dependent task scheduling are taken from the Standard Task Graph dataset. Two data sets are taken for simulation: data set 1 involves 50 tasks and 20 processors, and data set 2 involves 100 tasks and 20 processors. The tasks are non-pre-emptive in nature. The number of iterations and the population size are each taken as twice the number of tasks to be scheduled (Ayed Salman et al 2002). In a heuristic approach, every independent run of a program generates a different solution. Thus 20 independent runs are executed and the average, best and worst solutions are taken for comparison. The topology adopted is the global best topology, in which every particle is connected to every other particle in the search space. The following sections deal with the various types of task scheduling.

6.5 SCHEDULING STATIC INDEPENDENT TASKS

Illustration 1 deals with the scheduling of static tasks which are independent in nature. In this method, the tasks are independent of one another and any task can be executed in any order. The objective function is the same as specified in equation (2.7).
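Under the global best topology described above, step 6 of the OPSO procedure moves every particle with the standard inertia-weighted velocity and position updates of equations (2.1) and (2.2). A minimal sketch under that assumption; the class name and parameter values are illustrative:

```java
import java.util.Random;

public class PsoUpdate {
    /** One inertia-weighted PSO step per dimension (equations 2.1 and 2.2):
     *  v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x = x + v.
     *  The values of w, c1, c2 and the random source are assumptions. */
    public static void step(double[] x, double[] v, double[] pbest, double[] gbest,
                            double w, double c1, double c2, Random rnd) {
        for (int d = 0; d < x.length; d++) {
            double r1 = rnd.nextDouble(), r2 = rnd.nextDouble();
            v[d] = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d]);
            x[d] += v[d];
        }
    }
}
```

With c1 = c2 = 0 the update reduces to pure inertia, which makes a single step easy to check by hand.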
Table 6.1 Convergence time, Best, Worst and Average costs of the OPSO algorithms for Independent Task Scheduling

Method                   PSO-VI            OPSO              POPSO
m                        50       100      50       100      50       100
Best                     1612     3928     1148     2846     1148     2846
Worst                    1865     4434     1562     3318     1284     3227
Average                  1711.4   4067.2   1248.5   3033.2   1207.9   2958.3
Convergence time (s)     1.9241   4.2038   2.1284   4.7923   0.7865   1.6951

Table 6.1 implies that the POPSO outperforms all the other methods tested for multiprocessor scheduling. The Best cost obtained for data set 1 is 1148 for the OPSO and POPSO methods, whereas the cost is 1612 for the PSO-VI method. For data set 2, the Best cost is 2846 for the OPSO and POPSO methods, whereas the cost is 3928 for the PSO-VI method. The average cost is also improved in the case of the OPSO and POPSO methods. The convergence time of the OPSO method is 1.1 times slower than that of the PSO-VI method, but the convergence time of the POPSO method is 2.4 times faster than the PSO-VI method because of the parallel asynchronous nature of the algorithm.

[Figure: Best cost in Rupees for the PSO-VI, OPSO and POPSO methods]
Figure 6.1 Best cost for Independent Task Schedule for 50 tasks and 20 processors for OPSO and POPSO methods
[Figure: Best cost in Rupees for the PSO-VI, OPSO and POPSO methods]
Figure 6.2 Best cost for Independent Task Schedule for 100 tasks and 20 processors for OPSO and POPSO methods

Figures 6.1 and 6.2 illustrate the Best cost obtained for data set 1 and data set 2 respectively. There is a significant improvement in the result because of the asynchronous implementation of the Parallel Orthogonal Particle Swarm Optimization algorithm.

Table 6.2 Efficiency Calculation for Independent Task Scheduling

              PSO-VI and OPSO:             PSO-VI and POPSO:
              (1 − OPSO/PSO-VI) × 100      (1 − POPSO/PSO-VI) × 100
Data set I    27.04%                       29.42%
Data set II   27.21%                       29%

In terms of efficiency, the POPSO performs better than all the other methods tested, as illustrated in Table 6.2. When the PSO-VI and OPSO methods are compared, the OPSO method is 27.04% more efficient than the PSO-VI method for 50 tasks and 20 processors, and 27.21% more efficient for 100 tasks and 20 processors.
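The percentages in Table 6.2 follow the formula in its column headers, applied to the average costs of Table 6.1. A small sketch of that computation; the class name is illustrative:

```java
public class Efficiency {
    /** Efficiency of a proposed method relative to PSO-VI, as in the
     *  table headers: (1 - cost_method / cost_PSO-VI) x 100. */
    public static double percent(double costMethod, double costPsoVi) {
        return (1.0 - costMethod / costPsoVi) * 100.0;
    }
}
```

For data set 1, percent(1248.5, 1711.4) ≈ 27.05 and percent(1207.9, 1711.4) ≈ 29.42, matching the tabulated 27.04% and 29.42% up to rounding.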
When the PSO-VI and POPSO methods are compared, the POPSO method performs better than the PSO-VI method. The POPSO method is 29.42% more efficient than the PSO-VI method for 50 tasks and 20 processors, and 29% more efficient when 100 tasks and 20 processors are involved. In summary, the results infer that the POPSO performs better than the OPSO method when applied to the task assignment problem involving independent tasks.

6.6 SCHEDULING STATIC DEPENDENT TASKS

Illustration 2 deals with the scheduling of tasks which are dependent in nature. In this method, there is a dependency among the tasks to be scheduled. The dependent tasks should be scheduled in a sequential manner so that the order of the dependency is satisfied. The objective of the methodology is to minimize the makespan of the entire schedule.

Table 6.3 Convergence time, Best, Worst and Average costs of the Orthogonal PSO algorithms for Dependent Task Scheduling

Method                   PSO-VI            OPSO              POPSO
m                        50       100      50       100      50       100
Best                     1064     4485     921      3542     921      3542
Worst                    1642     5086     1272     4064     1069     3988
Average                  1455.7   5117.8   1060.3   3735.9   1030.2   3630.6
Convergence time (s)     2.2418   4.9000   2.5108   5.5370   0.9419   2.0332

Table 6.3 infers that the performance of the POPSO method is better than that of the OPSO and PSO-VI methods. The Best cost obtained for data set 1 and data set 2 is the same for the POPSO and OPSO methods, namely 921 and 3542 respectively. But the two methods differ in the convergence
time. The POPSO converges faster (2.4 times) than the PSO-VI method, but the OPSO method converges slower (1.1 times) than the PSO-VI method. This is because of the asynchronous nature of the parallel algorithm. The OPSO is slower than the PSO-VI method because of the refinement of the initial population taking place due to the orthogonal principle.

[Figure: Best cost in Rupees for the PSO-VI, OPSO and POPSO methods]
Figure 6.3 Best cost for Dependent Task Schedule for 50 tasks and 20 processors for OPSO and POPSO methods

[Figure: Best cost in Rupees for the PSO-VI, OPSO and POPSO methods]
Figure 6.4 Best cost for Dependent Task Schedule for 100 tasks and 20 processors for OPSO and POPSO methods
Figures 6.3 and 6.4 represent the Best cost achieved for data set 1 and data set 2 respectively. The POPSO method is better because of the orthogonal principle combined with the parallel asynchronous concept. In terms of efficiency, the POPSO method outperforms all the other methods tested, as illustrated in Table 6.4. When the PSO-VI and OPSO methods are compared, the OPSO method is 27.16% more efficient than the PSO-VI method for 50 tasks and 20 processors, and 27% more efficient for 100 tasks and 20 processors.

Table 6.4 Efficiency Calculation for Dependent Task Scheduling

              PSO-VI and OPSO:             PSO-VI and POPSO:
              (1 − OPSO/PSO-VI) × 100      (1 − POPSO/PSO-VI) × 100
Data set I    27.16%                       29.23%
Data set II   27%                          29.06%

When the PSO-VI and POPSO methods are compared, the POPSO method performs better than the PSO-VI method. The POPSO method is 29.23% more efficient than the PSO-VI method for 50 tasks and 20 processors, and 29.06% more efficient when 100 tasks and 20 processors are involved. In summary, the results infer that the POPSO performs better than the OPSO method when applied to the task assignment problem involving dependent tasks.
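The precedence requirement described above can be stated concretely: a candidate schedule (an ordering of the tasks) is feasible only if every task appears after all of its predecessors. A hypothetical helper illustrating that check, not part of the thesis implementation:

```java
public class PrecedenceCheck {
    /** Returns true if every task appears after all of its predecessors in
     *  the given schedule order. Tasks are numbered 0..n-1 and deps[t]
     *  lists the predecessors of task t; both names are illustrative. */
    public static boolean respectsDependencies(int[] order, int[][] deps) {
        int n = order.length;
        int[] position = new int[n];                 // position of each task in the order
        for (int i = 0; i < n; i++) position[order[i]] = i;
        for (int t = 0; t < n; t++)
            for (int pred : deps[t])
                if (position[pred] >= position[t]) return false;
        return true;
    }
}
```

For example, with task 2 depending on tasks 0 and 1, the order 0, 1, 2 is feasible while 1, 0, 2 is not when task 1 itself depends on task 0.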
6.7 DYNAMIC TASK SCHEDULING

Illustration 3 deals with tasks which are dynamic in nature. To achieve minimum cost for the Task Assignment Problem with dynamic task scheduling, the objective function is formulated as represented in equations (2.9) and (2.10). The objective function calculates the total execution time of the set of tasks allocated to each processor.

Table 6.5 Convergence time, Best, Worst and Average costs of the Orthogonal PSO algorithms for Dynamic Task Scheduling

Method                   PSO-VI            OPSO              POPSO
m                        50       100      50       100      50       100
Best                     2592     4893     2011     3828     2011     3828
Worst                    3428     5867     2462     4317     2186     4096
Average                  3010.4   5513.2   2190.9   4016.9   2134.1   3896.7
Convergence time (s)     4.3162   5.9828   4.6615   6.2221   1.7909   2.5033

From Table 6.5, it can be inferred that the Best cost is the same for the OPSO and POPSO methods for data set 1 and data set 2, but the convergence time is shorter for the POPSO method than for the OPSO method. The convergence time of the POPSO method is 2.4 times faster than that of the PSO-VI method, while the convergence time of the OPSO method is 1.1 times slower than that of the PSO-VI method. The average cost is improved in both the OPSO and POPSO methods when compared to the PSO-VI method.
[Figure: Best cost in Rupees for the PSO-VI, OPSO and POPSO methods]
Figure 6.5 Best cost for Dynamic Task Schedule for 50 tasks and 20 processors for OPSO and POPSO methods

From Figure 6.5, Figure 6.6 and Table 6.5, it can be inferred that the POPSO outperforms the OPSO and the normal PSO with variable inertia methods. This is because of the combination of the orthogonal and parallel principles in the proposed POPSO algorithm.

[Figure: Best cost in Rupees for the PSO-VI, OPSO and POPSO methods]
Figure 6.6 Best cost for Dynamic Task Schedule for 100 tasks and 20 processors for OPSO and POPSO methods
In terms of efficiency, the POPSO method performs better than all the other methods tested. When the PSO-VI and OPSO methods are compared, the OPSO method is 27.22% more efficient than the PSO-VI method for 50 tasks and 20 processors, and 27.14% more efficient for 100 tasks and 20 processors.

Table 6.6 Efficiency Calculation for Dynamic Task Scheduling

              PSO-VI and OPSO:             PSO-VI and POPSO:
              (1 − OPSO/PSO-VI) × 100      (1 − POPSO/PSO-VI) × 100
Data set I    27.22%                       29.11%
Data set II   27.14%                       29.32%

When the PSO-VI and POPSO methods are compared, the POPSO method performs better than the PSO-VI method. The POPSO method is 29.11% more efficient than the PSO-VI method for 50 tasks and 20 processors, and 29.32% more efficient when 100 tasks and 20 processors are involved. In summary, the results infer that the POPSO performs better than the OPSO method when applied to the task assignment problem involving dynamic tasks.

6.8 DYNAMIC TASK SCHEDULING WITH LOAD BALANCING

Illustration 4 deals with dynamic task scheduling with the load balancing concept. Effective processor utilization is needed to support the concept of load balancing. Thus the concept of load balancing is dealt with, where the objective function is the same as represented in equations (2.12), (2.13) and (2.14).
Table 6.7 Convergence time, Best, Worst and Average costs of the Orthogonal PSO algorithms for Dynamic Task Scheduling with Load Balancing

Method                   PSO-VI            OPSO              POPSO
m                        50       100      50       100      50       100
Best                     11.584   20.728   14.112   23.108   14.112   23.108
Worst                    10.112   18.454   12.458   20.008   12.892   21.562
Average                  10.389   19.382   13.237   24.627   13.402   25.047
Convergence time (s)     5.1382   6.9101   5.7548   7.3938   2.1058   2.8913

[Figure: Best cost in Rupees for the PSO-VI, OPSO and POPSO methods]
Figure 6.7 Best cost for Dynamic Task Schedule with Load Balancing for 50 tasks and 20 processors for OPSO and POPSO methods

The Best cost achieved for data set 1 and data set 2 is the same for both the proposed methods, namely the OPSO and POPSO methods, as illustrated in Table 6.7. But the convergence time is faster (2.4 times) for the POPSO method and slower (1.1 times) for the OPSO method when compared to the PSO-VI method.
[Figure: Best cost in Rupees for the PSO-VI, OPSO and POPSO methods]
Figure 6.8 Best cost for Dynamic Task Schedule with Load Balancing for 100 tasks and 20 processors for OPSO and POPSO methods

From Figure 6.7 and Figure 6.8, it can be inferred that the Parallel Orthogonal PSO outperforms the Orthogonal PSO and the PSO with varying inertia concepts.

Table 6.8 Efficiency Calculation for Dynamic Task Scheduling with Load Balancing

              PSO-VI and OPSO:             PSO-VI and POPSO:
              (1 − OPSO/PSO-VI) × 100      (1 − POPSO/PSO-VI) × 100
Data set I    27.41%                       29%
Data set II   27.06%                       29.23%

In terms of efficiency, the POPSO outperforms all the other methods tested. When the PSO-VI and OPSO methods are compared, the OPSO method is 27.41% more efficient than the PSO-VI method for 50 tasks and 20 processors, and 27.06% more efficient for 100 tasks and 20 processors, as represented in Table 6.8.
When the PSO-VI and POPSO methods are compared, the POPSO method performs better than the PSO-VI method. The POPSO method is 29% more efficient than the PSO-VI method for 50 tasks and 20 processors, and 29.23% more efficient when 100 tasks and 20 processors are involved. In summary, the results infer that the POPSO performs better than the OPSO method when applied to the task assignment problem involving dynamic tasks with load balancing.

6.9 SUMMARY

This chapter has dealt with the application of the Orthogonal PSO and the Parallel Asynchronous Orthogonal PSO techniques to different types of task scheduling, namely static independent task scheduling, static dependent task scheduling, dynamic task scheduling and dynamic task scheduling with load balancing. The results infer that the Parallel Asynchronous Orthogonal PSO performs better than the Orthogonal PSO and the PSO with varying inertia approaches. In terms of the Best cost, the OPSO and POPSO methods perform the same. When the average cost is considered, the OPSO method is 27% more efficient than the PSO-VI method and the POPSO method is 29% more efficient than the PSO-VI method. The two methods differ considerably in convergence time. The convergence time of the POPSO method is 2.4 times faster than that of the PSO-VI method; the convergence is faster because of the asynchronous parallel version of the orthogonal PSO algorithm. The convergence time of the OPSO method is slower (1.1 times) than that of the PSO-VI method because of the time taken to refine the initial population using the orthogonal principle. Thus the POPSO method performs better than the other methods tested when applied to the multiprocessor scheduling problem.
CHAPTER 7

CONCLUSION

7.1 CONCLUSION

This thesis involves the application of various PSO techniques to solve the multiprocessor scheduling problem. In this work, the PSO and its variants, namely PSO with dynamically varying inertia, Elite PSO with mutation, Hybrid PSO, Parallel PSO, Orthogonal PSO and Parallel Orthogonal PSO, are investigated to solve the multiprocessor scheduling problem. The PSO technique is also compared with the GA approach. Four types of task scheduling are dealt with, namely static independent task scheduling, static dependent task scheduling, dynamic task scheduling and dynamic task scheduling with load balancing.

The introduction of the inertia factor in the basic equation of the PSO algorithm has shown a significant improvement in the results when applied to the multiprocessor scheduling problem. The value of the inertia factor plays a major role in the achievement of the optimal solution. Both fixed inertia and dynamically varying inertia are applied to solve the task assignment problem. PSO with variable inertia yields better performance than fixed inertia when applied to the multiprocessor scheduling problem. The proposed PSO-VI method yields improved performance when compared with the GA method. On average, the proposed PSO-VI method is 1.7 times faster than the GA method and is 14% more efficient than the GA method in terms of cost.
The proposed PSO-VI method's performance is enhanced by modifying the basic working of the PSO method. The PSO with varying inertia method is also combined with another proposed technique known as elitism to achieve an improvement in the result when applied to the task assignment problem. Elitism is also combined with mutation to prevent the algorithm from being stuck at a local optimum. The EPSO-M algorithm improves the individual quality of the swarm and accelerates the convergence, while the mutation operation is used to guarantee the diversity of the swarm. The proposed EPSO-M algorithm is on average 7% better than the variable inertia PSO when applied to the task scheduling problem, and its convergence time is on average 1.12 times faster than that of the PSO-VI method. Thus both the cost and the convergence time are improved in the EPSO-M method when compared to the PSO-VI method.

Further, hybridization of the PSO algorithm with the Simulated Annealing (SA) algorithm is done to enhance the performance when applied to the multiprocessor scheduling problem. Simulated Annealing is chosen because it is good at escaping local optima. Performance improvement is achieved when the hybridization of PSO and SA is applied to multiprocessor scheduling. This hybrid technique is also compared with another version of hybridized PSO, namely the combination of PSO and the Hill Climbing concept. On average, the proposed PSO-SA method is 13% more efficient than the PSO-VI method and the PSO-HC method is 4% more efficient than the PSO-VI method. But in the proposed PSO-SA method, there is an increase in the convergence time (1.5 times) when compared to the PSO-VI method, because of the involvement of the annealing schedule in the simulated annealing and PSO algorithms.

Parallelization of the PSO algorithm is also proposed to speed up the execution and to provide concurrency. Two versions of parallelization are
done, namely the Synchronous Parallel PSO and the Asynchronous Parallel PSO. The results infer that the asynchronous version performs better than the synchronous parallel version. The Synchronous Parallel PSO and the Asynchronous Parallel PSO yield the same Best cost as the Hybrid PSO (PSO-SA) approach, but the convergence of the Parallel PSO is faster than that of the PSO-SA method. The PAPSO converges faster than the PSPSO because the idle time of the processors is considerably reduced. The convergence time of the PAPSO method is 2.2 times faster than that of the PSO-VI method, and 1.3 times faster than that of the PSPSO method. When the average cost is considered, the PSPSO method is around 14% more efficient than the PSO-VI method and the PAPSO method is 18% more efficient than the PSO-VI method when applied to the different types of task scheduling.

Further, the Orthogonal PSO (OPSO) is proposed, which is used to refine the initial population. The parallelization of the OPSO algorithm (POPSO) is also proposed to further refine the results. In terms of the Best cost, the OPSO and POPSO methods perform the same. When the average cost is considered, the OPSO method is 27% more efficient than the PSO-VI method and the POPSO method is 29% more efficient than the PSO-VI method. The two methods differ considerably in convergence time. The convergence time is faster (2.4 times) in the case of the POPSO method when compared to the PSO-VI method, because of the asynchronous parallel version of the orthogonal PSO algorithm. The convergence time of the OPSO method is slower (1.1 times) than that of the PSO-VI method because of the time taken to refine the initial population using the orthogonal principle. Thus the POPSO method performs better than the other methods tested when applied to the multiprocessor scheduling problem. The POPSO outperforms all other
methods proposed in this thesis for solving the multiprocessor scheduling problem.

7.2 FUTURE SCOPE OF THE WORK

The multiprocessor scheduling problem, or the Task Assignment Problem, is an NP-hard problem. In this thesis, the simulations are conducted in a Java environment with benchmark datasets. Even though the simulations are carried out with benchmark datasets, the multiprocessor scheduling problem using the PSO approaches needs testing in experiments and industrial practice. The tasks considered in this thesis are non-preemptive in nature; the work can be extended to preemptive tasks. In the PSO-VI method, more study can be carried out to find the optimal values of the parameters of the PSO equations to suit the applications. Further work can also be done on finding a mathematical proof for the social and cognitive factors in the basic PSO equation. The PSO with global topology is used for solving the multiprocessor scheduling problem; other topologies mentioned in the literature can be tried for solving the Task Assignment Problem. For the Parallel PSO method, the number of processors is chosen on a trial basis, so work can be done to find the optimal number of processors needed to solve a particular problem in a parallel environment in a real-time scenario.