Load Balancing with Random Information Exchanged based Policy


Taj Alam, Zahid Raza
School of Computer & Systems Sciences, Jawaharlal Nehru University, New Delhi, India
tajhashimi@gmail.com, zahidraza@mail.jnu.ac.in

Abstract — The primary objective of load balancing is to minimize job execution time and maximize resource utilization. Load balancing algorithms for a parallel computing system must adhere to three inherent policies, viz. the information policy, the transfer policy and the placement policy. To better utilize system resources, this work proposes a load balancing strategy whose information exchange policy is based on a random walk of packets, for systems of a decentralized nature. Information is exchanged via randomly circulated packets so that each node in the system has up-to-date state information about the other nodes.

Keywords — parallel and distributed systems; load balancing; information exchange; resource utilization.

I. INTRODUCTION

In a parallel system it is always desired of a scheduler that jobs be scheduled in such a way that resources are fully utilized while turnaround time is minimized. The most important consideration here is the development of efficient techniques for allotting the processes of a program to multiple processors so that software parallelism is exploited to its fullest by the available hardware parallelism. The solution to this problem lies in effectively balancing the load of the processes among the processing nodes to achieve the performance goals.

Load balancing is defined as the redistribution/reallocation of processes among processors during allocation or execution time, transferring tasks from heavily loaded to lightly loaded processors with the aim of improving the performance of the system. A typical load balancing algorithm is defined by the following three policies [2, 3]:

Information Policy: It specifies the amount of load information made available to job placement decision-makers.
Transfer Policy: It determines the conditions under which a job should be transferred, that is, the current load of the host and the size of the job under consideration.

Placement Policy: It identifies the processing element to which a job should be transferred.

Load balancing algorithms are broadly characterized as static and dynamic, which are further categorized as centralized and decentralized [2, 3, 4]. Load balancing is done by a single processor in a centralized system, whereas in the decentralized case scheduling responsibility is vested in all the processing nodes. Centralized algorithms are less dependable than decentralized algorithms; decentralized algorithms, however, suffer the communication overhead incurred by regular information exchange between processors. The information is communicated through periodic or aperiodic broadcasting of messages. Due to this information exchange, the communication and computation overheads are obviously elevated for decentralized systems. Besides, the information a processor has about the states of the other nodes may be outdated due to the delay in collecting the data. The performance of such algorithms is inversely proportional to this delay: as the delay increases, the performance of the system degrades considerably. The only way to ease the degradation caused by the delay in obtaining updated information about other nodes is to increase the frequency of status exchange. However, an unnecessary increase in the frequency of information exchange further increases the communication overhead; the performance of the algorithm then deteriorates significantly, and a stage is reached where any further increase in the frequency of status exchange makes the system unstable [6]. In order to reduce the overhead caused by frequent information exchange at periodic time intervals, we propose a Load Balancing policy with Random Information Exchange (LBRIE) model.
The proposed strategy, LBRIE, is inspired by the work of Einstein on the random walk, and takes up the Random Direction Mobility Model, which is purely based on the random walk, to implement this concept in information exchange. The remainder of the paper is organized as follows. Section II discusses literature related to information exchange in load balancing. Section III presents the proposed scheduler and the working of the model. Section IV discusses an illustrative example. Finally, Section V ends the paper with concluding remarks.

978-1-4799-2572-8/14/$31.00 © 2014 IEEE

II. RELATED WORK

Literature related to load balancing strategies, in view of their information exchange models, is reported here. Information exchange models are categorized as periodic and on-demand/aperiodic. Of the models cited in this section, PIA and ELISA are periodic information exchange models, while all the remaining cited models make use of on-demand/aperiodic information exchange for decision making. In PIA, information is shared at periodic information exchange intervals, called transfer epochs. It is assumed that at the transfer epochs each processor has ideal information about the state of every other

processor in its buddy set. Based on the loading state, jobs are transferred from heavily loaded nodes to lightly loaded nodes [5].

In ELISA, information is exchanged at transfer epochs and load estimation is done at estimation epochs. From the estimated queue lengths of the nodes in its buddy set, a node computes the average load in the system. Nodes in the buddy set whose estimated queue length is less than the estimated average queue length by more than a threshold are considered part of the active set. The node under consideration transfers jobs to the nodes in the active set until its queue length is no longer greater than the estimated average queue length. The threshold value is predefined and is important for the performance of ELISA; it should be fixed so that the average response time of the system is minimized [6].

The problem of redistributing the load of the system among its nodes so that overall system performance is maximized has been dealt with in [3]. The work discusses several key issues in load distribution and describes many load-distributing algorithms that take on-demand information exchange as the backbone for communication. Various strategies have been suggested for dynamic load balancing in [7], including sender-initiated diffusion, receiver-initiated diffusion, the hierarchical balancing method, the gradient model and the dimension exchange method, all based on aperiodic information exchange for decision making. A taxonomy of load sharing algorithms, with a discussion of various source-initiative and server-initiative approaches, has been proposed in [8]. The taxonomy also discusses ten symbolic algorithms that have been evaluated against their performance; these algorithms make use of on-demand information for load balancing.

III. LBRIE STRATEGY

The proposed strategy, LBRIE, is an aperiodic information exchange model based on random information exchange, inspired by the work of Einstein on the random walk.
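The active-set selection and transfer rule of ELISA, as described above, can be sketched as follows. This is an illustrative reading, not the authors' implementation; the function and parameter names are invented here, and the estimated queue lengths and threshold are taken as given inputs.

```python
def elisa_transfer_targets(my_load, est_queue_lengths, threshold):
    """Sketch of ELISA's transfer rule for one overloaded node.

    est_queue_lengths: dict mapping each buddy-set node id (this node
    included) to its estimated queue length. Returns the list of target
    node ids, one entry per transferred job.
    """
    avg = sum(est_queue_lengths.values()) / len(est_queue_lengths)
    # Nodes whose estimated queue length is below the estimated average
    # by more than the threshold form the active set.
    active = [n for n, q in est_queue_lengths.items() if avg - q > threshold]
    transfers = []
    load = my_load
    # Transfer jobs to active-set nodes until this node's queue length
    # is no longer greater than the estimated average.
    for target in active:
        while load > avg and est_queue_lengths[target] < avg:
            transfers.append(target)
            load -= 1
            est_queue_lengths[target] += 1
    return transfers
```

For example, a node with 10 jobs in a buddy set with estimated queue lengths {10, 2, 3, 5} and threshold 1 would shed jobs to the two under-loaded nodes until it reaches the estimated average of 5.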
It is loosely based on the Random Direction Mobility Model of ad hoc networks [9, 10]. In a decentralized system it is necessary to have correct load information about every other node in the system. In LBRIE, packets carrying information about the system state are randomly circulated, with care taken that no packet persists in the system indefinitely. The probability that a packet successfully returns to its origin is the driving factor in using this random mobility model of ad hoc wireless networks. Each node in the system has an information table, with the packets being used to update this information.

To realize a load balanced state, the scheduler continuously keeps track of the load on the nodes using thresholds, with the aim of minimizing the turnaround time of the jobs submitted for execution. The jobs are reallocated during runtime based on the load status of the nodes. If a node is heavily loaded it acts as a sender and searches for a possible receiver for the job, and vice versa. Hybridism is maintained by combining the sender-initiated and receiver-initiated approaches: at high system load the probability of finding a sender is maximal, while at low system load the probability of finding a receiver is maximal.

The scheduler under consideration is shown in Fig. 1 and the various parameters used in the study are given in Table I. The strategy is applied to distributed computers connected via a mesh topology, but is equally applicable to other interconnection topologies of the distributed system, e.g. bus or star. The figure depicts just a small snapshot of the actual model; in real situations the number of processing nodes can be much larger. The interconnection between processors has a delay of X_ij, where i and j are the interconnected nodes. The arrival rate and service rate at each node are taken as λ_i and μ_i respectively.
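The hybrid sender/receiver choice described above can be sketched as a single decision on the overall load. The mean-load test against the upper threshold is an assumption made for illustration only; the paper does not state the exact rule.

```python
def choose_initiation(loads, t_upper):
    """Pick which side initiates balancing (an illustrative guess).

    Per the hybrid policy: when overall system load is high, lightly
    loaded nodes go looking for senders (receiver-initiated); when it
    is low, heavily loaded nodes go looking for receivers
    (sender-initiated). Comparing the mean load with the upper
    threshold t_upper is an assumption, not the paper's stated rule.
    """
    mean_load = sum(loads) / len(loads)
    return "receiver-initiated" if mean_load > t_upper else "sender-initiated"
```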
Each processing element has an infinite job queue where the allotted jobs are queued up and taken for execution in first-come-first-served order. Information about the states of the other nodes is recorded in the information table. It consists of a node identifier, the number of jobs queued at each node, and the time of modification of the information about that node.

Figure 1. Scheduler Architecture

2014 IEEE International Advance Computing Conference (IACC)

Each node has its individual threshold parameters, which are adaptive in nature. Depending upon the information available with the node, the threshold values are updated. Each node maintains three queues in which node identifiers are saved depending upon their load state. The three states according to which nodes are categorized are lightly loaded, normally loaded and highly loaded. The three queues are therefore referred to respectively as the minimum priority queue Min_Queue, the normal queue Normal_Queue and the maximum priority queue Max_Queue [1].
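The three-queue classification described above can be sketched as follows; this is a minimal illustration, with names following Table I and heaps standing in for the min/max priority queues.

```python
import heapq

def classify_nodes(loads, t_lower, t_upper):
    """Split nodes into the three queues by their load l_i (a sketch).

    loads: dict mapping node id -> current load l_i.
    Returns (min_queue, normal_queue, max_queue); the two priority
    queues are heaps keyed on load (max_queue uses negated load so the
    most heavily loaded node pops first).
    """
    min_queue, normal_queue, max_queue = [], [], []
    for node, load in loads.items():
        if load < t_lower:
            heapq.heappush(min_queue, (load, node))    # lightly loaded
        elif load > t_upper:
            heapq.heappush(max_queue, (-load, node))   # heavily loaded
        else:
            normal_queue.append(node)                  # normally loaded
    return min_queue, normal_queue, max_queue
```

With the loads of the example in Section IV and thresholds T_lower = 2 and T_upper = 3, node 4 lands in Min_Queue and node 3 in Max_Queue, matching the single transfer shown there.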

TABLE I. PARAMETERS DESCRIPTION

Parameter      Description
K              Number of nodes
Job_Queue      Job queue for each node where jobs are queued up
Task_Set       Number of jobs submitted for execution
Job_i          Job identifier, where 1 <= i <= Task_Set
Node_i         Node identifier, where 1 <= i <= K
Info_Table     Data structure maintaining the information about the load status of each node
Packet_i       Packet identifier, where 1 <= i <= K, with the same fields as Info_Table
l_i            Workload on node Node_i
T_lower        Lower threshold
T_upper        Upper threshold
P              Optimum percent value for calculating T_lower and T_upper according to the mean M
LHM            Lower half mean of l_i for the nodes sorted in ascending order
UHM            Upper half mean of l_i for the nodes sorted in ascending order
M              Mean of l_i for the nodes sorted in ascending order
Min_Queue      Min priority queue containing node identifiers for nodes with load l_i below T_lower
Max_Queue      Max priority queue containing node identifiers for nodes with load l_i above T_upper
Normal_Queue   Queue for nodes with load between T_lower and T_upper
X_ij           Communication delay between nodes i and j
λ_i            Arrival rate of jobs on Node_i
μ_i            Service rate of jobs on Node_i

The number of packets is equal to the number of nodes in the system, with the packets having the same fields as the information table. Packets are used to update the information table maintained at each node. Each packet carries a time of modification for each node entry, acting as a time stamp for conveying the most recent information to the nodes. The same is maintained at each node in the information table, to check the validity of the information about the other nodes. A packet is randomly forwarded on any link among the available connections of the given node to its neighboring nodes. The receiver of the packet then checks its information table. If there is a mismatch between the information available and the information received about the nodes visited by the packet, the node updates its table with the more recently modified information.
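The timestamp-based reconciliation between a received packet and the local Info_Table can be sketched as follows. The field layout is an assumption for illustration; the step in which the node additionally writes its own fresh entry into the packet is omitted here.

```python
def merge_packet(info_table, packet):
    """Reconcile a received packet with the local Info_Table (a sketch).

    Both structures map node id -> (queued_jobs, time_of_modification).
    For each node, the entry with the later modification time wins, and
    the fresher value is copied in both directions so the packet carries
    the best-known state onward.
    """
    for node in set(info_table) | set(packet):
        local = info_table.get(node, (0, -1))    # -1: never seen
        remote = packet.get(node, (0, -1))
        # Keep whichever entry has the later modification time stamp.
        fresher = local if local[1] >= remote[1] else remote
        info_table[node] = fresher
        packet[node] = fresher
```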
If the information in the packet differs from the node's own state, the node inserts its new information into the packet, resets the time of modification, and forwards the packet to another, possibly unvisited, neighboring node. These packets keep circulating until all job queues become empty. As a node is assigned some workload, the same enters its job execution queue.

The three queues maintained by each node are implemented as a maximum priority queue (Max_Queue) for heavily loaded nodes, a normal queue (Normal_Queue) for normally loaded nodes and a minimum priority queue (Min_Queue) for lightly loaded nodes. The placement of nodes in the various queues is decided using the threshold parameters, viz. the Lower Threshold (T_lower) and Upper Threshold (T_upper), which are adaptive in nature. The lower half mean (LHM) and upper half mean (UHM) provide the reference points from which T_lower and T_upper are set [1]. Initially the thresholds T_lower and T_upper are taken as 1 and 2 respectively and are gradually adjusted. As the load on the nodes increases, the threshold values are reset, and with them the membership of Min_Queue, Max_Queue and Normal_Queue.

During execution a situation may arise in which the nodes are unevenly balanced. Under these circumstances jobs are transferred from a heavily loaded node to a lightly loaded node. Whether a node acts as a receiver or a sender is judged and selected per the threshold parameters. If the overall system load is high, the receiver-initiated approach is taken into consideration, and vice versa. The number of jobs transferred from a heavily loaded node, and received by a lightly loaded node, is one at a time.

For a distributed system consisting of K nodes there are K packets in the system. If X_ij is the communication delay, then the total time required for a full update of all nodes is K*X_ij.
If the algorithm runs for time T, then the total number of messages circulated is (T / (K*X_ij)) * K^2, which equals T*K / X_ij. For periodic information exchange, K^2 messages are broadcast every t_i time interval, so the total number of messages exchanged is T*K^2 / t_i. It can be seen that as the number of nodes increases, the number of broadcast messages keeps growing. Moreover, if system utilization increases, the period of the periodic exchanges needs to be decreased; a situation may then arise where decreasing t_i makes the system unstable. By the policy discussed above, the information held by the nodes in LBRIE is kept updated without increasing the communication and computational overhead: with a minimum number of messages exchanged, a node has correct information to make load balancing decisions. The algorithm is presented in the box below.

LBRIE()
    For all processor nodes Node_i, 1 <= i <= K, do
        Allocate_Job()                 /* based on arrival rate λ_i */
        Random_Information_Exchange()
        Update_Threshold()
        Update_Queues()
        Balance_Load()
        Execute_Job()                  /* based on service rate μ_i */
    while (Task_Set != 0 && Job_Queue != 0)
    Calculate TAT                      /* turnaround time */
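The Update_Threshold step invoked above can be sketched in Python using LHM and UHM from Table I. Taking the thresholds as the rounded half means is an assumption made here (the role of the percent value P and the mean M is not spelled out in the text), but it reproduces the threshold values the example in Section IV adopts.

```python
def update_thresholds(loads):
    """Recompute (T_lower, T_upper) from the current loads (a sketch).

    The loads l_i are sorted in ascending order; LHM is the mean of the
    lower half and UHM the mean of the upper half. Rounding these means
    to the nearest integer to obtain the thresholds is an assumption,
    not the paper's stated formula.
    """
    ordered = sorted(loads)            # nodes in ascending order of load
    half = len(ordered) // 2
    lhm = sum(ordered[:half]) / half   # lower half mean
    uhm = sum(ordered[-half:]) / half  # upper half mean
    return round(lhm), round(uhm)      # (T_lower, T_upper)
```

For the loads of the Section IV example (sorted: 1, 2, 2, 3, 3, 4) this gives LHM ≈ 1.67 and UHM ≈ 3.33, i.e. T_lower = 2 and T_upper = 3, matching the values reached after the first information update.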

Random_Information_Exchange()
    For all processor nodes Node_i, 1 <= i <= K, do
        Select Link_ij = Random(all Link_ij, i != j)
        Select Link_ji = Random(all Link_ji, j != i)
        Receive Packet from Link_ji, j != i
        Update Packet & Info_Table
        Send Packet on Link_ij
    while (Task_Set != 0 && Job_Queue != 0)

Update_Threshold()
    Sort()                /* sort nodes in ascending order of load */
    Calculate LHM, UHM & M
    Calculate T_lower, T_upper

Update_Queues()
    Update()              /* update Min_Queue, Max_Queue and Normal_Queue
                             according to the updated T_lower and T_upper */

Balance_Load()
    Transfer Jobs         /* transfer jobs to an underloaded node */
    Receive Jobs          /* receive jobs from an overloaded node */

IV. ILLUSTRATIVE EXAMPLE

To better understand the working of the model, an example is illustrated in this section. It presents the basic working of the model in terms of the average number of jobs executed. The example considers a scenario with six nodes available for execution. The load on each node N_i is represented by l_i. Initially, T_lower and T_upper are assumed to be 1 and 2 respectively. The example considers 15 jobs for execution, which are assumed to be independent of each other.

For a simple picture of the scheduler with jobs submitted for execution, the process starts by allocating these jobs to the processing elements according to the arrival rate λ_i, calculated as (Node_id % 4) + 1. When the 15 jobs are submitted to the system for execution, the state of the nodes is as shown in Fig. 2: N_1 gets allocated two jobs; N_2 three jobs; N_3 four jobs; N_4 one job; N_5 two jobs; and N_6 three jobs. The job allotment is done according to the arrival rate.

Figure 2. Nodes after Job Allotment of 15 Jobs

After the jobs are allotted, the load information is circulated in the system using random packets. The box below traces the random information exchange that updates the information tables of all the nodes in the system.

Step 1: Node 1 initiates a packet, chooses a random direction and forwards the packet with its updated entry in it. Only one packet's circulation is traced here for simplicity; the other nodes start their circulations similarly.
Step 2: Node 2 receives the packet, updates its Info_Table, chooses a random node link and forwards the packet with its updated entry in it. At each instance of packet forwarding, the node chooses a different link: if it has chosen link 4 before, it chooses among the remaining links. It forwards the packet to node 6.
Step 3: Node 6 receives the packet, updates its Info_Table, chooses a random node link and forwards the packet with its updated entry in it. It forwards the packet to node 4.
Step 4: Node 4 receives the packet, updates its Info_Table, updates its entry in the packet and forwards it to node 5.
Step 5: Node 5 receives the packet, updates its Info_Table, updates its entry in the packet and forwards it to node 3.
Step 6: Node 3 receives the packet, updates its Info_Table, updates its entry in the packet and forwards it to node 1.

Finally, this packet's circulation is complete: node 1 has complete information, and the other nodes have partial information, about the system state. Similarly, the other nodes receive full information through their own packets, each forwarded according to the procedure shown above. When all the packets have returned to their source of initialization, all nodes have complete information regarding the states of the other nodes. After the successful reception of the packets, the information with each node is as shown in Fig. 3.

Figure 3. Information, Threshold and Queues Updation
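The Fig. 2 allotment follows directly from the arrival-rate formula; in this example, the number of jobs each node receives equals its arrival-rate value λ_i. A quick check:

```python
# Per-node allotment of the Section IV example, reproduced from the
# formula lambda_i = (Node_id % 4) + 1 given in the text (6 nodes).
allotted = {i: (i % 4) + 1 for i in range(1, 7)}
assert allotted == {1: 2, 2: 3, 3: 4, 4: 1, 5: 2, 6: 3}
# All 15 submitted jobs are placed.
assert sum(allotted.values()) == 15
```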

The thresholds are modified after the information update: T_lower and T_upper become 2 and 3 respectively. After the threshold modification, the queues at each node are modified. The nodes in Min_Queue are shown in a light shade, while the nodes in Max_Queue are shown in a slightly darker shade. After the queue modification, the load balancing step completes: one job is transferred from node 3 to node 4, and the resulting scenario is depicted in Fig. 4.

Figure 4. Scenario after Load Balancing

The jobs are then executed according to the service rate μ_i, which is ((Max_Node - Node_id) % 4) + 1. Therefore N_1 executes two jobs, N_2 one job, N_3 four jobs, N_4 three jobs, N_5 two jobs and N_6 one job respectively. Fig. 5 shows the node status after the execution of jobs.

Figure 5. Scenario after Execution of Jobs

Fig. 6 shows the status of the nodes after the next information exchange. Thresholds are modified and queues are updated based on the recent information with the nodes. Nodes 2 and 6 have two jobs each and the other nodes have empty queues. As there is no highly loaded node, execution continues without balancing. The process repeats until all the job queues become empty, prompting the algorithm to stop.

Figure 6. Information, Threshold and Queues Updation

Table II depicts the number of jobs allotted and the number of jobs executed by each node. For a balanced system, the average number of jobs executed by the nodes should be around the mean. It is assumed here that if the number of jobs executed by each node lies within (1 ± 0.25) of the mean, then the system is in the balanced state. By this criterion the average number of jobs to be executed by each node is 2.5, with boundary values 2 and 3. Further, it can be seen from Table II that all nodes execute between 2 and 3 jobs as desired, representing a balanced job execution.

TABLE II. NUMBER OF JOBS ALLOTTED V/S EXECUTED

Node No    N1  N2  N3  N4  N5  N6
Allotted    2   3   4   1   2   3
Executed    2   3   3   2   2   3

V. CONCLUSION

The work proposes a load balancing policy with information exchange based on a random walk of packets for a decentralized system. The strategy allocates the modules of the job(s) over the nodes in such a way that the desired objective of minimizing the turnaround time is met. Information is exchanged via random packets. The node load status decides whether the receiver-initiated or the sender-initiated load balancing strategy is taken into consideration. During the execution phase, tasks are reallocated dynamically depending upon the system state. The balancing process utilizes minimum CPU time, as redistribution is carried out only when lightly loaded and heavily loaded nodes are reported. The work is based on the fact that if the average workload is executed by the processing elements with a minimum number of requests generated for job reallocation, then the best results can be realized in terms of turnaround time and resource utilization. The model can yield even better solutions by making it more realistic by

considering other issues related to load balancing, such as data locality.

REFERENCES

[1] Alam, T., Raza, Z., "A dynamic load balancing strategy with adaptive threshold based approach," Second IEEE International Conference on Parallel, Distributed and Grid Computing, Solan, India, pages 927-932, 2012.
[2] Casavant, T.L., Kuhl, J.G., "A Taxonomy of Scheduling in General-Purpose Distributed Computing Systems," IEEE Transactions on Software Engineering, vol. 14, no. 2, pages 141-154, 1988.
[3] Shivaratri, N.G., Krueger, P., Singhal, M., "Load Distributing for Locally Distributed Systems," Computer, vol. 25, no. 12, pages 33-44, 1992.
[4] Casavant, T.L., Kuhl, J.G., "Effects of Response and Stability on Scheduling in Distributed Computing Systems," IEEE Transactions on Software Engineering, vol. 14, no. 11, pages 1578-1588, 1988.
[5] Zeng, Z., Bharadwaj, V., "Design and Performance Evaluation of Queue-and-Rate-Adjustment Dynamic Load Balancing Policies for Distributed Networks," IEEE Transactions on Computers, vol. 55, no. 11, pages 1410-1422, 2006.
[6] Anand, L., Ghose, D., Mani, V., "ELISA: An Estimated Load Information Scheduling Algorithm for Distributed Computing Systems," Computers and Mathematics with Applications, vol. 37, pages 57-85, 1999.
[7] Willebeek-LeMair, M.H., Reeves, A.P., "Strategies for Dynamic Load Balancing on Highly Parallel Computers," IEEE Transactions on Parallel and Distributed Systems, vol. 4, no. 9, pages 979-993, 1993.
[8] Wang, Y.T., Morris, R.J.T., "Load Sharing in Distributed Systems," IEEE Transactions on Computers, vol. C-34, pages 204-217, 1985.
[9] Einstein, A., Investigations on the Theory of the Brownian Movement, Methuen, London, 1926.
[10] Camp, T., Boleng, J., Davies, V., "A survey of mobility models for ad hoc network research," Wireless Communications and Mobile Computing, vol. 2, no. 5, pages 483-502, 2002.