HETEROGENEOUS COMPUTING
Shoukat Ali, Tracy D. Braun, Howard Jay Siegel, and Anthony A. Maciejewski
School of Electrical and Computer Engineering, Purdue University

Heterogeneous computing is a set of techniques enabling the use of diverse computational capabilities for the execution of a meta-task [2, 4, 7]. A meta-task is an arbitrary collection of independent (non-communicating) tasks with a variety of computational needs, which are to be executed during a given interval of time (e.g., a day). Some tasks may be decomposable into one or more communicating subtasks (which may, in turn, have diverse computational needs). There are many types of heterogeneous computing systems [2]. This article focuses on mixed-machine systems, where a heterogeneous suite of independent machines is interconnected by high-speed links to function as a meta-computer (see Meta-Computer) or as part of a computational grid (see Computational Grid) [3]. The user of a heterogeneous suite has the illusion of working on a single virtual machine.

Research in the field of heterogeneous computing is motivated by the fact that high-performance machines vary in capability and, hence, in suitability for different types of tasks and subtasks. Examples of such machine architectures include large distributed shared memory machines (e.g., an SGI 2800), distributed memory multiprocessors (e.g., an IBM SP2), and small shared memory machines (e.g., a Sun Enterprise 3000 Server). Furthermore, two implementations of a given machine type may vary in CPU speed, cache memory size and structure, I/O bandwidth, etc. With recent advances in high-speed digital communications, it has become possible to use collections of different machines in concert to execute large meta-tasks whose tasks and subtasks have diverse computational needs. The goal of heterogeneous computing is to assign these tasks and subtasks to machines and to schedule their execution so as to optimize some performance measure.
This measure may be as simple as the execution time of the meta-task, or it may be a more complex mathematical function of factors such as the weighted priorities of tasks, deadlines for task execution, security requirements, and quality of service (QoS) needs (see QoS). The process of assigning (matching) tasks/subtasks to machines and scheduling their execution is called mapping.

A hypothetical example task with four subtasks that are best suited for different machine architectures is shown in Figure 1. The example task executes for 100 time units on a typical workstation. The task consists of four subtasks: the first (S1) is best suited to execute on a large cluster of PCs (e.g., a Beowulf cluster), the second (S2) on a distributed memory multiprocessor, the third (S3) on a distributed shared memory machine, and the fourth (S4) on a small shared memory machine. Executing the whole task on a large cluster may improve the execution time of the first subtask from 25 to 0.3 time units, and those of the other subtasks to varying extents. The overall execution time improvement may only be about a factor of five, because the other subtasks are not well suited for execution on a cluster (e.g., due to the need for interprocessor communication). However, using four different machines that match the computational requirements of the individual subtasks can result in an overall execution time that is better than the execution time on the workstation by a factor of over 50. For communicating subtasks, inter-machine data transfers need to be performed when multiple machines are used. Hence, data transfer overhead must be considered as part of the overall execution time on the heterogeneous computing suite, whereas there is no such overhead when the entire task is executed on a single workstation. This is a simplified example.
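The arithmetic of this example can be checked directly. Only the 100-unit baseline, the 25-to-0.3 improvement for S1, and the factors of 5 and 50 come from the text; the remaining per-subtask times below are hypothetical fill-ins chosen to be consistent with those figures.

```python
# Speedup arithmetic for the hypothetical example. The workstation split
# (25 units per subtask) and all cluster/suite times other than S1's are
# invented here purely for illustration.
workstation = {"S1": 25, "S2": 25, "S3": 25, "S4": 25}       # 100 units total
cluster     = {"S1": 0.3, "S2": 7.0, "S3": 6.0, "S4": 6.7}   # other subtasks ill-suited
suite       = {"S1": 0.3, "S2": 0.5, "S3": 0.5, "S4": 0.4}   # incl. data-transfer overhead

base = sum(workstation.values())
print(base / sum(cluster.values()))  # overall speedup on the cluster alone: about 5x
print(base / sum(suite.values()))    # speedup on the matched heterogeneous suite: over 50x
```

The point the numbers make is that a machine that is dramatically better for one subtask (25 to 0.3) may still yield only a modest overall gain if the remaining subtasks are mismatched.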
Actual tasks may consist of a large number of subtasks with a much more complex inter-subtask communication structure. Also, the sharing of the machines by all the tasks in the meta-task must be considered when mapping.

Mapping

Finding a mapping for tasks that optimizes some performance measure is, in general, an NP-complete problem. For example, consider mapping 30 tasks onto five machines: there are 5^30 possible mappings. Even if it took only one nanosecond to evaluate each mapping, an exhaustive search for the best mapping would require 5^30 nanoseconds, i.e., more than 1000 years. Therefore, heuristics are needed that find near-optimal mappings without an exhaustive search. Factors that impact mapping decisions include: (1) the match of the task computational requirements to the machine capabilities, (2) the overhead for inter-machine communication of code and data (initial and generated), (3) the expected machine load and network congestion, and (4) inter-subtask precedence constraints. There are many different types of heuristics for mapping tasks to the machines in a heterogeneous computing suite. In static mapping heuristics [1], the mapping decisions are made off-line, before the execution of the meta-task. A static
mapping heuristic is employed if (1) the tasks that comprise the meta-task are known a priori, (2) predictions about the available heterogeneous computing resources are likely to be accurate, and (3) the estimated expected execution time of each task on each machine in the suite is known reasonably accurately. Static mapping heuristics can be used, for example, for planning the next day's work on a heterogeneous computing system.

Figure 1. Hypothetical example (based on [4]) of the advantage of using a heterogeneous computing suite of machines: execution of the example task (subtasks S1 through S4) on a baseline workstation, on a cluster (five times faster than the baseline), and on a heterogeneous suite (over fifty times faster than the baseline). The number underneath each bar indicates execution time. For the suite, each subtask execution time includes the overhead to receive data. Not drawn to scale.

In dynamic mapping heuristics [6], the mapping decisions are made on-line, during the execution of the meta-task. Dynamic approaches to mapping are needed if any of the following are unpredictable: (1) the arrival times of the tasks, (2) the machines available in the heterogeneous computing system (some machines in the suite may go off-line and new machines may come on-line), and (3) the expected execution times of the tasks on the machines. While a static mapper considers the entire meta-task to be executed (e.g., the next day) when making decisions, a dynamic mapper has information only about tasks that have already arrived for execution. Furthermore, because a dynamic mapper operates on-line, it must make decisions much faster than an off-line static mapper. Consequently, dynamic mapping heuristics often use feedback from the heterogeneous computing system (while tasks are executing) to improve any bad mapping decisions. A semi-static mapping heuristic [8] can be used for an iterative task whose subtask execution times change from iteration to iteration based on the input data.
A semi-static methodology observes, from one iteration to another, the effects of the changing characteristics of the task's input data, called dynamic parameters, on the task's execution time. The off-line phase uses a static mapping algorithm to generate, a priori, high-quality mappings for a sampling of values of the dynamic parameters. During the on-line phase, the actual dynamic parameters are observed, and a new mapping for the subtasks may be selected from the precomputed off-line mappings.

Automatic Heterogeneous Computing

One of the long-term goals of heterogeneous computing research is to develop software environments that will automatically map and execute tasks expressed in a machine-independent high-level language. Such an environment would facilitate the use of a heterogeneous computing suite by increasing portability, because the programmer need not be concerned with the composition of the suite, and by increasing the possibility of deriving better mappings than the user could derive with ad hoc methods. Thus, it would improve the performance of, and encourage the use of, heterogeneous computing. While no such environment exists today, many researchers are working to develop one.

A conceptual model for such an environment, using a dedicated heterogeneous computing suite of machines, is shown in Figure 2 and consists of four stages. Stage 1 uses information about the types of tasks in the meta-task and the machines in the heterogeneous computing suite to generate a set of parameters relevant to both the computational characteristics of tasks and the architectural features of machines. The system then derives categories of computational requirements and categories of machine capabilities from this set of parameters. Stage 2 consists of two components: task profiling and analytical benchmarking. Task profiling (see Task Profiling) decomposes each task of the meta-task into subtasks, where each subtask is computationally homogeneous.
The computational requirements of each subtask are quantified by profiling the code and data. Analytical benchmarking
(see Analytical Benchmarking) quantifies how effectively each available machine in the suite performs on each type of computational requirement. The information available from stage 2 is used by stage 3 to derive the estimated execution time of each subtask on each machine in the heterogeneous computing suite, along with the associated inter-machine communication overheads. These statically derived results are then combined with initial values for machine ready times, inter-machine network delays, and status parameters (e.g., machine/network faults) to perform the mapping of subtasks to machines based on a given performance metric. The result is an assignment of subtasks to machines and an execution schedule. The process as described corresponds to a static mapping.

The subtasks are executed in stage 4. If dynamic mapping is employed, the subtask completion times and the loading/status of the machines/network are monitored (shown with dashed lines in Figure 2). The monitoring process is necessary because the actual computation times and data transfer times may be input-data dependent and may deviate considerably from the static estimates. This information may be used to re-invoke the mapping of stage 3 to improve the machine assignment and execution schedule.

Figure 2. Model for integrating the software support needed for automating the use of heterogeneous computing systems (based on [7]).
Ovals indicate information and rectangles indicate action.
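To make stage 3 concrete: given the matrix of estimated subtask execution times it derives, even a simple greedy scheme can produce an assignment and schedule. The sketch below implements the Min-min idea studied in [1] (repeatedly map the as-yet-unmapped task with the smallest achievable completion time); the etc matrix values, task count, and machine count are hypothetical illustrative data, and communication overheads are ignored for brevity.

```python
# Min-min static mapping sketch. etc[t][m] is the (hypothetical) estimated
# time to compute task t on machine m; ready[m] is machine m's ready time.
etc = [[4.0, 10.0], [6.0, 3.0], [9.0, 7.0]]   # 3 tasks, 2 machines
ready = [0.0, 0.0]
unmapped = set(range(len(etc)))
mapping = {}                                   # task -> machine
while unmapped:
    # For each unmapped task, find its best (completion time, machine) pair.
    best = {t: min((ready[m] + etc[t][m], m) for m in range(len(ready)))
            for t in unmapped}
    # Map the task whose best completion time is smallest.
    task = min(unmapped, key=lambda t: best[t][0])
    ct, machine = best[task]
    mapping[task] = machine
    ready[machine] = ct                        # machine busy until this task finishes
    unmapped.remove(task)
print(mapping, max(ready))                     # assignment and resulting makespan
```

With these numbers the heuristic maps task 1 first (it can finish at time 3 on machine 1), then tasks 0 and 2, giving a makespan of 10. A dynamic mapper in this model would instead apply such a rule on-line, as tasks arrive and as monitored ready times change.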
Environments and Applications

Examples of heterogeneous computing environments are: (1) the Purdue University Network Computing Hubs, a wide-area network computing system that can be used to run a selection of software tools via a World Wide Web browser [5]; (2) NetSolve, a client-server system with geographically distributed servers that can be accessed from a variety of interfaces, including MATLAB, shell scripts, C, and FORTRAN [3]; and (3) the Globus meta-computing infrastructure toolkit, a set of low-level mechanisms upon which higher-level heterogeneous computing services can be built [3].

Example applications that have demonstrated the usefulness of heterogeneous computing include: (1) a three-dimensional simulation of mixing and turbulent convection at the Minnesota Supercomputer Center [7]; (2) the shipboard anti-air warfare program (HiPer-D) used at the Naval Surface Warfare Center for threat detection, target engagement, and missile guidance; and (3) a simulation of colliding galaxies, performed by solving large n-body dynamics problems and large gas dynamics problems, at the National Center for Supercomputing Applications [7].

Open Problems in Heterogeneous Computing

Heterogeneous computing is a relatively new research area for the computer field. Interest in such systems continues to grow in both the research community and the user community. The realization of the automatic heterogeneous computing environment envisioned in Figure 2 requires further research in many areas. Machine-independent languages with user-specified directives are needed to (1) allow compilation of a given task into efficient code for any machine in the suite, (2) aid in decomposing tasks into subtasks, and (3) facilitate determination of subtask computational requirements. Moreover, methods must be refined for measuring the loading and status of the machines in the heterogeneous computing suite and of the network, and for estimating subtask completion times.
Also, the uncertainty present in estimated parameter values, such as subtask completion times, should be taken into consideration when determining mappings. Other research areas include (1) developing communication protocols for reliable, low-overhead data transmission over heterogeneous networks with given QoS requirements, (2) devising debugging tools that can be used transparently across the suite of machines, and (3) formulating algorithms for task migration between heterogeneous machines, e.g., for fault tolerance or load re-balancing.

Acknowledgment: This work was supported by the DARPA/ITO Quorum Program through the Office of Naval Research under Grant No. N.

References

[1] T. D. Braun, H. J. Siegel, N. Beck, L. L. Boloni, M. Maheswaran, A. I. Reuther, J. P. Robertson, M. D. Theys, B. Yao, D. Hensgen, and R. F. Freund, A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Distributed Computing Systems, Journal of Parallel and Distributed Computing, 61(6).
[2] M. M. Eshaghian (ed.), Heterogeneous Computing, Artech House, Norwood, MA.
[3] I. Foster and C. Kesselman (eds.), The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufmann, San Francisco, CA.
[4] R. F. Freund and H. J. Siegel (guest eds.), Special Issue on Heterogeneous Processing, IEEE Computer, 26(6).
[5] N. H. Kapadia and J. A. B. Fortes, PUNCH: An Architecture for Web-Enabled Wide-Area Network-Computing, Cluster Computing: The Journal of Networks, Software Tools and Applications, Special Issue on High Performance Distributed Computing, 2(2).
[6] M. Maheswaran, S. Ali, H. J. Siegel, D. Hensgen, and R. F. Freund, Dynamic Mapping of a Class of Independent Tasks onto Heterogeneous Computing Systems, Journal of Parallel and Distributed Computing, Special Issue on Software Support for Distributed Computing, 59(2).
[7] M. Maheswaran, T. D. Braun, and H. J.
Siegel, Heterogeneous Distributed Computing, in Encyclopedia of Electrical and Electronics Engineering, Vol. 8, J. G. Webster, ed., John Wiley, New York, NY, 1999.
[8] M. D. Theys, T. D. Braun, Y.-K. Kwok, H. J. Siegel, and A. A. Maciejewski, Mapping of Tasks onto Distributed Heterogeneous Computing Systems Using a Genetic Algorithm Approach, in Solutions to Parallel and Distributed Computing Problems: Lessons from Biological Sciences, A. Y. Zomaya, ed., John Wiley & Sons, New York, NY, 2001.
Cross References:
Analytical Benchmarking: see Heterogeneous Computing.
Computational Grid: see Heterogeneous Computing.
Meta-Computer: see Heterogeneous Computing.
Meta-Task: see Heterogeneous Computing.
Task Profiling: see Heterogeneous Computing.

Dictionary Terms:

Analytical Benchmarking. Analytical benchmarking of a given computing machine provides a measure of the performance of the machine on each of the different code types that may be present in a given source program. The performance of a particular code type on a specific kind of resource is a multi-variable function. Some of the variables of such a function may be: the quality of service requirements of the application program (e.g., data precision), the size of the data set to be processed, the algorithm to be applied, programmer and compiler efforts to optimize the program, and the operating system and architecture of the machine that will execute the specific code type.

Computational Grid. A developing area of research and technology seeking to connect regional and national computational resources in a transparent fashion, thus transforming any computer connected to the grid into part of a new class of supercomputer. The implied analogy is with an electric power grid. If access to advanced computational capabilities and accessories became as ubiquitous and dependable as an electric power grid, the impact on society would be dramatic.

Meta-Computer. A system framework that utilizes the resources of many different computers connected via a network to cooperate on solving a problem. In general, this allows the problem to be solved much more quickly than would be possible using a single computer. Meta-computers usually consist of heterogeneous, distributed elements, and operate in a coarse-grained fashion. A meta-computer would be a more localized component of a larger computational grid.
QoS. QoS (Quality of Service) is an aggregate function of many different system characteristics used to represent the overall performance of a system. The components in the function, and the computation of the function itself, vary widely (i.e., QoS means many different things to many different people). Sample components of a QoS measure could include task deadlines, data precision, image color range, video jitter, network bandwidth, bit error rate, and end-to-end latency.

Task Profiling. Task profiling of a given source program specifies the types of computations that are present in the source program by decomposing it into code blocks based on the computational requirements of the blocks. The set of computation types defined depends on the architectural features of the machines available for executing the source program or its subprograms, and on both the application task code and the types and sizes of the data sets it is to process.
More informationA Study on Issues Associated with Mobile Network
Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 9, September 2014,
More informationThe Cray Rainier System: Integrated Scalar/Vector Computing
THE SUPERCOMPUTER COMPANY The Cray Rainier System: Integrated Scalar/Vector Computing Per Nyberg 11 th ECMWF Workshop on HPC in Meteorology Topics Current Product Overview Cray Technology Strengths Rainier
More informationAssignment 5. Georgia Koloniari
Assignment 5 Georgia Koloniari 2. "Peer-to-Peer Computing" 1. What is the definition of a p2p system given by the authors in sec 1? Compare it with at least one of the definitions surveyed in the last
More informationAdaptive Resync in vsan 6.7 First Published On: Last Updated On:
First Published On: 04-26-2018 Last Updated On: 05-02-2018 1 Table of Contents 1. Overview 1.1.Executive Summary 1.2.vSAN's Approach to Data Placement and Management 1.3.Adaptive Resync 1.4.Results 1.5.Conclusion
More informationEvaluating the Performance of Skeleton-Based High Level Parallel Programs
Evaluating the Performance of Skeleton-Based High Level Parallel Programs Anne Benoit, Murray Cole, Stephen Gilmore, and Jane Hillston School of Informatics, The University of Edinburgh, James Clerk Maxwell
More informationNon-Uniform Memory Access (NUMA) Architecture and Multicomputers
Non-Uniform Memory Access (NUMA) Architecture and Multicomputers Parallel and Distributed Computing Department of Computer Science and Engineering (DEI) Instituto Superior Técnico February 29, 2016 CPD
More informationDISTRIBUTED SYSTEMS Principles and Paradigms Second Edition ANDREW S. TANENBAUM MAARTEN VAN STEEN. Chapter 1. Introduction
DISTRIBUTED SYSTEMS Principles and Paradigms Second Edition ANDREW S. TANENBAUM MAARTEN VAN STEEN Chapter 1 Introduction Definition of a Distributed System (1) A distributed system is: A collection of
More informationIndex. ADEPT (tool for modelling proposed systerns),
Index A, see Arrivals Abstraction in modelling, 20-22, 217 Accumulated time in system ( w), 42 Accuracy of models, 14, 16, see also Separable models, robustness Active customer (memory constrained system),
More informationMobile Edge Computing for 5G: The Communication Perspective
Mobile Edge Computing for 5G: The Communication Perspective Kaibin Huang Dept. of Electrical & Electronic Engineering The University of Hong Kong Hong Kong Joint Work with Yuyi Mao (HKUST), Changsheng
More informationNon-Uniform Memory Access (NUMA) Architecture and Multicomputers
Non-Uniform Memory Access (NUMA) Architecture and Multicomputers Parallel and Distributed Computing Department of Computer Science and Engineering (DEI) Instituto Superior Técnico September 26, 2011 CPD
More informationTechniques for mapping tasks to machines in heterogeneous computing systems q
Journal of Systems Architecture 46 (2000) 627±639 www.elsevier.com/locate/sysarc Techniques for mapping tasks to machines in heterogeneous computing systems q Howard Jay Siegel *, Shoukat Ali School of
More informationAn Architecture For Computational Grids Based On Proxy Servers
An Architecture For Computational Grids Based On Proxy Servers P. V. C. Costa, S. D. Zorzo, H. C. Guardia {paulocosta,zorzo,helio}@dc.ufscar.br UFSCar Federal University of São Carlos, Brazil Abstract
More informationA Survey on Grid Scheduling Systems
Technical Report Report #: SJTU_CS_TR_200309001 A Survey on Grid Scheduling Systems Yanmin Zhu and Lionel M Ni Cite this paper: Yanmin Zhu, Lionel M. Ni, A Survey on Grid Scheduling Systems, Technical
More informationUnit 9 : Fundamentals of Parallel Processing
Unit 9 : Fundamentals of Parallel Processing Lesson 1 : Types of Parallel Processing 1.1. Learning Objectives On completion of this lesson you will be able to : classify different types of parallel processing
More informationJ. Parallel Distrib. Comput.
J. Parallel Distrib. Comput. 68 (2008) 1504 1516 Contents lists available at ScienceDirect J. Parallel Distrib. Comput. journal homepage: www.elsevier.com/locate/jpdc Static resource allocation for heterogeneous
More informationResource CoAllocation for Scheduling Tasks with Dependencies, in Grid
Resource CoAllocation for Scheduling Tasks with Dependencies, in Grid Diana Moise 1,2, Izabela Moise 1,2, Florin Pop 1, Valentin Cristea 1 1 University Politehnica of Bucharest, Romania 2 INRIA/IRISA,
More informationScheduling in Multiprocessor System Using Genetic Algorithms
Scheduling in Multiprocessor System Using Genetic Algorithms Keshav Dahal 1, Alamgir Hossain 1, Benzy Varghese 1, Ajith Abraham 2, Fatos Xhafa 3, Atanasi Daradoumis 4 1 University of Bradford, UK, {k.p.dahal;
More informationCase Studies on Cache Performance and Optimization of Programs with Unit Strides
SOFTWARE PRACTICE AND EXPERIENCE, VOL. 27(2), 167 172 (FEBRUARY 1997) Case Studies on Cache Performance and Optimization of Programs with Unit Strides pei-chi wu and kuo-chan huang Department of Computer
More informationChapter 2 System Models
CSF661 Distributed Systems 分散式系統 Chapter 2 System Models 吳俊興國立高雄大學資訊工程學系 Chapter 2 System Models 2.1 Introduction 2.2 Physical models 2.3 Architectural models 2.4 Fundamental models 2.5 Summary 2 A physical
More informationAn Evaluation of Alternative Designs for a Grid Information Service
An Evaluation of Alternative Designs for a Grid Information Service Warren Smith, Abdul Waheed *, David Meyers, Jerry Yan Computer Sciences Corporation * MRJ Technology Solutions Directory Research L.L.C.
More informationResolving Load Balancing Issue of Grid Computing through Dynamic Approach
Resolving Load Balancing Issue of Grid Computing through Dynamic Er. Roma Soni M-Tech Student Dr. Kamal Sharma Prof. & Director of E.C.E. Deptt. EMGOI, Badhauli. Er. Sharad Chauhan Asst. Prof. in C.S.E.
More informationNetwork Bandwidth & Minimum Efficient Problem Size
Network Bandwidth & Minimum Efficient Problem Size Paul R. Woodward Laboratory for Computational Science & Engineering (LCSE), University of Minnesota April 21, 2004 Build 3 virtual computers with Intel
More informationOVERHEADS ENHANCEMENT IN MUTIPLE PROCESSING SYSTEMS BY ANURAG REDDY GANKAT KARTHIK REDDY AKKATI
CMPE 655- MULTIPLE PROCESSOR SYSTEMS OVERHEADS ENHANCEMENT IN MUTIPLE PROCESSING SYSTEMS BY ANURAG REDDY GANKAT KARTHIK REDDY AKKATI What is MULTI PROCESSING?? Multiprocessing is the coordinated processing
More informationTask Allocation for Minimizing Programs Completion Time in Multicomputer Systems
Task Allocation for Minimizing Programs Completion Time in Multicomputer Systems Gamal Attiya and Yskandar Hamam Groupe ESIEE Paris, Lab. A 2 SI Cité Descartes, BP 99, 93162 Noisy-Le-Grand, FRANCE {attiyag,hamamy}@esiee.fr
More informationNon-Uniform Memory Access (NUMA) Architecture and Multicomputers
Non-Uniform Memory Access (NUMA) Architecture and Multicomputers Parallel and Distributed Computing MSc in Information Systems and Computer Engineering DEA in Computational Engineering Department of Computer
More informationWhat are Clusters? Why Clusters? - a Short History
What are Clusters? Our definition : A parallel machine built of commodity components and running commodity software Cluster consists of nodes with one or more processors (CPUs), memory that is shared by
More informationSecure Mission-Centric Operations in Cloud Computing
Secure Mission-Centric Operations in Cloud Computing Massimiliano Albanese, Sushil Jajodia, Ravi Jhawar, Vincenzo Piuri George Mason University, USA Università degli Studi di Milano, Italy ARO Workshop
More informationDistributed Systems. Overview. Distributed Systems September A distributed system is a piece of software that ensures that:
Distributed Systems Overview Distributed Systems September 2002 1 Distributed System: Definition A distributed system is a piece of software that ensures that: A collection of independent computers that
More informationFrom Cluster Monitoring to Grid Monitoring Based on GRM *
From Cluster Monitoring to Grid Monitoring Based on GRM * Zoltán Balaton, Péter Kacsuk, Norbert Podhorszki and Ferenc Vajda MTA SZTAKI H-1518 Budapest, P.O.Box 63. Hungary {balaton, kacsuk, pnorbert, vajda}@sztaki.hu
More informationCHAPTER 4 AN INTEGRATED APPROACH OF PERFORMANCE PREDICTION ON NETWORKS OF WORKSTATIONS. Xiaodong Zhang and Yongsheng Song
CHAPTER 4 AN INTEGRATED APPROACH OF PERFORMANCE PREDICTION ON NETWORKS OF WORKSTATIONS Xiaodong Zhang and Yongsheng Song 1. INTRODUCTION Networks of Workstations (NOW) have become important distributed
More informationPerformance of DB2 Enterprise-Extended Edition on NT with Virtual Interface Architecture
Performance of DB2 Enterprise-Extended Edition on NT with Virtual Interface Architecture Sivakumar Harinath 1, Robert L. Grossman 1, K. Bernhard Schiefer 2, Xun Xue 2, and Sadique Syed 2 1 Laboratory of
More informationMiddleware for the Use of Storage in Communication
Middleware for the Use of Storage in Communication Micah Beck, Dorian Arnold, Alessandro Bassi, Fran Berman, Henri Casanova, Jack Dongarra, Terry Moore, Graziano Obertelli, James Plank, Martin Swany, Sathish
More informationDEVELOPING A NEW MECHANISM FOR LOCATING AND MANAGING MOBILE AGENTS
Journal of Engineering Science and Technology Vol. 7, No. 5 (2012) 614-622 School of Engineering, Taylor s University DEVELOPING A NEW MECHANISM FOR LOCATING AND MANAGING MOBILE AGENTS AHMED Y. YOUSUF*,
More informationIntroduction to Distributed Systems (DS)
Introduction to Distributed Systems (DS) INF5040/9040 autumn 2009 lecturer: Frank Eliassen Frank Eliassen, Ifi/UiO 1 Outline What is a distributed system? Challenges and benefits of distributed system
More informationIntroduction. Distributed Systems IT332
Introduction Distributed Systems IT332 2 Outline Definition of A Distributed System Goals of Distributed Systems Types of Distributed Systems 3 Definition of A Distributed System A distributed systems
More informationLatency on a Switched Ethernet Network
Page 1 of 6 1 Introduction This document serves to explain the sources of latency on a switched Ethernet network and describe how to calculate cumulative latency as well as provide some real world examples.
More informationOpen Access LoBa-Min-Min-SPA: Grid Resources Scheduling Algorithm Based on Load Balance Using SPA
Send Orders for Reprints to reprints@benthamscience.net The Open Automation and Control Systems Journal, 2013, 5, 87-95 87 Open Access LoBa-Min-Min-SPA: Grid Resources Scheduling Algorithm Based on Load
More informationChapter 3. Design of Grid Scheduler. 3.1 Introduction
Chapter 3 Design of Grid Scheduler The scheduler component of the grid is responsible to prepare the job ques for grid resources. The research in design of grid schedulers has given various topologies
More informationFunctional Requirements for Grid Oriented Optical Networks
Functional Requirements for Grid Oriented Optical s Luca Valcarenghi Internal Workshop 4 on Photonic s and Technologies Scuola Superiore Sant Anna Pisa June 3-4, 2003 1 Motivations Grid networking connection
More informationScalable Computing: Practice and Experience Volume 8, Number 3, pp
Scalable Computing: Practice and Experience Volume 8, Number 3, pp. 301 311. http://www.scpe.org ISSN 1895-1767 c 2007 SWPS SENSITIVITY ANALYSIS OF WORKFLOW SCHEDULING ON GRID SYSTEMS MARíA M. LÓPEZ, ELISA
More informationEnergy-aware Scheduling for Frame-based Tasks on Heterogeneous Multiprocessor Platforms
Energy-aware Scheduling for Frame-based Tasks on Heterogeneous Multiprocessor Platforms Dawei Li and Jie Wu Department of Computer and Information Sciences Temple University Philadelphia, USA {dawei.li,
More informationThe Virtual Machine Aware SAN
The Virtual Machine Aware SAN What You Will Learn Virtualization of the data center, which includes servers, storage, and networks, has addressed some of the challenges related to consolidation, space
More informationCompiler Technology for Problem Solving on Computational Grids
Compiler Technology for Problem Solving on Computational Grids An Overview of Programming Support Research in the GrADS Project Ken Kennedy Rice University http://www.cs.rice.edu/~ken/presentations/gridcompilers.pdf
More informationEffective Load Balancing in Grid Environment
Effective Load Balancing in Grid Environment 1 Mr. D. S. Gawande, 2 Mr. S. B. Lanjewar, 3 Mr. P. A. Khaire, 4 Mr. S. V. Ugale 1,2,3 Lecturer, CSE Dept, DBACER, Nagpur, India 4 Lecturer, CSE Dept, GWCET,
More informationLIST BASED SCHEDULING ALGORITHM FOR HETEROGENEOUS SYSYTEM
LIST BASED SCHEDULING ALGORITHM FOR HETEROGENEOUS SYSYTEM C. Subramanian 1, N.Rajkumar 2, S. Karthikeyan 3, Vinothkumar 4 1 Assoc.Professor, Department of Computer Applications, Dr. MGR Educational and
More informationScheduling tasks sharing files on heterogeneous master-slave platforms
Scheduling tasks sharing files on heterogeneous master-slave platforms Arnaud Giersch 1, Yves Robert 2, and Frédéric Vivien 2 1: ICPS/LSIIT, UMR CNRS ULP 7005, Strasbourg, France 2: LIP, UMR CNRS ENS Lyon
More informationTabu search and genetic algorithms: a comparative study between pure and hybrid agents in an A-teams approach
Tabu search and genetic algorithms: a comparative study between pure and hybrid agents in an A-teams approach Carlos A. S. Passos (CenPRA) carlos.passos@cenpra.gov.br Daniel M. Aquino (UNICAMP, PIBIC/CNPq)
More informationA Heuristic Based Load Balancing Algorithm
International Journal of Computational Engineering & Management, Vol. 15 Issue 6, November 2012 www..org 56 A Heuristic Based Load Balancing Algorithm 1 Harish Rohil, 2 Sanjna Kalyan 1,2 Department of
More information