PC cluster as a platform for parallel applications

AMANY ABD ELSAMEA, HESHAM ELDEEB, SALWA NASSAR
Computer & System Department
Electronic Research Institute
National Research Center, Dokki, Giza
Cairo, EGYPT

Abstract: - The complexity and size of the current generation of supercomputers have led to the emergence of cluster computing, which is characterized by its scalability, flexibility of configuration and upgrade, high availability, and savings in cost and time. This paper explains the importance of cluster computing and its advantages and disadvantages. It also presents the types of schedulers and the steps of building a cluster. The work herein evaluates this cluster with two case studies: matrix multiplication as a simple case study and Sobel edge detection as a computation-heavy one.

Key-Words: - cluster computing, middleware, latency, scheduler, image processing, execution time, efficiency

1 Introduction
The distribution and sharing of resources allows systems such as supercomputers and large databases to be built at much lower cost. Moreover, requirements for high availability and fault tolerance can in many cases only be realized by a distributed system [1]. Distributed systems consist of several computers that communicate with each other by message passing over a communication network and carry out some cooperative activity. They are developed on top of existing networking and operating system software, and they are not easy to build and maintain. To simplify their development and maintenance, a new layer of software called middleware has been developed. This layer provides high-level services, abstracting over low-level details that may differ between platforms, and allows multiple processes running on one or more machines to interact transparently across a network. If the distributed resources happen to be managed by a single, global, centralized scheduling system, then the system is a cluster. A Linux cluster is a collection of interconnected parallel or distributed machines that can be viewed and used as a single, unified computing resource. Clusters can consist of homogeneous or heterogeneous collections of von Neumann (serial) and parallel architecture computers, or even sub-clusters [2]. A cluster system can be viewed as being made up of four major components, two hardware and two software. The two hardware components are the nodes that perform the work and the network that interconnects the nodes to form a single system. The two software components are the collection of tools used to develop parallel application programs and the software environment for managing the parallel resources of the cluster [3].

The paper is organized as follows: Section 2 explains the importance of cluster computing and its advantages and disadvantages. Section 3 presents the architecture of a Linux cluster. Section 4 discusses the types of schedulers. Section 5 explains the steps of building our Linux cluster and the performance evaluation of two case studies on the cluster. Finally, the conclusion is given in Section 6.

2 Cluster computing advantages and disadvantages
As high-performance local and wide area networks have become less expensive, and as the price of commodity computers has dropped, it is now possible to connect a number of relatively cheap computers with a network for the widespread, efficient sharing of data, producing a cluster, which is a type of distributed system.
Cluster parallel processing offers several important advantages:
- Cluster computing can scale to very large systems.
- Better price/performance ratios.
- High availability: clusters provide multiple redundant identical resources that, if managed correctly, can provide continuous system operation through graceful degradation even as individual components fail.
- Flexibility of configuration and upgrade [3].

Although clusters have several advantages, they have disadvantages too. Generally, network hardware is not designed for parallel processing: latency is typically very high and bandwidth relatively low compared to SMP (Symmetric Multiprocessor) systems and attached processors. For example, SMP latency is generally no more than a few microseconds, but cluster latency is commonly hundreds or thousands of microseconds. SMP communication bandwidth is often more than 100 MBytes/second, whereas even the fastest ATM network connections are more than five times slower.

3 Linux cluster architecture
A cluster is a group of computers which work together toward a final goal. The architecture of a PC cluster is shown in Fig.1, where the first layer is the hardware: the nodes that perform the work and the network that interconnects them. The operating system used in this cluster is Linux, the most popular open-source operating system in the world. Its success is due to its stability, availability, and straightforward design; it can easily be modified and rearranged for whatever task. While most Linux clusters use a local file system for scratch data, it is often convenient to use network-based or distributed file systems to share data. Most common and most popular are NFS (Network File System), which allows remote hosts to mount partitions on a particular system and use them as though they were local file systems, and NIS (Network Information Services), which allows one to set up a server and then configure a number of client machines that ask that server whether the person logging into the client machine is allowed to do so. The advantage is that usernames and passwords are stored in one place. NIS and NFS represent the middleware layer.

Message-passing libraries are implemented on HPC (High Performance Computing) systems using two separate standards, PVM (Parallel Virtual Machine) and MPI (Message Passing Interface). In many ways, MPI and PVM are similar: each defines portable, high-level functions that are used by a group of processes to make contact and exchange data without having to be aware of the communication medium. Support for each is available over the Internet at low or no cost, each supports C and Fortran 77, and each provides automatic conversion between different representations of the same kind of data, so that processes can be distributed over a heterogeneous computer network. The difference between MPI and PVM is in the support for the topology of the communicating processes. In MPI, the group size and topology are fixed when the group is created, which permits low-overhead group operations. In PVM, group composition is dynamic, which requires the use of a group server process and causes more overhead in common group-related operations. Other differences are found in the design details of the two interfaces. MPI, for example, supports asynchronous and multiple message traffic, so that a process can wait for any of a list of message-receive calls to complete and can initiate concurrent sending and receiving [4].
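The paper does not reproduce any library calls; as a minimal sketch of the message-passing style (our illustration, assuming any standard MPI implementation such as MPICH), the following C program sends one integer from process 0 to process 1:

    /* hello_mpi.c - minimal MPI point-to-point sketch (C99).
     * Compile: mpicc -std=c99 hello_mpi.c -o hello_mpi
     * Run:     mpirun -np 2 ./hello_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, value;

        MPI_Init(&argc, &argv);                /* join the process group     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* my id within the group     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* group size, fixed at start */

        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            printf("process 0 of %d sent %d\n", size, value);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("process 1 of %d received %d\n", size, value);
        }

        MPI_Finalize();
        return 0;
    }

Note how the communicator MPI_COMM_WORLD embodies the fixed group size and topology contrasted with PVM's dynamic groups above.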
Providing a cluster requires software to effectively control job and system resources, balance load across the network, maximize the use of shared resources, and make sure that everyone can effectively and equitably utilize those resources. This software is known as a Resource Manager (Job Scheduler). It takes job requests from user input or other means and schedules them to be run on the required number of nodes in the cluster. The next section discusses the different types of schedulers.

Fig.1 Cluster architecture (layers, top to bottom: Application; Scheduler; PVM/MPI; Middleware; Hardware)

4 Types of schedulers
There are a number of specialized scheduling software products available. These may be divided into batch queueing systems and extended batch systems. Batch queueing systems are designed for use on tightly interconnected clusters, which usually feature shared file systems. Extended batch systems, designed for use in loosely interconnected clusters, do not usually make assumptions about shared file systems, and often offer increased functionality over typical batch queueing systems.

Examples of batch queueing systems are DQS, GNQS, PBS, EASY, LSF, and LoadLeveler, while examples of extended batch systems are Condor, PRM, CCS, and Codine [2]. In our system, we chose PBS (Portable Batch System) since it provides many features and benefits to the cluster administrator:
(a) User interfaces: xpbs provides a graphical interface for submitting both batch and interactive jobs, querying job, queue, and system status, and tracking the progress of jobs. Also available is the OpenPBS command line interface (CLI), providing the same functionality as xpbs.
(b) Job priority: users can specify the priority of their jobs, and defaults can be provided at both the queue and system level.
(c) Job interdependency: PBS enables the user to define a wide range of interdependencies between batch jobs. Such dependencies include execution order, synchronization, and execution conditioned on the success or failure of a specified other job.
(d) Automatic file staging: PBS provides users with the ability to specify any files that need to be copied onto the execution host before the job runs, and any that need to be copied off after the job completes. The job will be scheduled to run only after the required files have been successfully transferred.
(e) Single or multiple queue support: PBS can be configured with as many queues as you wish.
(f) Multiple scheduling algorithms: with OpenPBS you can select the standard first-in, first-out scheduling or a more sophisticated scheduling algorithm; these and other features are discussed in [5],[6].
The next section describes the steps of building the cluster and its performance evaluation.

5 Building and performance evaluation of the cluster
The cluster consists of a number of nodes; one node is the master (server) and the other nodes are the slaves (clients), as shown in Fig.2. Building the cluster proceeds as described in the following steps:
1. Linux is used as the operating system.
2. The physical cluster network is built using the Network File System (NFS) and Network Information Services (NIS). The network server and client NFS & NIS setups are then checked.
3. MPI is installed and tested with several programs.
4. The Portable Batch System scheduler (PBS) is installed and configured on the server side and on the client side.
5. The Queue Manager is configured on the server, and we then submit a batch script to it (a sample script is sketched below).

Fig.2 Hardware architecture of the cluster (one master server and several clients connected by the network)
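As an illustration of step 5, a PBS batch script might look like the following sketch. This is our illustration, not the paper's actual script: the job name, queue name (workq), node count, and the mpirun launch line are assumptions that depend on the local installation.

    #!/bin/sh
    #PBS -N matmul              # job name (hypothetical)
    #PBS -l nodes=4             # request four nodes
    #PBS -l walltime=00:10:00   # maximum allowed run time
    #PBS -q workq               # queue name (site-specific assumption)
    cd $PBS_O_WORKDIR           # run from the directory the job was submitted in
    mpirun -np 4 ./matmul       # launch the MPI program on the allocated nodes

Such a script would be submitted with qsub and monitored with qstat, both part of the PBS command line interface mentioned in Section 4.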
For performance evaluation of this cluster, two case studies are implemented: matrix multiplication as a simple one and edge detection as a practical one. The performance metrics used to evaluate these two case studies are, first, the execution time and, second, the efficiency, which is defined by

E = S/P (2)

where E is the efficiency, S is the speedup, and P is the number of processors. The closer E is to one, the more perfectly parallel the task is at that level of parallelism; the closer to zero, the lower the degree of parallelism [7]. For example, a speedup of 3.2 on four processors gives E = 3.2/4 = 0.8.

5.1 Matrix multiplication case study
For parallelization of the matrix multiplication case study, the master distributes the data among the workers, which perform the actual multiplication in smaller blocks and send their respective results back to the master. We change the size of the matrices and record the execution time of the parallel program using different numbers of processors and different cluster layers, namely MPI and PBS. There is a dramatic reduction of execution time as the number of processors increases, as shown in Fig.3.

Fig.3 Execution time (MPI vs PBS) for the matrix multiplication case study using matrix size 500x500 on one to four processors
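The paper does not list its program; the following sketch (ours, not the authors' code) illustrates the row-block master/worker scheme just described, with the master (rank 0) broadcasting B, scattering row blocks of A, and gathering the result. It assumes the matrix size N is divisible by the number of processes.

    /* matmul_mw.c - row-block master/worker matrix multiply sketch (C99).
     * Compile: mpicc -std=c99 matmul_mw.c -o matmul   Run: mpirun -np 4 ./matmul
     */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #define N 500

    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int rows = N / size;                  /* assumes N divisible by size */
        double *A = NULL, *C = NULL;
        double *B    = malloc(N * N * sizeof(double));
        double *Ablk = malloc(rows * N * sizeof(double));
        double *Cblk = malloc(rows * N * sizeof(double));

        if (rank == 0) {                      /* master holds the full matrices */
            A = malloc(N * N * sizeof(double));
            C = malloc(N * N * sizeof(double));
            for (int i = 0; i < N * N; i++) { A[i] = 1.0; B[i] = 2.0; }
        }

        /* every worker needs all of B; A is split into row blocks */
        MPI_Bcast(B, N * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        MPI_Scatter(A, rows * N, MPI_DOUBLE,
                    Ablk, rows * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        for (int i = 0; i < rows; i++)        /* local block multiply */
            for (int j = 0; j < N; j++) {
                double s = 0.0;
                for (int k = 0; k < N; k++)
                    s += Ablk[i * N + k] * B[k * N + j];
                Cblk[i * N + j] = s;
            }

        /* master gathers the result blocks into C */
        MPI_Gather(Cblk, rows * N, MPI_DOUBLE,
                   C, rows * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            printf("C[0][0] = %f\n", C[0]);   /* expect 2*N = 1000 here */
            free(A); free(C);
        }
        free(B); free(Ablk); free(Cblk);
        MPI_Finalize();
        return 0;
    }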

Fig.4 and Fig.5 show the execution time and the efficiency, respectively, for the matrix multiplication case study using MPI running interactively and PBS with its default scheduler (first-in, first-out), applied on four processors. The execution time decreased when using PBS because job startup time is greatly decreased; hence, to use the resources of the cluster most efficiently, jobs that take more than the allowed CPU time (long jobs) should be executed as batch requests. In the case of MPI, the efficiency is very low for small matrix sizes because of the increased overhead of communication and synchronization among processors; the chance of performing efficient parallelism arises only when computations on large matrix sizes are distributed among the processors. Using PBS, the efficiency for both small and large matrix sizes is improved.

Fig.4 Execution time (MPI vs PBS) for the matrix multiplication case study on four processors, for matrix sizes 100x100 to 500x500

Fig.5 Efficiency (MPI vs PBS) for the matrix multiplication case study on four processors, for matrix sizes 100x100 to 500x500

5.2 Sobel edge detection case study
Computations for edge detection are performed on a pixel-by-pixel basis, with many arithmetic operations performed on each pixel. The complete detection of edges in a gray-level image is generally performed in three steps. First, the image is convolved with a derivative mask (operator mask) that produces a measure of intensity gradient. Second, a threshold operation is applied in which points contributing to edges are identified as those exceeding a set level of intensity gradient values. Third, the edge points are combined into coherent edges by applying a linking algorithm. For this study, the Sobel masks were chosen because of their smoothing and differencing effects. The Sobel operator performs a 2-D spatial gradient measurement on an image. Typically it is used to find the approximate absolute gradient magnitude at each point in an input grayscale image. The Sobel edge detector uses a pair of 3x3 convolution masks, one estimating the gradient in the x-direction (columns) and the other estimating the gradient in the y-direction (rows). A convolution mask is usually much smaller than the actual image; as a result, the mask is slid over the image, manipulating a square of pixels at a time [8],[9]. The Sobel masks are:

    Gx = [ -1  0  +1 ]        Gy = [ +1  +2  +1 ]
         [ -2  0  +2 ]             [  0   0   0 ]
         [ -1  0  +1 ]             [ -1  -2  -1 ]

The magnitude of the gradient is then calculated using the formula:

|G| = sqrt(Gx^2 + Gy^2) (3)

Parallel Sobel edge detection naturally uses a master-worker paradigm. A flow chart of the responsibilities of the master and workers is provided in Fig.6. The advantage of the non-working master, which is used in this case, is that as soon as a worker sends its results, the master can almost immediately receive and evaluate them. The master reads the image and then distributes it to the slaves. Each slave works on its part of the image, performs Sobel edge detection, and sends the sub-detected image back to the master, which gathers those subimages into the final detected image.

Fig.6 Parallel Sobel edge detection flow chart (master: input image data from file, send subimages to workers, receive results from workers, put the resulting detected image in a file; worker: receive image from master, calculate the number of rows in its subimage, perform Sobel edge detection on its part of the image, send the result to the master)
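The per-worker computation can be sketched as follows. This is our illustration, not the paper's code; it assumes an 8-bit grayscale image stored row-major and skips the one-pixel border for brevity.

    /* sobel.c - apply the Sobel operator to one grayscale image block (C99). */
    #include <math.h>

    /* in/out: row-major 8-bit grayscale buffers of size h x w */
    void sobel(const unsigned char *in, unsigned char *out, int w, int h)
    {
        static const int gx[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
        static const int gy[3][3] = { { 1, 2, 1}, { 0, 0, 0}, {-1,-2,-1} };

        for (int y = 1; y < h - 1; y++)        /* skip the one-pixel border */
            for (int x = 1; x < w - 1; x++) {
                int sx = 0, sy = 0;
                for (int i = -1; i <= 1; i++)  /* slide the 3x3 masks */
                    for (int j = -1; j <= 1; j++) {
                        int p = in[(y + i) * w + (x + j)];
                        sx += gx[i + 1][j + 1] * p;
                        sy += gy[i + 1][j + 1] * p;
                    }
                /* gradient magnitude, equation (3), clamped to 8 bits */
                int mag = (int)sqrt((double)(sx * sx + sy * sy));
                out[y * w + x] = (unsigned char)(mag > 255 ? 255 : mag);
            }
    }

In the master/worker decomposition described above, each worker would call this routine on its assigned row block before sending the result back to the master.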

Fig.7 shows two columns: column A represents images of different sizes, and column B represents the corresponding edge-detected images.

Fig.7 Images of different sizes (A) and their edge-detected counterparts (B) using Sobel edge detection

We then record the execution time of the program using images of different sizes applied on different numbers of processors. Fig.8 and Fig.9 show the execution time and the efficiency, respectively, using MPI running interactively and PBS, applied on four processors.

Fig.8 Execution time (MPI vs PBS) for the Sobel edge detection case study on four processors, for image sizes 256x256 to 700x700

Fig.9 Efficiency (MPI vs PBS) for the Sobel edge detection case study on four processors, for image sizes 256x256 to 700x700

From the above figures, the efficiency using PBS is more than 30% better than that of MPI. It is also clear that the execution time for this case study is smaller than that of the matrix multiplication case study: the matrix case study sends two matrices to the processors, which increases the communication time compared to the image case study.

6 Conclusion
We built a Linux cluster using a number of relatively cheap computers connected by a network to produce widespread, efficient sharing of resources. In this paper the performance of the PC cluster is evaluated by two case studies running on two different layers of the cluster. First, the case studies run on MPI interactively and show better results than serial processing. Second, running them under PBS gives better results still, because PBS uses the full resources of the cluster. The parallelization of the matrix multiplication and edge detection case studies decreases the execution time as the number of processors increases, which improves the efficiency. This paper showed that the cluster is an efficient platform for running computation-heavy applications, and that the cluster gives better price-to-performance ratios. As future work, this cluster will be extended over a WAN to form a Grid, which provides flexible, secure, coordinated resource sharing.

Acknowledgement: This work is partially funded by NSF project No. RAMSys: Collaborative Metacomputing System.

References:
[1] N. Nicolas and B. Skarup, Java Grid: Building a Grid Computer Engine with Jinni and Java, Bachelor Thesis in Computer Science, Distributed Systems, Aalborg University.
[2] H. A. James, Scheduling in Metacomputing Systems, PhD thesis, Department of Computer Science, University of Adelaide, July.
[3] T. Sterling, Beowulf Cluster Computing with Linux, MIT Press, Cambridge.
[4] D. Cortesi, A. Evans, W. Ferguson and J. Hartman, Topics in IRIX Programming, Silicon Graphics.
[5] V. Hazlewood, Cluster Computing: A Survey and Tutorial, SysAdmin, March.
[6] B. Bode, D. M. Halstead, R. Kendall and Z. Lei, The Portable Batch Scheduler and the Maui Scheduler on Linux Clusters, Proceedings of the 4th Annual Linux Showcase & Conference, Atlanta, October 2000.
[7] W. Ramadan, Performance Evaluation of Multithreaded Programming Over Distributed Memory Message Passing in a Multiprocessor Computer System, MSc thesis, Faculty of Engineering, Cairo University, Giza, Egypt, November.
[8] L. Hopwood, W. Miller and A. George, Parallel Implementation of the Hough Transform for the Extraction of Rectangular Objects, Proc. IEEE Southeastcon, IEEE cat. no. 96CH35880, April 1996.
[9] J. Barbosa, J. Tavares and A. J. Padilha, Parallel Image Processing System on a Cluster of Personal Computers, Vector and Parallel Processing, 4th International Conference, Porto, Portugal, June 2000.
