I/O in the Gardens Non-Dedicated Cluster Computing Environment
Paul Roe and Siu Yuen Chan
School of Computing Science, Queensland University of Technology, Australia

Abstract

Gardens is an integrated programming language and system designed to support parallel computing across non-dedicated cluster computers, in particular networks of PCs. To utilise non-dedicated machines a program must adapt to those currently available. In Gardens this is realised by over-decomposing a program into more tasks than processors, and migrating tasks to implement adaptation. Communication in Gardens is achieved via a lightweight form of remote method invocation. Furthermore, I/O may be efficiently achieved by the same mechanism. All that is required is to support stable tasks which are not migrated; these are effectively bound to resources such as file systems. The main contribution of this paper is to show how I/O may be achieved in an adaptive system utilising task migration to harness the power of non-dedicated cluster computers.

1 Introduction

In the aggregate, networks of workstations represent a huge and cheap unused computing resource. By their very nature such non-dedicated cluster computers are dynamic. The workstations available to a computation will typically change during the run of a program as workstation users come and go. Thus programs must adapt to the changing availability of workstations. The Gardens system [6] is an integrated programming language and system targeted at non-dedicated cluster computers. The goals of Gardens are: adaptation, safety, abstraction and performance (ASAP!). These are realised in part by a modern object oriented programming language, Mianjin [5], a derivative of Pascal. Gardens utilises task migration to realise adaptation. A program is over-decomposed into more tasks than processors and tasks are migrated in response to changing workstation loads. This adaptation is transparent to the programmer.
Transparent task migration entails location transparent communication between tasks. Communication between Gardens tasks is achieved through a virtual shared object space. Tasks may perform remote method calls on objects belonging to other tasks. Such method calls are lightweight and asynchronous. Until recently I/O in Gardens has been handled in an ad hoc manner. The difficulty with I/O is that tasks are mobile and hence it must be performed in a location transparent manner, in a similar way to communication. However, standard file handles etc. are static and cannot be migrated with tasks. Furthermore, it may be desirable to perform strictly local I/O, for example to open a temporary file or to utilise a file which for performance reasons has been replicated across all machines. The main contribution of this paper is to show how efficient I/O may be achieved in a system utilising task migration to harness the power of non-dedicated cluster computers. In keeping with the Gardens philosophy the mechanisms are simple and enable efficient I/O abstractions to be created. The remainder of this paper is organised as follows: the next section describes the Gardens programming language Mianjin and in particular its support for a virtual shared object space. Section 3 describes the basic mechanisms and techniques for supporting I/O in an adaptive setting. Section 4 describes how data can be locally cached and how strictly local I/O resources can be utilised. Some preliminary performance figures are reported in Section 5. Section 6 presents related work, and the final section discusses the work and future directions.

2 Overview of Mianjin

Gardens is an integrated programming language (Mianjin) and system to support parallel computation across networks of workstations. Programs are over-decomposed into more tasks than processors and task migration is used to implement load balancing. Multiple tasks are supported within a single operating system process.
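The over-decomposition and migration scheme can be illustrated with a small sketch, in Python rather than Mianjin, and with invented names (TaskPool, processor_leaves) used purely for illustration: tasks outnumber processors, and when a workstation becomes unavailable its tasks are migrated to the survivors.

```python
# Illustrative sketch (not Gardens code): over-decomposition means
# tasks outnumber processors; adaptation migrates tasks when the
# set of available processors changes.

class TaskPool:
    def __init__(self, tasks, processors):
        self.assignment = {}               # task -> processor
        self.processors = list(processors)
        for i, task in enumerate(tasks):
            # round-robin initial placement: more tasks than processors
            self.assignment[task] = self.processors[i % len(self.processors)]

    def processor_leaves(self, proc):
        """A workstation user returns: migrate that machine's tasks elsewhere."""
        self.processors.remove(proc)
        orphans = [t for t, p in self.assignment.items() if p == proc]
        for i, task in enumerate(orphans):
            self.assignment[task] = self.processors[i % len(self.processors)]

pool = TaskPool(tasks=[f"t{i}" for i in range(8)], processors=["P0", "P1", "P2"])
pool.processor_leaves("P1")
# every task now runs on a surviving processor
assert set(pool.assignment.values()) <= {"P0", "P2"}
```

Because the reassignment is invisible to the tasks themselves, this is the sense in which adaptation is transparent to the programmer.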
Gardens supports a virtual shared object space. Each object belongs to exactly one task which manages it; however, tasks may reference remote objects belonging to other tasks. The Mianjin language distinguishes strictly local object references from potentially global ones by labelling the latter GLOBAL. A simple example demonstrates the key idea of global objects:

TYPE Acc = POINTER TO RECORD count, sum: INTEGER END;

(* a remote method *)
GLOBAL PROCEDURE (self: Acc) Add (s: INTEGER);
BEGIN
  self.sum := self.sum + s;
  self.count := self.count - 1;
  (* if last result unblock master task *)
  IF self.count = 0 THEN Unblock END
END Add;

POLL PROCEDURE Worker (gsum: GLOBAL Acc);
VAR localval: INTEGER;
BEGIN
  (* expensive calculation of localval *)
  ...
  (* global method invocation *)
  gsum.Add(localval)
END Worker;

POLL PROCEDURE Master;
VAR acc: Acc; i: INTEGER;
BEGIN
  NEW(acc);
  acc.sum := 0;
  acc.count := NTasks;
  (* create worker tasks: Worker(acc) *)
  FOR i := 1 TO NTasks DO Fork(Worker, acc) END;
  (* wait for all results *)
  WHILE acc.count # 0 DO Block END;
  ...
END Master;

The example implements a form of summation in which a master task accumulates the sum of values contributed by a set of worker tasks. A global object (acc), managed by the master task, accumulates the sum. The master task has full access to the object. Worker tasks can only access acc via its global methods, in this case Add. When a worker task invokes the global method Add, the actual parameters and the method index are communicated to the processor holding the object; there the method is invoked locally on the object. The master task owning and managing the global object (acc) blocks waiting for all local values to be contributed to the object. Local objects, the default, are always located in their referring task's heap. Global objects may be either located in a different task or in the referring task's heap.
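The accumulator example can be mimicked in ordinary Python, purely as an illustrative analogue: threads stand in for Gardens tasks, and a condition variable stands in for Block/Unblock. None of these names come from Gardens itself.

```python
# Python analogue of the Mianjin accumulator (illustrative only).
# Each worker "invokes" add; the master blocks until every
# contribution has arrived, as the Mianjin master does with Block.
import threading

class Acc:
    def __init__(self, ntasks):
        self.sum = 0
        self.count = ntasks
        self.cond = threading.Condition()

    def add(self, s):                    # the "global method"
        with self.cond:
            self.sum += s
            self.count -= 1
            if self.count == 0:          # last result: unblock the master
                self.cond.notify()

def worker(gsum, value):
    # stands in for the expensive local calculation of localval
    gsum.add(value)

acc = Acc(ntasks=4)
threads = [threading.Thread(target=worker, args=(acc, v)) for v in (1, 2, 3, 4)]
for t in threads:
    t.start()
with acc.cond:                           # master blocks for all results
    while acc.count != 0:
        acc.cond.wait()
assert acc.sum == 10
```

Note what the analogue cannot show: in Gardens the invocation is asynchronous and may cross processors, whereas here all "tasks" share one address space.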
Furthermore, a global object may be located in a task on the same processor as the referring task, or on a different processor. Thus Mianjin supports location transparent communication via global objects and their associated global methods. Global object references are valid across all machines and hence location independent. This is necessary since tasks may be migrated between processors at run time, hence global object references must remain valid.

3 I/O and Processor Bound Tasks

Since I/O is similar to intertask communication it is natural to try using intertask communication mechanisms, i.e. global objects, to support I/O. However, unlike tasks, I/O is usually bound to some specific resource such as a file server or particular machine. Such resources are not mobile and should not be load balanced through migration. (Note that if it does make sense to migrate a file with a task then this can easily be achieved.) What is required is a special kind of object which is bound to a resource and which is static, i.e. does not get migrated. To achieve this we extend Gardens with processor bound tasks, which are never migrated from their host machine. Only one such task is required per machine. Objects which are allocated in processor bound tasks are standard global objects except that they are not migrated. We term these processor bound objects. Initially each processor bound task is seeded with a root I/O object, a global object, which is used for initiating all I/O. All tasks are given access to these root I/O objects. Root I/O objects are used for creating other processor bound objects corresponding to resource handles for performing I/O. For example, typical root I/O object operations support opening and creating files. These operations result in file objects (processor bound objects) which support read and write operations. More sophisticated root I/O object operations are used for creating custom resource bound, i.e. processor bound, objects.
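A minimal sketch of this arrangement follows, in Python and with hypothetical names (RootIO, BoundFile); the paper does not give the actual interface. Each processor hosts one root I/O object, and opening a file yields a processor bound object whose reads are serviced on the hosting processor wherever the calling task happens to be.

```python
# Sketch of processor bound objects (hypothetical names, not the
# real Gardens API). A root I/O object lives on one processor and
# never migrates; open() creates further bound objects there.

class BoundFile:
    """A processor bound object wrapping an open file."""
    def __init__(self, host, data):
        self.host = host          # the processor this object is bound to
        self.data = data
        self.pos = 0

    def read(self, n):
        # in Gardens this would be a global method invocation,
        # executed on self.host regardless of the caller's location
        chunk = self.data[self.pos:self.pos + n]
        self.pos += n
        return chunk

class RootIO:
    """The seeded root I/O object, one per processor."""
    def __init__(self, host, files):
        self.host = host
        self.files = files        # stand-in for the local file system

    def open(self, name):
        return BoundFile(self.host, self.files[name])

root_a = RootIO("A", {"foo": b"hello world"})
f = root_a.open("foo")            # handle created on processor A
assert f.read(5) == b"hello"      # read serviced on processor A
assert f.host == "A"              # the handle stays bound to A
```

The point of the design is that BoundFile here is an ordinary global object; pinning it simply means allocating it in the non-migrating task.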
For example, see Figure 1 for a description of simple file opening and reading. Thus the only extension necessary to the Gardens system to support I/O in an adaptive setting is processor bound tasks. All objects such as root I/O objects and file objects are standard global objects. They are just allocated in a processor bound task and hence are not migrated. If a task is migrated the existing support for global objects ensures that references to processor bound objects remain valid; see Figure 2. The mechanism allows different I/O abstractions to be coded. Typically the standard OS I/O calls are exposed in a special unsafe interface which I/O objects utilise for
actually performing I/O. In general we do not expect programmers to code I/O objects themselves; rather we want to provide a library of such routines for the programmer to use. Nevertheless, an important part of our philosophy is that such abstractions should be programmable and not built into the system in some special way. This is necessary in order for the system to be truly extensible.

[Figure 1. File Opening and Reading: a mobile task on Processor B calls root.open("foo") and obj.read on the processor bound task of Processor A; the steps are 1 open, 2 OS open, 3 create obj, 4 return obj ref, 5 read, 6 OS read, 7 return data.]

[Figure 2. Task Migration: a task referencing a processor bound object on Processor A before migration, and the same reference remaining valid from Processor B after migration.]

From the programmer's perspective I/O is performed in the same way as intertask communication; this simplifies programming by economising on concepts. As described so far, remote access to I/O resources is rather naive. For serious use, caching of data must be employed. It is possible to cache data on a task by task basis; however, caching on a per processor basis is often best. The following section describes how this may be achieved.

4 Caching and Processor Bound Tasks

Accessing remote files and other resources is useful; however, sometimes the reverse is required. For efficiency it may be desirable to cache some data on the local processor and share it between all tasks on the same processor. For example, a file may have been replicated across all processors for efficiency, or it may be desirable to create a temporary file strictly locally. This goes against the idea of location transparency. However, as described in [4], it is possible to safely make use of location information to optimise communication; the same can be done for I/O. For example, it is safe to test a global object reference to determine which processor hosts the object. Using such techniques a task may select a local resource to use from a number of I/O resources represented by global objects. Since all the resources are represented by global objects such a test is safe and represents purely an optimisation. In particular, there is no way for a task to access a local resource which becomes unavailable if the task is migrated: all that will happen is that the task will access the resource remotely, which will be inefficient but correct. Thus I/O requests can be sent to a local proxy object which will route requests to the local I/O resource. This is shown in Figure 3.

[Figure 3. Local Resource Map: a proxy object (a processor bound object) holding a resource map routes resource requests from Processors 1..n to local resources 1..n.]

The two techniques may be combined so that remote data may be cached locally for access by all local tasks. Once again we are able to build sophisticated I/O abstractions using a few simple primitives. Furthermore, we are able to utilise locality information to optimise I/O in a safe fashion.

5 Performance

We have some preliminary performance figures based on simple comparisons of standard Unix I/O and I/O using our processor bound objects.
[Table: average times for the operations create, open, close, read and write under native Unix I/O, global object (GO) I/O on the same processor, and GO I/O on a different processor; the numeric values did not survive transcription.]

All times are average times in microseconds; reading and writing used a 1000 byte block, and file names were 16 bytes long. The experiments were performed on two Sun SparcStation-4s connected via a Myrinet, using a custom version of AM for GO communication [6]. The difference in times between the native Unix I/O and processor bound object versions may be accounted for by task context switch overhead. Note that the processor bound object is managed by a separate task from the referring task. We anticipate improving this performance further. Remote access to processor bound objects requires both communication and context switching, leading to increased access times. For larger blocks the communication time dominates context switch times. The performance figures are reasonable; however, for serious use a more sophisticated implementation using caching is required.

6 Related Work

There are many approaches to distributed systems, and most are built using some kind of RPC, e.g. NFS, or a distributed object system. Metacomputing environments such as Legion [2] and Globus [1] also support distributed systems. The work presented here is unique in dealing with I/O in an environment supporting task migration and in supporting locality optimisation. Dual problems exist in mobile computing, where resources are mobile or dynamically configured, e.g. mobile TCP/IP or Jini [8]. However, these systems do not have the same efficiency constraints as a parallel system. There are also clustered web servers which support adaptive utilisation of resources through DNS based load balancing. It is possible to construct a similar system using Java RMI [7]; however, Java RMI is synchronous and does not support locality optimisations. Java RMI is also rather slow compared with our optimised global method invocation; it uses a costly serialisation protocol.
Like RMI, our system supports communication between heterogeneous platforms; we also support task migration between heterogeneous platforms. Other more remotely related systems include LDAP (the Lightweight Directory Access Protocol) [9], which provides a mechanism for connecting to, searching and modifying internet directories, and remote database access APIs such as OLE DB over DCOM. Parallel I/O is about high performance I/O rather than adaptive I/O, the goal of our work. However, we were influenced by MPI-IO, part of MPI-2.0 [3], in that we wanted to use a mechanism analogous to communication for I/O.

7 Discussion

The basic ideas presented in the previous sections have been implemented. This has provided a simple and effective means to perform I/O in Gardens. So far we have not implemented sophisticated caching mechanisms. Generally, unless special facilities are required, I/O in a non-dedicated cluster is usually best performed by the existing distributed file system, e.g. NFS if it exists, since such systems have been heavily optimised. Nevertheless, interfacing with such systems still requires the use of processor bound objects since NFS file handles are not directly migrable. We have not addressed issues of parallel I/O. This is a complex issue for non-dedicated clusters. Since the set of available workstations changes over time, some form of redundancy is required. A simple method is to replicate all files on all processors; however, this is only valid for small data sets. Another issue concerns interactive I/O. Tasking in Gardens is non-preemptive; this is fine for non-interactive I/O. However, for interactive I/O the scheduling of tasks becomes more important. Some preliminary investigation has been started in this area. Related to scheduling is the issue of blocking I/O. The OS typically blocks certain I/O requests until they have been completed.
In Gardens, rather than the OS blocking the current OS thread and running another, we really want to perform a Gardens block and reschedule another Gardens task. Finally, since our system is designed for non-dedicated clusters, if an interactive workstation hosts a file then remote accesses will potentially disturb the interactive user. If I/O intensive computation is required then either resource replication is necessary or some kind of dedicated system, e.g. a file server, must be utilised. Providing the programmer with a uniform model for communication and I/O has been an important achievement of this work. In addition, it came as a pleasant surprise to us that little additional infrastructure is necessary to support I/O in Gardens.

Acknowledgements

We would like to thank other Gardeners for their help and useful discussions concerning I/O in Gardens. This study has been supported by the Gardens research project of the Programming Languages and Systems Research Centre at QUT.
References

[1] Globus.
[2] Legion.
[3] MPI Forum. MPI-2.0.
[4] P. Roe. Adaptive synchronisation: Optimising the locality of collective communications in an adaptive setting. To appear in: Sixth Australasian Conference on Parallel and Real-Time Systems (PART 99), Melbourne, Australia, Nov. Springer.
[5] P. Roe and C. Szyperski. Mianjin is Gardens Point: A parallel language taming asynchronous communication. In Fourth Australasian Conference on Parallel and Real-Time Systems (PART 97), Newcastle, Australia, Sept. Springer.
[6] P. Roe and C. Szyperski. The Gardens approach to adaptive parallel computing. In R. Buyya, editor, Cluster Computing, volume 1. Prentice Hall.
[7] Sun Microsystems. Java RMI.
[8] Sun Microsystems. Jini.
[9] P. Taylor. Introducing LDAP. Windows NT Systems, pages 47-51, Dec.
More informationGustavo Alonso, ETH Zürich. Web services: Concepts, Architectures and Applications - Chapter 1 2
Chapter 1: Distributed Information Systems Gustavo Alonso Computer Science Department Swiss Federal Institute of Technology (ETHZ) alonso@inf.ethz.ch http://www.iks.inf.ethz.ch/ Contents - Chapter 1 Design
More informationChapter 4 Threads, SMP, and
Operating Systems: Internals and Design Principles, 6/E William Stallings Chapter 4 Threads, SMP, and Microkernels Dave Bremer Otago Polytechnic, N.Z. 2008, Prentice Hall Roadmap Threads: Resource ownership
More informationBEAWebLogic Server and WebLogic Express. Programming WebLogic JNDI
BEAWebLogic Server and WebLogic Express Programming WebLogic JNDI Version 10.0 Document Revised: March 30, 2007 Contents 1. Introduction and Roadmap Document Scope and Audience.............................................
More informationCreating and Running Mobile Agents with XJ DOME
Creating and Running Mobile Agents with XJ DOME Kirill Bolshakov, Andrei Borshchev, Alex Filippoff, Yuri Karpov, and Victor Roudakov Distributed Computing & Networking Dept. St.Petersburg Technical University
More informationUPnP Services and Jini Clients
UPnP Services and Jini Clients Jan Newmarch School of Network Computing Monash University jan.newmarch@infotech.monash.edu.au Abstract UPnP is middleware designed for network plug and play. It is designed
More informationDISTRIBUTED SYSTEMS [COMP9243] Lecture 3b: Distributed Shared Memory DISTRIBUTED SHARED MEMORY (DSM) DSM consists of two components:
SHARED ADDRESS SPACE DSM consists of two components: DISTRIBUTED SYSTEMS [COMP9243] ➀ Shared address space ➁ Replication and consistency of memory objects Lecture 3b: Distributed Shared Memory Shared address
More informationOperating System. Chapter 4. Threads. Lynn Choi School of Electrical Engineering
Operating System Chapter 4. Threads Lynn Choi School of Electrical Engineering Process Characteristics Resource ownership Includes a virtual address space (process image) Ownership of resources including
More informationINTRODUCTION TO Object Oriented Systems BHUSHAN JADHAV
INTRODUCTION TO Object Oriented Systems 1 CHAPTER 1 Introduction to Object Oriented Systems Preview of Object-orientation. Concept of distributed object systems, Reasons to distribute for centralized objects.
More informationDiagram of Process State Process Control Block (PCB)
The Big Picture So Far Chapter 4: Processes HW Abstraction Processor Memory IO devices File system Distributed systems Example OS Services Process management, protection, synchronization Memory Protection,
More informationOS Design Approaches. Roadmap. OS Design Approaches. Tevfik Koşar. Operating System Design and Implementation
CSE 421/521 - Operating Systems Fall 2012 Lecture - II OS Structures Roadmap OS Design and Implementation Different Design Approaches Major OS Components!! Memory management! CPU Scheduling! I/O Management
More informationChapter 11: Implementing File
Chapter 11: Implementing File Systems Chapter 11: Implementing File Systems File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management Efficiency
More informationDistributed File Systems Issues. NFS (Network File System) AFS: Namespace. The Andrew File System (AFS) Operating Systems 11/19/2012 CSC 256/456 1
Distributed File Systems Issues NFS (Network File System) Naming and transparency (location transparency versus location independence) Host:local-name Attach remote directories (mount) Single global name
More informationChapter 3 Parallel Software
Chapter 3 Parallel Software Part I. Preliminaries Chapter 1. What Is Parallel Computing? Chapter 2. Parallel Hardware Chapter 3. Parallel Software Chapter 4. Parallel Applications Chapter 5. Supercomputers
More informationThreads Chapter 5 1 Chapter 5
Threads Chapter 5 1 Chapter 5 Process Characteristics Concept of Process has two facets. A Process is: A Unit of resource ownership: a virtual address space for the process image control of some resources
More informationCluster Computing with Single Thread Space
Cluster Computing with Single Thread Space Francis Lau, Matchy Ma, Cho-Li Wang, and Benny Cheung Abstract To achieve single system image (SSI) for cluster computing is a challenging task since SSI is a
More informationThe Big Picture So Far. Chapter 4: Processes
The Big Picture So Far HW Abstraction Processor Memory IO devices File system Distributed systems Example OS Services Process management, protection, synchronization Memory Protection, management, VM Interrupt
More informationChapter 11: Implementing File Systems. Operating System Concepts 9 9h Edition
Chapter 11: Implementing File Systems Operating System Concepts 9 9h Edition Silberschatz, Galvin and Gagne 2013 Chapter 11: Implementing File Systems File-System Structure File-System Implementation Directory
More informationFile-System Structure
Chapter 12: File System Implementation File System Structure File System Implementation Directory Implementation Allocation Methods Free-Space Management Efficiency and Performance Recovery Log-Structured
More informationHTRC Data API Performance Study
HTRC Data API Performance Study Yiming Sun, Beth Plale, Jiaan Zeng Amazon Indiana University Bloomington {plale, jiaazeng}@cs.indiana.edu Abstract HathiTrust Research Center (HTRC) allows users to access
More informationProcesses, Threads, SMP, and Microkernels
Processes, Threads, SMP, and Microkernels Slides are mainly taken from «Operating Systems: Internals and Design Principles, 6/E William Stallings (Chapter 4). Some materials and figures are obtained from
More informationProcess Concept. Chapter 4: Processes. Diagram of Process State. Process State. Process Control Block (PCB) Process Control Block (PCB)
Chapter 4: Processes Process Concept Process Concept Process Scheduling Operations on Processes Cooperating Processes Interprocess Communication Communication in Client-Server Systems An operating system
More informationChapter 4: Processes
Chapter 4: Processes Process Concept Process Scheduling Operations on Processes Cooperating Processes Interprocess Communication Communication in Client-Server Systems 4.1 Process Concept An operating
More informationTHE IMPACT OF E-COMMERCE ON DEVELOPING A COURSE IN OPERATING SYSTEMS: AN INTERPRETIVE STUDY
THE IMPACT OF E-COMMERCE ON DEVELOPING A COURSE IN OPERATING SYSTEMS: AN INTERPRETIVE STUDY Reggie Davidrajuh, Stavanger University College, Norway, reggie.davidrajuh@tn.his.no ABSTRACT This paper presents
More informationYi Shi Fall 2017 Xi an Jiaotong University
Threads Yi Shi Fall 2017 Xi an Jiaotong University Goals for Today Case for Threads Thread details Case for Parallelism main() read_data() for(all data) compute(); write_data(); endfor main() read_data()
More informationProgramming with MPI
Programming with MPI p. 1/?? Programming with MPI One-sided Communication Nick Maclaren nmm1@cam.ac.uk October 2010 Programming with MPI p. 2/?? What Is It? This corresponds to what is often called RDMA
More information! How is a thread different from a process? ! Why are threads useful? ! How can POSIX threads be useful?
Chapter 2: Threads: Questions CSCI [4 6]730 Operating Systems Threads! How is a thread different from a process?! Why are threads useful?! How can OSIX threads be useful?! What are user-level and kernel-level
More informationComputer System Overview OPERATING SYSTEM TOP-LEVEL COMPONENTS. Simplified view: Operating Systems. Slide 1. Slide /S2. Slide 2.
BASIC ELEMENTS Simplified view: Processor Slide 1 Computer System Overview Operating Systems Slide 3 Main Memory referred to as real memory or primary memory volatile modules 2004/S2 secondary memory devices
More informationLecture 4: Threads; weaving control flow
Lecture 4: Threads; weaving control flow CSE 120: Principles of Operating Systems Alex C. Snoeren HW 1 Due NOW Announcements Homework #1 due now Project 0 due tonight Project groups Please send project
More informationDistributed Systems Operation System Support
Hajussüsteemid MTAT.08.009 Distributed Systems Operation System Support slides are adopted from: lecture: Operating System(OS) support (years 2016, 2017) book: Distributed Systems: Concepts and Design,
More informationProcess Characteristics. Threads Chapter 4. Process Characteristics. Multithreading vs. Single threading
Process Characteristics Threads Chapter 4 Reading: 4.1,4.4, 4.5 Unit of resource ownership - process is allocated: a virtual address space to hold the process image control of some resources (files, I/O
More informationThreads Chapter 4. Reading: 4.1,4.4, 4.5
Threads Chapter 4 Reading: 4.1,4.4, 4.5 1 Process Characteristics Unit of resource ownership - process is allocated: a virtual address space to hold the process image control of some resources (files,
More informationOpenACC 2.6 Proposed Features
OpenACC 2.6 Proposed Features OpenACC.org June, 2017 1 Introduction This document summarizes features and changes being proposed for the next version of the OpenACC Application Programming Interface, tentatively
More informationCondor and BOINC. Distributed and Volunteer Computing. Presented by Adam Bazinet
Condor and BOINC Distributed and Volunteer Computing Presented by Adam Bazinet Condor Developed at the University of Wisconsin-Madison Condor is aimed at High Throughput Computing (HTC) on collections
More informationUsage of LDAP in Globus
Usage of LDAP in Globus Gregor von Laszewski and Ian Foster Mathematics and Computer Science Division Argonne National Laboratory, Argonne, IL 60439 gregor@mcs.anl.gov Abstract: This short note describes
More informationCHAPTER - 4 REMOTE COMMUNICATION
CHAPTER - 4 REMOTE COMMUNICATION Topics Introduction to Remote Communication Remote Procedural Call Basics RPC Implementation RPC Communication Other RPC Issues Case Study: Sun RPC Remote invocation Basics
More information!! How is a thread different from a process? !! Why are threads useful? !! How can POSIX threads be useful?
Chapter 2: Threads: Questions CSCI [4 6]730 Operating Systems Threads!! How is a thread different from a process?!! Why are threads useful?!! How can OSIX threads be useful?!! What are user-level and kernel-level
More informationDistributed Systems Principles and Paradigms. Chapter 01: Introduction. Contents. Distributed System: Definition.
Distributed Systems Principles and Paradigms Maarten van Steen VU Amsterdam, Dept. Computer Science Room R4.20, steen@cs.vu.nl Chapter 01: Version: February 21, 2011 1 / 26 Contents Chapter 01: 02: Architectures
More informationWhat s in a traditional process? Concurrency/Parallelism. What s needed? CSE 451: Operating Systems Autumn 2012
What s in a traditional process? CSE 451: Operating Systems Autumn 2012 Ed Lazowska lazowska @cs.washi ngton.edu Allen Center 570 A process consists of (at least): An, containing the code (instructions)
More informationLast Class: RPCs. Today:
Last Class: RPCs RPCs make distributed computations look like local computations Issues: Parameter passing Binding Failure handling Lecture 4, page 1 Today: Case Study: Sun RPC Lightweight RPCs Remote
More informationProcesses. CSE 2431: Introduction to Operating Systems Reading: Chap. 3, [OSC]
Processes CSE 2431: Introduction to Operating Systems Reading: Chap. 3, [OSC] 1 Outline What Is A Process? Process States & PCB Process Memory Layout Process Scheduling Context Switch Process Operations
More informationExample File Systems Using Replication CS 188 Distributed Systems February 10, 2015
Example File Systems Using Replication CS 188 Distributed Systems February 10, 2015 Page 1 Example Replicated File Systems NFS Coda Ficus Page 2 NFS Originally NFS did not have any replication capability
More informationINSTITUTE OF AERONAUTICAL ENGINEERING (Autonomous) Dundigal, Hyderabad
Course Name Course Code Class Branch INSTITUTE OF AERONAUTICAL ENGINEERING (Autonomous) Dundigal, Hyderabad -500 043 COMPUTER SCIENCE AND ENGINEERING TUTORIAL QUESTION BANK 2015-2016 : DISTRIBUTED SYSTEMS
More information