ECE454 Tutorial. June 16, 2009 (Material prepared by Evan Jones)



2 2. Consider the following function:

void strcpy(char* out, char* in) {
    while (*out++ = *in++);
}

which is invoked by the following code:

void main(void) {
    char buf[10] = "name";
    strcpy(buf+4, buf);
    cout << buf << endl;
}

What is the result of executing this code if the strcpy function is a remote procedure using copy/restore semantics? What is the result if it is a local procedure, using the standard C/C++ call-by-value semantics?

3 Q2: RPC Copy & Restore
Server copies buf into a local buffer:

void strcpy(char* out = char[n], char* in = "name") {
    while (*out++ = *in++);  // out = "name", in = "name"
}

Client stub copies this buffer back into buf+4:

// buf = [ n, a, m, e, 0, 0, 0, 0, 0, 0 ]
strcpy(buf+4, buf);
// buf = [ n, a, m, e, n, a, m, e, 0, 0 ]

Result: namename

4 Q2: Local Semantics

void strcpy(char* out = buf+4, char* in = buf) {
    while (*out++ = *in++);
}

The copy overwrites the null terminator before in reaches it, so the loop never sees a zero byte.
Result: infinite loop. Eventually, a write to a forbidden address will terminate the program with a segmentation fault.

5 3. Consider the following declaration in C: union { int a; char b; float c; } foo; At run-time there is no way to determine which of the entries in the foo union is valid. What implications does this have for RPC? What is the implication if instead of being char b; it was char* b?

6 Q3: Struct and Union Memory Layout

7 Q3: Unions Multiple data types occupy the same space. A union is as wide as its largest member. The type retrieved should be the type last stored. To marshal a union, we need to know its current type. Discriminated unions carry a tag indicating the current type.

8 Q3: RPC Problems
Send all three: marshals invalid values.
Just send the bits: what about different architectures? (Big or little endian, floating-point format, etc.)
Pointers (char* instead of char): marshalling may try to access invalid or inaccessible address space, or send an invalid address to the remote system.

9 4. We wish to determine some of the benefits and drawbacks of caching the result of a server address lookup in an RPC system. Consider a system in which a client requests the server address for a given procedure from a binder. The time to execute this request is τ_b. The client can then request execution of the procedure at the server, which takes a total time of τ_s. Hint: This is basically Project 1

10 4 (continued). If the client caches the server address, it does not need to look it up on subsequent RPCs. However, the server may be shut down for maintenance from time to time. As a result, the client must now consider the possibility that the RPC will not execute because the server address information it has cached is stale. To determine this, the client will simply have a timeout period τ_o > τ_s. If after invoking an RPC using a cached server address, the server has not responded within this timeout period, the client will presume the server is down, and will request that the binder look up a different server address.

11 Q4: Normal Request

12 Q4: Cached Request

13 Q4: Cached Request Timeout

14 Q4: Summary
System like Project 1
Binder lookup time: τ_b
Server request time: τ_s
Client timeout: τ_o > τ_s

15 4. a) If the client does no caching, what is the minimum and maximum amount of time it takes to execute an RPC?

16 4. a) If the client does no caching, what is the minimum and maximum amount of time it takes to execute an RPC? Max time = Min time = τ_b + τ_s

17 4. b) If the client does caching, what is the minimum and maximum amount of time it takes to execute an RPC?

18 4. b) If the client does caching, what is the minimum and maximum amount of time it takes to execute an RPC?
Min time = τ_s
Max time = τ_o + τ_b + τ_s

19 4. c) Suppose a client executes k RPCs before the server is shutdown for maintenance and another server takes over. What is the average time to execute an RPC using the caching scheme?

20 4. c)
1st = τ_b + τ_s
2nd = τ_s
...
kth = τ_s
(k+1)th = τ_o + τ_b + τ_s
(k+2)th = same as the 2nd
Total time for k requests = τ_o + τ_b + kτ_s
(This ignores the initial k requests because we assume an infinite series of requests.)
Average time = τ_s + (τ_o + τ_b)/k

21 4. d) For what value of k will the caching scheme outperform the non-caching scheme?

22 4. d) For what value of k will the caching scheme outperform the non-caching scheme?
Non-caching = τ_b + τ_s
Caching = τ_s + (τ_o + τ_b)/k
Caching wins when τ_s + (τ_o + τ_b)/k < τ_b + τ_s, i.e. (τ_o + τ_b)/k < τ_b, so:
k > τ_o/τ_b + 1

23 1. Identify four ways in which a Remote Procedure Call is different from a Local Procedure Call, and what is the significance of those differences.

24 Q1: Parameter Passing RPC: copy and restore. Local: call by value or call by reference. Remote references: must use code to access data remotely. Significance: results can be different when using RPC or local calls.

25 Q1: Failure Local calls: failures are only due to local bugs. RPC: can fail due to network or server problems. Significance: the client must have additional error handling for RPC calls.

26 Q1: Performance Parameters must be marshaled, and the server must be accessed over the network. Significance: RPC calls have much more overhead than local calls.

27 Q1: Performance Workaround RPC has a lot of overhead, but processing on the server occurs at full speed. Conclusion: RMI interfaces should do a lot of work per call.

28 Q1: Global Resources Local procedures all share the same global state. RPC calls have no access to global shared resources. Example: if an operation depends on the value of the computer's clock, it may not work as an RPC.

29 Q1: At-most-once or At-least-once semantics The client has sent the request, and the server crashes. What should the client do? The server might have executed the call before it crashed, but the client has no way to tell.

30 Q1: At-most-once The RPC call fails with an error. It is the application's responsibility to handle it appropriately: query an application-specific function about the state of the system, or retry if the application knows it doesn't matter. At-most-once operation: the call was executed one or zero times.

31 Q1: At-least-once The RPC is retried until the client knows it was executed. Appropriate if the result does not change when the operation is performed multiple times: an idempotent operation. At-least-once operation: the call was executed one or more times.

32 Q1: But aren't most operations not idempotent? Example: adding a record to a database: int addrecord( DataBase, Record ); If this is executed multiple times, the database will have duplicate entries. But maybe we can rework this.

33 Making operations idempotent EntryHandle createrecord(); int modifyrecord( DataBase, Record, EntryHandle ); Multiple createrecord calls: unused entries, which can eventually be deleted. Multiple modifyrecord calls: identical data. In general, avoid keeping state on the server. This is not always possible.

34 5. Consider the following line of Java code: a = b.foo(c); where a, b, and c are objects of types A, B, and C respectively. The foo() method for type B is defined as: A foo( C c ) { return c.bar(); } Objects b and c are located on a server and client respectively. Object c does not have a remote interface defined (i.e. C does not implement java.rmi.Remote).

35 5. a) Can we determine where the process that is executing this line of code is located? If so, where is it, and why must it be there? If not, why can we not determine this?

36 5. a) Can we determine where the process that is executing this line of code is located? If so, where is it, and why must it be there? If not, why can we not determine this? a = b.foo(c); b: server c: client c: No remote interface

37 Q5a: Executing on the Client Because c has no remote interface and is located on the client, the client is the only process that has a copy of it. Therefore, this code must run on the client.

38 5. b) In the process where this line of code is executing, is b a local or a remote reference, or is it not possible to determine?

39 5. b) In the process where this line of code is executing, is b a local or a remote reference, or is it not possible to determine? a = b.foo(c); b: server c: client c: No remote interface Executing on the client

40 Q5b: Remote Reference We are executing on the client, and b is on the server. Therefore, b must be a remote reference.

41 5. c) Can we determine where object a is located?

42 5. c) Can we determine where object a is located? a = b.foo(c); b: remote reference (on server) c: client (no remote interface) Executing on the client A foo( C c ) { return c.bar(); }

43 Q5c: Unable to Determine The question does not provide information about type A. If type A is a remote interface, a will be located on the server, and a remote reference will be returned to the client. Otherwise, a will be copied back to the client and be a local object.

44 5. d) What is the sequence of actions during the execution of this line of code? You should consider the possibility that the returned value is either a remote reference or a local object. Indicate at all stages when either a remote reference or a local object is passed.

45 Q5d: Sequence 1. The client calls foo on the server via RMI, using its remote reference to b; a copy of c is sent because it is a local object. 2. The server executes b.foo with its own local copy of c. 3. The server returns a to the client. If type A is a remote interface, then a is returned as a remote reference. Otherwise, a copy is sent to the client.

46 2. Consider the maximum server throughput, in client requests handled per second, for different numbers of threads. If a single thread has to perform all processing then the time for handling any request is on average 2 milliseconds of processing and 8 milliseconds of input-output delay when the server reads from a drive on behalf of the client. Any new messages that arrive while the server is handling a request are queued at the server port.

47 2. a) Compute the maximum throughput when the server has two threads that are independently scheduled and disk access requests can be serialized. T_Disk = 8 ms; T_CPU = 2 ms

48 2. a) Compute the maximum throughput when the server has two threads that are independently scheduled and disk access requests can be serialized. T_Disk = 8 ms; T_CPU = 2 ms. Disk limited: a request completes every 8 ms. Throughput = 1/0.008 = 125 requests/s

49 2. a) Compute the maximum throughput when the server has two threads that are independently scheduled and disk access requests can be serialized. T_Disk = 8 ms; T_CPU = 2 ms. If we add more threads, can we increase the throughput?

50 2. a) Compute the maximum throughput when the server has two threads that are independently scheduled and disk access requests can be serialized. T_Disk = 8 ms; T_CPU = 2 ms. If we add more threads, can we increase the throughput? No: we are limited by the disk performance, so we cannot take advantage of more threads.

51 2. b) Caching is introduced. A server thread that is asked to retrieve data first looks in the shared cache, and avoids accessing the disk if it finds the data there, so there is no I/O time cost. Assume a 75% hit rate on the cache, and that the processing time, due to the cache search, increases to 4 milliseconds per request.

52 2. b) Compute the maximum throughput: T_Disk = 8 ms; P(Disk) = 25%; T_CPU = 4 ms

53 2. b) Compute the maximum throughput: T_Disk = 8 ms; P(Disk) = 25%; T_CPU = 4 ms. We care about the average case: T_DiskAvg = 8 × 0.25 = 2 ms

54 2. b) Compute the maximum throughput: T_Disk = 8 ms; P(Disk) = 25%; T_CPU = 4 ms. We care about the average case: T_DiskAvg = 8 × 0.25 = 2 ms. CPU limited: a request completes every 4 ms. Throughput = 1/0.004 = 250 requests/s

55 2.b) Reality Check How many threads do we need to get that maximum rate, with caching?

56 2.b) Reality Check How many threads do we need to get that maximum rate, with caching? In theory (ideal case), we only need 2: one thread using the CPU, one thread waiting for the disk.

57 2.b) Reality Check How many threads do we need to get that maximum rate, with caching? In reality, the order of cache hits and misses will be random. What happens if we get two requests in a row that need to access the disk?

58 2. c) Caching is introduced as above with a 75% hit rate, but now there are two processors using a shared memory model. T_Disk = 8 ms; P(Disk) = 25%; T_CPU = 4 ms

59 2. c) Caching is introduced as above with a 75% hit rate, but now there are two processors using a shared memory model. T_Disk = 8 ms; P(Disk) = 25%; T_CPU = 4 ms. We care about the average case: T_DiskAvg = 8 × 0.25 = 2 ms; T_CPUAvg = 4/2 = 2 ms

60 2. c) Caching is introduced as above with a 75% hit rate, but now there are two processors using a shared memory model. T_Disk = 8 ms; P(Disk) = 25%; T_CPU = 4 ms. We care about the average case: T_DiskAvg = 8 × 0.25 = 2 ms; T_CPUAvg = 4/2 = 2 ms. Overlap disk and CPU: 2 ms between completions. Throughput = 1/0.002 = 500 requests/s

61 2.c) Reality Check How many threads do we need to get that maximum rate, with caching and two CPUs?

62 2.c) Reality Check How many threads do we need to get that maximum rate, with caching and two CPUs? Theoretically, we need 3: 2 using the CPUs, 1 blocked waiting for the disk.

63 2.c) Reality Check How many threads do we need to get that maximum rate, with caching and two CPUs? Realistically, we still have a problem if multiple requests that need the disk arrive in a row.

64 5. Consider the following C code for implementing a file copy command:

void filecopy(char* dest, char* src) {
    const int bufsz = 1024;
    char buf[bufsz];
    int fd1 = open(src, O_RDONLY);              // Open src
    int fd2 = open(dest, O_WRONLY | O_CREAT);   // Open dest
    bool done = false;
    while (!done) {
        int rc = read(fd1, buf, bufsz);
        if (rc <= 0) done = true;
        else write(fd2, buf, rc);
    }
    close(fd1);
    close(fd2);
}

65 5. (continued) Suppose that we wish to use this as a basis for a client/server file copy operation, in which the copy command is executed at the client, while the files reside on the server. 5. a) What are the advantages (if any) and disadvantages (if any) of using this code as is, except with the various file operations (open, read, write and close) implemented as remote procedure calls.

66 5. a) Advantages Very flexible: Implementing open, close, read and write as RPC calls would allow all possible file system operations. In fact, you could build a remote file system on top of them.

67 Q5: A File System Over RPC!?!? Are you Crazy? No: Sun's NFS, the de facto standard Unix network file system, was built on top of Sun's RPC system. It has some problems, but has been working pretty well since 1985.

68 5. a) Disadvantages Many RPC calls add overhead. Data is copied over the network to the client, then copied back to the server: a waste of bandwidth, since the server already has the data. Reliability: if the client crashes in the middle of the copy, the server will have part of a copied file, and maybe the files will be locked.

69 5. b) What changes would you make to overcome any disadvantages you have identified, and how might those changes affect the advantages you have identified?
Only implement a copy RPC: only one RPC call of overhead, and no bandwidth wasted moving data, but less flexibility.
Implement copy as well as the others: more complexity.

70 1. Reading a file using a single-threaded file server and a multithreaded server. It takes 15 ms to get a request for work, dispatch it, and do the rest of the processing, assuming that the data needed are in a cache in main memory. If a disk operation is needed, as is the case one-third of the time, an additional 75 ms is required, during which time the thread sleeps. How many requests per second can the server handle if it is single threaded? If it is multithreaded?

71 Q1: Single-Threaded Diagram

72 Q1: Single-Threaded We care about the average case: T_avg = T_CPU + T_Disk × P(Disk) = 15 + 75/3 = 40 ms. Requests per second = 1/0.040 = 25

73 Q1: Multi-Threaded Diagram

74 Q1: Multi-Threaded We care about the average case. On average, each task has: T_CPU = 15 ms; T_Disk = 75/3 = 25 ms. But we can overlap CPU and disk operations. T between completions = max(T_CPU, T_Disk) = 25 ms. Requests per second = 1/0.025 = 40

75 Q1: Multi-Threaded Average

76 3. We wish to examine the effect of the order of processing client requests at a server. A typical server will initiate the processing of clients' requests in the order in which they are received. It does this because it has no knowledge of future requests. Suppose we have a server that has just received two requests, one big and one little. The big request will take T_b time to process. The little request will take T_l time to process.

77 3. In addition to these times, the time it takes to initiate the processing is T_i. The time it takes a client to send the request is T_c. Finally, the time it takes the server to package up and return the results to the client is T_r.

78 3. i) Our server is single threaded. What is the least amount of time that the client initiating the little request will experience before getting the results back? Two requests have arrived at the server simultaneously. T_l = little request; T_b = big request; T_c = client request; T_i = initiate request; T_r = return data

79 3. i) Sequence of Events
1. Client initiates the requests (T_c)
2. Server initiates the little request (T_i)
3. Processing time for the little request (T_l)
4. Server returns results for the little request (T_r)
Total = T_c + T_i + T_l + T_r

80 3. ii) What is the greatest amount of time that the client initiating the little request will experience before getting the results back? Two requests have arrived at the server simultaneously. T_l = little request; T_b = big request; T_c = client request; T_i = initiate request; T_r = return data

81 3. ii) Sequence of Events
1. Client initiates the requests (T_c)
2. Server processes the big request first (T_i + T_b + T_r)
3. Server processes the little request (steps 2-4 above: T_i + T_l + T_r)
Total = T_c + T_i + T_b + T_r + T_i + T_l + T_r

82 3. iii) Now suppose we have a multi-threaded server. The server makes a thread switch every T_t, and it takes T_s seconds to make the switch. Whenever a request is received, a thread is spawned to process the request. Spawning takes time T_f. Thus, in this situation there would initially be a thread that received the requests, and it would spawn two additional threads to process them. The receiving thread would then remain blocked for the rest of the time, and can thus be ignored after this point.

83 3. iii) What are the least and greatest amounts of time that the client initiating the little request will experience before getting the results back?

84 3. iii) Receiving Thread
loop:
  initiate request from queue (or block) (T_i)
  spawn new thread for that request (T_f)

85 3. iii) Assumptions: the receiving thread does all its work in a single time slice (no switches); the little task takes one time slice; the big task takes multiple time slices.

86 3. iii) Best Case Sequence
1. Client initiates the requests (T_c)
2. The receiving thread initiates the requests and creates 2 threads (2(T_i + T_f))
3. Switch to little task and complete (T_s + T_l)
4. Switch to big task and execute (T_s + T_t)
5. Switch to little task, return results (T_s + T_r)

87 3. iii) Worst Case Sequence
1. Client initiates the requests (T_c)
2. The receiving thread initiates the requests and creates 2 threads (2(T_i + T_f))
3. Switch to big task and execute (T_s + T_t)
4. Switch to little task and complete (T_s + T_l)
5. Switch to big task and execute (T_s + T_t)
6. Switch to little task, return results (T_s + T_r)

88 3. iv) Define server efficiency as the percent of time spent processing requests (i.e. the time T l or T b as a fraction of total time). What is the efficiency of this server in the single threaded case?

89 3. iv) Define server efficiency as the percent of time spent processing requests (i.e. the time T_l or T_b as a fraction of total time). What is the efficiency of this server in the single-threaded case? Assume T_c is predominantly client and network time, and so is not part of efficiency. Efficiency = Ideal Time / Actual Time = (T_l + T_b) / (T_l + T_b + 2(T_i + T_r))

90 3. iv) What is the efficiency of this server in the multi-threaded case?

91 3. iv) What is the efficiency of this server in the multi-threaded case? Multi-threading adds overhead: T_t of CPU time actually takes T_t + T_s

92 3. iv) What is the efficiency of this server in the multi-threaded case? Multi-threading adds overhead: T_f to spawn threads, and T_t of CPU time actually takes T_t + T_s, so multiply the actual time by a factor of (T_t + T_s)/T_t. Efficiency = Ideal / ((Single Thread)(Context)) = (T_l + T_b) / ((T_l + T_b + 2(T_i + T_r + T_f))(T_t + T_s)/T_t)

93 3. v) Compute all the answers with actual numbers. One of the multi-threaded assumptions is no longer valid: "the receiving thread does all its work in a single time slice" requires 2(T_i + T_f) ≤ T_t. The numbers given: T_i = 5, T_f = 3, T_t = 10, so 2(T_i + T_f) = 16 > 10.

94 3. v) What changes in the best case?
1. Client initiates the requests (T_c = 5 ms)
2. The receiving thread initiates the requests and creates 2 threads (2(T_i + T_f) = 16 ms)
3. Switch to little task and complete (T_s + T_l = 11 ms)
4. Switch to big task and execute (T_s + T_t = 11 ms)
5. Switch to little task, return results (T_s + T_r = 6 ms)

95 3. v) What changes in the best case?
1. Client initiates the requests (T_c = 5 ms)
2. The receiving thread initiates and creates the little task thread (T_i + T_f = 8 ms)
3. It starts initiating the big task (2 ms)
4. Switch to little task and complete (T_s + T_l = 11 ms)
5. Switch to receiving thread and finish initiating (T_s + T_i + T_f - 2 = 7 ms)
6. Switch to little task, return results (T_s + T_r = 6 ms)

96 3. v) What changes in the worst case?
1. Client initiates the requests (T_c = 5 ms)
2. The receiving thread initiates the requests and creates 2 threads (2(T_i + T_f) = 16 ms)
3. Switch to big task and execute (T_s + T_t = 11 ms)
4. Switch to little task and complete (T_s + T_l = 11 ms)
5. Switch to big task and execute (T_s + T_t = 11 ms)
6. Switch to little task, return results (T_s + T_r = 6 ms)

97 3. v) What changes in the worst case?
1. Client initiates the requests (T_c = 5 ms)
2. The receiving thread initiates and creates the big task thread (T_i + T_f = 8 ms)
3. It starts initiating the little task (2 ms)
4. Switch to big task and execute (T_s + T_t = 11 ms)
5. Switch to receiving thread and finish initiating (T_s + T_i + T_f - 2 = 7 ms)
6. Switch to big task and execute (T_s + T_t = 11 ms)
7. Switch to little task and complete (T_s + T_l = 11 ms)
8. Switch to big task and execute (T_s + T_t = 11 ms)
9. Switch to little task, return results (T_s + T_r = 6 ms)


More information

Recall: Address Space Map. 13: Memory Management. Let s be reasonable. Processes Address Space. Send it to disk. Freeing up System Memory

Recall: Address Space Map. 13: Memory Management. Let s be reasonable. Processes Address Space. Send it to disk. Freeing up System Memory Recall: Address Space Map 13: Memory Management Biggest Virtual Address Stack (Space for local variables etc. For each nested procedure call) Sometimes Reserved for OS Stack Pointer Last Modified: 6/21/2004

More information

Lecture 10: Cache Coherence: Part I. Parallel Computer Architecture and Programming CMU , Spring 2013

Lecture 10: Cache Coherence: Part I. Parallel Computer Architecture and Programming CMU , Spring 2013 Lecture 10: Cache Coherence: Part I Parallel Computer Architecture and Programming Cache design review Let s say your code executes int x = 1; (Assume for simplicity x corresponds to the address 0x12345604

More information

Asynchronous Events on Linux

Asynchronous Events on Linux Asynchronous Events on Linux Frederic.Rossi@Ericsson.CA Open System Lab Systems Research June 25, 2002 Ericsson Research Canada Introduction Linux performs well as a general purpose OS but doesn t satisfy

More information

The RPC abstraction. Procedure calls well-understood mechanism. - Transfer control and data on single computer

The RPC abstraction. Procedure calls well-understood mechanism. - Transfer control and data on single computer The RPC abstraction Procedure calls well-understood mechanism - Transfer control and data on single computer Goal: Make distributed programming look same - Code libraries provide APIs to access functionality

More information

COMMUNICATION IN DISTRIBUTED SYSTEMS

COMMUNICATION IN DISTRIBUTED SYSTEMS Distributed Systems Fö 3-1 Distributed Systems Fö 3-2 COMMUNICATION IN DISTRIBUTED SYSTEMS Communication Models and their Layered Implementation 1. Communication System: Layered Implementation 2. Network

More information

Today CSCI Communication. Communication in Distributed Systems. Communication in Distributed Systems. Remote Procedure Calls (RPC)

Today CSCI Communication. Communication in Distributed Systems. Communication in Distributed Systems. Remote Procedure Calls (RPC) Today CSCI 5105 Communication in Distributed Systems Overview Types Remote Procedure Calls (RPC) Instructor: Abhishek Chandra 2 Communication How do program modules/processes communicate on a single machine?

More information

Remote Invocation. Today. Next time. l Overlay networks and P2P. l Request-reply, RPC, RMI

Remote Invocation. Today. Next time. l Overlay networks and P2P. l Request-reply, RPC, RMI Remote Invocation Today l Request-reply, RPC, RMI Next time l Overlay networks and P2P Types of communication " Persistent or transient Persistent A submitted message is stored until delivered Transient

More information

CS555: Distributed Systems [Fall 2017] Dept. Of Computer Science, Colorado State University

CS555: Distributed Systems [Fall 2017] Dept. Of Computer Science, Colorado State University CS 555: DISTRIBUTED SYSTEMS [THREADS] Shrideep Pallickara Computer Science Colorado State University Frequently asked questions from the previous class survey Shuffle less/shuffle better Which actions?

More information

Chapter 5: Remote Invocation. Copyright 2015 Prof. Amr El-Kadi

Chapter 5: Remote Invocation. Copyright 2015 Prof. Amr El-Kadi Chapter 5: Remote Invocation Outline Introduction Request-Reply Protocol Remote Procedure Call Remote Method Invocation This chapter (and Chapter 6) Applications Remote invocation, indirect communication

More information

NFS Design Goals. Network File System - NFS

NFS Design Goals. Network File System - NFS Network File System - NFS NFS Design Goals NFS is a distributed file system (DFS) originally implemented by Sun Microsystems. NFS is intended for file sharing in a local network with a rather small number

More information

Lecture 5: RMI etc. Servant. Java Remote Method Invocation Invocation Semantics Distributed Events CDK: Chapter 5 TVS: Section 8.3

Lecture 5: RMI etc. Servant. Java Remote Method Invocation Invocation Semantics Distributed Events CDK: Chapter 5 TVS: Section 8.3 Lecture 5: RMI etc. Java Remote Method Invocation Invocation Semantics Distributed Events CDK: Chapter 5 TVS: Section 8.3 CDK Figure 5.7 The role of proxy and skeleton in remote method invocation client

More information

Distributed File Systems Part II. Distributed File System Implementation

Distributed File Systems Part II. Distributed File System Implementation s Part II Daniel A. Menascé Implementation File Usage Patterns File System Structure Caching Replication Example: NFS 1 Implementation: File Usage Patterns Static Measurements: - distribution of file size,

More information

Midterm Exam Answers

Midterm Exam Answers Department of Electrical Engineering and Computer Science MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.824 Fall 2002 Midterm Exam Answers The average score was 55 (out of 80). Here s the distribution: 10 8

More information

File Systems: Consistency Issues

File Systems: Consistency Issues File Systems: Consistency Issues File systems maintain many data structures Free list/bit vector Directories File headers and inode structures res Data blocks File Systems: Consistency Issues All data

More information

Lecture 06: Distributed Object

Lecture 06: Distributed Object Lecture 06: Distributed Object Distributed Systems Behzad Bordbar School of Computer Science, University of Birmingham, UK Lecture 0? 1 Recap Interprocess communication Synchronous and Asynchronous communication

More information

CS252 S05. Main memory management. Memory hardware. The scale of things. Memory hardware (cont.) Bottleneck

CS252 S05. Main memory management. Memory hardware. The scale of things. Memory hardware (cont.) Bottleneck Main memory management CMSC 411 Computer Systems Architecture Lecture 16 Memory Hierarchy 3 (Main Memory & Memory) Questions: How big should main memory be? How to handle reads and writes? How to find

More information

Chapter 9 Memory Management

Chapter 9 Memory Management Contents 1. Introduction 2. Computer-System Structures 3. Operating-System Structures 4. Processes 5. Threads 6. CPU Scheduling 7. Process Synchronization 8. Deadlocks 9. Memory Management 10. Virtual

More information

CS 167 Final Exam Solutions

CS 167 Final Exam Solutions CS 167 Final Exam Solutions Spring 2016 Do all questions. 1. The implementation given of thread_switch in class is as follows: void thread_switch() { thread_t NextThread, OldCurrent; } NextThread = dequeue(runqueue);

More information

Pointers. Addresses in Memory. Exam 1 on July 18, :00-11:40am

Pointers. Addresses in Memory. Exam 1 on July 18, :00-11:40am Exam 1 on July 18, 2005 10:00-11:40am Pointers Addresses in Memory When a variable is declared, enough memory to hold a value of that type is allocated for it at an unused memory location. This is the

More information

Virtual Memory. Kevin Webb Swarthmore College March 8, 2018

Virtual Memory. Kevin Webb Swarthmore College March 8, 2018 irtual Memory Kevin Webb Swarthmore College March 8, 2018 Today s Goals Describe the mechanisms behind address translation. Analyze the performance of address translation alternatives. Explore page replacement

More information

Last Class: RPCs. Today:

Last Class: RPCs. Today: Last Class: RPCs RPCs make distributed computations look like local computations Issues: Parameter passing Binding Failure handling Lecture 4, page 1 Today: Case Study: Sun RPC Lightweight RPCs Remote

More information

Orbix Release Notes

Orbix Release Notes Contents Orbix 2.3.4 Release Notes September 1999 Introduction 2 Development Environments 2 Solaris 2 Year 2000 Compliance 2 Solaris 2.5.1 Y2K Patches 3 NT 3 Compatibility with Other IONA Products 4 New

More information

Midterm Exam #2 Solutions April 20, 2016 CS162 Operating Systems

Midterm Exam #2 Solutions April 20, 2016 CS162 Operating Systems University of California, Berkeley College of Engineering Computer Science Division EECS Spring 2016 Anthony D. Joseph Midterm Exam #2 Solutions April 20, 2016 CS162 Operating Systems Your Name: SID AND

More information

Memory Consistency Models

Memory Consistency Models Memory Consistency Models Contents of Lecture 3 The need for memory consistency models The uniprocessor model Sequential consistency Relaxed memory models Weak ordering Release consistency Jonas Skeppstedt

More information

CSE 451: Operating Systems Winter Lecture 7 Synchronization. Steve Gribble. Synchronization. Threads cooperate in multithreaded programs

CSE 451: Operating Systems Winter Lecture 7 Synchronization. Steve Gribble. Synchronization. Threads cooperate in multithreaded programs CSE 451: Operating Systems Winter 2005 Lecture 7 Synchronization Steve Gribble Synchronization Threads cooperate in multithreaded programs to share resources, access shared data structures e.g., threads

More information

Distributed Systems. Lecture 06 Remote Procedure Calls Thursday, September 13 th, 2018

Distributed Systems. Lecture 06 Remote Procedure Calls Thursday, September 13 th, 2018 15-440 Distributed Systems Lecture 06 Remote Procedure Calls Thursday, September 13 th, 2018 1 Announcements P0 Due today (Thursday 9/13) How is everyone doing on it? :-) P1 Released Friday 9/14 Dates:

More information

Sistemi in Tempo Reale

Sistemi in Tempo Reale Laurea Specialistica in Ingegneria dell'automazione Sistemi in Tempo Reale Giuseppe Lipari Introduzione alla concorrenza Fundamentals Algorithm: It is the logical procedure to solve a certain problem It

More information

DISTRIBUTED OBJECTS AND REMOTE INVOCATION

DISTRIBUTED OBJECTS AND REMOTE INVOCATION DISTRIBUTED OBJECTS AND REMOTE INVOCATION Introduction This chapter is concerned with programming models for distributed applications... Familiar programming models have been extended to apply to distributed

More information

Processes. CSE 2431: Introduction to Operating Systems Reading: Chap. 3, [OSC]

Processes. CSE 2431: Introduction to Operating Systems Reading: Chap. 3, [OSC] Processes CSE 2431: Introduction to Operating Systems Reading: Chap. 3, [OSC] 1 Outline What Is A Process? Process States & PCB Process Memory Layout Process Scheduling Context Switch Process Operations

More information

CS61C Machine Structures. Lecture 4 C Pointers and Arrays. 1/25/2006 John Wawrzynek. www-inst.eecs.berkeley.edu/~cs61c/

CS61C Machine Structures. Lecture 4 C Pointers and Arrays. 1/25/2006 John Wawrzynek. www-inst.eecs.berkeley.edu/~cs61c/ CS61C Machine Structures Lecture 4 C Pointers and Arrays 1/25/2006 John Wawrzynek (www.cs.berkeley.edu/~johnw) www-inst.eecs.berkeley.edu/~cs61c/ CS 61C L04 C Pointers (1) Common C Error There is a difference

More information

Network File System (NFS)

Network File System (NFS) Network File System (NFS) Brad Karp UCL Computer Science CS GZ03 / M030 14 th October 2015 NFS Is Relevant Original paper from 1985 Very successful, still widely used today Early result; much subsequent

More information

Network File System (NFS)

Network File System (NFS) Network File System (NFS) Brad Karp UCL Computer Science CS GZ03 / M030 19 th October, 2009 NFS Is Relevant Original paper from 1985 Very successful, still widely used today Early result; much subsequent

More information

Distributed File System

Distributed File System Distributed File System Project Report Surabhi Ghaisas (07305005) Rakhi Agrawal (07305024) Election Reddy (07305054) Mugdha Bapat (07305916) Mahendra Chavan(08305043) Mathew Kuriakose (08305062) 1 Introduction

More information

UNIT -3 PROCESS AND OPERATING SYSTEMS 2marks 1. Define Process? Process is a computational unit that processes on a CPU under the control of a scheduling kernel of an OS. It has a process structure, called

More information

CHAPTER 3 - PROCESS CONCEPT

CHAPTER 3 - PROCESS CONCEPT CHAPTER 3 - PROCESS CONCEPT 1 OBJECTIVES Introduce a process a program in execution basis of all computation Describe features of processes: scheduling, creation, termination, communication Explore interprocess

More information

PROCESSES AND THREADS THREADING MODELS. CS124 Operating Systems Winter , Lecture 8

PROCESSES AND THREADS THREADING MODELS. CS124 Operating Systems Winter , Lecture 8 PROCESSES AND THREADS THREADING MODELS CS124 Operating Systems Winter 2016-2017, Lecture 8 2 Processes and Threads As previously described, processes have one sequential thread of execution Increasingly,

More information

3/7/2018. Sometimes, Knowing Which Thing is Enough. ECE 220: Computer Systems & Programming. Often Want to Group Data Together Conceptually

3/7/2018. Sometimes, Knowing Which Thing is Enough. ECE 220: Computer Systems & Programming. Often Want to Group Data Together Conceptually University of Illinois at Urbana-Champaign Dept. of Electrical and Computer Engineering ECE 220: Computer Systems & Programming Structured Data in C Sometimes, Knowing Which Thing is Enough In MP6, we

More information

UNIT I (Two Marks Questions & Answers)

UNIT I (Two Marks Questions & Answers) UNIT I (Two Marks Questions & Answers) Discuss the different ways how instruction set architecture can be classified? Stack Architecture,Accumulator Architecture, Register-Memory Architecture,Register-

More information

CS , Fall 2003 Exam 2

CS , Fall 2003 Exam 2 Andrew login ID: Full Name: CS 15-213, Fall 2003 Exam 2 November 18, 2003 Instructions: Make sure that your exam is not missing any sheets, then write your full name and Andrew login ID on the front. Write

More information

Hard Disk Drives. Nima Honarmand (Based on slides by Prof. Andrea Arpaci-Dusseau)

Hard Disk Drives. Nima Honarmand (Based on slides by Prof. Andrea Arpaci-Dusseau) Hard Disk Drives Nima Honarmand (Based on slides by Prof. Andrea Arpaci-Dusseau) Storage Stack in the OS Application Virtual file system Concrete file system Generic block layer Driver Disk drive Build

More information

ECE 462 Fall 2011, Second Exam

ECE 462 Fall 2011, Second Exam ECE 462 Fall 2011, Second Exam DO NOT START WORKING ON THIS UNTIL TOLD TO DO SO. You have until 9:20 to take this exam. Your exam should have 10 pages total (including this cover sheet). Please let Prof.

More information

File Systems. Jinkyu Jeong Computer Systems Laboratory Sungkyunkwan University

File Systems. Jinkyu Jeong Computer Systems Laboratory Sungkyunkwan University File Systems Jinkyu Jeong (jinkyu@skku.edu) Computer Systems Laboratory Sungkyunkwan University http://csl.skku.edu SSE3044: Operating Systems, Fall 2016, Jinkyu Jeong (jinkyu@skku.edu) File System Layers

More information

Static Vulnerability Analysis

Static Vulnerability Analysis Static Vulnerability Analysis Static Vulnerability Detection helps in finding vulnerabilities in code that can be extracted by malicious input. There are different static analysis tools for different kinds

More information

Chapter 8 & Chapter 9 Main Memory & Virtual Memory

Chapter 8 & Chapter 9 Main Memory & Virtual Memory Chapter 8 & Chapter 9 Main Memory & Virtual Memory 1. Various ways of organizing memory hardware. 2. Memory-management techniques: 1. Paging 2. Segmentation. Introduction Memory consists of a large array

More information

The UNIVERSITY of EDINBURGH. SCHOOL of INFORMATICS. CS4/MSc. Distributed Systems. Björn Franke. Room 2414

The UNIVERSITY of EDINBURGH. SCHOOL of INFORMATICS. CS4/MSc. Distributed Systems. Björn Franke. Room 2414 The UNIVERSITY of EDINBURGH SCHOOL of INFORMATICS CS4/MSc Distributed Systems Björn Franke bfranke@inf.ed.ac.uk Room 2414 (Lecture 3: Remote Invocation and Distributed Objects, 28th September 2006) 1 Programming

More information

CMPSC 311- Introduction to Systems Programming Module: Concurrency

CMPSC 311- Introduction to Systems Programming Module: Concurrency CMPSC 311- Introduction to Systems Programming Module: Concurrency Professor Patrick McDaniel Fall 2013 Sequential Programming Processing a network connection as it arrives and fulfilling the exchange

More information

The Big Picture So Far. Chapter 4: Processes

The Big Picture So Far. Chapter 4: Processes The Big Picture So Far HW Abstraction Processor Memory IO devices File system Distributed systems Example OS Services Process management, protection, synchronization Memory Protection, management, VM Interrupt

More information

Highlights. - Making threads. - Waiting for threads. - Review (classes, pointers, inheritance)

Highlights. - Making threads. - Waiting for threads. - Review (classes, pointers, inheritance) Parallel processing Highlights - Making threads - Waiting for threads - Review (classes, pointers, inheritance) Review: CPUs Review: CPUs In the 2000s, computing too a major turn: multi-core processors

More information

416 Distributed Systems. RPC Day 2 Jan 11, 2017

416 Distributed Systems. RPC Day 2 Jan 11, 2017 416 Distributed Systems RPC Day 2 Jan 11, 2017 1 Last class Finish networks review Fate sharing End-to-end principle UDP versus TCP; blocking sockets IP thin waist, smart end-hosts, dumb (stateless) network

More information

Remote Procedure Calls CS 707

Remote Procedure Calls CS 707 Remote Procedure Calls CS 707 Motivation Send and Recv calls I/O Goal: make distributed nature of system transparent to the programmer RPC provides procedural interface to distributed services CS 707 2

More information

RCU. ò Walk through two system calls in some detail. ò Open and read. ò Too much code to cover all FS system calls. ò 3 Cases for a dentry:

RCU. ò Walk through two system calls in some detail. ò Open and read. ò Too much code to cover all FS system calls. ò 3 Cases for a dentry: Logical Diagram VFS, Continued Don Porter CSE 506 Binary Formats RCU Memory Management File System Memory Allocators System Calls Device Drivers Networking Threads User Today s Lecture Kernel Sync CPU

More information

VFS, Continued. Don Porter CSE 506

VFS, Continued. Don Porter CSE 506 VFS, Continued Don Porter CSE 506 Logical Diagram Binary Formats Memory Allocators System Calls Threads User Today s Lecture Kernel RCU File System Networking Sync Memory Management Device Drivers CPU

More information

Lecture #15: Translation, protection, sharing

Lecture #15: Translation, protection, sharing Lecture #15: Translation, protection, sharing Review -- 1 min Goals of virtual memory: protection relocation sharing illusion of infinite memory minimal overhead o space o time Last time: we ended with

More information

The Mercury project. Zoltan Somogyi

The Mercury project. Zoltan Somogyi The Mercury project Zoltan Somogyi The University of Melbourne Linux Users Victoria 7 June 2011 Zoltan Somogyi (Linux Users Victoria) The Mercury project June 15, 2011 1 / 23 Introduction Mercury: objectives

More information

Remote Procedure Call

Remote Procedure Call Remote Procedure Call Remote Procedure Call Integrate network communication with programming language Procedure call is well understood implementation use Control transfer Data transfer Goals Easy make

More information

DISTRIBUTED COMPUTER SYSTEMS

DISTRIBUTED COMPUTER SYSTEMS DISTRIBUTED COMPUTER SYSTEMS Communication Fundamental REMOTE PROCEDURE CALL Dr. Jack Lange Computer Science Department University of Pittsburgh Fall 2015 Outline Communication Architecture Fundamentals

More information

Homework #2 Nathan Balon CIS 578 October 31, 2004

Homework #2 Nathan Balon CIS 578 October 31, 2004 Homework #2 Nathan Balon CIS 578 October 31, 2004 1 Answer the following questions about the snapshot algorithm: A) What is it used for? It used for capturing the global state of a distributed system.

More information

CS377P Programming for Performance Multicore Performance Cache Coherence

CS377P Programming for Performance Multicore Performance Cache Coherence CS377P Programming for Performance Multicore Performance Cache Coherence Sreepathi Pai UTCS October 26, 2015 Outline 1 Cache Coherence 2 Cache Coherence Awareness 3 Scalable Lock Design 4 Transactional

More information

Part Two - Process Management. Chapter 3: Processes

Part Two - Process Management. Chapter 3: Processes Part Two - Process Management Chapter 3: Processes Chapter 3: Processes 3.1 Process Concept 3.2 Process Scheduling 3.3 Operations on Processes 3.4 Interprocess Communication 3.5 Examples of IPC Systems

More information

Network Object in C++

Network Object in C++ Network Object in C++ Final Project of HonorOS Professor David Maziere Po-yen Huang (Dennis) Dong-rong Wen May 9, 2003 Table of Content Abstract...3 Introduction...3 Architecture...3 The idea...3 More

More information

CS 403/534 Distributed Systems Midterm April 29, 2004

CS 403/534 Distributed Systems Midterm April 29, 2004 CS 403/534 Distributed Systems Midterm April 9, 004 3 4 5 Total Name: ID: Notes: ) Please answer the questions in the provided space after each question. ) Duration is 0 minutes 3) Closed books and closed

More information

CSE 333 Lecture fork, pthread_create, select

CSE 333 Lecture fork, pthread_create, select CSE 333 Lecture 22 -- fork, pthread_create, select Steve Gribble Department of Computer Science & Engineering University of Washington Administrivia HW4 out on Monday - you re gonna love it Final exam

More information

Networked Systems and Services, Fall 2018 Chapter 4. Jussi Kangasharju Markku Kojo Lea Kutvonen

Networked Systems and Services, Fall 2018 Chapter 4. Jussi Kangasharju Markku Kojo Lea Kutvonen Networked Systems and Services, Fall 2018 Chapter 4 Jussi Kangasharju Markku Kojo Lea Kutvonen Chapter Outline Overview of interprocess communication Remote invocations (RPC etc.) Persistence and synchronicity

More information

Midterm II December 3 rd, 2007 CS162: Operating Systems and Systems Programming

Midterm II December 3 rd, 2007 CS162: Operating Systems and Systems Programming Fall 2007 University of California, Berkeley College of Engineering Computer Science Division EECS John Kubiatowicz Midterm II December 3 rd, 2007 CS162: Operating Systems and Systems Programming Your

More information

CMSC 433 Programming Language Technologies and Paradigms. Concurrency

CMSC 433 Programming Language Technologies and Paradigms. Concurrency CMSC 433 Programming Language Technologies and Paradigms Concurrency What is Concurrency? Simple definition Sequential programs have one thread of control Concurrent programs have many Concurrency vs.

More information

Remote Procedure Call Implementations

Remote Procedure Call Implementations Remote Procedure Call Implementations Sun ONC(Open Network Computing) RPC. Implements at-most-once semantics by default. At-least-once (idempotent) can also be chosen as an option for some procedures.

More information

Section 9: Cache, Clock Algorithm, Banker s Algorithm and Demand Paging

Section 9: Cache, Clock Algorithm, Banker s Algorithm and Demand Paging Section 9: Cache, Clock Algorithm, Banker s Algorithm and Demand Paging CS162 March 16, 2018 Contents 1 Vocabulary 2 2 Problems 3 2.1 Caching.............................................. 3 2.2 Clock Algorithm.........................................

More information

CSE 410 Final Exam 6/09/09. Suppose we have a memory and a direct-mapped cache with the following characteristics.

CSE 410 Final Exam 6/09/09. Suppose we have a memory and a direct-mapped cache with the following characteristics. Question 1. (10 points) (Caches) Suppose we have a memory and a direct-mapped cache with the following characteristics. Memory is byte addressable Memory addresses are 16 bits (i.e., the total memory size

More information