Chapter 1

Introduction

Every year, processors get faster and cheaper: processor speeds double roughly every two years. This remarkable rate of improvement will probably continue for a while, but eventually, fundamental limitations such as the speed of light or heat dissipation will make further advances increasingly difficult. Another promising way to make computing more effective is to exploit parallelism: harnessing multiple processors to work on a single task. Typically, multiple processes work in parallel on disjoint parts of a task, occasionally pausing to coordinate their activities. While parallelism has made considerable progress in the past decade, there is still an enormous potential for improvement. This course focuses on problems associated with coordinating (or synchronizing) parallel processors.

Synchronization is a fundamental problem in Computer Science. It arises at all scales of multiprocessor systems: at a very small scale, processors within a single supercomputer need to allocate resources, and at a very large scale, systems must allocate communication paths across the Internet. Synchronization is challenging because modern computer systems are inherently asynchronous: activities can be halted or delayed without warning by phenomena such as interrupts, pre-emption, cache misses, or failures. These delays are inherently unpredictable, and can vary enormously in scale: a cache miss might delay a processor for fewer than ten instructions, a page fault for a few million instructions, and operating system pre-emption for hundreds of millions of instructions.

We will approach multiprocessor synchronization from two complementary directions: correctness and performance. In the first part of the course, we will focus on correctness: making sure that concurrent programs do what we expect. We will use an idealized model of computation in which multiple concurrent threads manipulate a set of shared objects.
This model is essentially the model presented by the standard Java(tm) or C++ threads packages. [Footnote: This chapter is part of the manuscript Multiprocessor Synchronization by Maurice Herlihy and Nir Shavit, copyright (c) 2003, all rights reserved.] We will study a variety of synchronization algorithms, with an emphasis on informal reasoning about correctness. Reasoning about the correctness of
multiprocess synchronization algorithms is inherently different from reasoning about familiar sequential algorithms. Sequential correctness is mostly concerned with safety properties: does my program transform each before-state to the correct after-state? Concurrent correctness is also concerned with safety, but in the presence of a potentially vast number of concurrent threads. In addition, concurrent correctness encompasses a variety of liveness properties, such as fairness and freedom from deadlock or livelock, that have no counterparts in the sequential world. There is no need to panic: reasoning about synchronization is not vastly more difficult, but it is likely to be unfamiliar territory where your intuition may be unreliable.

The second part of the course concerns performance. Analyzing the performance of synchronization algorithms is also very different in flavor from analyzing the performance of sequential programs. Sequential programming is based on well-established and well-understood abstractions. When you write a sequential program, you usually do not need to be aware that underneath it all, pages are being swapped from disk to memory, and smaller units of memory are being moved in and out of a hierarchy of processor caches. This complex memory hierarchy is essentially invisible, hiding behind a simple programming abstraction. In the multiprocessor context, this abstraction breaks down, at least from a performance perspective. To achieve adequate performance, the programmer must sometimes outwit the underlying memory system, writing programs that would seem bizarre to someone unfamiliar with the underlying architecture. Perhaps someday, concurrent architectures will provide the same degree of efficient abstraction now provided by sequential architectures, but in the meantime, programmers should beware.
1.1 Shared Objects and Synchronization

On the first day of your new job, your boss asks you to find all primes between 1 and 10^10 (never mind why), using a parallel machine that supports ten concurrent threads. This machine is rented by the minute, so the longer your program takes, the more it costs your new employer. You want to make a good impression. What do you do?

As a first attempt, you might consider giving each thread an equal share of the input domain. Each thread might check 10^9 numbers, as shown in Figure 1.1. This approach fails for an elementary, but important reason. Equal ranges of inputs do not necessarily produce equal amounts of work. Primes do not occur uniformly: there are many primes between 1 and 10^9, but hardly any between 9 * 10^9 and 10^10. To make matters worse, it usually takes longer to test whether a large number is prime than a small number. In short, there is no reason to believe that the work will be divided equally among the threads, and it is not clear even which threads will have the most work.

A more promising way to split the work among the threads is to assign each thread one integer at a time (Figure 1.2). When a thread is done testing
    void thread(int i) { // code for thread i in {1..10}
      int block = power(10, 9);
      for (int j = ((i - 1) * block) + 1; j <= i * block; j++)
        if (isprime(j)) print(j);
    }

Figure 1.1: Load-balancing by dividing up the input domain

    Counter counter = new Counter(1); // shared by all threads

    void thread(int i) { // code for thread i in {1..10}
      long j = 0;
      long limit = power(10, 10);
      while (j < limit) {         // loop until all numbers taken
        j = counter.fetchinc();   // take next untaken number
        if (isprime(j)) print(j);
      }
    }

Figure 1.2: Load-balancing with a shared counter

    public class Counter {
      private long value = 1;     // counter starts at one
      public Counter(int i) {     // constructor initializes counter
        this.value = i;
      }
      public long fetchinc() {    // increment value & return prior value
        return this.value++;
      }
    }

Figure 1.3: An implementation of the shared counter
an integer, it asks for another. To this end, we introduce a shared counter, an object that encapsulates an integer value, and that provides a fetchinc() method that increments its value, and returns to the caller the counter's prior value. Figure 1.3 shows an obvious, but naïve implementation of Counter in the Java(tm) language.

This counter implementation works well when used by a single thread, but it fails when shared by multiple threads. The problem is that the expression

    return this.value++;

is actually an abbreviation ("syntactic sugar") for the following, more complex code:

    long temp = this.value;
    this.value = temp + 1;
    return temp;

In this code fragment, this.value is a field of the Counter object, and is shared among all the threads. Each thread, however, has its own copy of temp, which is a variable local to each thread. Now imagine that threads A and B both call counter.fetchinc() at about the same time. They might simultaneously read 1 from this.value, set their local temp variables to 1, set this.value to 2, and both return 1. This behavior is not what we intended: concurrent calls to counter.fetchinc() return the same value, but we expected them to return distinct values.

In fact, it could get even worse. One thread might read 1 from this.value, but before it sets this.value to 2, another thread might go through the increment sequence several times, reading 1 and setting this.value to 2, then reading 2 and setting it to 3. When the first thread finally completes its operation and sets this.value to 2, it will actually be setting the counter back from 3 to 2.

The heart of the problem is that incrementing the counter's value requires two distinct operations on the shared variable: reading the this.value field into a temporary variable and writing it back to the Counter object. Something similar happens when you try to pass someone approaching you head-on in a corridor.
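The lost-update scenario can be replayed deterministically by performing the three steps of fetchinc() by hand for two simulated threads, A and B. (This demonstration is our own; the class and field names here are illustrative, not part of the figures above.)

```java
public class LostUpdateDemo {
    static long value = 1;     // the shared Counter field
    static long tempA, tempB;  // each thread's private copy of temp

    // Replay fetchinc()'s three steps for threads A and B under
    // the bad interleaving described in the text.
    static void replay() {
        tempA = value;      // A reads 1
        tempB = value;      // B reads 1 before A writes back
        value = tempA + 1;  // A writes 2
        value = tempB + 1;  // B also writes 2: A's increment is lost
    }

    public static void main(String[] args) {
        replay();
        // Both calls "return" their temp: both get 1,
        // and the counter advanced by only one.
        System.out.println("A returned " + tempA + ", B returned " + tempB
                + ", counter is now " + value);
    }
}
```

Running the simulation shows that both calls return 1 and the counter ends at 2: exactly the anomaly described above.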
You find yourself veering right then left several times to avoid the other person doing exactly the same. Each of you is performing two distinct steps: looking at ("reading") the other's current position, and moving ("writing") to one side or the other. The problem is that when you read the other's position, you have no way to know whether they have decided to move, or in which direction. In the same way that you and the annoying stranger must decide who goes left and who goes right, threads accessing a shared Counter must decide who goes first and who goes second. For the shared Counter object, there is a simple solution: execute the fetchinc() method atomically, that is, guarantee that only one thread at a time executes the read-and-write sequence. The problem of making sure that only one thread at a time may execute a particular block of code is called the
mutual exclusion problem, and is one of the classic problems in multiprocessor synchronization. As a practical matter, you are unlikely ever to find yourself having to design your own mutual exclusion algorithm (although you are highly likely to use someone else's). Nevertheless, understanding how to implement mutual exclusion from the basics is an essential condition for understanding concurrent computation in general. There is no more effective way to learn how to reason about essential and ubiquitous issues such as mutual exclusion, deadlock, bounded fairness, and blocking versus non-blocking synchronization. So pay attention.

1.2 A Fable

Most textbooks instruct you to think of coordination problems (such as mutual exclusion) as programming problems. We think this approach is much too narrow, like asking Isaac Newton to treat gravity as a crossword puzzle. Instead, we present a sequence of fables to invite you to think of concurrent coordination problems as if they were physics problems. Like most authors of fables, we retell stories mostly invented by others (see the Chapter Notes at the end of this chapter).

Alice and Bob are neighbors, and they share a yard. Alice owns a cat and Bob owns a dog. Both pets like to run around in the yard, but (naturally) they don't get along. After some unfortunate experiences, Alice and Bob agree that they should coordinate to make sure that both pets are never in the yard at the same time. How should they do it?

Alice and Bob need to agree on mutually compatible procedures for deciding what to do. We call such an agreement a coordination protocol (or just a protocol, for short). The yard is large, so Alice can't simply look out the window to check whether Bob's dog is present. She could perhaps walk over to Bob's house and knock on the door, but that takes a long time, and what if it rains? Alice might lean out the window and shout "Hey Bob! Can I let the cat out?" The problem is that Bob might not hear her.
He could be watching TV, visiting his girlfriend, or out shopping for dog food. They could try to coordinate by cell phone, but the same difficulties arise if Bob is in the shower, driving through a tunnel, or recharging his phone's batteries.

Alice has a clever idea. She sets up one or more empty beer cans on Bob's windowsill (Figure xxx), ties a string around each one, and runs the string back to her house. Bob does the same. When she wants to send a signal to Bob, she yanks the string to knock over one of the cans. When Bob notices a can has been knocked over, he resets the can.

You might think that up-ending beer cans by remote control is a perfectly reasonable way to signal your intentions, but you would be wrong. The problem is that Alice can only place a limited number of cans on Bob's porch, and sooner or later, she is going to run out of cans to knock over. Granted, Bob resets a can as soon as he notices it has been knocked over, but what if he goes to Cancún
for Spring Break? As long as Alice relies on Bob to reset the beer cans, sooner or later, she might run out.

At long last, perhaps after reading some Computer Science textbooks, Alice and Bob settle on a different approach. Each one sets up a flag pole, easily visible to the other. When Alice wants to release her cat, she does the following.

1. She raises her flag.
2. When Bob's flag is lowered, she unleashes her cat.
3. When her cat comes back, she lowers her flag.

Bob's behavior is a little more complicated.

1. He raises his flag.
2. While Alice's flag is raised:
   (a) Bob lowers his flag.
   (b) Bob waits until Alice's flag is lowered.
   (c) Bob raises his flag.
3. As soon as his flag is raised and hers is down, he unleashes his dog.
4. When his dog comes back, he lowers his flag.

This protocol, though flawed, rewards study. Before you contemplate reading or writing a single line of code, you should understand what this protocol does, what it doesn't do, and why it is so. Consider the following flag principle, which shows up remarkably often. If Alice and Bob each

1. raises his or her own flag, and then
2. looks at the other's flag,

then at least one will see the other's flag raised. Suppose not. Assume, without loss of generality, that Alice was the last to look. When she looks, her flag is raised. She didn't see Bob's flag, so Bob must have raised his flag after Alice looked. But Bob looked after he raised his flag, which was after Alice looked, so Alice wasn't the last to look. This conclusion contradicts our original assumption, so the assumption itself must be wrong, and the flag principle holds true. This kind of argument by contradiction shows up over and over again, so you should take some time and convince yourself (any way you like) that this claim is true.
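The two-flag protocol translates almost line for line into Java. The sketch below is our own (the class, field, and method names are invented for illustration): each flag is a volatile boolean written by one party and read by the other, and busy-waiting stands in for watching the other's flag pole.

```java
public class FlagProtocol {
    // Each flag is written by one party and read by the other.
    static volatile boolean aliceFlag = false;
    static volatile boolean bobFlag = false;
    static int timesInYard = 0; // deliberately unprotected; the flags guard it

    static void aliceEnters() {
        aliceFlag = true;            // 1. she raises her flag
        while (bobFlag) { }          // 2. she waits until Bob's flag is down
        timesInYard++;               //    the cat is in the yard
        aliceFlag = false;           // 3. she lowers her flag
    }

    static void bobEnters() {
        bobFlag = true;              // 1. he raises his flag
        while (aliceFlag) {          // 2. while Alice's flag is raised:
            bobFlag = false;         //    (a) he lowers his flag (defers)
            while (aliceFlag) { }    //    (b) he waits until hers is down
            bobFlag = true;          //    (c) he raises his flag again
        }
        timesInYard++;               // 3. the dog is in the yard
        bobFlag = false;             // 4. he lowers his flag
    }

    public static void main(String[] args) throws InterruptedException {
        Thread alice = new Thread(() -> { for (int i = 0; i < 1000; i++) aliceEnters(); });
        Thread bob   = new Thread(() -> { for (int i = 0; i < 1000; i++) bobEnters(); });
        alice.start(); bob.start();
        alice.join(); bob.join();
        // If mutual exclusion held, no increment was lost.
        System.out.println("entries: " + timesInYard);
    }
}
```

Because mutual exclusion holds (by the flag principle), the unprotected timesInYard increments never overlap, so all 2000 entries are counted. The protocol's flaw is still visible: Bob may defer to Alice indefinitely while she is active.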
Properties of Mutual Exclusion

To show that the flag protocol is a correct solution to Alice and Bob's problem, we must understand what properties are required of a solution, and then show that they are met by the protocol.

First, we claim that both pets are never in the yard at the same time. If we assume otherwise, we should be able to derive a contradiction. Before unleashing their pets, Alice and Bob each raised their flag and looked at the other's. Since each one unleashed a pet, neither one saw the other's flag raised, which contradicts the flag principle.

Mutual exclusion is only one of several properties of interest. After all, a protocol in which Alice and Bob never release a pet satisfies the mutual exclusion property, but it is unlikely to satisfy their pets. Here is another property of central importance: if a single pet wants to enter the yard, then it eventually succeeds; if both pets want to enter the yard, then eventually at least one of them succeeds. We call this the no-deadlock property, and we consider it essential. We claim that Alice and Bob's protocol satisfies the no-deadlock property. Suppose both pets want to use the yard. Alice and Bob each raise their flags. Bob eventually notices that Alice's flag is raised, and defers to her by lowering his flag, allowing her cat into the yard.

Another property of compelling interest is fairness: if one pet wants to enter the yard, will it eventually succeed? Here, Alice and Bob's protocol performs poorly. Whenever Alice and Bob conflict, Bob defers to Alice, so it is possible that Alice's cat can use the yard over and over again, while Bob's dog becomes increasingly uncomfortable. This phenomenon is also known as starvation. Later on, we will see how to make protocols satisfy fairness.

The last property of interest concerns waiting. Imagine that Alice raises her flag, and is then suddenly stricken with appendicitis.
She (and the cat) is taken to the hospital, and after a successful operation, she spends the next week under observation. Although Bob is relieved that Alice is well, he (and the dog) are peeved that they cannot use the yard for an entire week until Alice returns. The problem is that the protocol states that Bob (and his dog) must wait for Alice to lower her flag. If Alice is delayed (even for good reason), then Bob is also delayed (for no apparent good reason).

The question of waiting is important as an example of fault-tolerance. Normally, we expect Alice and Bob to respond to each other in a reasonable amount of time, but what if they don't? The mutual exclusion problem, by its very essence, requires waiting: no mutual exclusion protocol avoids it, no matter how clever. Nevertheless, we will see that many other coordination problems can be solved without waiting, sometimes in unexpected ways.

The Moral

At this point you should understand both the strengths and weaknesses of Bob and Alice's protocols. We now turn our attention back to Computer Science,
and interpret this fable for people who write programs. Shouting across the yard and placing cell phone calls do not work. Why not? There are two kinds of communication that occur naturally in concurrent systems. Transient communication requires both parties to participate at the same time. Shouting, gestures, or cell phone calls are examples of transient communication. Persistent communication allows the sender and receiver to participate at different times. Writing letters, sending e-mail, or leaving messages under rocks are examples of persistent communication. Mutual exclusion requires persistent communication. The problem with shouting across the yard or placing cell phone calls is that it may or may not be OK for Bob to unleash his dog, but if Alice is not able to respond to messages, he will never know.

The can-and-string protocol might seem somewhat contrived, but it corresponds accurately to a common communication mechanism in concurrent systems: interrupts. In modern operating systems, one common way for one thread to get the attention of another is to send it an interrupt. More precisely, thread A interrupts thread B by setting a bit at a location periodically checked by B. Sooner or later, B notices the bit has been set and reacts. After reacting, B typically resets the bit (A cannot reset the bit). Even though interrupts cannot solve the mutual exclusion problem, they can still be very useful. For example, interrupt communication is the basis of the Java(tm) language's wait() and notifyAll() calls.

On a more positive note, the fable shows that mutual exclusion between two threads can be solved (however imperfectly) using only two one-bit variables, each of which can be written by one thread and read by the other.

1.3 The Producer-Consumer Problem

Mutual exclusion is far from the only problem worth investigating. Let us continue the fable. Eventually, Alice and Bob fall in love and marry. Eventually, they divorce.
(What were they thinking?) The judge gives Alice custody of the pets, and tells Bob to feed them. The pets now get along with one another, but they side with Alice, and attack Bob whenever they see him. As a result, Alice and Bob need to devise a protocol by which Bob can deliver food to the pets without ever being in the yard with them. Moreover, the protocol should not be wasteful of their time: Alice does not want to release her pets into the yard unless there is food there, and Bob does not want to interrupt his routine unless the pets have consumed all the food. This problem is known as the producer/consumer problem. Surprisingly, perhaps, the can-and-string protocol we rejected for mutual exclusion does exactly what we need for producer/consumer. Bob places a can
standing up on Alice's front porch, ties one end of his string around the can, and puts the other end of the string in his living room. He then puts food in the yard and knocks the can down. From now on, when Alice wants to release the pets, she does the following.

1. She waits until the can is down.
2. She releases the pets.
3. When the pets return, Alice checks whether they finished the food. If so, she resets the can.

Bob does the following:

1. He waits until the can is up.
2. He puts food in the yard.
3. He pulls the string and knocks the can down.

The state of the can thus reflects the state of the yard. If the can is down, there is food and the pets can eat, and if the can is up, the food is gone and Bob can put out some more. We will check the following three properties:

Mutual Exclusion: Bob and the pets are never in the yard together.

No-Starvation: if Bob is always willing to feed, and the pets are always famished, then the pets will eat infinitely often.

Producer/Consumer: the pets will not enter the yard unless there is food, and Bob will never provide more food if there is unconsumed food.

This producer/consumer protocol and the mutual exclusion protocol considered in the last section both ensure that Alice and Bob are never in the yard at the same time. Nevertheless, Alice and Bob cannot use this producer/consumer protocol for mutual exclusion, and it is important to understand why. Mutual exclusion requires no-deadlock: anyone must be able to enter the yard infinitely often on their own, even if the other is not there. By contrast, producer/consumer's no-starvation property assumes continuous cooperation from both parties.

Here is how we reason about this protocol.

Mutual Exclusion: Suppose not: both Bob and the pets find themselves in the yard at the same time. Either Bob or Alice's pets entered before the other. (If they entered at exactly the same time, we pretend that Bob got there first.)
Suppose it was Bob. At the time he entered, Alice's can was up. Bob doesn't knock over the can while he is in the yard (he does it later), so the pets must have entered while the can was up, a contradiction. The case where Alice's pets entered first is essentially the same, and we leave it as an exercise.
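In computing terms, the can protocol is a one-slot producer/consumer buffer. The Java sketch below is our own construction (the names are illustrative): a volatile boolean plays the role of the can, with true meaning "up", and the food variable can be safely handed back and forth because each party touches it only in the appropriate can state.

```java
public class CanAndString {
    static volatile boolean canIsUp = true; // up: food is gone, Bob may feed
    static long food = 0;                   // the food currently in the yard
    static long eaten = 0;                  // total consumed by the pets

    static void bobFeeds(long portion) {
        while (!canIsUp) { }  // 1. he waits until the can is up
        food = portion;       // 2. he puts food in the yard
        canIsUp = false;      // 3. he pulls the string, knocking the can down
    }

    static void petsEat() {
        while (canIsUp) { }   // 1. she waits until the can is down
        eaten += food;        // 2. she releases the pets; they finish the food
        food = 0;
        canIsUp = true;       // 3. the food is gone, so she resets the can
    }

    public static void main(String[] args) throws InterruptedException {
        Thread bob   = new Thread(() -> { for (int i = 1; i <= 100; i++) bobFeeds(i); });
        Thread alice = new Thread(() -> { for (int i = 1; i <= 100; i++) petsEat(); });
        bob.start(); alice.start();
        bob.join(); alice.join();
        System.out.println("total eaten: " + eaten); // 1 + 2 + ... + 100 = 5050
    }
}
```

The two sides alternate strictly, which is the producer/consumer property: every portion is consumed exactly once. The waiting is also visible: if either side stops cooperating, the other spins forever.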
No-Starvation: Suppose not: it must be the case that infinitely often Alice's pets are hungry, there is no food, and Bob is trying to provide food but doesn't succeed. The can cannot be up, since then Bob would put down food and knock over the can, allowing the pets to eat. So the can must be down, in which case, since the pets are famished, Alice will shortly reset the can, bringing us back to the former case.

Producer/Consumer: In the beginning, Bob knocks the can down, so Bob will not enter the yard until Alice resets the can, which she will do only if there is no more food. Similarly, Bob will not knock the can down again unless he has finished placing the food, so the pets will never enter until there is food.

Like the mutual exclusion protocol we have already described, this protocol exhibits waiting. If Bob deposits food in the yard, and immediately goes on vacation without remembering to knock the can down, then the pets may starve, despite the presence of food.

Turning our attention back to Computer Science, the producer/consumer problem appears in almost all parallel and distributed systems. It is the way in which processors place data in communication buffers to be read or transmitted across a network interconnect or shared bus.

1.4 The Readers/Writers Problem

Bob and Alice eventually decide they love their pets so much that they need to communicate simple messages about them. Bob puts up a billboard in front of his house. The billboard holds a sequence of large tiles, each tile holding a single letter. Bob, at his leisure, posts a message on the billboard by lifting one tile at a time. Alice, at her leisure, reads the message by looking at the billboard through a telescope, one tile at a time. This may sound like a workable system, but it isn't.
Imagine that Bob posts the message:

    sell the cat

Alice, looking through her telescope, transcribes the message

    sell the

At this point Bob takes down the tiles and writes out a new message:

    wash the dog

Alice, continuing to scan across the billboard, transcribes the message

    sell the dog

You can imagine the rest. There are obvious ways to solve this problem.
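Before looking at solutions, the torn read can be replayed deterministically. In this sketch (our own construction), the billboard is a character array that the reader scans one tile at a time while the writer swaps the message mid-scan:

```java
public class TornRead {
    // Alice scans one tile at a time while Bob replaces the message mid-scan.
    static String transcribe() {
        char[] billboard = "sell the cat".toCharArray();
        StringBuilder transcript = new StringBuilder();

        // Alice transcribes the first eight tiles: "sell the"
        for (int i = 0; i < 8; i++) transcript.append(billboard[i]);

        // Bob takes down the tiles and posts a new message
        billboard = "wash the dog".toCharArray();

        // Alice resumes scanning where she left off
        for (int i = 8; i < billboard.length; i++) transcript.append(billboard[i]);

        return transcript.toString();
    }

    public static void main(String[] args) {
        System.out.println(transcribe()); // prints "sell the dog"
    }
}
```

The reader's transcript, "sell the dog", is a message that was never on the billboard; this inconsistent view is precisely what a readers/writers solution must rule out.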
Alice and Bob can use the mutual exclusion protocol to make sure that Alice reads only complete sentences. She might still miss a sentence, however. Or they can use the can-and-string protocol, where Bob produces sentences and Alice consumes them.

If this problem is so easy to solve, then why do we bring it up? Both the mutual exclusion and producer/consumer protocols require waiting: if one participant is subjected to an unexpected delay, so is the other. Turning our attention back to Computer Science, a solution to the readers/writers problem, in the context of shared multiprocessor memory, is a way of allowing a thread to capture an instantaneous view of several memory locations. Doing this without waiting, that is, without stopping other threads from concurrently modifying these locations while they are being read, is a powerful tool that can be used for backups, debugging, and in many other situations. Surprisingly, the readers/writers problem does have solutions that do not require waiting. We will examine several such solutions later on.

1.5 Multiprocessor Synchronization

Recall the prime-printing problem, the simple computational problem that started our discussion, which required a dynamic algorithm to distribute work among processors in order to get good utilization of the tenfold increase in computing power made available to us. There is a large body of literature on how to effectively parallelize various computational problems, much of it in the area of numerical computation. That body of literature deals with how to break up specific problems among processes so as to minimize the amount of coordination necessary. Our concern here is different. In many cases, no matter how much one optimizes, coordination among processes is unavoidable, and one must learn how to coordinate efficiently.
This objective, the design of coordination algorithms to overcome a multiprocessor system's inherent asynchrony, is what multiprocessor synchronization is all about. We aim to expose the reader to various coordination paradigms and shared concurrent data structures, ones that appear useful in many concurrent algorithms, and to explain how to implement them correctly and efficiently. Our hope in this book is to present the reader with a unified, comprehensive picture of multiprocessor synchronization, ranging from basic principles to best-practice engineering techniques.

As we hope you have noticed, writing concurrent algorithms is a tricky task. You have seen a bit of mutual exclusion, possibly the oldest and definitely one of the most basic problems in the field. Our treatment of mutual exclusion is a good example of how this book will proceed. We will first consider the problem from a classic mathematical perspective, and explore solutions on an idealized machine. We will use successive refinement to consider increasingly sophisticated issues that arise in the context
of solving mutual exclusion. These issues will not be unique to mutual exclusion, yet mutual exclusion will be one of several algorithmic vehicles through which we will expose them in context. Later in the book, again through a process of successive refinement, we will introduce a sequence of increasingly realistic machine architectures, and explore how features such as non-uniform memory architectures and the choice of synchronization operations affect the performance of various solutions to problems such as mutual exclusion. Again, these problems will be the vehicles for exposing various synchronization issues.

Finally, a note on style. The book matches syntax and semantics using the Java programming language, in a way that allows us to specify individual synchronization objects through standard Java interfaces, with an implied semantics provided through standard consistency conditions. With this approach we hope to make the algorithms both accurate and accessible to the programmer. 'Nuff said, let's get to work.

1.6 Chapter Notes

Most of the parable of Alice and Bob is adapted from Leslie Lamport's invited address to the 1984 ACM Symposium on Principles of Distributed Computing [1]. The readers/writers problem is a classical synchronization problem that has received attention in numerous papers over the past twenty years.

1.7 Exercises

1. In the producer/consumer fable, we assumed that Bob can see whether the can on Alice's windowsill is up or down. Design a producer/consumer protocol using cans and strings that works even if Bob can't see the state of Alice's can. (This is how real-world interrupt bits work.)
Bibliography

[1] Leslie Lamport. Invited address: solved problems, unsolved problems and non-problems in concurrency. In Proceedings of the Third Annual ACM Symposium on Principles of Distributed Computing, pages 1-11, 1984.
Operating Systems: Internals and Design Principles Chapter 5 Concurrency: Mutual Exclusion and Synchronization Seventh Edition By William Stallings Designing correct routines for controlling concurrent
More informationCS4411 Intro. to Operating Systems Exam 1 Fall points 9 pages
CS4411 Intro. to Operating Systems Exam 1 Fall 2009 1 CS4411 Intro. to Operating Systems Exam 1 Fall 2009 150 points 9 pages Name: Most of the following questions only require very short answers. Usually
More informationChapter 5 Asynchronous Concurrent Execution
Chapter 5 Asynchronous Concurrent Execution Outline 5.1 Introduction 5.2 Mutual Exclusion 5.2.1 Java Multithreading Case Study 5.2.2 Critical Sections 5.2.3 Mutual Exclusion Primitives 5.3 Implementing
More informationModel answer of AS-4159 Operating System B.tech fifth Semester Information technology
Q.no I Ii Iii Iv V Vi Vii viii ix x Model answer of AS-4159 Operating System B.tech fifth Semester Information technology Q.1 Objective type Answer d(321) C(Execute more jobs in the same time) Three/three
More informationCOPYRIGHTED MATERIAL. An Introduction to Computers That Will Actually Help You in Life. Chapter 1. Memory: Not Exactly 0s and 1s. Memory Organization
Chapter 1 An Introduction to Computers That Will Actually Help You in Life Memory: Not Exactly 0s and 1s Memory Organization A Very Simple Computer COPYRIGHTED MATERIAL 2 Chapter 1 An Introduction to Computers
More informationLecture 7: Mutual Exclusion 2/16/12. slides adapted from The Art of Multiprocessor Programming, Herlihy and Shavit
Principles of Concurrency and Parallelism Lecture 7: Mutual Exclusion 2/16/12 slides adapted from The Art of Multiprocessor Programming, Herlihy and Shavit Time Absolute, true and mathematical time, of
More informationBackground. The Critical-Section Problem Synchronisation Hardware Inefficient Spinning Semaphores Semaphore Examples Scheduling.
Background The Critical-Section Problem Background Race Conditions Solution Criteria to Critical-Section Problem Peterson s (Software) Solution Concurrent access to shared data may result in data inconsistency
More informationType Checking and Type Equality
Type Checking and Type Equality Type systems are the biggest point of variation across programming languages. Even languages that look similar are often greatly different when it comes to their type systems.
More informationThe concept of concurrency is fundamental to all these areas.
Chapter 5 Concurrency(I) The central themes of OS are all concerned with the management of processes and threads: such as multiprogramming, multiprocessing, and distributed processing. The concept of concurrency
More informationProgramming Paradigms for Concurrency Lecture 3 Concurrent Objects
Programming Paradigms for Concurrency Lecture 3 Concurrent Objects Based on companion slides for The Art of Multiprocessor Programming by Maurice Herlihy & Nir Shavit Modified by Thomas Wies New York University
More informationBackground. Old Producer Process Code. Improving the Bounded Buffer. Old Consumer Process Code
Old Producer Process Code Concurrent access to shared data may result in data inconsistency Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes Our
More informationCS 167 Final Exam Solutions
CS 167 Final Exam Solutions Spring 2018 Do all questions. 1. [20%] This question concerns a system employing a single (single-core) processor running a Unix-like operating system, in which interrupts are
More informationConcurrency. Glossary
Glossary atomic Executing as a single unit or block of computation. An atomic section of code is said to have transactional semantics. No intermediate state for the code unit is visible outside of the
More information(Refer Slide Time 00:01:09)
Computer Organization Part I Prof. S. Raman Department of Computer Science & Engineering Indian Institute of Technology Lecture 3 Introduction to System: Hardware In the previous lecture I said that I
More information1. Introduction to Concurrent Programming
1. Introduction to Concurrent Programming A concurrent program contains two or more threads that execute concurrently and work together to perform some task. When a program is executed, the operating system
More informationIntroduction to Multiprocessor Synchronization
Introduction to Multiprocessor Synchronization Maurice Herlihy http://cs.brown.edu/courses/cs176/lectures.shtml Moore's Law Transistor count still rising Clock speed flattening sharply Art of Multiprocessor
More informationThe Dining Philosophers Problem CMSC 330: Organization of Programming Languages
The Dining Philosophers Problem CMSC 0: Organization of Programming Languages Threads Classic Concurrency Problems Philosophers either eat or think They must have two forks to eat Can only use forks on
More information6.852: Distributed Algorithms Fall, Class 21
6.852: Distributed Algorithms Fall, 2009 Class 21 Today s plan Wait-free synchronization. The wait-free consensus hierarchy Universality of consensus Reading: [Herlihy, Wait-free synchronization] (Another
More informationConcurrent Processes Rab Nawaz Jadoon
Concurrent Processes Rab Nawaz Jadoon DCS COMSATS Institute of Information Technology Assistant Professor COMSATS Lahore Pakistan Operating System Concepts Concurrent Processes If more than one threads
More informationIntroduction to Object-Oriented Modelling and UML
Naming Conventions Naming is not a side issue. It is one of the most fundamental mechanisms. This section deals with good and bad naming in models and code. This section is based on Stephen Kelvin Friedrich
More informationCMSC 330: Organization of Programming Languages. The Dining Philosophers Problem
CMSC 330: Organization of Programming Languages Threads Classic Concurrency Problems The Dining Philosophers Problem Philosophers either eat or think They must have two forks to eat Can only use forks
More informationLecture Notes on Contracts
Lecture Notes on Contracts 15-122: Principles of Imperative Computation Frank Pfenning Lecture 2 August 30, 2012 1 Introduction For an overview the course goals and the mechanics and schedule of the course,
More informationThirty one Problems in the Semantics of UML 1.3 Dynamics
Thirty one Problems in the Semantics of UML 1.3 Dynamics G. Reggio R.J. Wieringa September 14, 1999 1 Introduction In this discussion paper we list a number of problems we found with the current dynamic
More information! Why is synchronization needed? ! Synchronization Language/Definitions: ! How are locks implemented? Maria Hybinette, UGA
Chapter 6: Process [& Thread] Synchronization CSCI [4 6] 730 Operating Systems Synchronization Part 1 : The Basics! Why is synchronization needed?! Synchronization Language/Definitions:» What are race
More informationCSCI [4 6] 730 Operating Systems. Example Execution. Process [& Thread] Synchronization. Why does cooperation require synchronization?
Process [& Thread] Synchronization CSCI [4 6] 730 Operating Systems Synchronization Part 1 : The Basics Why is synchronization needed? Synchronization Language/Definitions: What are race conditions? What
More informationA Beginner s Guide to Successful Marketing
You ve got mail. A Beginner s Guide to Successful Email Marketing We believe that building successful email marketing campaigns has never been more important than it is now. But there s a problem; most
More informationFor this chapter, switch languages in DrRacket to Advanced Student Language.
Chapter 30 Mutation For this chapter, switch languages in DrRacket to Advanced Student Language. 30.1 Remembering changes Suppose you wanted to keep track of a grocery shopping list. You could easily define
More informationDistributed Systems. Fault Tolerance. Paul Krzyzanowski
Distributed Systems Fault Tolerance Paul Krzyzanowski Except as otherwise noted, the content of this presentation is licensed under the Creative Commons Attribution 2.5 License. Faults Deviation from expected
More informationCMSC 330: Organization of Programming Languages. Threads Classic Concurrency Problems
: Organization of Programming Languages Threads Classic Concurrency Problems The Dining Philosophers Problem Philosophers either eat or think They must have two forks to eat Can only use forks on either
More informationDatabase Management System Prof. D. Janakiram Department of Computer Science & Engineering Indian Institute of Technology, Madras Lecture No.
Database Management System Prof. D. Janakiram Department of Computer Science & Engineering Indian Institute of Technology, Madras Lecture No. # 20 Concurrency Control Part -1 Foundations for concurrency
More informationSynchronization. CS 475, Spring 2018 Concurrent & Distributed Systems
Synchronization CS 475, Spring 2018 Concurrent & Distributed Systems Review: Threads: Memory View code heap data files code heap data files stack stack stack stack m1 m1 a1 b1 m2 m2 a2 b2 m3 m3 a3 m4 m4
More informationFault-Tolerance & Paxos
Chapter 15 Fault-Tolerance & Paxos How do you create a fault-tolerant distributed system? In this chapter we start out with simple questions, and, step by step, improve our solutions until we arrive at
More informationAsynchronous Models. Chapter Asynchronous Processes States, Inputs, and Outputs
Chapter 3 Asynchronous Models 3.1 Asynchronous Processes Like a synchronous reactive component, an asynchronous process interacts with other processes via inputs and outputs, and maintains an internal
More information1 Process Coordination
COMP 730 (242) Class Notes Section 5: Process Coordination 1 Process Coordination Process coordination consists of synchronization and mutual exclusion, which were discussed earlier. We will now study
More informationDistributed minimum spanning tree problem
Distributed minimum spanning tree problem Juho-Kustaa Kangas 24th November 2012 Abstract Given a connected weighted undirected graph, the minimum spanning tree problem asks for a spanning subtree with
More informationOutline More Security Protocols CS 239 Computer Security February 6, 2006
Outline More Security Protocols CS 239 Computer Security February 6, 2006 Combining key distribution and authentication Verifying security protocols Page 1 Page 2 Combined Key Distribution and Authentication
More informationIf Statements, For Loops, Functions
Fundamentals of Programming If Statements, For Loops, Functions Table of Contents Hello World Types of Variables Integers and Floats String Boolean Relational Operators Lists Conditionals If and Else Statements
More informationSynchronization in Concurrent Programming. Amit Gupta
Synchronization in Concurrent Programming Amit Gupta Announcements Project 1 grades are out on blackboard. Detailed Grade sheets to be distributed after class. Project 2 grades should be out by next Thursday.
More informationOrder from Chaos. University of Nebraska-Lincoln Discrete Mathematics Seminar
Order from Chaos University of Nebraska-Lincoln Discrete Mathematics Seminar Austin Mohr Department of Mathematics Nebraska Wesleyan University February 8, 20 The (, )-Puzzle Start by drawing six dots
More informationCHAPTER 6: PROCESS SYNCHRONIZATION
CHAPTER 6: PROCESS SYNCHRONIZATION The slides do not contain all the information and cannot be treated as a study material for Operating System. Please refer the text book for exams. TOPICS Background
More informationMidterm on next week Tuesday May 4. CS 361 Concurrent programming Drexel University Fall 2004 Lecture 9
CS 361 Concurrent programming Drexel University Fall 2004 Lecture 9 Bruce Char and Vera Zaychik. All rights reserved by the author. Permission is given to students enrolled in CS361 Fall 2004 to reproduce
More informationIntro. Scheme Basics. scm> 5 5. scm>
Intro Let s take some time to talk about LISP. It stands for LISt Processing a way of coding using only lists! It sounds pretty radical, and it is. There are lots of cool things to know about LISP; if
More informationCS 161 Computer Security
Paxson Spring 2013 CS 161 Computer Security 3/14 Asymmetric cryptography Previously we saw symmetric-key cryptography, where Alice and Bob share a secret key K. However, symmetric-key cryptography can
More informationConcurrent Objects. Companion slides for The Art of Multiprocessor Programming by Maurice Herlihy & Nir Shavit
Concurrent Objects Companion slides for The by Maurice Herlihy & Nir Shavit Concurrent Computation memory object object 2 Objectivism What is a concurrent object? How do we describe one? How do we implement
More informationProcess Synchronization
Process Synchronization Concurrent access to shared data may result in data inconsistency Multiple threads in a single process Maintaining data consistency requires mechanisms to ensure the orderly execution
More informationIT 540 Operating Systems ECE519 Advanced Operating Systems
IT 540 Operating Systems ECE519 Advanced Operating Systems Prof. Dr. Hasan Hüseyin BALIK (5 th Week) (Advanced) Operating Systems 5. Concurrency: Mutual Exclusion and Synchronization 5. Outline Principles
More information1 Motivation for Improving Matrix Multiplication
CS170 Spring 2007 Lecture 7 Feb 6 1 Motivation for Improving Matrix Multiplication Now we will just consider the best way to implement the usual algorithm for matrix multiplication, the one that take 2n
More informationCPS 310 first midterm exam, 10/6/2014
CPS 310 first midterm exam, 10/6/2014 Your name please: Part 1. More fun with fork and exec* What is the output generated by this program? Please assume that each executed print statement completes, e.g.,
More informationCS 162 Operating Systems and Systems Programming Professor: Anthony D. Joseph Spring 2004
CS 162 Operating Systems and Systems Programming Professor: Anthony D. Joseph Spring 2004 Lecture 9: Readers-Writers and Language Support for Synchronization 9.1.2 Constraints 1. Readers can access database
More informationHomework #2 Nathan Balon CIS 578 October 31, 2004
Homework #2 Nathan Balon CIS 578 October 31, 2004 1 Answer the following questions about the snapshot algorithm: A) What is it used for? It used for capturing the global state of a distributed system.
More information5 Classical IPC Problems
OPERATING SYSTEMS CLASSICAL IPC PROBLEMS 2 5 Classical IPC Problems The operating systems literature is full of interesting problems that have been widely discussed and analyzed using a variety of synchronization
More informationMutual Exclusion. Companion slides for The Art of Multiprocessor Programming by Maurice Herlihy & Nir Shavit
Mutual Exclusion Companion slides for The by Maurice Herlihy & Nir Shavit Mutual Exclusion Today we will try to formalize our understanding of mutual exclusion We will also use the opportunity to show
More information6.001 Notes: Section 8.1
6.001 Notes: Section 8.1 Slide 8.1.1 In this lecture we are going to introduce a new data type, specifically to deal with symbols. This may sound a bit odd, but if you step back, you may realize that everything
More informationMultiple Inheritance. Computer object can be viewed as
Multiple Inheritance We have seen that a class may be derived from a given parent class. It is sometimes useful to allow a class to be derived from more than one parent, inheriting members of all parents.
More informationProgramming and Data Structure
Programming and Data Structure Dr. P.P.Chakraborty Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Lecture # 09 Problem Decomposition by Recursion - II We will
More informationCS 2112 Lecture 20 Synchronization 5 April 2012 Lecturer: Andrew Myers
CS 2112 Lecture 20 Synchronization 5 April 2012 Lecturer: Andrew Myers 1 Critical sections and atomicity We have been seeing that sharing mutable objects between different threads is tricky We need some
More information14.1 Encoding for different models of computation
Lecture 14 Decidable languages In the previous lecture we discussed some examples of encoding schemes, through which various objects can be represented by strings over a given alphabet. We will begin this
More information6.001 Notes: Section 15.1
6.001 Notes: Section 15.1 Slide 15.1.1 Our goal over the next few lectures is to build an interpreter, which in a very basic sense is the ultimate in programming, since doing so will allow us to define
More informationLecture 2: September 9
CMPSCI 377 Operating Systems Fall 2010 Lecture 2: September 9 Lecturer: Prashant Shenoy TA: Antony Partensky & Tim Wood 2.1 OS & Computer Architecture The operating system is the interface between a user
More informationPre- and post- CS protocols. CS 361 Concurrent programming Drexel University Fall 2004 Lecture 7. Other requirements for a mutual exclusion algorithm
CS 361 Concurrent programming Drexel University Fall 2004 Lecture 7 Bruce Char and Vera Zaychik. All rights reserved by the author. Permission is given to students enrolled in CS361 Fall 2004 to reproduce
More informationLinked Lists: Locking, Lock-Free, and Beyond. Companion slides for The Art of Multiprocessor Programming by Maurice Herlihy & Nir Shavit
Linked Lists: Locking, Lock-Free, and Beyond Companion slides for The Art of Multiprocessor Programming by Maurice Herlihy & Nir Shavit Concurrent Objects Adding threads should not lower throughput Contention
More informationCOMP I/O, interrupts, exceptions April 3, 2016
In this lecture, I will elaborate on some of the details of the past few weeks, and attempt to pull some threads together. System Bus and Memory (cache, RAM, HDD) We first return to an topic we discussed
More information6.001 Notes: Section 17.5
6.001 Notes: Section 17.5 Slide 17.5.1 Now, let's look at one example in which changing the evaluation model allows us to explore a very different kind of computational problem. Our goal is to show how
More informationOperating Systems. Synchronization
Operating Systems Fall 2014 Synchronization Myungjin Lee myungjin.lee@ed.ac.uk 1 Temporal relations Instructions executed by a single thread are totally ordered A < B < C < Absent synchronization, instructions
More informationStanford University Computer Science Department CS 140 Midterm Exam Dawson Engler Winter 1999
Stanford University Computer Science Department CS 140 Midterm Exam Dawson Engler Winter 1999 Name: Please initial the bottom left corner of each page. This is an open-book exam. You have 50 minutes to
More informationOutline More Security Protocols CS 239 Computer Security February 4, 2004
Outline More Security Protocols CS 239 Computer Security February 4, 2004 Combining key distribution and authentication Verifying security protocols Page 1 Page 2 Combined Key Distribution and Authentication
More informationConcurrency. Chapter 5
Concurrency 1 Chapter 5 2 Concurrency Is a fundamental concept in operating system design Processes execute interleaved in time on a single processor Creates the illusion of simultaneous execution Benefits
More informationSPIN, PETERSON AND BAKERY LOCKS
Concurrent Programs reasoning about their execution proving correctness start by considering execution sequences CS4021/4521 2018 jones@scss.tcd.ie School of Computer Science and Statistics, Trinity College
More informationModels of concurrency & synchronization algorithms
Models of concurrency & synchronization algorithms Lecture 3 of TDA383/DIT390 (Concurrent Programming) Carlo A. Furia Chalmers University of Technology University of Gothenburg SP3 2016/2017 Today s menu
More information4.2 Variations on a Scheme -- Lazy Evaluation
[Go to first, previous, next page; contents; index] 4.2 Variations on a Scheme -- Lazy Evaluation Now that we have an evaluator expressed as a Lisp program, we can experiment with alternative choices in
More informationChapter 6: Process Synchronization
Chapter 6: Process Synchronization Objectives Introduce Concept of Critical-Section Problem Hardware and Software Solutions of Critical-Section Problem Concept of Atomic Transaction Operating Systems CS
More informationp x i 1 i n x, y, z = 2 x 3 y 5 z
3 Pairing and encoding functions Our aim in this part of the course is to show that register machines can compute everything that can be computed, and to show that there are things that can t be computed.
More informationText Input and Conditionals
Text Input and Conditionals Text Input Many programs allow the user to enter information, like a username and password. Python makes taking input from the user seamless with a single line of code: input()
More informationComputer Science 161
Computer Science 161 150 minutes/150 points Fill in your name, logname, and TF s name below. Name Logname The points allocated to each problem correspond to how much time we think the problem should take.
More informationCoordination and Agreement
Coordination and Agreement Nicola Dragoni Embedded Systems Engineering DTU Informatics 1. Introduction 2. Distributed Mutual Exclusion 3. Elections 4. Multicast Communication 5. Consensus and related problems
More informationConcurrency: Mutual Exclusion and Synchronization
Concurrency: Mutual Exclusion and Synchronization 1 Needs of Processes Allocation of processor time Allocation and sharing resources Communication among processes Synchronization of multiple processes
More informationRecommended Design Techniques for ECE241 Project Franjo Plavec Department of Electrical and Computer Engineering University of Toronto
Recommed Design Techniques for ECE241 Project Franjo Plavec Department of Electrical and Computer Engineering University of Toronto DISCLAIMER: The information contained in this document does NOT contain
More information1 One-Time Pad. 1.1 One-Time Pad Definition
1 One-Time Pad Secure communication is the act of conveying information from a sender to a receiver, while simultaneously hiding it from everyone else Secure communication is the oldest application of
More informationOutline. More Security Protocols CS 239 Security for System Software April 22, Needham-Schroeder Key Exchange
Outline More Security Protocols CS 239 Security for System Software April 22, 2002 Combining key distribution and authentication Verifying security protocols Page 1 Page 2 Combined Key Distribution and
More informationWeek - 04 Lecture - 01 Merge Sort. (Refer Slide Time: 00:02)
Programming, Data Structures and Algorithms in Python Prof. Madhavan Mukund Department of Computer Science and Engineering Indian Institute of Technology, Madras Week - 04 Lecture - 01 Merge Sort (Refer
More informationChapter 6: Synchronization. Operating System Concepts 8 th Edition,
Chapter 6: Synchronization, Silberschatz, Galvin and Gagne 2009 Outline Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores Classic Problems of Synchronization
More informationCSE Traditional Operating Systems deal with typical system software designed to be:
CSE 6431 Traditional Operating Systems deal with typical system software designed to be: general purpose running on single processor machines Advanced Operating Systems are designed for either a special
More information