Lecture 21 Concurrent Programming

13th November 2003

Leaders and followers pattern

Problem: Clients periodically present service requests, each of which requires a significant amount of work and, along the way, involves waiting for something else to happen. Example: on-line transaction processing.

Characteristics: the application should not block on any one source of requests.

First, some examples of other patterns for implementing such servers.

Reactor (Dispatcher or Notifier)

We saw this pattern before, in Lecture 12, in the form of what I then called an event loop framework. For each different kind of event (request) to be handled, an event handler is registered with the framework. The framework demultiplexes the arriving events, delivering each one to the appropriate handler. As previously noted, reactor is fast and uses few resources, but it can lead to rather complicated programs if the individual requests involve interactions with external entities that can cause delay.

Illustrate: a single thread with a registration mechanism and a dispatching loop, as in the sketch below.
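A minimal sketch of such a dispatching loop in C, using select(). The handler table and the names register_handler and reactor_loop are illustrative, not part of any particular framework:

    #include <stddef.h>
    #include <sys/select.h>

    #define MAXFD 1024

    typedef void (*handler_t)(int fd);     /* invoked when fd has an event ready */
    static handler_t handlers[MAXFD];      /* registration table, indexed by fd */

    void register_handler(int fd, handler_t h) {
        handlers[fd] = h;
    }

    void reactor_loop(void) {
        for (;;) {
            fd_set readset;
            FD_ZERO(&readset);
            int maxfd = -1;
            for (int fd = 0; fd < MAXFD; fd++)      /* rebuild the interest set */
                if (handlers[fd] != NULL) {
                    FD_SET(fd, &readset);
                    if (fd > maxfd) maxfd = fd;
                }
            if (select(maxfd + 1, &readset, NULL, NULL, NULL) <= 0)
                continue;                           /* nothing ready; wait again */
            for (int fd = 0; fd <= maxfd; fd++)     /* dispatch every ready event */
                if (handlers[fd] != NULL && FD_ISSET(fd, &readset))
                    handlers[fd](fd);               /* handlers must not block */
        }
    }

The essential constraint is that every handler returns quickly; any operation that can block for a long time stalls all other clients, which is exactly the complication noted above.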

Forking server

An approach you may have seen before:

    serversocket = socket(...)
    rc = bind(serversocket)
    rc = listen(serversocket)
    while true {
        newsocket = accept(serversocket)
        if (!fork()) {
            /* child */
            close(serversocket);
            serviceoneclient(newsocket);
            exit(0);
        }
        /* parent */
        close(newsocket);
    }

This is the approach used in many Unix servers: web, ftp, telnet, ssh, etc. The problem is that the fork() operation is quite costly in both time and space. It takes a lot of work to construct a new address space, and the new process occupies a significant amount of memory (even if the code and much of the data are implemented using shared-memory techniques). The forking server approach was the only one used in the Apache web server until quite recently.

Threaded server

Using threads instead of processes can reduce the cost. In many threading systems, the memory cost is reduced to a separate, pre-allocated stack per thread, and the time cost is lessened because less address-space manipulation is needed to create the thread.

What if threads were really cheap? Notice that these threads have very little interaction with one another: what measures of thread performance are most important in this setting?

Examples: recent Apache web servers, the AOLServer web server, many services written in Java.
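A minimal pthreads sketch of the thread-per-client variant; serviceoneclient() stands in for the same hypothetical per-client routine as in the forking version, and handle_connection is an illustrative name for the body of the accept loop:

    #include <pthread.h>
    #include <stdint.h>
    #include <unistd.h>

    extern void serviceoneclient(int fd);        /* hypothetical per-client routine */

    static void *client_thread(void *arg) {
        int newsocket = (int)(intptr_t)arg;      /* fd passed through the void* */
        serviceoneclient(newsocket);
        close(newsocket);
        return NULL;
    }

    /* body of the accept loop, replacing the fork() call */
    void handle_connection(int newsocket) {
        pthread_t tid;
        pthread_create(&tid, NULL, client_thread, (void *)(intptr_t)newsocket);
        pthread_detach(tid);                     /* thread cleans up after itself */
    }

With so little sharing between threads, creation and teardown cost plus per-thread memory footprint are the performance measures that matter most here.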

Leaders and followers

Instead of a thread per client, the LF pattern allocates a fixed number of threads, just enough to keep the server busy. The threads take turns demultiplexing client requests. At any time, one thread is designated as the leader, and only the leader thread is allowed to read a client request. Since it doesn't know which client will make the next request, it uses a select()-style operation to wait for any existing client or a new client. When a request arrives, the leader removes that client from the selectable client list, lets one of the follower threads become the new leader, and goes off to process the request it received.

    /* initialization */
    serversocket = socket(...)
    rc = bind(serversocket)
    rc = listen(serversocket)
    selectset = emptyset;
    setadd(selectset, serversocket);
    ... create n server threads ...

    /* each server thread */
    while true {
        if iamleader() {
            rc = select(selectset)
            readyfd = findreadyfd()
            if readyfd == serversocket {
                newfd = accept(serversocket)
                selectset = union(selectset, {newfd})
            } else {
                newfd = readyfd
                selectset = setremove(selectset, readyfd)   /* why? */
            }
            retireasleader();
            readandserviceonerequest(newfd);
        } else {
            waittobecomeleader();
        }
    }

    synchronized boolean iamleader() {
        return leader == myid;
    }

    synchronized void waittobecomeleader() {
        while (leader != null) { wait(); }
        leader = myid;
    }

    synchronized void retireasleader() {
        leader = null;
        notify();
    }
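The helper routines above are Java-style pseudocode (synchronized/wait/notify). A rough pthreads rendering of the same leader handoff, assuming each thread knows its own integer myid and using NOBODY as an illustrative sentinel, might look like:

    #include <pthread.h>

    #define NOBODY (-1)

    static pthread_mutex_t lf_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  lf_cond = PTHREAD_COND_INITIALIZER;
    static int leader = NOBODY;              /* id of the current leader thread */

    int iamleader(int myid) {
        pthread_mutex_lock(&lf_lock);
        int r = (leader == myid);
        pthread_mutex_unlock(&lf_lock);
        return r;
    }

    void waittobecomeleader(int myid) {
        pthread_mutex_lock(&lf_lock);
        while (leader != NOBODY)             /* some other thread is leading */
            pthread_cond_wait(&lf_cond, &lf_lock);
        leader = myid;                       /* promote myself to leader */
        pthread_mutex_unlock(&lf_lock);
    }

    void retireasleader(void) {
        pthread_mutex_lock(&lf_lock);
        leader = NOBODY;
        pthread_cond_signal(&lf_cond);       /* wake exactly one follower */
        pthread_mutex_unlock(&lf_lock);
    }

retireasleader() signals rather than broadcasts because only one follower should be promoted at a time.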

How many threads should there be in a leaders/followers thread pool? Obviously fewer than one per connected client; otherwise we would just use the threaded server pattern, which is simpler to program. Can we come up with a more precise characterization of the desired number of threads? Consider the role of this server: it serves client requests by making use of other resources. We want to use threads because those other resources impose delays, but there are many of them, so we wish to overlap those delays for different clients.

Gaining more control over the set of waiting requests

In the example code above we let the OS manage the queue of waiting requests: they are either connect requests or request messages from clients, waiting to be read from file descriptors. What if we needed more control over which of the requests would be processed first? Idea: impose an intermediary.

Bag of Tasks

Leaders and followers is similar to the bag-of-tasks pattern discussed in the textbook (p. 132). Each task in the bag of tasks corresponds to what we have called a client request. The multiple threads of the leaders and followers pattern are the worker processes of bag-of-tasks. Note that LF explicitly addresses something that is left implicit in the BoT paradigm: which process/thread is to be the next to remove an element from the set of waiting tasks/requests.
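A minimal sketch of such an intermediary in C with pthreads: a bounded, mutex-protected bag of request descriptors that worker threads drain. The names (task_t, bag_put, bag_get) and the fixed capacity are illustrative; replacing the FIFO with a priority queue is what would give the finer control over ordering discussed above.

    #include <pthread.h>

    #define BAG_CAP 128

    typedef struct { int clientfd; } task_t;       /* one waiting request */

    static task_t bag[BAG_CAP];
    static int head = 0, tail = 0, count = 0;
    static pthread_mutex_t bag_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  bag_nonempty = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  bag_nonfull  = PTHREAD_COND_INITIALIZER;

    /* Producer side: the demultiplexing thread deposits requests here. */
    void bag_put(task_t t) {
        pthread_mutex_lock(&bag_lock);
        while (count == BAG_CAP)
            pthread_cond_wait(&bag_nonfull, &bag_lock);
        bag[tail] = t;
        tail = (tail + 1) % BAG_CAP;
        count++;
        pthread_cond_signal(&bag_nonempty);
        pthread_mutex_unlock(&bag_lock);
    }

    /* Consumer side: each worker thread removes the next task and services it. */
    task_t bag_get(void) {
        pthread_mutex_lock(&bag_lock);
        while (count == 0)
            pthread_cond_wait(&bag_nonempty, &bag_lock);
        task_t t = bag[head];
        head = (head + 1) % BAG_CAP;
        count--;
        pthread_cond_signal(&bag_nonfull);
        pthread_mutex_unlock(&bag_lock);
        return t;
    }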

Example 1: Akamai

Akamai is a company that sells the service of web caching. Akamai has thousands of servers worldwide, on thousands of networks. The goal is to have servers that are close in network distance to all Internet users. Akamai uses DNS manipulation to direct clients to nearby servers. Once a client request reaches an Akamai server, satisfying the request may involve interaction with Akamai's customer's servers. (Akamai's customer in this case is the company that owns the web site; the client might be that company's customer.)

An Akamai-like server might have thousands of connected clients simultaneously, only some of which have an outstanding request. The ones with outstanding requests will have interactions with the customer's servers that need to be managed over several exchanges; a thread is a natural way to manage the state evolution of each of these. But we can't afford to tie up a thread for each connected client.

Example 2: NFS server

Before the Solaris OS had a multithreaded kernel, and still today in Linux, the leaders and followers pattern was used to implement the NFS server. All of the work of an NFS server can happen in the kernel: request packets arrive first in the kernel, and file reads and writes can only get to the disk by being handled by the kernel. Thus, although NFS can be implemented in a user-level process, doing so adds overhead that could be avoided by a kernel implementation. The problem is that the OS kernel in these systems is an asynchronous system, and as we've seen, handling complex state evolution in an asynchronous system over long periods adds a lot of complexity. What the NFS implementors did was, in essence, create a fixed-size pool of user processes that do nothing but make a single system call that waits in the kernel for NFS requests. As each request arrives it is assigned to one of these processes, which goes through the sequence of steps necessary to satisfy that request. Once the request is satisfied, the process is ready to accept the next request. In this system the NFS server can be working on at most a fixed number of requests at a time.

Once the kernel evolved to include kernel-level threads, this subterfuge of user-level processes was no longer needed, and threads could be created as needed to handle the load. If I were designing the system, I would certainly still use a thread pool so that I didn't have to create a thread for each request, but with kernel-level threads available I could make the size of the thread pool change dynamically.

Example 3: CORBA ORB

Leaders and followers might be one of the options offered by a CORBA ORB for implementing servant objects. (CORBA ORBs typically let the system implementor choose among several of the patterns discussed above, sometimes on an object-by-object basis.)