Figure 6.1 System layers

Operating System Support


Introduction

Distributed systems act as resource managers for the underlying hardware, allowing users access to memory, storage, CPUs, peripheral devices, and the network. Much of this is accomplished by operating systems and network operating systems.

Virtual Machines

Because multiple processes can run at the same time on a hardware device, an operating system provides a virtual machine, giving the user the impression of control of a system. This includes protection of each user process from interference by another process. To accomplish this, operating systems typically use two levels of access to the system resources. The only way to switch from one mode to another is through system calls not accessible to users.

Kernel Mode

All instructions are available.
All memory is accessible.
All registers are accessible.

User Mode

Memory access is restricted: the user is not allowed to access memory locations outside a range of addresses, and cannot access device registers.

Network Operating Systems

Autonomous nodes manage their own resources, with one system image per node. Machines are connected via a network. Processes are only scheduled locally, and user action is required for distribution. E.g., Windows XP, UNIX, Linux.

Distributed Operating Systems

Single (global) system image; processes are scheduled across nodes, e.g., for load balancing or optimization of communication. No true distributed O/S is in use, owing to legacy problems and compromised user autonomy/integrity. Solution: middleware + O/S.

System Layers (figure)

Applications, services
Middleware
OS: kernel, libraries & servers (OS1 on node 1, OS2 on node 2: processes, threads, communication, ...)
Platform: computer & network hardware on each node

Requirements

Main components: kernels and server processes. Requirements: encapsulation, transparency for clients, protection from illegitimate accesses, and concurrent processing.

Requirements (cont.)

Remote invocations require a communication subsystem and scheduling of requests.

Core O/S Components

Process manager
Communication manager
Thread manager
Memory manager
Supervisor

Core O/S Functions

Process management: creating, managing, and destroying processes. Every process has an address space and one or more threads.
Thread management: creating, synchronizing, and scheduling threads.
Communications management: all communications between threads in the same computer, which may include remote processes.
Memory management: control of physical and virtual memory.

The O/S Supervisor

The supervisor dispatches interrupts, system call traps, and exceptions; controls the memory management unit and hardware caches; and manipulates the processor and floating-point registers.

Protection

All resources must be protected from interference. This includes protection from access by malicious code, but it also includes protection from faults and other processes that may be running on the same computer. Protection might also prevent the bypassing of required activities such as login authentication and authorization.

Kernels and Protection

Most processors have a hardware mode register that permits privileged instructions to be strictly controlled: supervisor mode for privileged instructions, user mode for unprivileged instructions. Separate address spaces are allocated, and only the privileged kernel can access the privileged spaces. Usually a system call trap is required to access privileged instructions from user space.

Protection Overhead

Protection comes at a price: processor cycles to switch between address spaces; supervision and protection of system call traps; establishment, authentication, and authorization of privileged users and processes. Since all overhead consumes expensive resources, it is always a key concern of IT managers.

Kernel Types

Microkernel: contains code that must execute in kernel mode. Functions: setting device registers, switching the CPU between processes, manipulating the MMU, and capturing hardware interrupts. Other OS functions run in user mode, e.g., file systems and network communication.

Kernel Types (cont.)

Monolithic kernel: performs all O/S functions; usually very large.

Kernel vs. Microkernel

Microkernel advantages: testability, extensibility, modularity, binary emulation.
Monolithic kernel advantages: the way most OSs operate; performance.
Hybrid solutions may be ideal.

Processes

A typical process includes an execution environment and one or more threads. An execution environment includes:
An address space
Thread synchronization and communication resources such as semaphores
Communication interfaces such as sockets
Higher-level resources such as open files and windows
Many earlier versions of processes only allowed a single thread, thus the term multi-threaded process is often used for clarity.

Address Spaces

An address space is a management unit for the virtual memory of a process. It consists of non-overlapping regions accessible by the threads of the owning process. Each region has an extent (lowest virtual address and size), read/write/execute permissions for the process's threads, and whether it grows up or down. It is page oriented, and gaps are left between regions to allow for growth.

Address Space (figure): from address 0 to 2^N: text region, heap, stack, and auxiliary regions.

Linux Address Spaces

Address spaces are a generalization of the (L)Unix model, which had three regions:
A fixed, unmodifiable text region containing program code
A heap, extensible toward higher virtual addresses
A stack, extensible toward lower virtual addresses
An indefinite number of additional regions have since been added.

Stack

Generally there is a separate stack for each thread. Whenever a thread or process is interrupted, status information is stored on the stack that permits the process or thread to continue from the point at which it was interrupted. Memory allocated to the stack is usually recovered when the process or thread retrieves the information and resumes; interrupts occur and resume in a last-in, first-out (LIFO) manner.

File Regions

A file stored offline can be loaded into active memory. A mapped file is accessed as an array of bytes in memory. Such a file can reduce access overhead dramatically, as it is orders of magnitude faster to access memory than disk files.

Shared Memory Regions

Sometimes it is desirable to share memory between processes, or between a process and the kernel. Reasons for sharing memory include:
Libraries that might be large and would waste memory if each process loaded a copy
Kernel code and data accessed during system calls and exceptions
Data sharing and communication between processes on shared tasks
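A minimal sketch of a mapped file in Python, using the standard mmap module (the filename "demo.bin" is just an example):

```python
import mmap
import os

# Write a small file, then map it into the address space as a byte array.
path = "demo.bin"
with open(path, "wb") as f:
    f.write(b"hello mapped file")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as region:
        first = bytes(region[0:5])     # reads are ordinary memory reads
        region[0:5] = b"HELLO"         # writes go back to the file

with open(path, "rb") as f:
    updated = f.read(5)
os.remove(path)

print(first, updated)   # b'hello' b'HELLO'
```

The mapping turns file I/O into memory access, which is the overhead reduction described above.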

Shared Memory Problems

Bus contention. Solution: per-processor cache, which introduces cache consistency problems.
Software cache consistency solution approaches: write-through cache; don't cache updateable shared segments; flush cache in critical sections; prevent concurrent access.

Process Creation

An operating system creates processes as needed. In a distributed environment, there are two independent aspects of the creation process: choice of a target host, and creation of an execution environment and an initial thread within it.

Process Management

Process creation: a parent process spawns a child process. Choice of host is performed by a distributed system service, followed by creation and initialization of the address space.
Process control: create, suspend, resume, kill.
Process migration: an expensive operation!

Process Management - Creation

Creating a new execution environment: either standard and statically pre-defined, or derived from an existing environment, e.g., by fork(). The text region is usually physically shared; some data regions are shared or copied (copy-on-write).
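A POSIX-only sketch of creation from an existing environment via fork(): the child starts as a copy of the parent, and the OS typically shares the pages copy-on-write until one side writes.

```python
import os

# fork() gives the child a copy of the parent's execution environment.
value = 41

pid = os.fork()
if pid == 0:
    # Child: its write diverges from the parent (a private copy is made).
    value += 1
    os._exit(0 if value == 42 else 1)

# Parent: reap the child and check that its copy diverged from ours.
_, status = os.waitpid(pid, 0)
child_ok = (os.WEXITSTATUS(status) == 0)
print(child_ok, value)   # True 41
```

The parent's variable is untouched even though both processes started from the same address space.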

Choice of Target Host

A transfer policy decides whether to situate the process locally or remotely. A location policy determines which node should host the new process. Location policies may be static or adaptive. Static policies ignore the current state of the system and are designed based on the expected long-term characteristics of the system. Adaptive policies are based on unpredictable runtime factors such as load on a node.

Dynamic Location Policies

Load sharing policies use a load manager to allocate processes to hosts. Sender-initiated policies require the node creating the process to specify the host. Receiver-initiated policies advertise for more work when load is below a certain threshold. Migratory policies can shift processes between hosts at any time.
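A toy sketch of the two policies above; the threshold and the node loads are made up for illustration.

```python
# Transfer policy: keep the process local while local load is below a
# threshold. Location policy (adaptive): pick the least-loaded remote node.
LOCAL_THRESHOLD = 0.8   # assumed cutoff, not from any real system

def choose_host(local_load, remote_loads):
    """Return the node that should run a newly created process."""
    # Transfer policy: run locally unless this node is overloaded.
    if local_load < LOCAL_THRESHOLD:
        return "local"
    # Location policy: least-loaded remote node, falling back to local
    # if every remote node is busier than we are.
    host, load = min(remote_loads.items(), key=lambda kv: kv[1])
    return host if load < local_load else "local"

print(choose_host(0.3, {"node2": 0.1}))                # local
print(choose_host(0.9, {"node2": 0.2, "node3": 0.5}))  # node2
```

A static policy would compute the same decision from fixed, expected loads instead of the runtime values passed in here.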

Process Management - Migration

Migration is an expensive operation and not in widespread use for load balancing, but it may be useful for maintenance and for long-running tasks. It requires a choice of host (location policy).

Creating the Execution Environment

Once a host is selected, a new process requires an address space with initialized contents and default information such as files. The new address space can be defined statically, or copied from an existing execution environment. If copied, content may be shared, and nothing is written to the new environment until a write instruction occurs for either process; then the shared content is divided. This technique is called copy-on-write (next slide).

Copy-on-write (figure): region RB in process B's address space is copied from region RA in process A's; A's and B's page tables initially point to shared frames, and a frame is physically copied by the kernel only after one process writes to it.

Threads

A single process can have more than one activity going on at the same time. For example, it may be performing an activity while at the same time it needs to be aware of background events. There may also be background activities such as loading information into a buffer from a socket or file. Servers may service many requests from different users at the same time.

Client and Server Threads (figure): in the client, thread 1 generates results while thread 2 makes requests to the server; in the server, N threads handle receipt and queuing of requests and input-output.

Multithreaded Server Architectures

Worker pool: a predetermined, fixed number of threads is available for use as needed and returned to the pool after use.
Thread-per-request: a new thread is allocated for each new request, and discarded after use.
Thread-per-connection: a new thread is allocated for each new connection; several requests can use that thread sequentially, and the thread is discarded when the connection is closed.

Multithreaded Server Architectures (cont.)

Thread-per-object: a new thread is allocated for each remote object; all requests for that object wait to use that thread, and the thread is discarded when the connection to the object is destroyed.

Server Threads (figure): a. thread-per-request (worker threads behind an I/O thread); b. thread-per-connection (per-connection threads serving remote objects); c. thread-per-object (per-object threads serving remote objects).
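The worker-pool architecture can be sketched with a fixed set of threads taking requests from a shared queue; the request "handling" here (doubling a number) is a stand-in for real I/O, and the pool size is arbitrary.

```python
import queue
import threading

NUM_WORKERS = 4                     # fixed, predetermined pool size
requests = queue.Queue()            # receipt & queuing of requests
results = []
results_lock = threading.Lock()

def worker():
    while True:
        req = requests.get()
        if req is None:             # sentinel: shut this worker down
            return
        with results_lock:
            results.append(req * 2)  # "execute" the request

pool = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in pool:
    t.start()

for i in range(10):                 # ten incoming requests
    requests.put(i)
for _ in pool:                      # one sentinel per worker
    requests.put(None)
for t in pool:
    t.join()

print(sorted(results))   # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Thread-per-request would instead spawn a fresh thread for each queued item and discard it afterwards, trading creation cost for unbounded concurrency.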

Client Threads

Clients block while awaiting a response, e.g., in a blocking receive(). Other threads can be scheduled while waiting. Clients may hold multiple server connections; web browsers, for example, access multiple pages concurrently and may issue multiple requests for the same page.

What is a Thread?

It is a program's path of execution. Most programs run as a single thread, which could cause problems if a program needs multiple events or actions to occur at the same time.

What is a Thread? (cont.)

Multi-threading literally means multiple lines of a single program can be executed at the same time. However, it is different from multi-processing because all of the threads share the same address space for both code and data, resulting in less overhead. So, by starting a thread, an efficient path of execution is created while still sharing the original data area from the parent.

Threads vs. Processes

Both methods work, but threads are more efficient: creation requires no new execution environment, and context switches, initial page faults, caching, and resource sharing are cheaper. The problem is concurrency control: threads share an environment with one another, therefore we have the problems of a shared memory.
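The shared address space, and the concurrency control it forces, can be shown in a few lines: both threads below update the same variable, so a lock is required.

```python
import threading

# Threads of one process share data: both workers increment the same
# counter. The lock exists precisely because the memory is shared.
counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(100_000):
        with lock:
            counter += 1

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 200000: both threads saw the same shared variable
```

Two forked processes running the same loop would each end with their own private count of 100,000, since their address spaces are separate.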

Thread States

Running state: a thread is said to be in the running state when it is being executed.
Ready state (runnable): a thread in this state is ready for execution but is not currently executing. Once a thread gets access to the CPU, it moves to the running state.

Thread States (cont.)

Dead state: a thread reaches the dead state when the run method has finished execution.
Waiting state (yielding): in this state the thread is waiting for some action to happen; once that action happens, the thread goes into the ready state. Threads in the waiting state could be sleeping, suspended, blocked, or waiting on a monitor.

Uses of Threads

Threads are used for all sorts of applications, from general interactive drawing applications to games. For instance, a single-threaded program is not capable of drawing pictures while reading keystrokes, so it has to give full attention either to listening for keystrokes or to drawing pictures. With threads, one thread can listen to the keyboard while another draws the pictures.

Uses of Threads (cont.)

Another good use of threads is on a system with multiple CPUs or cores. In this case each thread runs on a separate CPU, resulting in true parallelism instead of time sharing.

User Space Threads

The kernel is unaware of threads; threads are managed by a run-time system, with thread information in shared memory. System calls have to be non-blocking, which adds overhead (an additional system call), and control cannot be transferred on a page fault. Thread scheduling is part of the application, and inter-process thread scheduling is impossible.

Kernel Threads

Managed by the kernel, with data structures in kernel space. Create/destroy are system calls. System calls and page faults are not problematic, but synchronization is more expensive.

Kernel vs. User Space Threads

Kernel space threads are better for I/O-intensive applications; user space threads are better for fine-grained, computation-intensive parallel applications. Hybrid solutions are possible, e.g., the application provides scheduling hints.

Example: FastThreads

FastThreads is a hierarchic, event-based thread scheduling system. It manages a kernel on a computer with one or more processors and a set of application processes. Each process has a user-level scheduler, while the kernel allocates virtual processors to processes. Part A of the next slide shows the kernel allocating processors to processes on a three-processor machine.

Scheduler Activations (figure): A. assignment of virtual processors to processes A and B by the kernel; B. events between the user-level scheduler and the kernel: process P added, SA preempted, SA unblocked, SA blocked, P idle, P needed. Key: P = processor; SA = scheduler activation.

Kernel Notifications

We see from the previous slide that the process notifies the kernel when a virtual processor is idle and no longer needed, or when an extra virtual processor is needed. The kernel notifies a process when a scheduler activation notifies that process's scheduler of an event. There are four types of scheduler activation events, shown on the next slide.

User-Level Scheduler Notifications

A new virtual processor allocated
Scheduler activation blocked
Scheduler activation unblocked
Scheduler activation preempted

Communication and Invocation

What communication primitives does the O/S supply? Which protocols does it support, and how open is the communication implementation? What steps are taken to make communication as efficient as possible? What support is provided for high-latency and disconnected operation?

Invocation Overhead

In a typical employment situation, a worker might be expected to complete a number of tasks each day that contribute toward the goals of the employer. To accomplish those tasks, the worker might also have to accomplish tasks, such as commuting to work, that do not contribute directly to the employer's goals. A process likewise has tasks that are within its scope and tasks that do not contribute directly to its goals. Process invocation can be compared to commuting: it is an overhead expense.

Invocation

The invocation of a local process takes place within local memory and may involve a few tens of instruction cycles. Invocation of a remote process involves network activities and possibly access to files, and may require billions of instruction cycles in processors running at speeds measured in gigahertz. These invocation activities are external to the desired processing activities and increase costs without adding value.

Delay Factors: Address Space Changes (figure)

Invocation Performance

Delay factors: domain transitions, address space changes, network communication (bandwidth, number of packets), thread scheduling, and context switching.

Latency

Invocation costs are the delays required to set up communications plus the non-goal-performing overhead of invocations. These fixed overhead costs measure the latency of the connection. Substantial efforts go into minimizing and reducing latency costs in distributed applications.

Invocation Costs as a Percentage of Throughput

As more work is accomplished for a fixed amount of overhead, the overhead is less of a concern as a percentage of total costs. With very small data sizes, most of the system time may be spent in overhead activities. With large data transfers, the overhead costs may be negligible as a percentage of total costs. The next slide illustrates this as a graph of RPC delay against packet size in RPC transfers.
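The relationship can be illustrated with made-up numbers: a fixed per-invocation overhead plus a size-dependent transfer time, with the overhead's share of total delay shrinking as the requested data size grows.

```python
# Illustrative figures only: both constants are assumptions, not
# measurements of any real system.
FIXED_OVERHEAD_US = 500          # setup, traps, scheduling (microseconds)
BANDWIDTH_BYTES_PER_US = 100     # assumed network throughput

def overhead_fraction(size_bytes):
    """Fraction of total invocation delay spent on fixed overhead."""
    transfer = size_bytes / BANDWIDTH_BYTES_PER_US
    total = FIXED_OVERHEAD_US + transfer
    return FIXED_OVERHEAD_US / total

print(round(overhead_fraction(1_000), 2))      # 0.98: small request, mostly overhead
print(round(overhead_fraction(1_000_000), 2))  # 0.05: large transfer, overhead negligible
```

The two printed values are the graph's two ends: latency dominates tiny requests, bandwidth dominates bulk transfers.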

RPC Delay Against Parameter Size (figure): RPC delay plotted against requested data size in bytes, from 0 to 2000.

Lightweight RPC

Lightweight RPCs run on the same machine. One way to reduce the overhead of a remote invocation is to share some of the costs. While process invocation is expensive, some activities can be done once and then reused, while others must be done for each communication. Lightweight RPC attempts to minimize overhead by sharing process activities within parent processes that are pre-established. Overhead costs can be reduced by as much as two-thirds.

Lightweight RPC (figure): 1. the client stub copies arguments onto a shared A stack; 2. trap to kernel; 3. upcall into the server; 4. the server executes the procedure and copies results; 5. return (trap). Client and server stubs run in user space above the kernel.

Concurrent Invocation

Another approach to reducing communication overhead is to reduce the number of messages sent to establish the connection, and to continue processing while communicating instead of suspending threads until a message is received. Serialized and concurrent invocations are compared on the next slide.
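The benefit of continuing to work while communicating can be sketched with a thread pool; the sleep() here is a stand-in for a remote call's round trip, and the durations are made up.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def invoke(req):
    time.sleep(0.1)        # simulated transmission + remote execution
    return req * 2

# Serialized: each invocation blocks until its reply arrives.
start = time.perf_counter()
serial = [invoke(r) for r in range(4)]
serial_time = time.perf_counter() - start

# Concurrent: the four invocations overlap in flight.
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    concurrent = list(pool.map(invoke, range(4)))
concurrent_time = time.perf_counter() - start

assert serial == concurrent == [0, 2, 4, 6]
assert concurrent_time < serial_time
print("concurrent invocation overlapped the waiting time")
```

The results are identical; only the elapsed time differs, which is exactly the point of the timing diagram on the next slide.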

Serialized and Concurrent Invocation Timing (figure): in serialized invocation the client processes arguments, marshals, sends, and blocks until each reply is received and unmarshalled before issuing the next request; in concurrent invocation the client marshals and sends several requests before the replies arrive, overlapping transmission, server execution, and result processing.

Delay Factors: Network Communication

Bandwidth and packet size: delay increases almost linearly with size.
Packet initialization, e.g., headers and checksums.
Marshalling and data copying across address spaces and across protocol layers.
Synchronization.

Characteristics of an Open Distributed System

Run only the system software at each computer that is necessary for it to carry out its particular role.
Allow the software implementing any particular service to be changed independently of other facilities.
Allow alternatives of the same service to be provided, when this is required to suit different users or applications.
Introduce new services without harming the integrity of existing ones.

Kernel Architecture

Monolithic kernels that serve all the functions of the O/S may not be ideally suited for distributed applications: they are massive, undifferentiated, and intractable. An alternative is to have the kernel perform only the most basic abstractions and allow microkernels that can be adapted for specialized functions to manage system resources.

Monolithic Kernel and Microkernel (figure): in a monolithic kernel, servers S1-S4 run as kernel code and data; in a microkernel, servers run as dynamically loaded server programs above a small kernel.

Role of the Microkernel

The microkernel supports middleware via subsystems, such as language support subsystems and an OS emulation subsystem, layered between the middleware and the hardware.

Multiuser O/S and Semaphores

With many processes in a single hardware device, multiple processes need controls to enable the CPU (or CPUs) to switch between tasks. A semaphore can be thought of as an integer with two operations, down and up. The down operation checks to see if the value is greater than 0: if it is, it decrements the value and continues; if it is not, the calling process is blocked. The up operation does the opposite: it first checks to see if there are any blocked processes that could not complete earlier.
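The down/up operations can be sketched directly with a condition variable; this is an illustration only, since Python's built-in threading.Semaphore already provides the same behavior as acquire()/release().

```python
import threading

class Semaphore:
    def __init__(self, value=0):
        self._value = value
        self._cond = threading.Condition()   # makes down/up atomic

    def down(self):
        with self._cond:
            while self._value == 0:          # value not > 0: block
                self._cond.wait()
            self._value -= 1                 # value > 0: decrement, continue

    def up(self):
        with self._cond:
            # Incrementing and then notifying one waiter is equivalent to
            # "unblock one if any, else increment" under a condition variable.
            self._value += 1
            self._cond.notify()

sem = Semaphore(0)
order = []

def waiter():
    sem.down()                # blocks until the main thread calls up()
    order.append("woken")

t = threading.Thread(target=waiter)
t.start()
order.append("signalled")
sem.up()
t.join()
print(order)   # ['signalled', 'woken']
```

The waiter cannot append until up() runs, so the ordering is deterministic even though two threads are involved.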

Monitors A monitor is a construct similar to an object, containing i variables and procedures The variables can only be accessed by calling one of the procedures The monitor allows only one process at a time to execute a procedure 75 What Can the O/S Do For Me? In a course on distributed systems, it is logical l to look at operating systems and network operating systems from the perspective of the services we require for a distributed system What does a server require? What does a client require? 76

What Does a Server Do?

Waits for client requests. Serves many requests at the same time. Gives priority to important tasks. Manages background tasks. Keeps on going like the Energizer bunny. Gobbles up memory, CPU cycles, and disk space.

Care and Feeding of a Server

Servers tend to need a high level of concurrency. This requires task management, which is best done by an operating system. Using the stack concept, we separate support services from the application services that the server performs. Many services are provided by operating systems (OS) and their extension, network operating systems (NOS).

Basic Services of an O/S

Task preemption
Task priority
Semaphores
Local/remote inter-process communication (IPC)
File system management
Security features
Threads
Intertask protection
High-performance, multi-user files
Memory management
Dynamic run-time extensions

Extended Services

Extended services are a category of middleware that exploit the potential of networks to distribute applications: DBMS, TP monitors, and objects; the distributed computing environment; network operating systems; communications services.

Service Needs

Ubiquitous communications
Access to network file and print services
Binary large objects (BLOBs)
Global directories and network Yellow Pages
Authentication and authorization services
System management
Network time
Database and transaction services
Internet services
Object-oriented services

Summary

Middleware and O/S
Kernels: monolithic vs. microkernels
Processes and threads
RPC delays
Lightweight RPC