Modern computers often do many things at the same time (web servers, disk accesses, background processes waiting for e-mails, ...).


1 Processes
Modern computers often do many things at the same time (web servers, disk accesses, background processes waiting for e-mails, ...). Hence, a system supporting multiple processes is needed. In a multiprogramming system the CPU switches from process to process, running each process for some milliseconds.
The Process Model: All runnable software (sometimes including the OS) is organized into a number of sequential processes. A process is an instance of an executing program, including the program counter, registers, and variables. Since the CPU switches from process to process, processes must not be programmed with built-in assumptions about timing ("wait for 2 seconds", ...). If a program is running twice, it counts as two processes! Section 2 1

2 Process Creation
There are four principal events that cause processes to be created:
1. System initialization: when a system is booted, usually several processes are started; some interact with humans, some are background processes (daemons).
2. Execution of a process-creation system call: a running process can issue system calls to create processes helping it (fetching data from a disk and processing the data can be done by two processes).
3. A user request to create a new job, like starting a program.
4. Initiation of a batch job by the OS.

3 Examples
Example UNIX: There is only one system call to create a process: fork. It creates an exact clone; the two processes have the same memory image, open files, ... But these are physically two copies! After that, the execve() (execute program) system call transforms the child into a new process (with a different memory image and program).
Example Windows: Windows has a function CreateProcess that creates a new process and loads the new program. The call has 10 parameters.

4 Process Termination
There are usually four exit conditions:
1. Normal exit, because the work is finished. This is done by a system call (exit in UNIX).
2. Error exit, for example the compiler is asked to compile a non-existing program. Then a pop-up window might be displayed.
3. Fatal error, for example division by 0.
4. Killed by another process, for example via the kill call in UNIX.

5 Process Hierarchies
Processes can have children, the children can have children, ... Hence we get a process tree. In UNIX a process and all its children form a group. When a user sends a signal from the keyboard, the signal is delivered to all processes of the group. The processes can individually decide what to do with the signal.
How UNIX initialises itself: A process init is present in the boot image. When it starts, it reads a file telling it how many terminals are active. Then it forks off one new process per terminal. The started processes wait for someone to log in. If a login is successful, the login process executes a shell accepting commands. These commands can create new processes, all belonging to the same tree, with init at its root.
Windows has no process hierarchy.

6 Process States and Implementation
There are 3 process states: A running process uses the CPU at the moment. A ready process waits for the CPU. A blocked process waits for something, like an input.
Implementation: Usually the OS maintains a process table with one entry per process. The entry contains the state of the process, the program counter, register contents, ...: all the information necessary to resume the process after it has been blocked or stopped (see Figure 2-4 in the textbook).

7 Excursion: Disk Interrupt
Suppose a user process is running when a disk interrupt happens. Associated with each I/O class is a fixed location in main memory called the interrupt vector. It contains the address of the interrupt service procedure. The interrupt hardware pushes PC, PSW, registers, ... onto the current stack, and the computer jumps to the address specified in the interrupt vector. Now the interrupt service routine takes over. First it saves the registers in the process table and creates a new stack. Then the "real" interrupt service routine (a C program) is run to deal with the interrupt. The scheduler decides which job should run next. Finally, the registers and memory map are loaded from the process table (done in assembly language).

8 Multiprogramming
Having several "threads" (in the sense of lines of computation) improves the CPU utilization. Example: Suppose each process spends a fraction p of its time waiting for I/O, and we have n processes. Then the probability that all processes are blocked at once is p^n (they have to be independent!). The CPU utilisation is 1 - p^n, which increases with n.

9 Threads
Threads work in the same address space, whereas processes have their own address spaces. Reasons for having threads: This enables us to have "parallel" processes that share memory. Threads are easier and faster to create, compared to processes. Having threads allows computing (now divided into several threads) and I/O to overlap. Threads are great in systems with multiple CPUs.

10 Examples
Word text editor: A word processor displays the document in the same way it will appear on the printed page. Changing one word can result in a change to every page in the document. This takes a long time! It helps to have one thread that interacts with the user and one that formats the document. A third thread could then do regular backup copies to the disk. Note: here the threads have to access the same document, so this is no work for processes!
WWW server: Web pages that are often accessed are stored in a cache. A dispatcher thread reads incoming requests. It examines each request and gives it to an idle (blocked) worker thread. The worker thread wakes up and checks the cache (all workers have to be able to address the cache). If the requested data is not in the cache, the worker fetches the data from the disk.

11 Threads II
A process groups related resources together (address space, open files, child processes, ...). A process can also be regarded as a thread of computation: it has a program counter, variables, registers, a stack, ... This is where threads come in. Threads are the entities scheduled for execution on the CPU. Multiple executions can take place in the same process environment. All threads have access to the same variables. Threads enable parallelism. Threads are not protected from each other; it is assumed that they work together. Threads are also called lightweight processes!

12 Classical Model
Threads can be in the states blocked, running, ready, and terminated. Threads allow multiple executions to take place in the same process environment, largely independent of each other. When a multi-threaded process is run on a single-CPU system, the threads have to take turns running. This gives the appearance of parallelism.
Private to a thread: program counter, registers, stack, and state.
Shared by all threads: address space, global variables, open files, child processes, alarms, signals, ...
NOTE: All threads of one process have the same address space. There is no protection between threads!

13 Classical Model II
Processes usually start with one thread and create more threads at run time. New threads can be created with something like create thread. Threads can have child threads, ... When a thread is finished, it calls something like thread exit. Threads can wait for some specific thread to exit by calling thread join. There is no clock interrupt to force a thread switch; scheduling is (usually) done on the process level. thread yield allows a thread to voluntarily give up the CPU. Threads can also wait for another thread to finish some work.

14 Problems with Threads
Threads can cause implementation problems: What happens if a process calls fork? Does the new process inherit all threads? If not, does the process function properly without the threads? Assume the new process gets a copy of all threads: if one of the parent threads is blocked, is the corresponding child thread blocked, too? What happens if one thread closes a file that is still used by another thread? Solving these problems is not easy, but possible.

15 Example: POSIX Threads
UNIX supports Pthreads, an IEEE standard. The standard defines over 60 function calls for threads. Threads can have certain properties (scheduling information, ...); they have identifiers, a set of registers, ... pthread create creates a new thread; the id of the thread is returned as the function value. The call is similar to fork. pthread exit stops the thread and releases its stack. pthread join waits for another thread to finish. pthread yield allows another thread to run. pthread attr init creates the attribute structure associated with a thread and initializes it to the default values. pthread attr destroy removes all attributes.

16 Implementing Threads
There are two main ways to implement a thread package: in user space or in the kernel. A hybrid implementation is also possible.
Implementation in user space: The kernel knows nothing about threads. The threads run on top of a run-time system, which is a collection of procedures that manage threads. Each process has its own thread table to keep track of its threads. The run-time system has to do the scheduling. Note: the OS scheduler will schedule threads from the same process until the kernel takes the CPU away!
Advantages: Such a package can be implemented on top of an OS that does not support threads. Thread switching is very fast if the machine has instructions to save the registers. It is faster than a kernel implementation, since all procedures are "local". Processes can have their own specialized scheduling system.

17 Disadvantages
Blocking system calls: if a thread makes a blocking system call, the whole process will be blocked. Hence, the parallelism is gone! Here one would need non-blocking system calls or other fixes. The scheduler has to work without clock interrupts, making it impossible to schedule threads in round-robin fashion. Unless a thread gives up the CPU of its own free will, the scheduler will get no chance! Threads only make sense in applications where computations are frequently blocked.

18 Implementing Threads II
In the kernel: The kernel has the thread table and keeps track of all threads. Calls that block a thread are implemented as system calls. If a thread is blocked, the scheduler can then schedule a thread from the same or from another process! Big advantage: there are no problems with blocking system calls, and scheduling can be done by the OS. Biggest disadvantage: the cost of thread system calls is much higher. There are still problems, like "which thread should get a signal sent to the process?"
Hybrid implementations: Here one usually has a mix of user-level and kernel-level threads.

19 Interprocess Communication
Processes frequently need to communicate with each other. Here: not in the sense of sending messages; one could call it "process coordination". There are 3 main issues:
1. How can a process pass information to another one?
2. How to make sure that processes do not get in each other's way? Example: two processes try to grab the last seat in an airline reservation system.
3. How to enforce proper sequencing? Example: process A produces data and process B reads the data. Then B has to wait until A is finished.
(2) and (3) also apply to threads, and they have the same solution for threads! (1) can be resolved easily for threads, since threads use the same memory space. Hence, they can communicate via that memory.

20 Race Conditions
Now we talk about process coordination!
Definition Race Condition: the result of a process unexpectedly and critically depends on the sequence or timing of other events. Example: the value of a shared variable depends on the order in which the threads are scheduled.

21 Example: Printer Spooler
Implementation: The communication between processes is done via a spooler directory. The spooler directory has a large number of slots; each one can hold a file name. The spooler is implemented as a circular list with two shared variables: out, pointing to the next file to be printed, and in, pointing to the next available slot in the directory. The variables are available to all processes. When a process wants to print, it enters the file name into the next slot of the spooler directory. The printer daemon process periodically checks if there are files to be printed in the directory. If yes, it removes the name from the spooler and prints the file. Then it updates the out pointer.

22 The Race Condition
Assume process A and process B decide to print at more or less the same time. It could happen that A reads in (let's assume in = 7) and stores the value in a local variable. The scheduler decides that now process B should get the CPU. B stores the file name it wants to print in slot 7 and updates in. At some point the scheduler decides that it is again A's turn, and A, still using its stale value 7, overwrites the file name of B. B will now wait forever, ...

23 Critical Regions
Definition: the part of a program that must complete execution before other processes can have access to the resources being used. Processes within a critical region can't be interleaved without threatening the integrity of the operation.
OR Definition: the part of a process where shared memory is accessed or other things are done which can result in races.
Last example: the access to in and out is a critical region. The outcome depends on who runs precisely when. Solution: prohibit more than one process from reading and writing the shared data (in and out) at the same time. This solution is called mutual exclusion.

24 Mutual Exclusion
Mutual exclusion is one of the major design issues with OSes. Idea: ensure that only one process enters its critical region at a time. This avoids races! For a good solution we need 4 conditions:
1. No two processes may be simultaneously inside their critical regions.
2. No assumptions may be made about speeds or the number of CPUs.
3. No process running outside its critical region may block other processes.
4. No process should have to wait forever to enter its critical region.

25 Critical Regions II
What should happen is the following: Process A enters its critical region at time T1. At time T2 > T1 process B attempts to enter its critical region, but it fails because of A. B is temporarily suspended until A leaves its critical region. Now B can enter its critical region.

26 Mutual Exclusion with Busy Waiting
Busy waiting is a technique in which a process repeatedly checks whether a condition is true. Example: is the lock available? We will have a look at
1. Disabling interrupts
2. Lock variables
3. Strict alternation
4. Peterson's solution
5. The TSL instruction

27 1. Disabling Interrupts
Very simple solution: when a process enters the critical region, it disables interrupts, so that it will keep the CPU as long as it takes to leave the critical region. The disadvantages: it does not help if a system has multiple CPUs, and user processes should never be able to stop interrupts! Example: what happens if a process never allows interrupts again? Or the process goes into an endless loop? Or it is malicious? This is a good technique for the OS itself, but not for user processes!

28 2. Lock Variables
Here we have a lock variable that can have the values 0 and 1. lock = 0 means no process is in its critical region, and lock = 1 means a process is in its critical region. A process that wants to enter its critical region is only allowed to do so if lock = 0; it then sets lock = 1. If it leaves the critical region, it sets lock = 0 again. Disadvantage: who protects lock? A process could read lock = 0 and then be suspended before setting it; a second process can then enter as well.

29 3. Strict Alternation
Here we assume that we have only two processes and a variable turn that can have the values 0 and 1. turn = 0 means that process A can go into its critical region. After leaving the critical region, it sets turn to 1. turn = 1 means that process B can go into its critical region. After leaving the critical region, it sets turn to 0. Hence, only one process at a time can be in a critical region (see the corresponding figure in the textbook).
Advantage: this approach works, and it can be generalised to several processes.
Disadvantage: the processes have to take turns. If process A left the critical region, it can only enter the critical region again after B was in its critical region. This might never happen, ... The approach is already bad if the processes have different speeds. A process can be blocked by a process that is not in its critical region (violation of Condition 3).

30 4. Peterson's Solution
Before entering its critical region, each process calls enter region with its own process number as parameter. This will cause it to wait if necessary. After being finished with the critical region, it calls leave region. See Figure 2-24 in the textbook. Note that turn and the array interested are global variables. If both processes write turn, one will succeed and the other one will have to wait!

31 5. The TSL Instruction
TSL stands for "test and set lock". This solution needs help from the hardware! Especially computers with several processors have an instruction like TSL Register, Lock. It reads the contents of the memory word lock into the register and stores a nonzero value at the memory address of lock. The operation is indivisible: the CPU locks the memory bus to prohibit other CPUs from accessing memory until it is done. See Figure 2-25 in the textbook. The solution to the problem is now easy: before entering its critical region, a process calls enter region, which does busy waiting until the lock is free (lock = 0). leave region stores a 0 in the variable lock and enables other processes to enter their critical regions. Note: busy waiting is not very efficient if the waiting phase is too long!

32 Priority Inversion
The solutions in the last section used busy waiting: a process that cannot enter the critical region sits in a loop waiting until it is allowed to do so. This approach wastes CPU time! It can also have unexpected effects, called priority inversion. Assume a computer has two processes H and L. H has high priority, L has low priority, and H runs whenever it is in the ready state. Assume that, with L in its critical region, H becomes ready to run. The scheduler then schedules H. If H wants to enter the critical region, it starts busy waiting. L never gets the CPU! As a result, both processes are stuck forever. Solution: use sleep and wakeup calls instead.

33 Priority Inversion II
Another book (Silberschatz, Galvin, Gagne) has the more standard view of priority inversion. Assume we have 3 processes L, M, and H. For the priorities we have L < M < H. Assume H requires resource R, which is used by L, and that H starts after L acquired R. Then H has to wait until L is finished with R (H is blocked). Now assume M starts during this time (L holds R, H is blocked). M is the highest-priority unblocked task. Hence, M will be scheduled and preempt L. Since L has been preempted, it cannot finish with R. M will run till it is finished, then L will run (up to the point where it frees R), and then H will run. Thus, a task with medium priority ran before a task with high priority. The book says: at least 3 priority classes are needed! Solution: increase the priority of L when it claims the resource.

34 Sleep and Wakeup
Example: Producer-Consumer Problem. The producer creates items and the consumer consumes them (this can be generalized to n producers and m consumers). Realisation: two processes share a common fixed-size buffer of size N. A shared variable count counts the number of items in the buffer. The producer cannot put an item into a full buffer (meaning count = N), and the consumer cannot read items out of an empty buffer (count = 0). The consumer goes to sleep when the buffer is empty, to be awakened when the producer puts something into the buffer. The producer goes to sleep if the buffer is full and will be awakened when the consumer has removed an item. See Figure 2-27 in the textbook for an example. We assume that C has library calls sleep and wakeup that will be translated into system calls. We also assume that we have two procedures insert item and remove item that handle the book-keeping.

35 Problems with the Solution
Assume the buffer is empty and the consumer has just read count, which is 0. The scheduler stops the consumer. The producer inserts an item and increments count. Seeing that count was 0, it calls wakeup. But the consumer is not yet asleep, and the wakeup call is lost. When the consumer runs again, it tests the value it already read (0) and goes to sleep; eventually the producer fills the buffer and sleeps, too. Both sleep forever.

36 Semaphores
In 1965 Dijkstra suggested to use an integer variable that counts the number of wakeups. There are two operations on semaphores, up and down. The down operation checks if the semaphore is > 0. If it is > 0, down decrements the value and the process continues. If it is 0, the process is put to sleep. The up operation increments the value of the semaphore. If one or more processes were sleeping on the semaphore (unable to complete a down operation), one of them is allowed to complete its down. The value of the semaphore will still be 0 after that! Important: checking the value of a semaphore and incrementing or decrementing it have to be done as a single, indivisible atomic action. This is absolutely necessary to avoid race conditions!

37 Semaphores II
Single CPU: up and down can be implemented as system calls, with the OS disabling all interrupts. Note: this is different from allowing a user process to disable interrupts; also, the atomic operation is very short! Multiple CPUs: each semaphore should be protected by a lock variable with the TSL instruction, to make sure that only one CPU at a time accesses the semaphore.
Binary semaphores: these semaphores are initialized to 1 and used by two or more processes to ensure that only one of them can be in its critical region. Each process does a down just before entering the critical region and an up after leaving it. This ensures mutual exclusion!

38 Producer-Consumer with Semaphores
The solution uses 3 semaphores: full, empty, and mutex. empty counts the number of empty slots; initially empty = N. full counts the number of used slots; initially full = 0. mutex makes sure the producer and consumer do not access the buffer at the same time; initially mutex = 1. See Figure 2-28 for the solution. Note: insert item and remove item still have to manage the buffer in the sense that they have to decide where to insert/remove an item! They cannot use empty and full. In this example semaphores are used in two different ways:
1. mutex is used for mutual exclusion.
2. full and empty are used to guarantee that certain event sequences do or do not occur. This is called synchronization.
Reading assignment: Section Mutexes in Pthreads (pp ).

39 Producer-Consumer with Semaphores II
Note: assume the downs in Figure 2-28 before inserting or removing items from the buffer were reversed (mutex first). Assume the buffer is full and the producer calls down on mutex. Then the consumer would be blocked from emptying the buffer! This is called a deadlock!
How to implement the shared memory for semaphores? There are three solutions:
1. Semaphores can be stored in the kernel and accessed by system calls.
2. Many OSes allow processes to share some of their memory space.
3. In the worst case, a shared file has to be used.

40 Mutexes
Mutexes are used as a simplified version of semaphores when counting is not needed; they manage only mutual exclusion! A mutex variable can be in only one of two states: locked or unlocked. Usually 0 means unlocked and every other value locked. mutex lock is used if a thread wants to enter the critical region. If the mutex is unlocked, the call succeeds and the calling thread is free to enter the region. mutex unlock is called by the thread after leaving the critical region. If multiple threads are blocked on the mutex, one of them is chosen at random and unblocked. Mutexes can be implemented in user space, provided that a TSL instruction is available. For an example, see Figure 2-29 in the textbook. This implementation is better than the one from Figure 2-25: the thread yields if it cannot enter the critical region.

41 Monitors
Monitors are a language concept that makes it easier to write correct programs. A monitor is a collection of procedures, variables, and data structures that are all grouped together in a module or package. Processes may call the procedures in a monitor whenever they want, but only one process can be active in a monitor at any point in time. Procedures declared outside the monitor cannot directly access the monitor's internal data. It is up to the compiler to implement the mutual exclusion on monitor entries; a common way is to use a mutex or binary semaphore. Typically, the first instructions of a monitor procedure will check if any other process is currently active in the monitor. If so, the calling process is suspended.

42 Condition Variables for Monitors
Monitors do not help when there are other reasons to block a process, for example a full or empty buffer. Here condition variables help!
Producer-Consumer example: assume a monitor procedure cannot continue because the buffer is full (or empty). The procedure does a wait on some condition variable full (empty). This causes the calling process to be blocked. Now another process can enter the monitor. To wake up the other process, issue a signal on the condition variable. If a signal is done on a condition variable on which several processes are waiting, one of them is revived. It has to be decided whether the process that issued the signal remains in the monitor, or whether the one that got the signal is allowed to enter the monitor. Note: condition variables do not store signals! See Figure 2-34 in the textbook.

43 Message Passing
Here real messages are sent! The library procedure send(destination, &mesg) sends the message to the destination. The library procedure receive(source, &mesg) receives the message sent by the source. If no message is available, the receiver can block until one arrives or return with an error code. It is also possible to receive a message from any source.
Design issues: What happens if a message does not arrive? Or an acknowledgement does not arrive? How to identify and name the processes? How to mark a message that has been re-sent? Performance is low if messages are copied.

44 Producer-Consumer with Message Passing
We assume that the messages all have the same size, and that messages that are sent but not yet received are buffered by the OS. Initialization: the consumer sends N empty messages to the producer. The producer takes an empty message and sends back a full one. If all messages are full, the producer will be blocked. The consumer reads a full message and sends an empty one back to the producer. If all messages are empty, the consumer will be blocked. See Figure 2-36 in the textbook.

45 Scheduling
The scheduler decides which jobs to run. A good scheduler can make a big difference in perceived performance and user satisfaction. CPUs are now so fast that scheduling is not so important on a personal machine, but it is very important for networked servers. Important to consider: switching processes is expensive, since one or more system calls have to be performed, the whole memory map (registers, PSW, ...) has to be exchanged, and the whole memory cache is invalidated (we will see that later). Note: scheduling applies both to threads and processes.

46 Process Behavior
First we have to see how processes behave! Nearly all processes alternate between bursts of computation and disk I/O requests. Example: the process runs for a while without stopping, then a system call is made to read or write a file, then the process runs again using the data, ... Compute-bound processes have long CPU bursts; I/O-bound processes have short CPU bursts. Note: it is the amount of computing between I/O requests that counts!

47 Process Behavior II
Modern systems: the CPUs are getting faster and faster. Consequence: the fraction of I/O-bound jobs gets larger (disk access times do not improve that fast!). Very rough scheduling rule: for I/O-bound jobs it is a good strategy to schedule the job as fast as possible, whenever it wants to run. Most probably, such a job will run only for a short time and then wait for I/O again for a long time. Compute-bound processes have to be scheduled fairly.

48 When to Schedule
Scheduling decisions are needed when a new process is created, a process exits, a process blocks/unblocks, an I/O interrupt occurs, or the time for a process is over (clock interrupt). A non-preemptive scheduling algorithm picks a process to run and lets it run until it blocks or exits. A preemptive scheduling algorithm picks a process and lets it run for at most a certain time. Preemptive scheduling often requires a clock interrupt to give the CPU control back to the scheduler.

49 Categories of Scheduling Algorithms
Different application areas come with different scheduling goals. One distinguishes between batch, interactive, and real-time scheduling. Batch jobs are still widespread for periodic tasks like payroll, inventory, interest calculation, ... Interactive users: preemption is essential to keep one process from hogging the CPU (maliciously or through a bug in a program) and denying service to others. Real-time systems: preemption is often not necessary, because the processes do not run for long periods of time. Real-time systems run only programs that are intended to further the application at hand; there is no competition. See Figure 2-39 in the textbook for scheduling goals.

50 Batch Systems: First-Come First-Served
All jobs are stored in a queue. The first job in the queue is scheduled; there is no preemption. A new job that arrives at the system is appended to the end of the queue. When the running job blocks, the first job of the queue is scheduled, and the blocked job is put at the end of the queue.
Advantages: easy to program, easy to understand, fair.
Disadvantage: very bad for a mixture of compute-bound and I/O-bound processes.

51 Batch Systems: Shortest Job First
We assume that we have an estimate of each job's running time. The scheduling rule is to run the shortest job first. The strategy has no preemption! The turnaround time of a job is the time between entering the system and leaving it (waiting time + time to work on the job).
Advantages: the rule minimizes the average turnaround time (easy to show) if all jobs arrive at the same time. Easy to understand.
Counter-example for jobs arriving at different times: A, B, C, D, E with run times 2, 4, 1, 1, 1 and arrival times 0, 0, 3, 3, 3.
Disadvantages: long jobs wait for ages. Fairness?

52 Batch Systems: Shortest Remaining Time
Again, we assume that we have an estimate of how long jobs will take. The scheduling rule is to run the job with the shortest remaining time first. The strategy has preemption: if a new job arrives that is shorter than the remaining time of the current job, the new job is scheduled.
Advantages: the rule minimizes the average turnaround time (easy to show). Easy to understand.
Disadvantages: long jobs wait for ages. Fairness?
Other rules: longest in system, longest job first, ...

53 Interactive Systems: Round Robin
The ready processes are held in a list. The first process of the list is assigned a quantum during which it can run. If the process is still running when the quantum is over, the CPU is preempted and given to the next job in the list. The preempted job is appended to the end of the list. Usually, blocking processes are suspended, too. Big question: how big should the quantum be? Too small means that lots of CPU time is wasted doing context switches. Too large results in a bad response time. Good: around the mean CPU burst of the jobs.
Advantages: fair and easy to understand. Good performance if the quantum is chosen properly.
Disadvantage: bad performance if the quantum is not chosen well.

54 Interactive Systems Priority Scheduling Assumption: there are important and unimportant processes (a daemon process versus a process displaying video). Each process is assigned a priority, and the ready job with the highest priority is allowed to run. Priorities can be assigned statically or dynamically. To prevent jobs from running forever: The scheduler might decrease the priority of the running job on every clock tick; if its priority then falls below that of another job, that job is scheduled. Each process might have a maximum time quantum it is allowed to run. Examples: give a high priority to highly I/O-bound jobs, important jobs, jobs that have not been scheduled for a long time,... Simple algorithm: set the priority to 1/f, where f is the fraction of the last quantum used by that process. Section 2 54
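The 1/f rule can be sketched in a couple of lines; the process roles and the 100 ms quantum are invented illustration values:

```python
# Sketch of the 1/f dynamic-priority rule: a process that used only a small
# fraction f of its last quantum gets the high priority 1/f.
def dynamic_priority(used, quantum):
    f = used / quantum                  # fraction of the last quantum used
    return 1 / f if f > 0 else float("inf")

# An I/O-bound process that ran 2 ms of a 100 ms quantum gets priority ~50,
# a CPU-bound process that used the whole quantum gets priority 1.
io_bound = dynamic_priority(2, 100)
cpu_bound = dynamic_priority(100, 100)
```

This automatically favours I/O-bound jobs: they give up the CPU quickly, so their f is small and their priority high.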

55 Priority Scheduling with Multiple Queues The processes are grouped into priority classes. In general: priority scheduling among the classes and round robin scheduling within each class. Priorities might be adjusted, or other rules might be used, to prevent starvation of jobs with a low priority. Example: Whenever there are jobs in the highest priority class, schedule these jobs in a round robin fashion. If not, schedule the next priority class, and so on. Also possible: Queues can have different quanta. Section 2 55

56 Priority Scheduling: Detailed Example We assume a timesharing system that can hold only one process in memory (meaning very expensive swaps!). There are k priority classes (k is the highest priority). Jobs in class k are run for 1 quantum; jobs in class i are run for 2^(k-i) quanta. Whenever a job uses up all quanta allocated to it, it is moved down one priority class. Older jobs are scheduled less and less frequently, saving time for short and interactive jobs; on the other hand, the time for which a job runs once scheduled increases over time. Note: a job that needs 100 quanta needs only 7 swaps (1 + 2 + 4 + 8 + 16 + 32 + 64). Section 2 56
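The swap count follows directly from the doubling allocations, which a short sketch can verify:

```python
# The slide's swap count: a job needing `total_quanta` quanta is granted
# doubling allocations 1, 2, 4, ... and is swapped in once per run.
def runs_needed(total_quanta):
    allocated, runs, grant = 0, 0, 1
    while allocated < total_quanta:
        allocated += grant        # this run's allocation (may overshoot at the end)
        runs += 1
        grant *= 2                # demoted one class: next allocation doubles
    return runs

# 1+2+4+8+16+32 = 63 < 100, so a 7th run (64 granted, 37 actually used)
# finishes a 100-quanta job: 7 swaps instead of 100.
assert runs_needed(100) == 7
```

Compare this with pure round robin and a 1-quantum quantum, which would swap the same job in 100 times.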

57 Priority Scheduling: XDS 940 There are 4 priority classes, called terminal, I/O, short quantum, and long quantum. Processes that are waiting for terminal input go into the class terminal. Processes that were waiting for a disk but became ready in the meantime go into the second class. Processes that are not finished when their first quantum runs out are moved to class 3. Jobs that used up their quantum too many times are moved into class 4. Problematic rule: whenever the enter key was typed at the terminal, the job was moved up to the highest priority class. Users tend to learn such rules! Section 2 57

58 Shortest Process Next The idea is to schedule the shortest job first, similar to batch systems. Remember: this minimises the average turnaround time. Problem: in interactive systems the runtime of a job is normally not known in advance. Runtime prediction (based on the runtime of previous jobs): Assume the estimated time for the jobs of some terminal (creating these jobs) is T_0. Let T_1 be the real runtime of the next job from that terminal. Then the estimate for the next job would be alpha*T_0 + (1 - alpha)*T_1. alpha determines how fast old runtimes are forgotten. Example with alpha = 1/2: T_0, then T_0/2 + T_1/2, then T_0/4 + T_1/4 + T_2/2, then T_0/8 + T_1/8 + T_2/4 + T_3/2,... Section 2 58
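This exponential aging of estimates is a one-liner; the numeric runtimes below are invented to make the weights visible:

```python
# Exponential aging of runtime estimates: the new estimate mixes the old
# estimate with the latest measured runtime.
def aged_estimate(estimate, measured, alpha=0.5):
    return alpha * estimate + (1 - alpha) * measured

# With alpha = 1/2 each older measurement's weight halves per step:
# start with estimate T0 = 8, then measure runtimes 4 and 2.
e1 = aged_estimate(8.0, 4.0)    # T0/2 + T1/2 = 6.0
e2 = aged_estimate(e1, 2.0)     # T0/4 + T1/4 + T2/2 = 4.0
```

After two jobs the original estimate T_0 contributes only a quarter, matching the expansion on the slide.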

59 Guaranteed Scheduling Here the idea is that the OS gives guarantees to the users and tries to live up to the promises. Example: each of n users gets a fraction 1/n of the CPU time. The scheduler calculates for every job the ratio of the CPU time the job actually got to the CPU time it is entitled to. Example: a ratio of 0.5 means the job got only half of its time. The scheduler schedules the process with the lowest ratio until its ratio has moved above that of its closest competitor (or something along those lines). Problem: this is not easy to implement! Section 2 59
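The ratio comparison can be sketched as follows; the process names and CPU-time figures are invented, and equal entitlements (1/n each) are assumed:

```python
# Guaranteed-scheduling sketch: run the process with the lowest ratio of
# CPU time received to CPU time entitled.
def pick_next(received, elapsed):
    """received: dict name -> CPU seconds the process got so far.
    Each of the n processes is entitled to elapsed/n seconds."""
    entitled = elapsed / len(received)
    ratio = {name: got / entitled for name, got in received.items()}
    return min(ratio, key=ratio.get)      # furthest behind its guarantee

# After 30 s of wall time, each of 3 processes is entitled to 10 s.
# C got only 8 s (ratio 0.8), so C runs next.
next_proc = pick_next({"A": 12, "B": 10, "C": 8}, elapsed=30)
```

The hard part the slide alludes to is doing this bookkeeping cheaply on every scheduling decision, not the arithmetic itself.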

60 Lottery Scheduling Each process holds a certain number of tickets. The scheduler randomly draws a ticket and schedules the owner of the chosen ticket for, say, 20 ms. Then it holds a new lottery. Advantages: Very flexible! An important job can get many tickets, unimportant jobs get fewer. New jobs can get many tickets so that they are scheduled immediately. Cooperating processes might exchange tickets: a client that sends a request to a server can forward its tickets; the server sends the answer and the tickets back. Every user could get a fixed number of tickets and decide how to divide them among his jobs. The number of tickets can depend on the job requirements, like frames per second for video servers. There is a huge number of possibilities! Disadvantages: the variance that comes with random choices! Section 2 60
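A single lottery draw is easy to sketch; the process names and ticket counts below are invented illustration values:

```python
import random

# Lottery-scheduling sketch: draw a uniformly random ticket; its holder runs.
def draw_winner(tickets, rng=random):
    """tickets: dict process -> number of tickets held."""
    total = sum(tickets.values())
    n = rng.randrange(total)             # ticket number in [0, total)
    for proc, count in tickets.items():
        if n < count:
            return proc
        n -= count                       # skip this process's ticket block

# A process holding 75 of 100 tickets wins about 75% of the lotteries,
# which is exactly the CPU share its tickets entitle it to.
wins = sum(draw_winner({"video": 75, "daemon": 25}) == "video"
           for _ in range(10_000))
```

Over 10,000 draws the win count clusters around 7,500 but fluctuates from run to run, which is the variance mentioned as the disadvantage.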

61 Scheduling in Real Time Systems How does such a system work? Typically, one or more physical devices external to the computer generate stimuli, and the computer must react to them within a fixed amount of time. Examples: compact disc player, autopilot, hospital intensive care unit,... There are hard real time systems and soft real time systems. They both come with deadlines. Section 2 61

62 Scheduling in Real Time Systems II Real time behaviour is usually achieved by dividing the task into a number of processes. The behaviour of the processes is predictable and known in advance. The processes are usually short-lived, running for under one second. Jobs can come with a hard or a soft deadline. Jobs can be preemptable or not. Jobs can be periodic or aperiodic: a periodic job creates a new task after a fixed number of time steps. The job of the scheduler is to schedule the jobs so that the deadlines are met. Algorithms can be static or dynamic. Static scheduling only works when there is perfect information available, for example for periodic tasks. Section 2 62

63 Periodic jobs: Note: periodic tasks are quite common in real time systems. Example: a hospital intensive care unit checks heart rate, breathing rate and blood pressure every x seconds. In general: scheduling periodic tasks is easier, it makes guarantees possible. When can the jobs be scheduled? Assume we have m periodic jobs. Let p_i be the period of job i and c_i its runtime (in seconds). Then c_i/p_i is the runtime of the i-th job per second. The CPU can handle the jobs only if c_1/p_1 + c_2/p_2 + ... + c_m/p_m <= 1. The system is then said to be schedulable. Note: finding such a schedule is still hard and may be impossible without preemption. Section 2 63
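The utilization test is a one-line sum; this sketch also checks the multimedia figures that appear on the next slide:

```python
# Schedulability test: periodic jobs fit on one CPU only if their
# utilizations c_i/p_i sum to at most 1.
def schedulable(jobs):
    """jobs: list of (runtime c_i, period p_i) in the same time unit."""
    return sum(c / p for c, p in jobs) <= 1

# Periods 30/40/50 ms with runtimes 10/15/5 ms: utilization ~0.808, fits.
ok = schedulable([(10, 30), (15, 40), (5, 50)])
# Raising the runtimes to 15/25/5 ms pushes utilization to 1.225: impossible.
overloaded = schedulable([(15, 30), (25, 40), (5, 50)])
```

Note that passing this test is necessary but, as the slide says, actually constructing a schedule can still be hard without preemption.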

64 Scheduling in Real Time Systems III Example: We have 3 multimedia processes A, B, C. Process A runs every 30 msec (approx. 33 times per second); each frame requires 10 msec of CPU time. Hence, A runs in bursts A1, A2, A3,..., each one starting 30 msec after its predecessor, and each burst has a deadline (the arrival of the next burst). Process B runs every 40 msec (25 times per second); each frame requires 15 msec of CPU time. Process C runs every 50 msec (20 times per second); each frame requires 5 msec of CPU time. A uses a fraction 10/30 = 1/3 of the CPU time, B uses 15/40 = 3/8, and C uses 5/50 = 1/10. Together they use 0.808 of the CPU time. Hence, the jobs are schedulable (see Figure 7-13). Section 2 64

65 Rate Monotonic Scheduling This is the classical scheduling algorithm for preemptable, periodic processes (RMS for short). It can be used for processes that meet the following conditions: Each periodic process must complete within its period. No process depends on any other process. All non-periodic processes have no deadlines. Process preemption occurs instantaneously and with no overhead (not really reasonable, it is a simplification). The algorithm works as follows: It assigns each process a fixed priority equal to its frequency (frequency x means the job occurs x times per second). Hence, A has priority 33 (1000/30), B has priority 25, and C has priority 20. The scheduler always runs the ready process with the highest priority; if a process with higher priority becomes ready, the running job is preempted. In our example, A can preempt B and C; B can preempt C; C can preempt nothing. Section 2 65

66 Earliest Deadline First This is a dynamic algorithm that does not require periodic jobs or the same runtime per job instance. The algorithm: The scheduler holds a list of all ready jobs, sorted by deadline. New jobs are inserted into the list. The scheduler schedules the job with the closest deadline. If a job arrives whose deadline is earlier than the deadline of the running job, the running job is preempted. Figure 7-15 shows an example where RMS and EDF produce different schedules: Process A runs every 30 msec; each frame requires 15 msec of CPU time. Process B runs every 40 msec (PAL); each frame requires 15 msec of CPU time. Process C runs every 50 msec; each frame requires 5 msec of CPU time. The CPU usage is now 15/30 + 15/40 + 5/50 = 0.975. Section 2 66
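Keeping the ready jobs sorted by deadline is naturally done with a priority queue; this sketch shows the pick order for the first bursts of the example (the burst names are my labels, not the book's):

```python
import heapq

# EDF sketch: keep ready jobs in a heap keyed on deadline; always run the
# job with the closest deadline, preempting in its favour if necessary.
def edf_order(jobs):
    """jobs: list of (deadline, name); returns the order EDF picks them
    when all are ready at the same time."""
    heap = list(jobs)
    heapq.heapify(heap)                      # min-heap on deadline
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# First bursts of the example: A1 is due at 30 ms, B1 at 40 ms, C1 at 50 ms.
first_round = edf_order([(40, "B1"), (30, "A1"), (50, "C1")])
```

Insertion and extraction are O(log n) per job, which is the extra bookkeeping cost EDF pays compared to the fixed priorities of RMS.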

67 Scheduling in Real Time Systems EDF versus RMS Usually static priorities only work if the system usage is not too big. Result by Liu and Layland: RMS is guaranteed to work for periodic processes if c_1/p_1 + ... + c_m/p_m <= m (2^(1/m) - 1). This gives a usage bound of 0.780, 0.757, 0.743, 0.718, 0.705, and 0.696 for 3, 4, 5, 10, 20, and 100 jobs. Asymptotically the bound approaches ln 2 (about 0.693) for m going to infinity. EDF always works, but the complexity is higher. Section 2 67
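The Liu-Layland bound is a direct formula, so the listed numbers can be recomputed:

```python
# The Liu-Layland RMS utilization bound m * (2^(1/m) - 1).
def rms_bound(m):
    return m * (2 ** (1 / m) - 1)

# Reproduces the slide's numbers; the bound falls towards ln 2 ~ 0.693.
bounds = {m: round(rms_bound(m), 3) for m in (3, 4, 5, 10, 20, 100)}
```

Any periodic task set whose total utilization stays below the bound for its m is guaranteed schedulable by RMS; above the bound RMS may or may not succeed, while EDF succeeds up to a utilization of 1.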

68 Thread Scheduling If we have processes with several threads, we have parallelism on multiple layers. Threads on user level: Here the kernel is not aware that threads exist; it only schedules processes. Each process decides which of its threads should run; this could be one thread or several. The thread run-time system can have a scheduler, but there is no clock interrupt. Kernel-level threads: Here, the kernel schedules the threads. It can take into account which process a thread belongs to, but it does not have to. Section 2 68

69 Classical IPC problems We will study two classical communication problems. These problems are often used to study synchronisation methods. Section 2 69

70 The dining philosophers problem Often used to show how good a synchronization primitive is. Five philosophers are sitting around a table. Each of them has a plate of spaghetti, and there is a fork between every pair of plates. Every philosopher needs two forks to eat (they are not Italian). Each philosopher can be in one of two states: eating and thinking. When a philosopher gets hungry, he tries to get the left and the right fork, in either order. When he is finished eating, he puts down the forks (again in either order). See Figure 2-45 for an obvious (but wrong) solution. Section 2 70
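Why the obvious solution is wrong can be shown without threads by modelling the worst-case interleaving directly; this is a sketch of the failure, not of the book's code:

```python
# The "obvious but wrong" protocol: each philosopher takes the left fork
# first, then the right. If all five take their left fork at the same
# moment, every right fork is already in a neighbour's hand: deadlock.
N = 5
forks = [True] * N                    # True = the fork lies on the table

# All philosophers grab their left fork "simultaneously"...
for p in range(N):
    forks[p] = False                  # fork p is philosopher p's left fork

# ...and now each one blocks on the right fork, held by the neighbour.
right_fork_free = [forks[(p + 1) % N] for p in range(N)]
deadlocked = not any(right_fork_free)   # nobody can ever proceed
```

Each philosopher holds one fork and waits forever for a fork that will never be released: the classic circular-wait deadlock.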

71 The dining philosophers problem II Another solution: first take the left fork, then the right fork. If the right fork is not available, put the left one down again and retry. Well, this does not work either: all philosophers may loop in lockstep forever. Yet another solution: protect the five statements of Figure 2-45 with a mutex variable. The problem here is that only one philosopher can eat at a time, although two could. For a good solution see Figure 2-46. Section 2 71

72 Reader-Writer Problem We have reader and writer processes that want to access a database. Several readers can access the database at the same time, but only one writer! See Figure 2-47 for a solution. Section 2 72
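The usual reader-priority scheme can be sketched with two locks; this is a minimal sketch in the spirit of the solution, not a transcription of the figure:

```python
import threading

# Readers-writers sketch: any number of readers may hold the database at
# once, but a writer needs it exclusively. The first reader locks writers
# out; the last reader lets them back in.
mutex = threading.Lock()     # protects the reader count
db = threading.Lock()        # held by the writer, or by the group of readers
readers = 0

def start_read():
    global readers
    with mutex:
        readers += 1
        if readers == 1:     # first reader locks the database for the group
            db.acquire()

def end_read():
    global readers
    with mutex:
        readers -= 1
        if readers == 0:     # last reader releases it for writers
            db.release()

start_read(); start_read()                       # two concurrent readers: fine
writer_can_enter = db.acquire(blocking=False)    # a writer must wait
end_read(); end_read()                           # db is free again
```

The non-blocking acquire in the middle stands in for a writer: while any reader is active it fails, which is exactly the exclusion the problem demands.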

73 Reader-Writer Problem II The solution of Figure 2-47 has one problem: a writer has to wait until there is no active reader! It is very well possible that the writer has to wait for readers that tried to access the database much later; this happens if a continuous flow of readers enters the system. Another solution: when a reader arrives while a writer is waiting, the reader is suspended behind the writer instead of being admitted immediately. But then the parallelism is not as good! Section 2 73


Operating Systems. Process scheduling. Thomas Ropars.

Operating Systems. Process scheduling. Thomas Ropars. 1 Operating Systems Process scheduling Thomas Ropars thomas.ropars@univ-grenoble-alpes.fr 2018 References The content of these lectures is inspired by: The lecture notes of Renaud Lachaize. The lecture

More information

Processes, PCB, Context Switch

Processes, PCB, Context Switch THE HONG KONG POLYTECHNIC UNIVERSITY Department of Electronic and Information Engineering EIE 272 CAOS Operating Systems Part II Processes, PCB, Context Switch Instructor Dr. M. Sakalli enmsaka@eie.polyu.edu.hk

More information

Multiprocessor and Real- Time Scheduling. Chapter 10

Multiprocessor and Real- Time Scheduling. Chapter 10 Multiprocessor and Real- Time Scheduling Chapter 10 Classifications of Multiprocessor Loosely coupled multiprocessor each processor has its own memory and I/O channels Functionally specialized processors

More information

CHAPTER 2: PROCESS MANAGEMENT

CHAPTER 2: PROCESS MANAGEMENT 1 CHAPTER 2: PROCESS MANAGEMENT Slides by: Ms. Shree Jaswal TOPICS TO BE COVERED Process description: Process, Process States, Process Control Block (PCB), Threads, Thread management. Process Scheduling:

More information

2 Processes. 2 Processes. 2 Processes. 2.1 The Process Model. 2.1 The Process Model PROCESSES OPERATING SYSTEMS

2 Processes. 2 Processes. 2 Processes. 2.1 The Process Model. 2.1 The Process Model PROCESSES OPERATING SYSTEMS OPERATING SYSTEMS PROCESSES 2 All modern computers often do several things at the same time. A modern operating system sees each software as a process. When a user PC is booted, many processes are secretly

More information

Chapter 5: Process Synchronization. Operating System Concepts 9 th Edition

Chapter 5: Process Synchronization. Operating System Concepts 9 th Edition Chapter 5: Process Synchronization Silberschatz, Galvin and Gagne 2013 Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Mutex Locks

More information

Announcement. Exercise #2 will be out today. Due date is next Monday

Announcement. Exercise #2 will be out today. Due date is next Monday Announcement Exercise #2 will be out today Due date is next Monday Major OS Developments 2 Evolution of Operating Systems Generations include: Serial Processing Simple Batch Systems Multiprogrammed Batch

More information

Subject Name: OPERATING SYSTEMS. Subject Code: 10EC65. Prepared By: Kala H S and Remya R. Department: ECE. Date:

Subject Name: OPERATING SYSTEMS. Subject Code: 10EC65. Prepared By: Kala H S and Remya R. Department: ECE. Date: Subject Name: OPERATING SYSTEMS Subject Code: 10EC65 Prepared By: Kala H S and Remya R Department: ECE Date: Unit 7 SCHEDULING TOPICS TO BE COVERED Preliminaries Non-preemptive scheduling policies Preemptive

More information

ECE 574 Cluster Computing Lecture 8

ECE 574 Cluster Computing Lecture 8 ECE 574 Cluster Computing Lecture 8 Vince Weaver http://web.eece.maine.edu/~vweaver vincent.weaver@maine.edu 16 February 2017 Announcements Too many snow days Posted a video with HW#4 Review HW#5 will

More information

Remaining Contemplation Questions

Remaining Contemplation Questions Process Synchronisation Remaining Contemplation Questions 1. The first known correct software solution to the critical-section problem for two processes was developed by Dekker. The two processes, P0 and

More information

Midterm Exam #2 Solutions October 25, 2016 CS162 Operating Systems

Midterm Exam #2 Solutions October 25, 2016 CS162 Operating Systems University of California, Berkeley College of Engineering Computer Science Division EECS all 2016 Anthony D. Joseph Midterm Exam #2 Solutions October 25, 2016 CS162 Operating Systems Your Name: SID AND

More information

Midterm Exam. October 20th, Thursday NSC

Midterm Exam. October 20th, Thursday NSC CSE 421/521 - Operating Systems Fall 2011 Lecture - XIV Midterm Review Tevfik Koşar University at Buffalo October 18 th, 2011 1 Midterm Exam October 20th, Thursday 9:30am-10:50am @215 NSC Chapters included

More information

(b) External fragmentation can happen in a virtual memory paging system.

(b) External fragmentation can happen in a virtual memory paging system. Alexandria University Faculty of Engineering Electrical Engineering - Communications Spring 2015 Final Exam CS333: Operating Systems Wednesday, June 17, 2015 Allowed Time: 3 Hours Maximum: 75 points Note:

More information

Multiprocessor and Real-Time Scheduling. Chapter 10

Multiprocessor and Real-Time Scheduling. Chapter 10 Multiprocessor and Real-Time Scheduling Chapter 10 1 Roadmap Multiprocessor Scheduling Real-Time Scheduling Linux Scheduling Unix SVR4 Scheduling Windows Scheduling Classifications of Multiprocessor Systems

More information

CPU Scheduling. CSE 2431: Introduction to Operating Systems Reading: Chapter 6, [OSC] (except Sections )

CPU Scheduling. CSE 2431: Introduction to Operating Systems Reading: Chapter 6, [OSC] (except Sections ) CPU Scheduling CSE 2431: Introduction to Operating Systems Reading: Chapter 6, [OSC] (except Sections 6.7.2 6.8) 1 Contents Why Scheduling? Basic Concepts of Scheduling Scheduling Criteria A Basic Scheduling

More information

Department of CSIT ( G G University, Bilaspur ) Model Answer 2013 (Even Semester) - AR-7307

Department of CSIT ( G G University, Bilaspur ) Model Answer 2013 (Even Semester) - AR-7307 Department of CSIT ( G G University, Bilaspur ) Model Answer 2013 (Even Semester) - AR-7307 Class: MCA Semester: II Year:2013 Paper Title: Principles of Operating Systems Max Marks: 60 Section A: (All

More information

CS-537: Midterm Exam (Spring 2001)

CS-537: Midterm Exam (Spring 2001) CS-537: Midterm Exam (Spring 2001) Please Read All Questions Carefully! There are seven (7) total numbered pages Name: 1 Grading Page Points Total Possible Part I: Short Answers (12 5) 60 Part II: Long

More information

CHAPTER 6: PROCESS SYNCHRONIZATION

CHAPTER 6: PROCESS SYNCHRONIZATION CHAPTER 6: PROCESS SYNCHRONIZATION The slides do not contain all the information and cannot be treated as a study material for Operating System. Please refer the text book for exams. TOPICS Background

More information

Operating System Concepts Ch. 5: Scheduling

Operating System Concepts Ch. 5: Scheduling Operating System Concepts Ch. 5: Scheduling Silberschatz, Galvin & Gagne Scheduling In a multi-programmed system, multiple processes may be loaded into memory at the same time. We need a procedure, or

More information

OPERATING SYSTEMS. UNIT II Sections A, B & D. An operating system executes a variety of programs:

OPERATING SYSTEMS. UNIT II Sections A, B & D. An operating system executes a variety of programs: OPERATING SYSTEMS UNIT II Sections A, B & D PREPARED BY ANIL KUMAR PRATHIPATI, ASST. PROF., DEPARTMENT OF CSE. PROCESS CONCEPT An operating system executes a variety of programs: Batch system jobs Time-shared

More information

Semaphores. Jinkyu Jeong Computer Systems Laboratory Sungkyunkwan University

Semaphores. Jinkyu Jeong Computer Systems Laboratory Sungkyunkwan University Semaphores Jinkyu Jeong (jinkyu@skku.edu) Computer Systems Laboratory Sungkyunkwan University http://csl.skku.edu EEE3052: Introduction to Operating Systems, Fall 2017, Jinkyu Jeong (jinkyu@skku.edu) Synchronization

More information

LECTURE 3:CPU SCHEDULING

LECTURE 3:CPU SCHEDULING LECTURE 3:CPU SCHEDULING 1 Outline Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time CPU Scheduling Operating Systems Examples Algorithm Evaluation 2 Objectives

More information

CSCI-GA Operating Systems Lecture 3: Processes and Threads -Part 2 Scheduling Hubertus Franke

CSCI-GA Operating Systems Lecture 3: Processes and Threads -Part 2 Scheduling Hubertus Franke CSCI-GA.2250-001 Operating Systems Lecture 3: Processes and Threads -Part 2 Scheduling Hubertus Franke frankeh@cs.nyu.edu Processes Vs Threads The unit of dispatching is referred to as a thread or lightweight

More information

Concurrency. Chapter 5

Concurrency. Chapter 5 Concurrency 1 Chapter 5 2 Concurrency Is a fundamental concept in operating system design Processes execute interleaved in time on a single processor Creates the illusion of simultaneous execution Benefits

More information

Operating Systems Design Exam 1 Review: Spring 2012

Operating Systems Design Exam 1 Review: Spring 2012 Operating Systems Design Exam 1 Review: Spring 2012 Paul Krzyzanowski pxk@cs.rutgers.edu 1 Question 1 UNIX-derived systems execute new programs via a two-step process of fork and execve. Other systems

More information

Unit 3 : Process Management

Unit 3 : Process Management Unit : Process Management Processes are the most widely used units of computation in programming and systems, although object and threads are becoming more prominent in contemporary systems. Process management

More information

Operating Systems. Scheduling

Operating Systems. Scheduling Operating Systems Scheduling Process States Blocking operation Running Exit Terminated (initiate I/O, down on semaphore, etc.) Waiting Preempted Picked by scheduler Event arrived (I/O complete, semaphore

More information

Preview. The Thread Model Motivation of Threads Benefits of Threads Implementation of Thread

Preview. The Thread Model Motivation of Threads Benefits of Threads Implementation of Thread Preview The Thread Model Motivation of Threads Benefits of Threads Implementation of Thread Implement thread in User s Mode Implement thread in Kernel s Mode CS 431 Operating System 1 The Thread Model

More information

Jan 20, 2005 Lecture 2: Multiprogramming OS

Jan 20, 2005 Lecture 2: Multiprogramming OS Jan 20, 2005 Lecture 2: Multiprogramming OS February 17, 2005 1 Review OS mediates between hardware and user software QUIZ: Q: What is the most important function in an OS? A: To support multiprogramming

More information

CS 333 Introduction to Operating Systems. Class 3 Threads & Concurrency. Jonathan Walpole Computer Science Portland State University

CS 333 Introduction to Operating Systems. Class 3 Threads & Concurrency. Jonathan Walpole Computer Science Portland State University CS 333 Introduction to Operating Systems Class 3 Threads & Concurrency Jonathan Walpole Computer Science Portland State University 1 Process creation in UNIX All processes have a unique process id getpid(),

More information

Interprocess Communication and Synchronization

Interprocess Communication and Synchronization Chapter 2 (Second Part) Interprocess Communication and Synchronization Slide Credits: Jonathan Walpole Andrew Tanenbaum 1 Outline Race Conditions Mutual Exclusion and Critical Regions Mutex s Test-And-Set

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Spring 1018 L11 Synchronization Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Multilevel feedback queue:

More information

Operating Systems Antonio Vivace revision 4 Licensed under GPLv3

Operating Systems Antonio Vivace revision 4 Licensed under GPLv3 Operating Systems Antonio Vivace - 2016 revision 4 Licensed under GPLv3 Process Synchronization Background A cooperating process can share directly a logical address space (code, data) or share data through

More information

Announcements. Program #1. Reading. Due 2/15 at 5:00 pm. Finish scheduling Process Synchronization: Chapter 6 (8 th Ed) or Chapter 7 (6 th Ed)

Announcements. Program #1. Reading. Due 2/15 at 5:00 pm. Finish scheduling Process Synchronization: Chapter 6 (8 th Ed) or Chapter 7 (6 th Ed) Announcements Program #1 Due 2/15 at 5:00 pm Reading Finish scheduling Process Synchronization: Chapter 6 (8 th Ed) or Chapter 7 (6 th Ed) 1 Scheduling criteria Per processor, or system oriented CPU utilization

More information

Learning Outcomes. Scheduling. Is scheduling important? What is Scheduling? Application Behaviour. Is scheduling important?

Learning Outcomes. Scheduling. Is scheduling important? What is Scheduling? Application Behaviour. Is scheduling important? Learning Outcomes Scheduling Understand the role of the scheduler, and how its behaviour influences the performance of the system. Know the difference between I/O-bound and CPU-bound tasks, and how they

More information

Chapter 5: CPU Scheduling

Chapter 5: CPU Scheduling Chapter 5: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Thread Scheduling Multiple-Processor Scheduling Operating Systems Examples Algorithm Evaluation Chapter 5: CPU Scheduling

More information

Operating Systems Comprehensive Exam. Spring Student ID # 3/16/2006

Operating Systems Comprehensive Exam. Spring Student ID # 3/16/2006 Operating Systems Comprehensive Exam Spring 2006 Student ID # 3/16/2006 You must complete all of part I (60%) You must complete two of the three sections in part II (20% each) In Part I, circle or select

More information

CS 153 Design of Operating Systems Winter 2016

CS 153 Design of Operating Systems Winter 2016 CS 153 Design of Operating Systems Winter 2016 Lecture 12: Scheduling & Deadlock Priority Scheduling Priority Scheduling Choose next job based on priority» Airline checkin for first class passengers Can

More information

Process Synchronization

Process Synchronization CSC 4103 - Operating Systems Spring 2007 Lecture - VI Process Synchronization Tevfik Koşar Louisiana State University February 6 th, 2007 1 Roadmap Process Synchronization The Critical-Section Problem

More information