
CSE 430 Operating Systems (2013 Summer), Name: Bing Hao
Problems: 1.13, 1.22, 1.23, 1.25, 2.5, 2.17, 2.18

Homework 1

Question 1.13
In a multiprogramming and time-sharing environment, several users share the system simultaneously. This situation can result in various security problems.
a. What are two such problems?
b. Can we ensure the same degree of security in a time-shared machine as in a dedicated machine? Explain your answer.

Answer:
a. First problem: no operating system is bug-free, so one user's program may crash the whole system, destroying other users' unsaved data. Second problem: some simple time-sharing operating systems, such as uClinux or µC/OS, do not use a Memory Management Unit (MMU) (not to mention DOS, which is out of date). On those systems, one user's program can read or modify other users' memory, which is a security problem.
b. No, we cannot. A basic example: on a dedicated machine, the user can stop other running programs to reduce the chance of a crash before running a very important task; on a multi-user system, the user cannot do this.

Question 1.22
What is the purpose of interrupts? What are the differences between a trap and an interrupt? Can traps be generated intentionally by a user program? If so, for what purpose?

Answer:
When an interrupt occurs, the CPU jumps to the corresponding interrupt service code. Interrupts come in two types, hardware-generated and software-generated. A trap is a software-generated interrupt, caused either by an exception (e.g., divide by zero) or deliberately by the user's code; hardware-generated interrupts are raised by external hardware devices. A trap can be generated by a user program: on an x86 system, for example, placing an INT3 (0xCC) instruction in a program generates a trap. INT3 is commonly used to implement breakpoints

for a debugger: if the debugger replaces an instruction with INT3, a trap is generated when execution reaches that point, and the debugger handles the trap.

Question 1.23
Direct memory access is used for high-speed I/O devices in order to avoid increasing the CPU's execution load.
a. How does the CPU interface with the device to coordinate the transfer?
b. How does the CPU know when the memory operations are complete?
c. The CPU is allowed to execute other programs while the DMA controller is transferring data. Does this process interfere with the execution of the user programs? If so, describe what forms of interference are caused.

Answer:
a. The device driver in the operating system accesses the CPU's I/O lines, some of which connect to the device over a bus (serial or parallel), so the CPU can communicate with the controller inside the device. The controller manages the device's state based on the commands and data it receives from the bus, and it can also send data back to the CPU over the bus. The CPU can drive the bus itself or use a DMA controller. With DMA, the CPU writes the data to a buffer (any region of memory), and the DMA channel transfers it to the device. The DMA channel is configured with the address and size of the buffer, and the hardware data transmitter to use (e.g., the address of an SPI or I2C transmitter) is also configured with the DMA controller.
b. The CPU receives an interrupt.
c. It depends. The DMA controller or the device can raise an interrupt to tell the CPU the job is finished, but the CPU can be configured either to handle the interrupt or to ignore it. If the CPU handles it, the user program is suspended while the CPU jumps to the corresponding interrupt service code.

Question 1.25
Give two reasons why caches are useful.
What problems do they solve? What problems do they cause? If a cache can be made as large as the device for which it is caching (for instance, a cache as large as a disk), why not make it that large and eliminate the device?

Answer:
Reason 1:

A cache is useful when the CPU needs data from disk: when a program uses a file on disk, the file can be loaded into the cache for better read/write performance.
Reason 2: a cache is useful when the CPU needs data from memory: a program's code and data can be loaded into the cache for better performance.
Caches solve the performance problem of exchanging data between devices with different I/O speeds. They cause the problem of dirty data: the data in the cache and in the slower device must be kept consistent. We do not build very large caches today because putting that much memory on the same chip as the CPU is still relatively expensive. If one day non-volatile memory can be integrated with the CPU at an acceptable cost, we may do so (this is essentially what the 8051 MCU does: it has no cache, or one could say its "cache" caches everything).

Question 2.5
What is the purpose of the command interpreter? Why is it usually separate from the kernel?

Answer:
The command interpreter reads commands from the standard input and executes them. A command can be treated as a sequence of system API calls performing a simple task. Users may want to write new programs that provide new commands; if the command interpreter were inside the kernel, adding a new command would be difficult (the kernel would have to be recompiled).

Question 2.17
Would it be possible for the user to develop a new command interpreter using the system call interface provided by the operating system?

Answer:
Yes, this is possible, since all command interpreters are implemented using the API provided by the operating system.

Question 2.18
What are the two models of interprocess communication? What are the strengths and weaknesses of the two approaches?

Answer:
The message-passing model and the shared-memory model. Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided, and it is easier than shared memory to implement for inter-computer communication (e.g., MPI on a cluster: OpenMP needs specially designed shared-memory hardware, while MPI can be implemented on any kind of cluster, though its data throughput is usually lower than OpenMP's because of the protocol stack and network performance). Shared memory allows maximum speed and convenience of communication, since within one computer it runs at memory-transfer speeds. Problems exist, however, in the areas of protection and synchronization between the processes sharing memory.

Homework 2
Problems: 3.7, 3.13, 3.14, 4.10, 4.13 and 4.14

Problem 3.7
Describe the actions taken by a kernel to context-switch between processes.

Answer:
A hardware clock drives the process scheduler. When the clock interrupt occurs, the kernel saves the registers (possibly fewer than the CPU has, depending on the OS design) and the current stack pointer of the currently executing process into its process control block (PCB). The scheduler then selects the next process to execute and restores its state (registers, stack pointer, etc.), and that process resumes from the program counter saved at its previous interrupt. By repeating this procedure, the kernel context-switches between processes.

Problem 3.13
Using the program shown in Figure 3.30, explain what the output will be at Line A.

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <stdio.h>
    #include <unistd.h>

    int value = 5;

    int main()
    {
        pid_t pid;
        pid = fork();
        if (pid == 0) { /* child process */
            value += 15;
            return 0;
        }
        else if (pid > 0) { /* parent process */
            wait(NULL);
            printf("PARENT: value = %d", value); /* LINE A */
            return 0;
        }

    }

Figure 3.30 What output will be at Line A?

Answer:
The output is still 5: after fork(), the child process is a copy of the parent process, so the child can only modify its own copy of the variable value. Typically (typically, because OpenMP is another story), if we want to share information between processes (interprocess communication), we have to use a pipe, a message queue, shared memory, a TCP/IP socket, or a semaphore.

Problem 3.14
What are the benefits and the disadvantages of each of the following? Consider both the system level and the programmer level.
a. Synchronous and asynchronous communication
b. Automatic and explicit buffering
c. Send by copy and send by reference
d. Fixed-sized and variable-sized messages

Answer:
a. Synchronous communication suits serial programs, since an operation does not return until it has its result. But if the operation blocks, the process may waste time waiting for a result it did not actually need to wait for.
b. Automatic buffering provides a queue of effectively unlimited length, so a send never blocks waiting for buffer memory to become available, but it usually uses more memory than actually needed. Explicit buffering specifies the size of the buffer; a send may block waiting for space in the queue, but explicit buffering usually uses less memory than automatic buffering.
c. Send by reference allows the receiver to alter the state of the parameter; send by copy does not.
d.
Fixed-sized message buffers are easier for the kernel to implement, but programmers who want to send variable-sized messages through them cannot know how many such messages a fixed-sized buffer will hold. Variable-sized message buffers are harder to implement but give programmers more freedom.

Problem 4.10
Which of the following components of program state are shared across threads in a multithreaded process?
a. Register values
b. Heap memory
c. Global variables
d. Stack memory

Answer:
According to the book, heap memory and global variables are shared across the threads of a multithreaded process.

Problem 4.13
The program shown in Figure 4.14 uses the Pthreads API. What would be the output from the program at LINE C and LINE P?

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int value = 0;
    void *runner(void *param); /* the thread */

    int main(int argc, char *argv[])
    {
        int pid;
        pthread_t tid;
        pthread_attr_t attr;

        pid = fork();
        if (pid == 0) { /* child process */
            pthread_attr_init(&attr);
            pthread_create(&tid, &attr, runner, NULL);

            pthread_join(tid, NULL);
            printf("CHILD: value = %d", value); /* LINE C */
        }
        else if (pid > 0) { /* parent process */
            wait(NULL);
            printf("PARENT: value = %d", value); /* LINE P */
        }
    }

    void *runner(void *param) {
        value = 5;
        pthread_exit(0);
    }

Figure 4.14 C program for this exercise

Answer:
The output at LINE C is 5 and the output at LINE P is 0. LINE C prints 5 because the thread runner runs inside the child process and modifies the child's copy of the global variable; the parent's copy is untouched.

Problem 4.14
Consider a multiprocessor system and a multithreaded program written using the many-to-many threading model. Let the number of user-level threads in the program be greater than the number of processors in the system. Discuss the performance implications of the following scenarios.
a. The number of kernel threads allocated to the program is less than the number of processors.
b. The number of kernel threads allocated to the program is equal to the number of processors.
c. The number of kernel threads allocated to the program is greater than the number of processors but less than the number of user-level threads.

Answer:

a. Not all processors are used, which wastes potential performance. Each kernel thread runs on one processor, and the user threads must be context-switched between kernel threads. A processor does no useful computation while the kernel thread on it is blocked.
b. All processors are used. Each kernel thread runs on one processor, and the user threads must be context-switched between kernel threads; there is less user-thread context switching than in (a) because there are more kernel threads. A processor still does no useful computation while the kernel thread on it is blocked.
c. All processors are used as long as there are enough ready kernel threads. Kernel threads must be context-switched by the processor, and user threads must also be context-switched onto kernel threads, so this arrangement should be slower than (b). However, if a kernel thread on a processor blocks, another kernel thread can be switched in, and the processor keeps doing useful computation.

Homework 3
Problems: 5.12, 5.13, 5.14, 5.18

Problem 5.12
Consider the following set of processes, with the length of the CPU burst given in milliseconds:

    Process   Burst Time   Priority
    P1        ?            ?
    P2        1            1
    P3        2            3
    P4        1            4
    P5        5            2

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
a. Draw four Gantt charts that illustrate the execution of these processes using the following scheduling algorithms: FCFS, SJF, nonpreemptive priority (a smaller priority number implies a higher priority), and RR (quantum = 1).
b. What is the turnaround time of each process for each of the scheduling algorithms in part a?
c. What is the waiting time of each process for each of these scheduling algorithms?
d. Which of the algorithms results in the minimum average waiting time (over all processes)?

Answer:

Problem 5.13
Which of the following scheduling algorithms could result in starvation?
a. First-come, first-served
b. Shortest job first
c. Round robin
d. Priority

Answer:
Shortest-job-first and priority scheduling could result in starvation.

Problem 5.14
Consider a variant of the RR scheduling algorithm in which the entries in the ready queue are pointers to the PCBs.
a. What would be the effect of putting two pointers to the same process in the ready queue?
b. What would be two major advantages and two disadvantages of this scheme?
c. How would you modify the basic RR algorithm to achieve the same effect without the duplicate pointers?

Answer:
a. The process would be run twice as often.
b. Advantages: it is easy to give an important process more CPU time, and it prevents starvation of low-priority processes. Disadvantages: more context-switching cost than before, and more cost to remove a process (all of its pointers must be found and removed).
c. Give each process its own time quantum instead of duplicating pointers.

Problem 5.18
Explain the differences in how much the following scheduling algorithms discriminate in favor of short processes:
a. FCFS
b. RR
c. Multilevel feedback queues

Answer:

a. It depends on the arrival order: if long processes arrive first, short processes have to wait a long time.
b. Short processes usually finish earlier than long ones, since each process receives an equal quantum.
c. Long processes are moved to lower-priority queues while short processes keep higher priority, so short processes are favored.

Homework 4
Problems: 6.11, 6.12, 6.16, 6.18, 6.25, 6.34, 6.35

Problem 6.11
What is the meaning of the term busy waiting? What other kinds of waiting are there in an operating system? Can busy waiting be avoided altogether? Explain your answer.

Answer:
Busy waiting means a process waits for a condition by spinning in a loop, so it consumes CPU time without doing any real computation. The other kind of waiting does not burn CPU time in a loop: the process blocks (enters a sleep state) and waits for a wakeup signal. Busy waiting can thus be avoided with the sleep/wakeup method, but that adds the overhead of entering and leaving the sleep state.

Problem 6.12
Explain why spinlocks are not appropriate for single-processor systems yet are often used in multiprocessor systems.

Answer:
A spinlock loops until the lock becomes available, which is busy waiting. On a single-processor system a spinlock may not waste much for a very short critical section, but for longer ones it performs poorly: while one thread is inside the critical section, every other thread that tries to enter just spins, and the lock holder cannot make progress until the spinning thread's time slice ends. In the worst case the system can even deadlock, if thread A spins on a lock for a shared resource held by B while B spins on a lock for a shared resource held by A. So on single-processor systems other methods are usually preferred. On a multiprocessor system, spinlocks are efficient: the lock holder can run on another processor and release the lock quickly, so spinning briefly is cheaper than blocking.

Problem 6.16
Describe how the Swap() instruction can be used to provide mutual exclusion that satisfies the bounded-waiting requirement.

Answer:
We covered bounded-waiting mutual exclusion with TestAndSet(); both basic instructions, TestAndSet() and Swap(), can provide mutual exclusion. The shared Boolean variable lock is initialized to FALSE, and each process has a local Boolean variable key. We can modify the given algorithm from

    do {
        waiting[i] = TRUE;
        key = TRUE;
        while (waiting[i] && key)
            key = TestAndSet(&lock);
        waiting[i] = FALSE;
        // critical section
        j = (i + 1) % n;
        while ((j != i) && !waiting[j])
            j = (j + 1) % n;
        if (j == i)
            lock = FALSE;
        else
            waiting[j] = FALSE;
        // remainder section
    } while (TRUE);

to

    do {
        waiting[i] = TRUE;
        key = TRUE;
        while (waiting[i] && key)
            Swap(&lock, &key);      // replaces key = TestAndSet(&lock);
        waiting[i] = FALSE;
        // critical section
        j = (i + 1) % n;            // select the next waiting process
        while ((j != i) && !waiting[j])
            j = (j + 1) % n;
        if (j == i)
            lock = FALSE;
        else
            waiting[j] = FALSE;
        // remainder section
    } while (TRUE);

Problem 6.18
Show that, if the wait() and signal() semaphore operations are not executed atomically, then mutual exclusion may be violated.

Answer:
If wait() and signal() are not executed atomically, the scheduler can switch to other threads while a thread is in the middle of wait() or signal(), and those threads can act on the same semaphore. For example, suppose the value of semaphore S is 1 and processes P1 and P2 execute wait(S) concurrently:
Time 1: P1 reads the value of S as 1.
Time 2: P2 reads the value of S as 1.
Time 3: P1 decrements the value by 1 and enters the critical section.
Time 4: P2 decrements the value by 1 and also enters the critical section.
Mutual exclusion is violated.

Problem 6.25
Discuss the tradeoff between fairness and throughput of operations in the readers-writers problem. Propose a method for solving the readers-writers problem without causing starvation.

Answer:
Writers may starve if readers are constantly reading the resource: the readers keep the semaphore tied up, and a writer never gets a chance to write. This is unfair to the writer but increases reader throughput, since any number of readers can read at the same time. For a fairer solution (just as in Project 2), use a queue: a newly arriving reader that wishes to read is placed behind any waiting writer in the queue, and the writer waits only for the readers currently reading to complete and the semaphore to become available. After all queued operations ahead of the writer complete, the writer takes its turn with the semaphore; once the writer is done, the waiting readers take their turns.

Problem 6.34
In log-based systems that provide support for transactions, updates to data items cannot be performed before the corresponding entries are logged. Why is this restriction necessary?

Answer:
If the transaction has to be aborted, the updated data items must be rolled back to their old values, which means the old values of the data entries must have been logged before the updates were performed.

Problem 6.35
Show that the two-phase locking protocol ensures conflict serializability.

Answer:
For example: process 1 updates a record in table A, and process 2 updates a record in table B. If process 1 then attempts to update the same record in table B, it must wait on the lock until process 2 releases it, which forces a serial order on the conflicting operations: this ensures conflict serializability. Moreover, if process 2 also attempts to update the same record in table A, the result is a deadlock: each session is waiting for the other to complete, and neither can proceed.

Homework 5
Problems: 7.10, 7.3, 7.20

Problem 7.10
Consider the traffic deadlock depicted in Figure 7.9.
a. Show that the four necessary conditions for deadlock hold in this example.
b. State a simple rule for avoiding deadlocks in this system.

Answer:
a.
1. Mutual exclusion (only one process at a time can use a resource): every car on the road holds a non-sharable resource; the section of road it occupies cannot be shared with other cars.
2. Hold and wait (a process holding at least one resource is waiting to acquire additional resources held by other processes): every car holds its current section of road and waits for the space in front of it, which is currently held by another car.
3. No preemption (a resource can be released only voluntarily by the process holding it, after that process has completed its task): preemption is not possible in this situation, because we have no way to pull a car off the road on the spot.

4. Circular wait (there exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0): each car is waiting for the road space in front of it, and some cars hold road space that other cars are waiting on, so the situation can form a cycle of waiting cars.
b. We could use traffic-light rules to avoid the deadlock. In such a system, intersections are regularly cleared by the light changes, and the resource of an intersection is used by only one lane during each period.

Problem 7.3
A possible method for preventing deadlocks is to have a single, higher-order resource that must be requested before any other resource. For example, if multiple threads attempt to access the synchronization objects A-E, deadlock is possible. (Such synchronization objects may include mutexes, semaphores, condition variables, and the like.) We can prevent the deadlock by adding a sixth object F. Whenever a thread wants to acquire the synchronization lock for any object A-E, it must first acquire the lock for object F. This solution is known as containment: the locks for objects A-E are contained within the lock for object F. Compare this scheme with the circular-wait scheme of Section

Answer:
Both schemes prevent deadlock, but the circular-wait scheme has better performance: it allows multiple processes to use the resources concurrently as long as no deadlock can form, while the containment scheme allows only one process at a time to hold any of the resources. The containment scheme, however, is easier to implement than the circular-wait scheme.

Problem 7.20
Consider the following snapshot of a system:

Answer the following questions using the banker's algorithm:
a. What is the content of the matrix Need?
b. Is the system in a safe state?
c. If a request from process P1 arrives for (0,4,2,0), can the request be granted immediately?

Answer:
a. Need = Max - Allocation, computed row by row for processes P0-P4 over resources A, B, C, D. (The numeric matrices were given in a figure.)
b. Yes; several sequences meet the safety requirement. For example:
    After P0 finishes, Available grows enough to run P2 and P3.
    After P2 finishes, Available grows enough to run P1, P3, and P4.
    After P1 finishes, Available grows enough to run P3 and P4.
    After P3 finishes, Available grows enough to run P4.
    After P4 finishes, all processes are done, so the state is safe.
c. Yes. The initial Available can already grant (0,4,2,0), and the request can still be granted at every step of the sequence P0, P2, P3, P1, P4, so the resulting state remains safe.

Homework 6
Problems: 8.9, 8.11, 8.13, 8.20

Problem 8.9
Explain the difference between internal and external fragmentation.

Answer:
Internal fragmentation is memory allocated to a process that the process cannot actually use; that space stays unusable by the system until the process releases it. External fragmentation means the total free memory would be enough for a new process, but it is not contiguous and therefore cannot satisfy the request.

Problem 8.11
Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB (in order), how would the first-fit, best-fit, and worst-fit algorithms place processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order)? Which algorithm makes the most efficient use of memory?

Answer:
(a) First fit. The first-fit algorithm selects the first free partition that is large enough to accommodate the request.

    Process   Partition       Free memory left after the request
    212 KB    500 KB          288 KB
    417 KB    600 KB          183 KB
    112 KB    288 KB          176 KB
    426 KB    not possible

(b) Best fit. The best-fit algorithm selects the free partition whose size is closest to (and at least as large as) the requested size.

    Process   Partition       Free memory left after the request
    212 KB    300 KB          88 KB
    417 KB    500 KB          83 KB
    112 KB    200 KB          88 KB
    426 KB    600 KB          174 KB

(c) Worst fit. The worst-fit algorithm selects the largest free partition for each request.

    Process   Partition       Free memory left after the request
    212 KB    600 KB          388 KB
    417 KB    500 KB          83 KB
    112 KB    388 KB          276 KB
    426 KB    not possible

Best fit makes the most efficient use of memory here, since it is the only algorithm that satisfies all four requests.

Problem 8.13
Compare the memory organization schemes of contiguous memory allocation, pure segmentation, and pure paging with respect to the following issues:
a. External fragmentation
b. Internal fragmentation
c. Ability to share code across processes

Answer:
Contiguous memory allocation suffers external fragmentation but no internal fragmentation, since each address space is allocated contiguously: dying processes leave holes, and external fragmentation appears as new processes are started. It does not allow processes to share code, because a process's virtual address space cannot be broken into separately mapped, noncontiguous pieces.

Pure segmentation also suffers external fragmentation but no internal fragmentation, because each segment of a process is contiguous in physical memory: when the segments of dead processes are replaced by the segments of new processes, external fragmentation arises. It does allow sharing: processes can share a code segment while keeping separate data segments.
Pure paging suffers internal fragmentation but no external fragmentation: processes are allocated at page granularity, and a page that is not completely used results in internal fragmentation. It allows processes to share code at the granularity of pages.

Problem 8.20
Consider a paging system with the page table stored in memory.
a. If a memory reference takes 200 nanoseconds, how long does a paged memory reference take?
b. If we add TLBs, and 75 percent of all page-table references are found in the TLBs, what is the effective memory reference time? (Assume that finding a page-table entry in the TLBs takes zero time, if the entry is there.)

Answer:
(a) A paged memory reference needs two memory accesses, the page-table lookup followed by the actual access, so it takes 2 * 200 ns = 400 ns.
(b) If 75 percent of page-table references hit in the TLBs, those references skip the page-table lookup in memory and need only the actual access (one memory access). The remaining 25 percent miss and still need both the lookup and the access (two memory accesses). So:

    effective access time = 75% * 200 ns + 25% * 400 ns = 250 ns

Homework 7
Problems: 9.19, 9.21, 9.28, 9.34

Problem 9.19
What is the copy-on-write feature, and under what circumstances is it beneficial to use this feature? What hardware support is required to implement this feature?

Answer:
Copy-on-write lets processes share pages instead of each holding a separate copy. When one process tries to write to a shared page, a trap is generated, and the operating system then makes a separate copy of that page for the writing process. The feature is useful for fork(): the child gets a copy of the parent's address space without any pages actually being copied until the parent or the child writes to a shared page. As hardware support, the page table must mark shared pages write-protected on every memory access, so that a write to such a page raises the trap that lets the operating system handle the copy.

Problem 9.21
Assume that we have a demand-paged memory. The page table is held in registers. It takes 8 milliseconds to service a page fault if an empty frame is available or if the replaced page is not modified and 20 milliseconds if the replaced page is modified. Memory-access time is 100 nanoseconds. Assume that the page to be replaced is modified 70 percent of the time. What is the maximum acceptable page-fault rate for an effective access time of no more than 200 nanoseconds?

Answer:
Let the page-fault rate be x. The total average access time is the sum of the time for references served from main memory, the time for faults where an empty frame is available or the replaced page is not modified, and the time for faults where the replaced page is modified:

    total average access time = (1 - x) * 100 * 10^-9 + x * (1 - 70%) * 8 * 10^-3 + x * 70% * 20 * 10^-3

We require a total average access time of at most 200 ns:

    (1 - x) * 100 * 10^-9 + x * 0.3 * 8 * 10^-3 + x * 0.7 * 20 * 10^-3 <= 200 * 10^-9

The average fault-service time is 0.3 * 8 ms + 0.7 * 20 ms = 16.4 ms, so this gives x <= 100 * 10^-9 / (16.4 * 10^-3 - 100 * 10^-9), i.e. approximately x <= 6.1 * 10^-6. The maximum acceptable page-fault rate for an effective access time of no more than 200 ns is about 6.1 * 10^-6.

Problem 9.28
Consider a demand-paging system with the following time-measured utilizations:

    CPU utilization        20%
    Paging disk            97.7%
    Other I/O devices      5%

For each of the following, say whether it will (or is likely to) improve CPU utilization. Explain your answers.
a. Install a faster CPU.
b. Install a bigger paging disk.
c. Increase the degree of multiprogramming.
d. Decrease the degree of multiprogramming.
e. Install more main memory.
f. Install a faster hard disk or multiple controllers with multiple hard disks.
g. Add prepaging to the page-fetch algorithms.
h. Increase the page size.

Answer:
a. Little effect, since the major limiting factor is the memory available to each program, not CPU speed.
b. No effect at all.
c. Likely to decrease CPU utilization: a higher degree of multiprogramming means less memory available to each program, so page faults increase.

d. This is likely to increase CPU utilization: with fewer programs sharing memory, more of each remaining program's working set stays resident, which reduces page faults. e. This will increase CPU utilization, since more pages can be kept in memory and less CPU time is spent on paging. f. This may increase CPU utilization somewhat, since it decreases the time spent waiting for a page to be brought in; the number of page faults stays roughly the same, however, so the improvement is limited. g. This will increase CPU utilization, provided the prepaged pages are actually used, since it avoids page faults by loading pages into memory before they are needed. h. This can increase CPU utilization, since a larger page size reduces the number of page faults; but it also causes more internal fragmentation, which may reduce the number of programs that can keep a working set in memory.

Problem 9.34 Is it possible for a process to have two working sets, one representing data and another representing code? Explain.

Answer Yes, this is possible. For example, the code a process accesses may retain the same working set for a long period, while the data it accesses changes frequently. For this reason some processors provide two TLBs, one for instructions and one for data.

CSE 430 Name: Bing Hao Operating Systems (2013 Summer) Problems: 10.10, 10.15, 11.10,

Homework 8

Problem 10.10 Consider a file system in which a file can be deleted and its disk space reclaimed while links to that file still exist. What problems may occur if a new file is created in the same storage area or with the same absolute path name? How can these problems be avoided?

Answer A user may try to access the old file through an existing link but actually reach the new file occupying the same storage area; the new file also receives accesses intended for the old one, so the old file's access protections no longer apply as expected. This can be avoided by ensuring that all links to a deleted file are deleted: maintain a list of all links to each file and remove every one of them when the file is deleted.

Problem 10.15 If the operating system knew that a certain application was going to access file data in a sequential manner, how could it exploit this information to improve performance?

Answer It could prefetch: reading the subsequent blocks into memory before they are requested reduces the time the process waits on future requests.

Problem 11.10 What are the advantages of the variant of linked allocation that uses a FAT to chain together the blocks of a file?

Answer

The advantage shows up in random access: to reach a block in the middle of a file, the location can be found by following the pointers stored in the FAT, so there is no need to read every block of the file in sequence on disk to find the target block.

Problem Fragmentation on a storage device can be eliminated by recompaction of the information. Typical disk devices do not have relocation or base registers (such as those used when memory is to be compacted), so how can we relocate files? Give three reasons why recompacting and relocation of files are often avoided.

Answer To relocate files, first collect the allocation information for every file on the disk: each file's starting block and its length (total blocks). Sort the files by starting block. Then process the files one by one, comparing the sum of a file's starting block and its length with the starting block of the next file. If the sum is less than the next file's starting block, there are free blocks between the two files, so the next file's starting block can be changed to that sum and all of its blocks moved to the new location. After all files have been processed, there are no free blocks between any files.

Recompacting and relocation of files are often avoided because: 1. Such tasks take a lot of time. 2. They cause a large number of disk I/O operations, which may shorten the disk's lifetime, and while they run the users of the system cannot do much else. 3. If the system loses power in the middle, the file being moved may be damaged.

CSE 430 Name: Bing Hao Operating Systems (2013 Summer) Problems: 15.1, 15.2, 15.9, 15.12,

Homework 10

Problem 15.1 Buffer-overflow attacks can be avoided by adopting a better programming methodology or by using special hardware support. Discuss these solutions.

Answer The programming-methodology solution is bounds checking to guard against buffer overflows; for example, Java checks bounds on every array access, guaranteeing that accesses stay within the array. This approach requires no hardware support but adds the runtime cost of the checks. For the hardware solution, note that a buffer-overflow attack must overwrite a function's return address so that control jumps to another portion of the stack frame containing injected executable code. If the hardware can prevent execution of code that resides in the stack segment of a process's address space, such overflows cannot be exploited.

Problem 15.2 A password may become known to other users in a variety of ways. Is there a simple method for detecting that such an event has occurred? Explain your answer.

Answer Yes. As Linux and some online systems do, the system can show, when a user logs in, the time of that user's previous login; an unexpected entry reveals that someone else has used the account. It is better still if the system shows the recent login history.

Problem 15.9 Make a list of six security concerns for a bank's computer system. For each item on your list, state whether this concern relates to physical, human, or operating-system security.

Answer
1. The system should be located in a safe location. - Physical
2. The location of the system should be well guarded. - Human
3. All operations should be recorded in a log. - Operating system, Human
4. The system should be backed up frequently. - Human
5. The backup media need to be protected. - Human
6. The operating system needs to be updated frequently to fix bugs. - Operating system, Human

Problem 15.12 Compare symmetric and asymmetric encryption schemes, and discuss under what circumstances a distributed system would use one or the other.

Answer Symmetric encryption schemes use a single key for both encryption and decryption. They usually perform better than asymmetric encryption, so they are typically used to transfer large amounts of data between client and server; for example, OpenVPN can use the symmetric ciphers AES or Blowfish. Asymmetric encryption schemes use two keys: a public key for encryption and a private key for decryption. They are suited to exchanging small amounts of data or to authentication; for example, SSL uses asymmetric encryption to exchange the key for a symmetric cipher and for digital signatures, while the bulk communication still uses symmetric encryption for better performance.

Problem Why doesn't D(ke, N)(E(kd, N)(m)) provide authentication of the sender? To what uses can such an encryption be put?

Answer Here the message is encrypted with the public key and decrypted with the private key. This cannot authenticate the sender, since anyone can obtain the public key and could therefore have fabricated the message. What the scheme does guarantee is confidentiality: only the holder of the private key can decrypt the message, so it ensures that a message can be read only by the intended recipient.

31 Chapter 3: Processes, Silberschatz, Galvin and Gagne 2009

32 Chapter 3: Processes Process Concept Process Scheduling Operations on Processes Interprocess Communication Examples of IPC Systems Communication in Client-Server Systems 3.2 Silberschatz, Galvin and Gagne 2009

33 Objectives To introduce the notion of a process -- a program in execution, which forms the basis of all computation To describe the various features of processes, including scheduling, creation and termination, and communication To describe communication in client-server systems 3.3 Silberschatz, Galvin and Gagne 2009

34 Process Concept An operating system executes a variety of programs: Batch system jobs Time-shared systems user programs or tasks Textbook uses the terms job and process almost interchangeably Process a program in execution; process execution must progress in sequential fashion A process includes: program counter stack data section 3.4 Silberschatz, Galvin and Gagne 2009

35 Process in Memory 3.5 Silberschatz, Galvin and Gagne 2009

36 Process State As a process executes, it changes state new: The process is being created running: Instructions are being executed waiting: The process is waiting for some event to occur ready: The process is waiting to be assigned to a processor terminated: The process has finished execution 3.6 Silberschatz, Galvin and Gagne 2009

37 Diagram of Process State 3.7 Silberschatz, Galvin and Gagne 2009

38 Process Control Block (PCB) Information associated with each process Process state Program counter CPU registers CPU scheduling information Memory-management information Accounting information I/O status information 3.8 Silberschatz, Galvin and Gagne 2009

39 Process Control Block (PCB) 3.9 Silberschatz, Galvin and Gagne 2009

40 CPU Switch From Process to Process 3.10 Silberschatz, Galvin and Gagne 2009

41 Process Scheduling Queues Job queue set of all processes in the system Ready queue set of all processes residing in main memory, ready and waiting to execute Device queues set of processes waiting for an I/O device Processes migrate among the various queues 3.11 Silberschatz, Galvin and Gagne 2009

42 Ready Queue And Various I/O Device Queues 3.12 Silberschatz, Galvin and Gagne 2009

43 Representation of Process Scheduling 3.13 Silberschatz, Galvin and Gagne 2009

44 Schedulers Long-term scheduler (or job scheduler) selects which processes should be brought into the ready queue Short-term scheduler (or CPU scheduler) selects which process should be executed next and allocates CPU 3.14 Silberschatz, Galvin and Gagne 2009

45 Addition of Medium Term Scheduling 3.15 Silberschatz, Galvin and Gagne 2009

46 Schedulers (Cont) Short-term scheduler is invoked very frequently (milliseconds) (must be fast) Long-term scheduler is invoked very infrequently (seconds, minutes) (may be slow) The long-term scheduler controls the degree of multiprogramming Processes can be described as either: I/O-bound process spends more time doing I/O than computations, many short CPU bursts CPU-bound process spends more time doing computations; few very long CPU bursts 3.16 Silberschatz, Galvin and Gagne 2009

47 Context Switch When CPU switches to another process, the system must save the state of the old process and load the saved state for the new process via a context switch Context of a process represented in the PCB Context-switch time is overhead; the system does no useful work while switching Time dependent on hardware support 3.17 Silberschatz, Galvin and Gagne 2009

Process Creation Parent process creates children processes, which, in turn, create other processes, forming a tree of processes Generally, a process is identified and managed via a process identifier (pid) Resource sharing Parent and children share all resources Children share a subset of the parent's resources Parent and child share no resources Execution Parent and children execute concurrently Parent waits until children terminate

49 Process Creation (Cont) Address space Child duplicate of parent Child has a program loaded into it UNIX examples fork system call creates new process exec system call used after a fork to replace the process memory space with a new program 3.19 Silberschatz, Galvin and Gagne 2009

50 Process Creation 3.20 Silberschatz, Galvin and Gagne 2009

C Program Forking Separate Process

int main()
{
    pid_t pid;
    /* fork another process */
    pid = fork();
    if (pid < 0) { /* error occurred */
        fprintf(stderr, "Fork Failed");
        exit(-1);
    }
    else if (pid == 0) { /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else { /* parent process */
        /* parent will wait for the child to complete */
        wait(NULL);
        printf("Child Complete");
        exit(0);
    }
}

52 A tree of processes on a typical Solaris 3.22 Silberschatz, Galvin and Gagne 2009

Process Termination Process executes last statement and asks the operating system to delete it (exit) Output data from child to parent (via wait) Process resources are deallocated by the operating system Parent may terminate execution of children processes (abort) Child has exceeded allocated resources Task assigned to child is no longer required If parent is exiting Some operating systems do not allow a child to continue if its parent terminates All children terminated - cascading termination

54 Interprocess Communication Processes within a system may be independent or cooperating Cooperating process can affect or be affected by other processes, including sharing data Reasons for cooperating processes: Information sharing Computation speedup Modularity Convenience Cooperating processes need interprocess communication (IPC) Two models of IPC Shared memory Message passing 3.24 Silberschatz, Galvin and Gagne 2009

55 Communications Models 3.25 Silberschatz, Galvin and Gagne 2009

56 Cooperating Processes Independent process cannot affect or be affected by the execution of another process Cooperating process can affect or be affected by the execution of another process Advantages of process cooperation Information sharing Computation speed-up Modularity Convenience 3.26 Silberschatz, Galvin and Gagne 2009

57 Producer-Consumer Problem Paradigm for cooperating processes, producer process produces information that is consumed by a consumer process unbounded-buffer places no practical limit on the size of the buffer bounded-buffer assumes that there is a fixed buffer size 3.27 Silberschatz, Galvin and Gagne 2009

Bounded-Buffer Shared-Memory Solution

Shared data:

#define BUFFER_SIZE 10
typedef struct {
    ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

Solution is correct, but can only use BUFFER_SIZE-1 elements

Bounded-Buffer Producer

while (true) {
    /* Produce an item */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- no free buffers */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}

Bounded-Buffer Consumer

while (true) {
    while (in == out)
        ; // do nothing -- nothing to consume
    // remove an item from the buffer
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}

61 Interprocess Communication Message Passing Mechanism for processes to communicate and to synchronize their actions Message system processes communicate with each other without resorting to shared variables IPC facility provides two operations: send(message) message size fixed or variable receive(message) If P and Q wish to communicate, they need to: establish a communication link between them exchange messages via send/receive Implementation of communication link physical (e.g., shared memory, hardware bus) logical (e.g., logical properties) 3.31 Silberschatz, Galvin and Gagne 2009

62 Implementation Questions How are links established? Can a link be associated with more than two processes? How many links can there be between every pair of communicating processes? What is the capacity of a link? Is the size of a message that the link can accommodate fixed or variable? Is a link unidirectional or bi-directional? 3.32 Silberschatz, Galvin and Gagne 2009

63 Direct Communication Processes must name each other explicitly: send (P, message) send a message to process P receive(q, message) receive a message from process Q Properties of communication link Links are established automatically A link is associated with exactly one pair of communicating processes Between each pair there exists exactly one link The link may be unidirectional, but is usually bi-directional 3.33 Silberschatz, Galvin and Gagne 2009

64 Indirect Communication Messages are directed and received from mailboxes (also referred to as ports) Each mailbox has a unique id Processes can communicate only if they share a mailbox Properties of communication link Link established only if processes share a common mailbox A link may be associated with many processes Each pair of processes may share several communication links Link may be unidirectional or bi-directional 3.34 Silberschatz, Galvin and Gagne 2009

65 Indirect Communication Operations create a new mailbox send and receive messages through mailbox destroy a mailbox Primitives are defined as: send(a, message) send a message to mailbox A receive(a, message) receive a message from mailbox A 3.35 Silberschatz, Galvin and Gagne 2009

Indirect Communication Mailbox sharing P1, P2, and P3 share mailbox A P1 sends; P2 and P3 receive Who gets the message? Solutions Allow a link to be associated with at most two processes Allow only one process at a time to execute a receive operation Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was

67 Synchronization Message passing may be either blocking or non-blocking Blocking is considered synchronous Blocking send has the sender block until the message is received Blocking receive has the receiver block until a message is available Non-blocking is considered asynchronous Non-blocking send has the sender send the message and continue Non-blocking receive has the receiver receive a valid message or null 3.37 Silberschatz, Galvin and Gagne 2009

68 Buffering Queue of messages attached to the link; implemented in one of three ways 1. Zero capacity 0 messages Sender must wait for receiver (rendezvous) 2. Bounded capacity finite length of n messages Sender must wait if link full 3. Unbounded capacity infinite length Sender never waits 3.38 Silberschatz, Galvin and Gagne 2009

Examples of IPC Systems - POSIX

POSIX Shared Memory. Process first creates a shared memory segment:
    segment_id = shmget(IPC_PRIVATE, size, S_IRUSR | S_IWUSR);
A process wanting access to that shared memory must attach to it:
    shared_memory = (char *) shmat(id, NULL, 0);
Now the process could write to the shared memory:
    sprintf(shared_memory, "Writing to shared memory");
When done, a process can detach the shared memory from its address space:
    shmdt(shared_memory);

Examples of IPC Systems - Mach Mach communication is message based Even system calls are messages Each task gets two mailboxes at creation: Kernel and Notify Only three system calls needed for message transfer msg_send(), msg_receive(), msg_rpc() Mailboxes needed for communication, created via port_allocate()

71 Examples of IPC Systems Windows XP Message-passing centric via local procedure call (LPC) facility Only works between processes on the same system Uses ports (like mailboxes) to establish and maintain communication channels Communication works as follows: The client opens a handle to the subsystem s connection port object The client sends a connection request The server creates two private communication ports and returns the handle to one of them to the client The client and server use the corresponding port handle to send messages or callbacks and to listen for replies 3.41 Silberschatz, Galvin and Gagne 2009

72 Local Procedure Calls in Windows XP 3.42 Silberschatz, Galvin and Gagne 2009

73 Communications in Client-Server Systems Sockets Remote Procedure Calls Remote Method Invocation (Java) 3.43 Silberschatz, Galvin and Gagne 2009

Sockets A socket is defined as an endpoint for communication Concatenation of IP address and port The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8 Communication takes place between a pair of sockets

75 Socket Communication 3.45 Silberschatz, Galvin and Gagne 2009

Remote Procedure Calls Remote procedure call (RPC) abstracts procedure calls between processes on networked systems Stubs client-side proxy for the actual procedure on the server The client-side stub locates the server and marshals the parameters The server-side stub receives this message, unpacks the marshalled parameters, and performs the procedure on the server

77 Execution of RPC 3.47 Silberschatz, Galvin and Gagne 2009

78 Remote Method Invocation Remote Method Invocation (RMI) is a Java mechanism similar to RPCs RMI allows a Java program on one machine to invoke a method on a remote object 3.48 Silberschatz, Galvin and Gagne 2009

79 Marshalling Parameters 3.49 Silberschatz, Galvin and Gagne 2009

80 End of Chapter 3, Silberschatz, Galvin and Gagne 2009

81 Chapter 4: Threads, Silberschatz, Galvin and Gagne 2009

82 Chapter 4: Threads Overview Multithreading Models Thread Libraries Threading Issues Operating System Examples Windows XP Threads Linux Threads 4.2 Silberschatz, Galvin and Gagne 2009

83 Objectives To introduce the notion of a thread a fundamental unit of CPU utilization that forms the basis of multithreaded computer systems To discuss the APIs for the Pthreads, Win32, and Java thread libraries To examine issues related to multithreaded programming 4.3 Silberschatz, Galvin and Gagne 2009

84 Single and Multithreaded Processes 4.4 Silberschatz, Galvin and Gagne 2009

85 Benefits Responsiveness Resource Sharing Economy Scalability 4.5 Silberschatz, Galvin and Gagne 2009

86 Multicore Programming Multicore systems putting pressure on programmers, challenges include Dividing activities Balance Data splitting Data dependency Testing and debugging 4.6 Silberschatz, Galvin and Gagne 2009

87 Multithreaded Server Architecture 4.7 Silberschatz, Galvin and Gagne 2009

88 Concurrent Execution on a Single-core System 4.8 Silberschatz, Galvin and Gagne 2009

89 Parallel Execution on a Multicore System 4.9 Silberschatz, Galvin and Gagne 2009

90 User Threads Thread management done by user-level threads library Three primary thread libraries: POSIX Pthreads Win32 threads Java threads 4.10 Silberschatz, Galvin and Gagne 2009

91 Kernel Threads Supported by the Kernel Examples Windows XP/2000 Solaris Linux Tru64 UNIX Mac OS X 4.11 Silberschatz, Galvin and Gagne 2009

92 Multithreading Models Many-to-One One-to-One Many-to-Many 4.12 Silberschatz, Galvin and Gagne 2009

93 Many-to-One Many user-level threads mapped to single kernel thread Examples: Solaris Green Threads GNU Portable Threads 4.13 Silberschatz, Galvin and Gagne 2009

94 Many-to-One Model 4.14 Silberschatz, Galvin and Gagne 2009

95 One-to-One Each user-level thread maps to kernel thread Examples Windows NT/XP/2000 Linux Solaris 9 and later 4.15 Silberschatz, Galvin and Gagne 2009

96 One-to-one Model 4.16 Silberschatz, Galvin and Gagne 2009

97 Many-to-Many Model Allows many user level threads to be mapped to many kernel threads Allows the operating system to create a sufficient number of kernel threads Solaris prior to version 9 Windows NT/2000 with the ThreadFiber package 4.17 Silberschatz, Galvin and Gagne 2009

98 Many-to-Many Model 4.18 Silberschatz, Galvin and Gagne 2009

99 Two-level Model Similar to M:M, except that it allows a user thread to be bound to kernel thread Examples IRIX HP-UX Tru64 UNIX Solaris 8 and earlier 4.19 Silberschatz, Galvin and Gagne 2009

100 Two-level Model 4.20 Silberschatz, Galvin and Gagne 2009

101 Thread Libraries Thread library provides programmer with API for creating and managing threads Two primary ways of implementing Library entirely in user space Kernel-level library supported by the OS 4.21 Silberschatz, Galvin and Gagne 2009

Pthreads May be provided either as user-level or kernel-level A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization The API specifies behavior of the thread library; implementation is up to the developers of the library Common in UNIX operating systems (Solaris, Linux, Mac OS X)

103 Java Threads Java threads are managed by the JVM Typically implemented using the threads model provided by underlying OS Java threads may be created by: Extending Thread class Implementing the Runnable interface 4.23 Silberschatz, Galvin and Gagne 2009

104 Threading Issues Semantics of fork() and exec() system calls Thread cancellation of target thread Asynchronous or deferred Signal handling Thread pools Thread-specific data Scheduler activations 4.24 Silberschatz, Galvin and Gagne 2009

105 Semantics of fork() and exec() Does fork() duplicate only the calling thread or all threads? 4.25 Silberschatz, Galvin and Gagne 2009

Thread Cancellation Terminating a thread before it has finished Two general approaches: Asynchronous cancellation terminates the target thread immediately Deferred cancellation allows the target thread to periodically check whether it should be cancelled

Signal Handling Signals are used in UNIX systems to notify a process that a particular event has occurred A signal handler is used to process signals 1. Signal is generated by a particular event 2. Signal is delivered to a process 3. Signal is handled Options: Deliver the signal to the thread to which the signal applies Deliver the signal to every thread in the process Deliver the signal to certain threads in the process Assign a specific thread to receive all signals for the process

108 Thread Pools Create a number of threads in a pool where they await work Advantages: Usually slightly faster to service a request with an existing thread than create a new thread Allows the number of threads in the application(s) to be bound to the size of the pool 4.28 Silberschatz, Galvin and Gagne 2009

109 Thread Specific Data Allows each thread to have its own copy of data Useful when you do not have control over the thread creation process (i.e., when using a thread pool) 4.29 Silberschatz, Galvin and Gagne 2009

Scheduler Activations Both M:M and two-level models require communication to maintain the appropriate number of kernel threads allocated to the application Scheduler activations provide upcalls - a communication mechanism from the kernel to the thread library This communication allows an application to maintain the correct number of kernel threads

111 Operating System Examples Windows XP Threads Linux Thread 4.31 Silberschatz, Galvin and Gagne 2009

112 Windows XP Threads 4.32 Silberschatz, Galvin and Gagne 2009

113 Linux Threads 4.33 Silberschatz, Galvin and Gagne 2009

114 Windows XP Threads Implements the one-to-one mapping, kernel-level Each thread contains A thread id Register set Separate user and kernel stacks Private data storage area The register set, stacks, and private storage area are known as the context of the threads The primary data structures of a thread include: ETHREAD (executive thread block) KTHREAD (kernel thread block) TEB (thread environment block) 4.34 Silberschatz, Galvin and Gagne 2009

115 Linux Threads Linux refers to them as tasks rather than threads Thread creation is done through clone() system call clone() allows a child task to share the address space of the parent task (process) 4.35 Silberschatz, Galvin and Gagne 2009

116 End of Chapter 4, Silberschatz, Galvin and Gagne 2009

117 Chapter 6: Process Synchronization, Silberschatz, Galvin and Gagne 2009

118 Module 6: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores Classic Problems of Synchronization Monitors Synchronization Examples Atomic Transactions 6.2 Silberschatz, Galvin and Gagne 2009

119 Objectives To introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data To present both software and hardware solutions of the critical-section problem To introduce the concept of an atomic transaction and describe mechanisms to ensure atomicity 6.3 Silberschatz, Galvin and Gagne 2009

120 Background Concurrent access to shared data may result in data inconsistency Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes Suppose that we wanted to provide a solution to the consumer-producer problem that fills all the buffers. We can do so by having an integer count that keeps track of the number of full buffers. Initially, count is set to 0. It is incremented by the producer after it produces a new buffer and is decremented by the consumer after it consumes a buffer. 6.4 Silberschatz, Galvin and Gagne 2009

Producer

while (true) {
    /* produce an item and put in nextProduced */
    while (count == BUFFER_SIZE)
        ; // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}

Consumer

while (true) {
    while (count == 0)
        ; // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}

123 Race Condition count++ could be implemented as register1 = count register1 = register1 + 1 count = register1 count-- could be implemented as register2 = count register2 = register2-1 count = register2 Consider this execution interleaving with count = 5 initially: S0: producer execute register1 = count {register1 = 5} S1: producer execute register1 = register1 + 1 {register1 = 6} S2: consumer execute register2 = count {register2 = 5} S3: consumer execute register2 = register2-1 {register2 = 4} S4: producer execute count = register1 {count = 6 } S5: consumer execute count = register2 {count = 4} 6.7 Silberschatz, Galvin and Gagne 2009

124 Solution to Critical-Section Problem 1. Mutual Exclusion - If process P i is executing in its critical section, then no other processes can be executing in their critical sections 2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely 3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted Assume that each process executes at a nonzero speed No assumption concerning relative speed of the N processes 6.8 Silberschatz, Galvin and Gagne 2009

125 Initial Attempts to Solve Problem Only 2 processes, P 0 and P 1 General structure of process P i (other process P j ): do { entry section; critical section; exit section; remainder section } while (1); Processes may share some common variables to synchronize their actions. 6.9 Silberschatz, Galvin and Gagne 2009

126 Algorithm 1 Shared variables: int turn; initially turn = 0. turn == i means P i can enter its critical section. Process P i : do { while (turn != i) ; critical section turn = j; remainder section } while (1); Satisfies mutual exclusion, but not progress 6.10 Silberschatz, Galvin and Gagne 2009

127 Algorithm 2 Shared variables: boolean flag[2]; initially flag[0] = flag[1] = false. flag[i] == true means P i is ready to enter its critical section. Process P i : do { flag[i] = true; while (flag[j]) ; critical section flag[i] = false; remainder section } while (1); Satisfies mutual exclusion, but not the progress requirement 6.11 Silberschatz, Galvin and Gagne 2009

128 Peterson s Solution Two process solution Assume that the LOAD and STORE instructions are atomic; that is, cannot be interrupted. The two processes share two variables: int turn; Boolean flag[2] The variable turn indicates whose turn it is to enter the critical section. The flag array is used to indicate if a process is ready to enter the critical section. flag[i] = true implies that process P i is ready! 6.12 Silberschatz, Galvin and Gagne 2009

129 Algorithm for Process P i do { flag[i] = TRUE; turn = j; while (flag[j] && turn == j); critical section flag[i] = FALSE; remainder section } while (TRUE); 6.13 Silberschatz, Galvin and Gagne 2009
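A minimal sketch of Peterson's algorithm with two Python threads. It leans on CPython's interpreter executing bytecodes one at a time and in order, which stands in for the slide's assumption that LOAD and STORE are atomic; on real hardware the algorithm additionally needs memory fences. The iteration count N is arbitrary.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy-wait makes progress

flag = [False, False]        # flag[i]: process i is ready to enter
turn = 0                     # which process defers to the other
counter = 0                  # shared data protected by the critical section
N = 10_000

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True               # entry section: announce intent
        turn = j                     # give priority to the other process
        while flag[j] and turn == j:
            pass                     # busy-wait
        counter += 1                 # critical section
        flag[i] = False              # exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000: no increment was lost
```

Without the entry/exit sections, `counter += 1` from two threads could lose updates exactly as in the race-condition slide.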

130 Synchronization Hardware Many systems provide hardware support for critical section code Uniprocessors could disable interrupts Currently running code would execute without preemption Generally too inefficient on multiprocessor systems Operating systems using this not broadly scalable Modern machines provide special atomic hardware instructions Atomic = non-interruptable Either test memory word and set value Or swap contents of two memory words 6.14 Silberschatz, Galvin and Gagne 2009

131 Solution to Critical-section Problem Using Locks do { acquire lock critical section release lock remainder section } while (TRUE); 6.15 Silberschatz, Galvin and Gagne 2009

132 TestAndSet Instruction Definition: boolean TestAndSet (boolean *target) { boolean rv = *target; *target = TRUE; return rv; } 6.16 Silberschatz, Galvin and Gagne 2009

133 Solution using TestAndSet Shared boolean variable lock, initialized to FALSE. Solution: do { while ( TestAndSet (&lock )) ; // do nothing // critical section lock = FALSE; // remainder section } while (TRUE); 6.17 Silberschatz, Galvin and Gagne 2009

134 Swap Instruction Definition: void Swap (boolean *a, boolean *b) { boolean temp = *a; *a = *b; *b = temp; } 6.18 Silberschatz, Galvin and Gagne 2009

135 Solution using Swap Shared Boolean variable lock initialized to FALSE; Each process has a local Boolean variable key Solution: do { key = TRUE; while ( key == TRUE) Swap (&lock, &key ); // critical section lock = FALSE; // remainder section } while (TRUE); 6.19 Silberschatz, Galvin and Gagne 2009

136 Bounded-waiting Mutual Exclusion with TestAndSet() do { waiting[i] = TRUE; key = TRUE; while (waiting[i] && key) key = TestAndSet(&lock); waiting[i] = FALSE; // critical section j = (i + 1) % n; while ((j != i) && !waiting[j]) j = (j + 1) % n; if (j == i) lock = FALSE; else waiting[j] = FALSE; // remainder section } while (TRUE); 6.20 Silberschatz, Galvin and Gagne 2009

137 Semaphore Synchronization tool that does not require busy waiting Semaphore S integer variable Two standard operations modify S: wait() and signal() Originally called P() and V() Less complicated Can only be accessed via two indivisible (atomic) operations: wait (S) { while (S <= 0) ; // no-op S--; } signal (S) { S++; } 6.21 Silberschatz, Galvin and Gagne 2009

138 Semaphore as General Synchronization Tool Counting semaphore integer value can range over an unrestricted domain Binary semaphore integer value can range only between 0 and 1; can be simpler to implement Also known as mutex locks Can implement a counting semaphore S as a binary semaphore Provides mutual exclusion Semaphore mutex; // initialized to 1 do { wait (mutex); // Critical Section signal (mutex); // remainder section } while (TRUE); 6.22 Silberschatz, Galvin and Gagne 2009

139 Semaphore Implementation Must guarantee that no two processes can execute wait () and signal () on the same semaphore at the same time Thus, implementation becomes the critical section problem where the wait and signal code are placed in the critical section. Could now have busy waiting in critical section implementation But implementation code is short Little busy waiting if critical section rarely occupied Note that applications may spend lots of time in critical sections and therefore this is not a good solution 6.23 Silberschatz, Galvin and Gagne 2009

140 Semaphore Implementation with no Busy waiting With each semaphore there is an associated waiting queue. Each entry in a waiting queue has two data items: value (of type integer) pointer to next record in the list Two operations: block place the process invoking the operation on the appropriate waiting queue. wakeup remove one of the processes in the waiting queue and place it in the ready queue 6.24 Silberschatz, Galvin and Gagne 2009

141 Semaphore Implementation with no Busy waiting (Cont.) Implementation of wait: wait(semaphore *S) { S->value--; if (S->value < 0) { add this process to S->list; block(); } } Implementation of signal: signal(semaphore *S) { S->value++; if (S->value <= 0) { remove a process P from S->list; wakeup(P); } } 6.25 Silberschatz, Galvin and Gagne 2009
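The wait/signal pair above can be sketched in Python with a condition variable standing in for block() and wakeup(); the class name and structure are illustrative, not from the slides:

```python
import threading

class BlockingSemaphore:
    """Sketch of the slide's semaphore: a counter plus a waiting queue."""
    def __init__(self, value):
        self.value = value
        self.cond = threading.Condition()   # supplies block()/wakeup()

    def wait(self):
        with self.cond:
            self.value -= 1
            if self.value < 0:
                self.cond.wait()            # block(): sleep instead of busy-waiting

    def signal(self):
        with self.cond:
            self.value += 1
            if self.value <= 0:
                self.cond.notify()          # wakeup(): release one waiter

# Use it as a mutex: two threads each increment a shared counter 1000 times.
mutex = BlockingSemaphore(1)
counter = 0

def work():
    global counter
    for _ in range(1000):
        mutex.wait()
        counter += 1    # critical section
        mutex.signal()

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 2000
```

Note that a negative `value` counts the number of blocked processes, exactly as in the slide's bookkeeping.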

142 Deadlock and Starvation Deadlock two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes Let S and Q be two semaphores initialized to 1: P 0 : wait(S); wait(Q); … signal(S); signal(Q); P 1 : wait(Q); wait(S); … signal(Q); signal(S); Starvation indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended Priority Inversion - Scheduling problem when lower-priority process holds a lock needed by higher-priority process 6.26 Silberschatz, Galvin and Gagne 2009

143 Classical Problems of Synchronization Bounded-Buffer Problem Readers and Writers Problem Dining-Philosophers Problem 6.27 Silberschatz, Galvin and Gagne 2009

144 Bounded-Buffer Problem N buffers, each can hold one item Semaphore mutex initialized to the value 1 Semaphore full initialized to the value 0 Semaphore empty initialized to the value N Silberschatz, Galvin and Gagne 2009

145 Bounded Buffer Problem (Cont.) The structure of the producer process do { // produce an item in nextp wait (empty); wait (mutex); // add the item to the buffer signal (mutex); signal (full); } while (TRUE); 6.29 Silberschatz, Galvin and Gagne 2009

146 Bounded Buffer Problem (Cont.) The structure of the consumer process do { wait (full); wait (mutex); // remove an item from buffer to nextc signal (mutex); signal (empty); // consume the item in nextc } while (TRUE); 6.30 Silberschatz, Galvin and Gagne 2009
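The producer and consumer structures above can be run directly with Python's built-in semaphores (acquire/release playing the role of wait/signal); buffer size and item count are arbitrary:

```python
import threading

N = 4                            # buffer slots
buffer = [None] * N
in_ptr = out_ptr = 0

mutex = threading.Semaphore(1)   # mutual exclusion on the buffer
empty = threading.Semaphore(N)   # counts empty slots, initialized to N
full  = threading.Semaphore(0)   # counts full slots, initialized to 0

consumed = []

def producer():
    global in_ptr
    for item in range(20):       # produce items 0..19
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer[in_ptr] = item    # add the item to the buffer
        in_ptr = (in_ptr + 1) % N
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer():
    global out_ptr
    for _ in range(20):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        consumed.append(buffer[out_ptr])   # remove an item from the buffer
        out_ptr = (out_ptr + 1) % N
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)  # [0, 1, ..., 19]: every item arrives, in FIFO order
```

With one producer and one consumer the FIFO buffer preserves order; `empty` and `full` block the threads instead of letting them busy-wait on `count` as in the earlier slides.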

147 Readers-Writers Problem A data set is shared among a number of concurrent processes Readers only read the data set; they do not perform any updates Writers can both read and write Problem allow multiple readers to read at the same time. Only one single writer can access the shared data at the same time Shared Data Data set Semaphore mutex initialized to 1 Semaphore wrt initialized to 1 Integer readcount initialized to 0 6.31 Silberschatz, Galvin and Gagne 2009

148 Readers-Writers Problem (Cont.) The structure of a writer process do { wait (wrt) ; // writing is performed signal (wrt) ; } while (TRUE); 6.32 Silberschatz, Galvin and Gagne 2009

149 Readers-Writers Problem (Cont.) The structure of a reader process do { wait (mutex) ; readcount++ ; if (readcount == 1) wait (wrt) ; signal (mutex) ; // reading is performed wait (mutex) ; readcount-- ; if (readcount == 0) signal (wrt) ; signal (mutex) ; } while (TRUE); 6.33 Silberschatz, Galvin and Gagne 2009
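A runnable sketch of the reader and writer structures above. The shared data is a pair that writers always keep equal, so a reader that ever sees the two halves differ has witnessed a torn (non-exclusive) read; the thread and iteration counts are arbitrary:

```python
import threading

mutex = threading.Semaphore(1)   # protects readcount
wrt   = threading.Semaphore(1)   # writers' exclusion
readcount = 0
data = [0, 0]                    # invariant: writers keep both halves equal
torn_reads = 0

def reader():
    global readcount, torn_reads
    for _ in range(1000):
        mutex.acquire()
        readcount += 1
        if readcount == 1:
            wrt.acquire()        # first reader locks out writers
        mutex.release()
        a, b = data[0], data[1]  # reading is performed
        if a != b:
            torn_reads += 1      # would indicate a writer interleaved
        mutex.acquire()
        readcount -= 1
        if readcount == 0:
            wrt.release()        # last reader lets writers back in
        mutex.release()

def writer():
    for v in range(1000):
        wrt.acquire()
        data[0] = v              # writing is performed
        data[1] = v
        wrt.release()

threads = [threading.Thread(target=reader) for _ in range(2)]
threads.append(threading.Thread(target=writer))
for t in threads: t.start()
for t in threads: t.join()
print(torn_reads)  # 0: the writer never overlaps a read
```

Note this is the "first readers-writers" variant: a steady stream of readers can starve the writer, which the slides' bounded-waiting discussion anticipates.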

150 Dining-Philosophers Problem Shared data Bowl of rice (data set) Semaphore chopstick [5] initialized to 1 6.34 Silberschatz, Galvin and Gagne 2009

151 Dining-Philosophers Problem (Cont.) The structure of Philosopher i: do { wait ( chopstick[i] ); wait ( chopstick[ (i + 1) % 5] ); // eat signal ( chopstick[i] ); signal (chopstick[ (i + 1) % 5] ); // think } while (TRUE); 6.35 Silberschatz, Galvin and Gagne 2009

152 Problems with Semaphores Incorrect use of semaphore operations: signal (mutex) … wait (mutex) (violates mutual exclusion); wait (mutex) … wait (mutex) (deadlock); Omitting wait (mutex) or signal (mutex) (or both) 6.36 Silberschatz, Galvin and Gagne 2009

153 Monitors A high-level abstraction that provides a convenient and effective mechanism for process synchronization Only one process may be active within the monitor at a time monitor monitor-name { // shared variable declarations procedure P1 (…) { … } … procedure Pn (…) { … } initialization code (…) { … } } 6.37 Silberschatz, Galvin and Gagne 2009

154 Schematic view of a Monitor 6.38 Silberschatz, Galvin and Gagne 2009

155 Condition Variables condition x, y; Two operations on a condition variable: x.wait () a process that invokes the operation is suspended. x.signal () resumes one of processes (if any) that invoked x.wait () 6.39 Silberschatz, Galvin and Gagne 2009

156 Monitor with Condition Variables 6.40 Silberschatz, Galvin and Gagne 2009

157 Solution to Dining Philosophers monitor DP { enum {THINKING, HUNGRY, EATING} state[5]; condition self[5]; void pickup (int i) { state[i] = HUNGRY; test(i); if (state[i] != EATING) self[i].wait(); } void putdown (int i) { state[i] = THINKING; // test left and right neighbors test((i + 4) % 5); test((i + 1) % 5); } 6.41 Silberschatz, Galvin and Gagne 2009

158 Solution to Dining Philosophers (cont) void test (int i) { if ( (state[(i + 4) % 5] != EATING) && (state[i] == HUNGRY) && (state[(i + 1) % 5] != EATING) ) { state[i] = EATING ; self[i].signal () ; } } initialization_code() { for (int i = 0; i < 5; i++) state[i] = THINKING; } } 6.42 Silberschatz, Galvin and Gagne 2009

159 Solution to Dining Philosophers (cont) Each philosopher i invokes the operations pickup() and putdown() in the following sequence: DiningPhilosophers.pickup (i); EAT DiningPhilosophers.putdown (i); 6.43 Silberschatz, Galvin and Gagne 2009

160 Monitor Implementation Using Semaphores Variables semaphore mutex; // (initially = 1) semaphore next; // (initially = 0) int next_count = 0; Each procedure F will be replaced by wait(mutex); body of F; if (next_count > 0) signal(next) else signal(mutex); Mutual exclusion within a monitor is ensured 6.44 Silberschatz, Galvin and Gagne 2009

161 Monitor Implementation For each condition variable x, we have: semaphore x_sem; // (initially = 0) int x_count = 0; The operation x.wait can be implemented as: x_count++; if (next_count > 0) signal(next); else signal(mutex); wait(x_sem); x_count--; 6.45 Silberschatz, Galvin and Gagne 2009

162 Monitor Implementation The operation x.signal can be implemented as: if (x_count > 0) { next_count++; signal(x_sem); wait(next); next_count--; } 6.46 Silberschatz, Galvin and Gagne 2009

163 A Monitor to Allocate Single Resource monitor ResourceAllocator { boolean busy; condition x; void acquire(int time) { if (busy) x.wait(time); busy = TRUE; } void release() { busy = FALSE; x.signal(); } initialization code() { busy = FALSE; } } 6.47 Silberschatz, Galvin and Gagne 2009

164 Synchronization Examples Solaris Windows XP Linux Pthreads 6.48 Silberschatz, Galvin and Gagne 2009

165 Solaris Synchronization Implements a variety of locks to support multitasking, multithreading (including real-time threads), and multiprocessing Uses adaptive mutexes for efficiency when protecting data from short code segments Uses condition variables and readers-writers locks when longer sections of code need access to data Uses turnstiles to order the list of threads waiting to acquire either an adaptive mutex or reader-writer lock 6.49 Silberschatz, Galvin and Gagne 2009

166 Windows XP Synchronization Uses interrupt masks to protect access to global resources on uniprocessor systems Uses spinlocks on multiprocessor systems Also provides dispatcher objects which may act as either mutexes and semaphores Dispatcher objects may also provide events An event acts much like a condition variable 6.50 Silberschatz, Galvin and Gagne 2009

167 Linux Synchronization Linux: Prior to kernel Version 2.6, disables interrupts to implement short critical sections Version 2.6 and later, fully preemptive Linux provides: semaphores spin locks 6.51 Silberschatz, Galvin and Gagne 2009

168 Pthreads Synchronization Pthreads API is OS-independent It provides: mutex locks condition variables Non-portable extensions include: read-write locks spin locks 6.52 Silberschatz, Galvin and Gagne 2009

169 Atomic Transactions System Model Log-based Recovery Checkpoints Concurrent Atomic Transactions 6.53 Silberschatz, Galvin and Gagne 2009

170 System Model Assures that operations happen as a single logical unit of work, in its entirety, or not at all Related to field of database systems Challenge is assuring atomicity despite computer system failures Transaction - collection of instructions or operations that performs single logical function Here we are concerned with changes to stable storage disk Transaction is series of read and write operations Terminated by commit (transaction successful) or abort (transaction failed) operation Aborted transaction must be rolled back to undo any changes it performed 6.54 Silberschatz, Galvin and Gagne 2009

171 Types of Storage Media Volatile storage information stored here does not survive system crashes Example: main memory, cache Nonvolatile storage Information usually survives crashes Example: disk and tape Stable storage Information never lost Not actually possible, so approximated via replication or RAID to devices with independent failure modes Goal is to assure transaction atomicity where failures cause loss of information on volatile storage 6.55 Silberschatz, Galvin and Gagne 2009

172 Log-Based Recovery Record to stable storage information about all modifications by a transaction Most common is write-ahead logging Log on stable storage, each log record describes single transaction write operation, including Transaction name Data item name Old value New value <T i starts> written to log when transaction T i starts <T i commits> written when T i commits Log entry must reach stable storage before operation on data occurs 6.56 Silberschatz, Galvin and Gagne 2009

173 Log-Based Recovery Algorithm Using the log, system can handle any volatile memory errors Undo(T i ) restores value of all data updated by T i Redo(T i ) sets values of all data in transaction T i to new values Undo(T i ) and redo(T i ) must be idempotent Multiple executions must have the same result as one execution If system fails, restore state of all updated data via log If log contains <T i starts> without <T i commits>, undo(T i ) If log contains <T i starts> and <T i commits>, redo(T i ) 6.57 Silberschatz, Galvin and Gagne 2009
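The undo/redo rules can be exercised on a tiny made-up log (the transaction names, items, and values below are invented for illustration). T0 commits before the crash, T1 does not, so recovery redoes T0 and undoes T1:

```python
# Write-ahead log: each write record carries (op, txn, item, old value, new value).
log = [
    ("start", "T0"),
    ("write", "T0", "A", 100, 90),
    ("start", "T1"),
    ("write", "T1", "B", 50, 60),
    ("commit", "T0"),
    # crash: T1 never committed
]

data = {"A": 90, "B": 60}            # on-disk state at the moment of the crash

committed = {rec[1] for rec in log if rec[0] == "commit"}
started   = {rec[1] for rec in log if rec[0] == "start"}

for rec in log:                      # redo(Ti) for committed transactions
    if rec[0] == "write" and rec[1] in committed:
        data[rec[2]] = rec[4]        # set item to its new value
for rec in reversed(log):            # undo(Ti) for started-but-uncommitted ones
    if rec[0] == "write" and rec[1] in started - committed:
        data[rec[2]] = rec[3]        # restore item to its old value

print(data)  # {'A': 90, 'B': 50}: T0's write survives, T1's is rolled back
```

Because each rule only copies a value recorded in the log, running recovery twice yields the same state, which is the idempotence requirement above.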

174 Checkpoints Log could become long, and recovery could take long Checkpoints shorten log and recovery time. Checkpoint scheme: 1. Output all log records currently in volatile storage to stable storage 2. Output all modified data from volatile to stable storage 3. Output a log record <checkpoint> to the log on stable storage Now recovery only includes Ti, such that Ti started executing before the most recent checkpoint, and all transactions after Ti All other transactions already on stable storage 6.58 Silberschatz, Galvin and Gagne 2009

175 Concurrent Transactions Must be equivalent to serial execution serializability Could perform all transactions in critical section Inefficient, too restrictive Concurrency-control algorithms provide serializability 6.59 Silberschatz, Galvin and Gagne 2009

176 Serializability Consider two data items A and B Consider Transactions T 0 and T 1 Execute T 0, T 1 atomically Execution sequence called schedule Atomically executed transaction order called serial schedule For N transactions, there are N! valid serial schedules 6.60 Silberschatz, Galvin and Gagne 2009

177 Schedule 1: T 0 then T 1 6.61 Silberschatz, Galvin and Gagne 2009

178 Nonserial Schedule Nonserial schedule allows overlapped execution Resulting execution not necessarily incorrect Consider schedule S, operations O i , O j Conflict if they access the same data item, with at least one write If O i , O j consecutive and operations of different transactions & O i and O j don't conflict Then the schedule S' with O i and O j swapped is equivalent to S If S can become a serial schedule S' via swapping nonconflicting operations S is conflict serializable 6.62 Silberschatz, Galvin and Gagne 2009

179 Schedule 2: Concurrent Serializable Schedule 6.63 Silberschatz, Galvin and Gagne 2009

180 Locking Protocol Ensure serializability by associating lock with each data item Follow locking protocol for access control Locks Shared T i has shared-mode lock (S) on item Q, T i can read Q but not write Q Exclusive Ti has exclusive-mode lock (X) on Q, T i can read and write Q Require every transaction on item Q acquire appropriate lock If lock already held, new request may have to wait Similar to readers-writers algorithm 6.64 Silberschatz, Galvin and Gagne 2009

181 Two-phase Locking Protocol Generally ensures conflict serializability Each transaction issues lock and unlock requests in two phases Growing obtaining locks Shrinking releasing locks Does not prevent deadlock 6.65 Silberschatz, Galvin and Gagne 2009

182 Timestamp-based Protocols Select order among transactions in advance timestamp-ordering Transaction T i associated with timestamp TS(T i ) before T i starts TS(T i ) < TS(T j ) if Ti entered system before T j TS can be generated from system clock or as logical counter incremented at each entry of transaction Timestamps determine serializability order If TS(T i ) < TS(T j ), system must ensure produced schedule equivalent to serial schedule where T i appears before T j 6.66 Silberschatz, Galvin and Gagne 2009

183 Timestamp-based Protocol Implementation Data item Q gets two timestamps W-timestamp(Q) largest timestamp of any transaction that executed write(Q) successfully R-timestamp(Q) largest timestamp of successful read(Q) Updated whenever read(Q) or write(Q) executed Timestamp-ordering protocol assures any conflicting read and write executed in timestamp order Suppose T i executes read(Q) If TS(T i ) < W-timestamp(Q), T i needs to read a value of Q that was already overwritten read operation rejected and T i rolled back If TS(T i ) ≥ W-timestamp(Q), read executed, R-timestamp(Q) set to max(R-timestamp(Q), TS(T i )) 6.67 Silberschatz, Galvin and Gagne 2009

184 Timestamp-ordering Protocol Suppose T i executes write(Q) If TS(T i ) < R-timestamp(Q), the value of Q produced by T i was needed previously and T i assumed it would never be produced Write operation rejected, T i rolled back If TS(T i ) < W-timestamp(Q), T i attempting to write obsolete value of Q Write operation rejected and T i rolled back Otherwise, write executed Any rolled back transaction T i is assigned new timestamp and restarted Algorithm ensures conflict serializability and freedom from deadlock 6.68 Silberschatz, Galvin and Gagne 2009
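The read and write rules can be captured in a small sketch; `Q["R"]` and `Q["W"]` model R-timestamp(Q) and W-timestamp(Q), and the transaction timestamps below are invented for illustration:

```python
def read(ts, q):
    """Timestamp-ordering rule for Ti (timestamp ts) executing read(Q)."""
    if ts < q["W"]:
        return "rollback"            # needed value was already overwritten
    q["R"] = max(q["R"], ts)
    return "ok"

def write(ts, q):
    """Timestamp-ordering rule for Ti executing write(Q)."""
    if ts < q["R"] or ts < q["W"]:
        return "rollback"            # a later transaction already read or wrote Q
    q["W"] = ts
    return "ok"

Q = {"R": 0, "W": 0}
assert read(5, Q) == "ok"            # T5 reads; R-timestamp(Q) becomes 5
assert write(3, Q) == "rollback"     # T3's write comes too late: T5 already read Q
assert write(7, Q) == "ok"           # T7 writes; W-timestamp(Q) becomes 7
assert read(6, Q) == "rollback"      # T6 must not observe T7's newer value
```

A rolled-back transaction would restart with a fresh, larger timestamp, after which the same rules admit it.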

185 Schedule Possible Under Timestamp Protocol 6.69 Silberschatz, Galvin and Gagne 2009

186 End of Chapter 6, Silberschatz, Galvin and Gagne 2009

187 Chapter 7: Deadlocks, Silberschatz, Galvin and Gagne 2009

188 Chapter 7: Deadlocks The Deadlock Problem System Model Deadlock Characterization Methods for Handling Deadlocks Deadlock Prevention Deadlock Avoidance Deadlock Detection Recovery from Deadlock 7.2 Silberschatz, Galvin and Gagne 2009

189 Chapter Objectives To develop a description of deadlocks, which prevent sets of concurrent processes from completing their tasks To present a number of different methods for preventing or avoiding deadlocks in a computer system 7.3 Silberschatz, Galvin and Gagne 2009

190 The Deadlock Problem A set of blocked processes each holding a resource and waiting to acquire a resource held by another process in the set Example System has 2 disk drives P 1 and P 2 each hold one disk drive and each needs another one Example semaphores A and B, initialized to 1: P 0 : wait(A); wait(B); P 1 : wait(B); wait(A); 7.4 Silberschatz, Galvin and Gagne 2009

191 Bridge Crossing Example Traffic only in one direction Each section of a bridge can be viewed as a resource If a deadlock occurs, it can be resolved if one car backs up (preempt resources and rollback) Several cars may have to be backed up if a deadlock occurs Starvation is possible Note Most OSes do not prevent or deal with deadlocks 7.5 Silberschatz, Galvin and Gagne 2009

192 System Model Resource types R 1, R 2,..., R m CPU cycles, memory space, I/O devices Each resource type R i has W i instances. Each process utilizes a resource as follows: request use release 7.6 Silberschatz, Galvin and Gagne 2009

193 Deadlock Characterization Deadlock can arise if four conditions hold simultaneously. Mutual exclusion: only one process at a time can use a resource Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task Circular wait: there exists a set {P 0 , P 1 , …, P n } of waiting processes such that P 0 is waiting for a resource that is held by P 1 , P 1 is waiting for a resource that is held by P 2 , …, P n-1 is waiting for a resource that is held by P n , and P n is waiting for a resource that is held by P 0 7.7 Silberschatz, Galvin and Gagne 2009

194 Resource-Allocation Graph A set of vertices V and a set of edges E. V is partitioned into two types: P = {P 1 , P 2 , …, P n }, the set consisting of all the processes in the system R = {R 1 , R 2 , …, R m }, the set consisting of all resource types in the system request edge directed edge P i → R j assignment edge directed edge R j → P i 7.8 Silberschatz, Galvin and Gagne 2009

195 Resource-Allocation Graph (Cont.) Process: P i Resource type with 4 instances: R j P i requests an instance of R j : P i → R j P i is holding an instance of R j : R j → P i 7.9 Silberschatz, Galvin and Gagne 2009

196 Example of a Resource Allocation Graph 7.10 Silberschatz, Galvin and Gagne 2009

197 Resource Allocation Graph With A Deadlock 7.11 Silberschatz, Galvin and Gagne 2009

198 Graph With A Cycle But No Deadlock 7.12 Silberschatz, Galvin and Gagne 2009

199 Basic Facts If graph contains no cycles no deadlock If graph contains a cycle if only one instance per resource type, then deadlock if several instances per resource type, possibility of deadlock 7.13 Silberschatz, Galvin and Gagne 2009

200 Methods for Handling Deadlocks Ensure that the system will never enter a deadlock state Allow the system to enter a deadlock state and then recover Ignore the problem and pretend that deadlocks never occur in the system; used by most operating systems, including UNIX 7.14 Silberschatz, Galvin and Gagne 2009

201 Deadlock Prevention Restrain the ways requests can be made Mutual Exclusion not required for sharable resources; must hold for nonsharable resources Hold and Wait must guarantee that whenever a process requests a resource, it does not hold any other resources Require process to request and be allocated all its resources before it begins execution, or allow process to request resources only when the process has none Low resource utilization; starvation possible 7.15 Silberschatz, Galvin and Gagne 2009

202 Deadlock Prevention (Cont.) No Preemption If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released Preempted resources are added to the list of resources for which the process is waiting Process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting Circular Wait impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration 7.16 Silberschatz, Galvin and Gagne 2009

203 Deadlock Avoidance Requires that the system has some additional a priori information available Simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition Resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes 7.17 Silberschatz, Galvin and Gagne 2009

204 Safe State When a process requests an available resource, system must decide if immediate allocation leaves the system in a safe state System is in safe state if there exists a sequence <P 1 , P 2 , …, P n > of ALL the processes in the system such that for each P i , the resources that P i can still request can be satisfied by currently available resources + resources held by all the P j , with j < i That is: If P i resource needs are not immediately available, then P i can wait until all P j have finished When P j is finished, P i can obtain needed resources, execute, return allocated resources, and terminate When P i terminates, P i+1 can obtain its needed resources, and so on 7.18 Silberschatz, Galvin and Gagne 2009

205 Basic Facts If a system is in safe state no deadlocks If a system is in unsafe state possibility of deadlock Avoidance ensure that a system will never enter an unsafe state Silberschatz, Galvin and Gagne 2009

206 Safe, Unsafe, Deadlock State 7.20 Silberschatz, Galvin and Gagne 2009

207 Avoidance algorithms Single instance of a resource type Use a resource-allocation graph Multiple instances of a resource type Use the banker s algorithm 7.21 Silberschatz, Galvin and Gagne 2009

208 Resource-Allocation Graph Scheme Claim edge P i → R j indicates that process P i may request resource R j ; represented by a dashed line Claim edge converts to request edge when a process requests a resource Request edge converted to an assignment edge when the resource is allocated to the process When a resource is released by a process, assignment edge reconverts to a claim edge Resources must be claimed a priori in the system 7.22 Silberschatz, Galvin and Gagne 2009

209 Resource-Allocation Graph 7.23 Silberschatz, Galvin and Gagne 2009

210 Unsafe State In Resource-Allocation Graph 7.24 Silberschatz, Galvin and Gagne 2009

211 Resource-Allocation Graph Algorithm Suppose that process P i requests a resource R j The request can be granted only if converting the request edge to an assignment edge does not result in the formation of a cycle in the resource allocation graph 7.25 Silberschatz, Galvin and Gagne 2009

212 Banker s Algorithm Multiple instances Each process must a priori claim maximum use When a process requests a resource it may have to wait When a process gets all its resources it must return them in a finite amount of time 7.26 Silberschatz, Galvin and Gagne 2009

213 Data Structures for the Banker s Algorithm Let n = number of processes, and m = number of resource types. Available: Vector of length m. If Available[j] = k, there are k instances of resource type R j available Max: n x m matrix. If Max[i,j] = k, then process P i may request at most k instances of resource type R j Allocation: n x m matrix. If Allocation[i,j] = k then P i is currently allocated k instances of R j Need: n x m matrix. If Need[i,j] = k, then P i may need k more instances of R j to complete its task Need[i,j] = Max[i,j] - Allocation[i,j] 7.27 Silberschatz, Galvin and Gagne 2009

214 Safety Algorithm 1. Let Work and Finish be vectors of length m and n, respectively. Initialize: Work = Available; Finish[i] = false for i = 0, 1, …, n-1 2. Find an i such that both: (a) Finish[i] = false (b) Need i ≤ Work If no such i exists, go to step 4 3. Work = Work + Allocation i ; Finish[i] = true; go to step 2 4. If Finish[i] == true for all i, then the system is in a safe state 7.28 Silberschatz, Galvin and Gagne 2009

215 Resource-Request Algorithm for Process P i Request = request vector for process P i . If Request i [j] = k then process P i wants k instances of resource type R j 1. If Request i ≤ Need i , go to step 2. Otherwise, raise error condition, since process has exceeded its maximum claim 2. If Request i ≤ Available, go to step 3. Otherwise P i must wait, since resources are not available 3. Pretend to allocate requested resources to P i by modifying the state as follows: Available = Available - Request i ; Allocation i = Allocation i + Request i ; Need i = Need i - Request i ; If safe, the resources are allocated to P i If unsafe, P i must wait, and the old resource-allocation state is restored 7.29 Silberschatz, Galvin and Gagne 2009

216 Example of Banker s Algorithm 5 processes P 0 through P 4 ; 3 resource types: A (10 instances), B (5 instances), and C (7 instances) Snapshot at time T 0 :
          Allocation   Max      Available
          A B C        A B C    A B C
    P 0   0 1 0        7 5 3    3 3 2
    P 1   2 0 0        3 2 2
    P 2   3 0 2        9 0 2
    P 3   2 1 1        2 2 2
    P 4   0 0 2        4 3 3
7.30 Silberschatz, Galvin and Gagne 2009

217 Example (Cont.) The content of the matrix Need is defined to be Max - Allocation:
          Need
          A B C
    P 0   7 4 3
    P 1   1 2 2
    P 2   6 0 0
    P 3   0 1 1
    P 4   4 3 1
The system is in a safe state since the sequence < P 1 , P 3 , P 4 , P 2 , P 0 > satisfies the safety criteria 7.31 Silberschatz, Galvin and Gagne 2009
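The safety algorithm can be checked against this snapshot with a short sketch; the matrices below are the example's values, and the greedy lowest-index search happens to find the safe sequence <P1, P3, P4, P0, P2>, another of the valid orders:

```python
def is_safe(available, allocation, need):
    """Safety algorithm: look for an order in which every process can finish."""
    n = len(allocation)
    work = list(available)           # Work = Available
    finish = [False] * n
    order = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Pi can run to completion and release everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                order.append(i)
                progressed = True
    return all(finish), order

# Snapshot from the example (5 processes; resource types A, B, C):
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
max_demand = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
available  = [3,3,2]
need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(max_demand, allocation)]

safe, order = is_safe(available, allocation, need)
print(safe, order)  # True [1, 3, 4, 0, 2]
```

The banker grants a request only if the pretend-allocation state still passes `is_safe`.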

218 Example: P 1 Request (1,0,2) Check that Request ≤ Available (that is, (1,0,2) ≤ (3,3,2)) true
          Allocation   Need     Available
          A B C        A B C    A B C
    P 0   0 1 0        7 4 3    2 3 0
    P 1   3 0 2        0 2 0
    P 2   3 0 2        6 0 0
    P 3   2 1 1        0 1 1
    P 4   0 0 2        4 3 1
Executing the safety algorithm shows that the sequence < P 1 , P 3 , P 4 , P 0 , P 2 > satisfies the safety requirement Can a request for (3,3,0) by P 4 be granted? Can a request for (0,2,0) by P 0 be granted? 7.32 Silberschatz, Galvin and Gagne 2009

219 Deadlock Detection Allow system to enter deadlock state Detection algorithm Recovery scheme 7.33 Silberschatz, Galvin and Gagne 2009

220 Single Instance of Each Resource Type Maintain wait-for graph Nodes are processes P i → P j if P i is waiting for P j Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle, there exists a deadlock An algorithm to detect a cycle in a graph requires an order of n² operations, where n is the number of vertices in the graph 7.34 Silberschatz, Galvin and Gagne 2009
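Cycle detection in a wait-for graph is a plain depth-first search; this sketch colors vertices white/gray/black and reports a cycle when it meets a gray (in-progress) vertex:

```python
def has_cycle(graph):
    """DFS cycle detection; graph maps process i to the processes it waits for."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def visit(v):
        color[v] = GRAY
        for w in graph.get(v, []):
            if color[w] == GRAY:          # back edge: a cycle, hence a deadlock
                return True
            if color[w] == WHITE and visit(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and visit(v) for v in list(graph))

# P1 waits for P2, P2 for P3, P3 for P1: circular wait, so deadlock.
assert has_cycle({1: [2], 2: [3], 3: [1]})
# Break one edge and the cycle (hence the deadlock) disappears.
assert not has_cycle({1: [2], 2: [3], 3: []})
```

Each vertex and edge is visited at most once, matching the order-n² bound quoted for a graph with n vertices.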

221 Resource-Allocation Graph and Wait-for Graph Resource-Allocation Graph Corresponding wait-for graph 7.35 Silberschatz, Galvin and Gagne 2009

222 Several Instances of a Resource Type Available: A vector of length m indicates the number of available resources of each type. Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process. Request: An n x m matrix indicates the current request of each process. If Request [i j ] = k, then process P i is requesting k more instances of resource type. R j Silberschatz, Galvin and Gagne 2009

223 Detection Algorithm 1. Let Work and Finish be vectors of length m and n, respectively Initialize: (a) Work = Available (b) For i = 1, 2, …, n, if Allocation i ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true 2. Find an index i such that both: (a) Finish[i] == false (b) Request i ≤ Work If no such i exists, go to step 4 7.37 Silberschatz, Galvin and Gagne 2009

224 Detection Algorithm (Cont.) 3. Work = Work + Allocation i Finish[i] = true go to step 2 4. If Finish[i] == false, for some i, 1 ≤ i ≤ n, then the system is in deadlock state. Moreover, if Finish[i] == false, then P i is deadlocked Algorithm requires an order of O(m × n²) operations to detect whether the system is in deadlocked state 7.38 Silberschatz, Galvin and Gagne 2009

225 Example of Detection Algorithm Five processes P 0 through P 4 ; three resource types A (7 instances), B (2 instances), and C (6 instances) Snapshot at time T 0 :
          Allocation   Request   Available
          A B C        A B C     A B C
    P 0   0 1 0        0 0 0     0 0 0
    P 1   2 0 0        2 0 2
    P 2   3 0 3        0 0 0
    P 3   2 1 1        1 0 0
    P 4   0 0 2        0 0 2
Sequence <P 0 , P 2 , P 3 , P 1 , P 4 > will result in Finish[i] = true for all i 7.39 Silberschatz, Galvin and Gagne 2009

226 Example (Cont.) P2 requests an additional instance of type C State of system?
     Request
     A B C
P0   0 0 0
P1   2 0 2
P2   0 0 1
P3   1 0 0
P4   0 0 2
Can reclaim resources held by process P0, but insufficient resources to fulfill other processes' requests Deadlock exists, consisting of processes P1, P2, P3, and P4 7.40 Silberschatz, Galvin and Gagne 2009

227 Detection-Algorithm Usage When, and how often, to invoke depends on: How often a deadlock is likely to occur? How many processes will need to be rolled back? one for each disjoint cycle If detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph and so we would not be able to tell which of the many deadlocked processes caused the deadlock 7.41 Silberschatz, Galvin and Gagne 2009

228 Recovery from Deadlock: Process Termination Abort all deadlocked processes Abort one process at a time until the deadlock cycle is eliminated In which order should we choose to abort? Priority of the process How long process has computed, and how much longer to completion Resources the process has used Resources process needs to complete How many processes will need to be terminated Is process interactive or batch? 7.42 Silberschatz, Galvin and Gagne 2009

229 Recovery from Deadlock: Resource Preemption Selecting a victim: minimize cost Rollback: return to some safe state, restart process from that state Starvation: same process may always be picked as victim; include number of rollbacks in cost factor 7.43 Silberschatz, Galvin and Gagne 2009

230 End of Chapter 7, Silberschatz, Galvin and Gagne 2009

231 Chapter 8: Main Memory, Silberschatz, Galvin and Gagne 2009

232 Chapter 8: Memory Management Background Swapping Contiguous Memory Allocation Paging Structure of the Page Table Segmentation Example: The Intel Pentium 8.2 Silberschatz, Galvin and Gagne 2009

233 Objectives To provide a detailed description of various ways of organizing memory hardware To discuss various memory-management techniques, including paging and segmentation To provide a detailed description of the Intel Pentium, which supports both pure segmentation and segmentation with paging 8.3 Silberschatz, Galvin and Gagne 2009

234 Background Program must be brought (from disk) into memory and placed within a process for it to be run Main memory and registers are only storage CPU can access directly Register access in one CPU clock (or less) Main memory can take many cycles Cache sits between main memory and CPU registers Protection of memory required to ensure correct operation 8.4 Silberschatz, Galvin and Gagne 2009

235 Base and Limit Registers A pair of base and limit registers define the logical address space 8.5 Silberschatz, Galvin and Gagne 2009

236 Binding of Instructions and Data to Memory Address binding of instructions and data to memory addresses can happen at three different stages Compile time: If memory location known a priori, absolute code can be generated; must recompile code if starting location changes Load time: Must generate relocatable code if memory location is not known at compile time Execution time: Binding delayed until run time if the process can be moved during its execution from one memory segment to another. Need hardware support for address maps (e.g., base and limit registers) 8.6 Silberschatz, Galvin and Gagne 2009

237 Multistep Processing of a User Program 8.7 Silberschatz, Galvin and Gagne 2009

238 Logical vs. Physical Address Space The concept of a logical address space that is bound to a separate physical address space is central to proper memory management Logical address generated by the CPU; also referred to as virtual address Physical address address seen by the memory unit Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in execution-time address-binding scheme 8.8 Silberschatz, Galvin and Gagne 2009

239 Memory-Management Unit (MMU) Hardware device that maps virtual to physical address In MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory The user program deals with logical addresses; it never sees the real physical addresses 8.9 Silberschatz, Galvin and Gagne 2009

240 Dynamic relocation using a relocation register 8.10 Silberschatz, Galvin and Gagne 2009

241 Dynamic Loading Routine is not loaded until it is called Better memory-space utilization; unused routine is never loaded Useful when large amounts of code are needed to handle infrequently occurring cases No special support from the operating system is required implemented through program design 8.11 Silberschatz, Galvin and Gagne 2009

242 Dynamic Linking Linking postponed until execution time Small piece of code, stub, used to locate the appropriate memory-resident library routine Stub replaces itself with the address of the routine, and executes the routine Operating system needed to check if routine is in the process's memory address space Dynamic linking is particularly useful for libraries System also known as shared libraries 8.12 Silberschatz, Galvin and Gagne 2009

243 Swapping A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution Backing store: fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images Roll out, roll in: swapping variant used for priority-based scheduling algorithms; lower-priority process is swapped out so higher-priority process can be loaded and executed Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows) System maintains a ready queue of ready-to-run processes which have memory images on disk 8.13 Silberschatz, Galvin and Gagne 2009

244 Schematic View of Swapping 8.14 Silberschatz, Galvin and Gagne 2009

245 Contiguous Allocation Main memory usually divided into two partitions: Resident operating system, usually held in low memory with interrupt vector User processes then held in high memory Relocation registers used to protect user processes from each other, and from changing operating-system code and data Base register contains value of smallest physical address Limit register contains range of logical addresses: each logical address must be less than the limit register MMU maps logical address dynamically 8.15 Silberschatz, Galvin and Gagne 2009

246 Hardware Support for Relocation and Limit Registers 8.16 Silberschatz, Galvin and Gagne 2009

247 Contiguous Allocation (Cont) Multiple-partition allocation Hole: block of available memory; holes of various size are scattered throughout memory When a process arrives, it is allocated memory from a hole large enough to accommodate it Operating system maintains information about: a) allocated partitions b) free partitions (hole) (figure: four snapshots of memory over time, showing holes opening and closing as processes 8, 9, and 10 arrive and depart around resident processes 2 and 5) 8.17 Silberschatz, Galvin and Gagne 2009

248 Dynamic Storage-Allocation Problem How to satisfy a request of size n from a list of free holes First-fit: Allocate the first hole that is big enough Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size Produces the smallest leftover hole Worst-fit: Allocate the largest hole; must also search entire list Produces the largest leftover hole First-fit and best-fit better than worst-fit in terms of speed and storage utilization 8.18 Silberschatz, Galvin and Gagne 2009
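The three hole-selection strategies can be compared with a small sketch. The hole list, request size, and function name are illustrative assumptions, not from the slides:

```python
def pick_hole(holes, n, strategy):
    """Return the index of the chosen free hole for a request of size n,
    or None if no hole is big enough."""
    candidates = [i for i, h in enumerate(holes) if h >= n]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0]                              # first hole big enough
    if strategy == "best":
        return min(candidates, key=lambda i: holes[i])    # smallest leftover hole
    return max(candidates, key=lambda i: holes[i])        # worst: largest hole
```

On holes [100, 500, 200, 300, 600] with a request of 212, first-fit picks the 500 hole, best-fit the 300 hole (smallest leftover), and worst-fit the 600 hole (largest leftover).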

249 Fragmentation External Fragmentation total memory space exists to satisfy a request, but it is not contiguous Internal Fragmentation allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used Reduce external fragmentation by compaction Shuffle memory contents to place all free memory together in one large block Compaction is possible only if relocation is dynamic, and is done at execution time I/O problem Latch job in memory while it is involved in I/O Do I/O only into OS buffers 8.19 Silberschatz, Galvin and Gagne 2009

250 Paging Logical address space of a process can be noncontiguous; process is allocated physical memory whenever the latter is available Divide physical memory into fixed-sized blocks called frames (size is power of 2, between 512 bytes and 8,192 bytes) Divide logical memory into blocks of same size called pages Keep track of all free frames To run a program of size n pages, need to find n free frames and load program Set up a page table to translate logical to physical addresses Internal fragmentation 8.20 Silberschatz, Galvin and Gagne 2009

251 Address Translation Scheme Address generated by CPU is divided into: Page number (p): used as an index into a page table which contains base address of each page in physical memory Page offset (d): combined with base address to define the physical memory address that is sent to the memory unit
    | page number p | page offset d |
    |   m − n bits  |    n bits     |
For given logical address space 2^m and page size 2^n 8.21 Silberschatz, Galvin and Gagne 2009
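The p/d split is just bit slicing. A minimal sketch, assuming 1 KB pages (n = 10) and a Python dictionary standing in for the page table:

```python
PAGE_SIZE_BITS = 10          # assumption: 1 KB pages, so n = 10

def split(logical_addr):
    """Split a logical address into (page number p, page offset d)."""
    p = logical_addr >> PAGE_SIZE_BITS
    d = logical_addr & ((1 << PAGE_SIZE_BITS) - 1)
    return p, d

def translate(logical_addr, page_table):
    """Look up the frame for page p and recombine it with the offset d."""
    p, d = split(logical_addr)
    frame = page_table[p]                    # index the page table with p
    return (frame << PAGE_SIZE_BITS) | d     # frame base address + offset
```

For example, if page 3 is mapped to frame 7, logical address 3·1024 + 5 translates to physical address 7·1024 + 5.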

252 Paging Hardware 8.22 Silberschatz, Galvin and Gagne 2009

253 Paging Model of Logical and Physical Memory 8.23 Silberschatz, Galvin and Gagne 2009

254 Paging Example 32-byte memory and 4-byte pages 8.24 Silberschatz, Galvin and Gagne 2009

255 Free Frames Before allocation After allocation 8.25 Silberschatz, Galvin and Gagne 2009

256 Implementation of Page Table Page table is kept in main memory Page-table base register (PTBR) points to the page table Page-table length register (PTLR) indicates size of the page table In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory or translation look-aside buffers (TLBs) Some TLBs store address-space identifiers (ASIDs) in each TLB entry; an ASID uniquely identifies each process to provide address-space protection for that process 8.26 Silberschatz, Galvin and Gagne 2009

257 Associative Memory Associative memory parallel search Page # Frame # Address translation (p, d) If p is in associative register, get frame # out Otherwise get frame # from page table in memory 8.27 Silberschatz, Galvin and Gagne 2009

258 Paging Hardware With TLB 8.28 Silberschatz, Galvin and Gagne 2009

259 Effective Access Time Associative lookup = ε time units Assume memory cycle time is 1 microsecond Hit ratio: percentage of times that a page number is found in the associative registers; ratio related to number of associative registers Hit ratio = α Effective Access Time (EAT): EAT = (1 + ε)α + (2 + ε)(1 − α) = 2 + ε − α 8.29 Silberschatz, Galvin and Gagne 2009
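Plugging numbers into the EAT formula is a one-liner. A sketch where ε is the TLB lookup time and α the hit ratio, with the memory cycle taken as 1 time unit as on the slide:

```python
def eat(epsilon, alpha):
    """Effective access time with a TLB: a hit costs one memory access plus
    the lookup (1 + e); a miss costs the lookup plus two accesses (2 + e)."""
    return (1 + epsilon) * alpha + (2 + epsilon) * (1 - alpha)
```

For example, ε = 0.2 and α = 0.8 give EAT = 2 + 0.2 − 0.8 = 1.4 time units, agreeing with the simplified form 2 + ε − α.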

260 Memory Protection Memory protection implemented by associating protection bit with each frame Valid-invalid bit attached to each entry in the page table: valid indicates that the associated page is in the process's logical address space, and is thus a legal page invalid indicates that the page is not in the process's logical address space 8.30 Silberschatz, Galvin and Gagne 2009

261 Valid (v) or Invalid (i) Bit In A Page Table 8.31 Silberschatz, Galvin and Gagne 2009

262 Shared Pages Shared code One copy of read-only (reentrant) code shared among processes (i.e., text editors, compilers, window systems). Shared code must appear in same location in the logical address space of all processes Private code and data Each process keeps a separate copy of the code and data The pages for the private code and data can appear anywhere in the logical address space 8.32 Silberschatz, Galvin and Gagne 2009

263 Shared Pages Example 8.33 Silberschatz, Galvin and Gagne 2009

264 Structure of the Page Table Hierarchical Paging Hashed Page Tables Inverted Page Tables 8.34 Silberschatz, Galvin and Gagne 2009

265 Hierarchical Page Tables Break up the logical address space into multiple page tables A simple technique is a two-level page table 8.35 Silberschatz, Galvin and Gagne 2009

266 Two-Level Page-Table Scheme 8.36 Silberschatz, Galvin and Gagne 2009

267 Two-Level Paging Example A logical address (on 32-bit machine with 1K page size) is divided into: a page number consisting of 22 bits a page offset consisting of 10 bits Since the page table is paged, the page number is further divided into: a 12-bit outer page number (p1) a 10-bit displacement within the outer page (p2) Thus, a logical address is as follows: | p1 (12 bits) | p2 (10 bits) | d (10 bits) | where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table 8.37 Silberschatz, Galvin and Gagne 2009

268 Address-Translation Scheme 8.38 Silberschatz, Galvin and Gagne 2009

269 Three-level Paging Scheme 8.39 Silberschatz, Galvin and Gagne 2009

270 Hashed Page Tables Common in address spaces > 32 bits The virtual page number is hashed into a page table This page table contains a chain of elements hashing to the same location Virtual page numbers are compared in this chain searching for a match If a match is found, the corresponding physical frame is extracted 8.40 Silberschatz, Galvin and Gagne 2009

271 Hashed Page Table 8.41 Silberschatz, Galvin and Gagne 2009

272 Inverted Page Table One entry for each real page of memory Entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page Decreases memory needed to store each page table, but increases time needed to search the table when a page reference occurs Use hash table to limit the search to one or at most a few page-table entries 8.42 Silberschatz, Galvin and Gagne 2009

273 Inverted Page Table Architecture 8.43 Silberschatz, Galvin and Gagne 2009

274 Segmentation Memory-management scheme that supports user view of memory A program is a collection of segments A segment is a logical unit such as: main program procedure function method object local variables, global variables common block stack symbol table arrays 8.44 Silberschatz, Galvin and Gagne 2009

275 User s View of a Program 8.45 Silberschatz, Galvin and Gagne 2009

276 Logical View of Segmentation user space physical memory space 8.46 Silberschatz, Galvin and Gagne 2009

277 Segmentation Architecture Logical address consists of a two-tuple: <segment-number, offset> Segment table: maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has: base: contains the starting physical address where the segment resides in memory limit: specifies the length of the segment Segment-table base register (STBR) points to the segment table's location in memory Segment-table length register (STLR) indicates number of segments used by a program; segment number s is legal if s < STLR 8.47 Silberschatz, Galvin and Gagne 2009

278 Segmentation Architecture (Cont.) Protection With each entry in segment table associate: validation bit = 0 illegal segment read/write/execute privileges Protection bits associated with segments; code sharing occurs at segment level Since segments vary in length, memory allocation is a dynamic storage-allocation problem A segmentation example is shown in the following diagram 8.48 Silberschatz, Galvin and Gagne 2009

279 Segmentation Hardware 8.49 Silberschatz, Galvin and Gagne 2009

280 Example of Segmentation 8.50 Silberschatz, Galvin and Gagne 2009

281 Example: The Intel Pentium Supports both segmentation and segmentation with paging CPU generates logical address Given to segmentation unit Which produces linear addresses Linear address given to paging unit Which generates physical address in main memory Paging units form equivalent of MMU 8.51 Silberschatz, Galvin and Gagne 2009

282 Logical to Physical Address Translation in Pentium 8.52 Silberschatz, Galvin and Gagne 2009

283 Intel Pentium Segmentation 8.53 Silberschatz, Galvin and Gagne 2009

284 Pentium Paging Architecture 8.54 Silberschatz, Galvin and Gagne 2009

285 Linear Address in Linux Broken into four parts: 8.55 Silberschatz, Galvin and Gagne 2009

286 Three-level Paging in Linux 8.56 Silberschatz, Galvin and Gagne 2009

287 End of Chapter 8, Silberschatz, Galvin and Gagne 2009

288 Chapter 9: Virtual Memory, Silberschatz, Galvin and Gagne 2009

289 Chapter 9: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating Kernel Memory Other Considerations Operating-System Examples 9.2 Silberschatz, Galvin and Gagne 2009

290 Objectives To describe the benefits of a virtual memory system To explain the concepts of demand paging, page-replacement algorithms, and allocation of page frames To discuss the principle of the working-set model 9.3 Silberschatz, Galvin and Gagne 2009

291 Background Virtual memory separation of user logical memory from physical memory. Only part of the program needs to be in memory for execution Logical address space can therefore be much larger than physical address space Allows address spaces to be shared by several processes Allows for more efficient process creation Virtual memory can be implemented via: Demand paging Demand segmentation 9.4 Silberschatz, Galvin and Gagne 2009

292 Virtual Memory That is Larger Than Physical Memory 9.5 Silberschatz, Galvin and Gagne 2009

293 Virtual-address Space 9.6 Silberschatz, Galvin and Gagne 2009

294 Shared Library Using Virtual Memory 9.7 Silberschatz, Galvin and Gagne 2009

295 Demand Paging Bring a page into memory only when it is needed Less I/O needed Less memory needed Faster response More users Page is needed ⇒ reference to it: invalid reference ⇒ abort; not-in-memory ⇒ bring to memory Lazy swapper: never swaps a page into memory unless page will be needed Swapper that deals with pages is a pager 9.8 Silberschatz, Galvin and Gagne 2009

296 Transfer of a Paged Memory to Contiguous Disk Space 9.9 Silberschatz, Galvin and Gagne 2009

297 Valid-Invalid Bit With each page table entry a valid-invalid bit is associated (v ⇒ in-memory, i ⇒ not-in-memory) Initially valid-invalid bit is set to i on all entries Example of a page table snapshot: each entry holds a frame # plus the valid-invalid bit, e.g. v, v, v, v, i, ..., i, i During address translation, if valid-invalid bit in page table entry is i ⇒ page fault 9.10 Silberschatz, Galvin and Gagne 2009

298 Page Table When Some Pages Are Not in Main Memory 9.11 Silberschatz, Galvin and Gagne 2009

299 Page Fault If there is a reference to a page, first reference to that page will trap to operating system: page fault 1. Operating system looks at another table to decide: Invalid reference abort Just not in memory 2. Get empty frame 3. Swap page into frame 4. Reset tables 5. Set validation bit = v 6. Restart the instruction that caused the page fault 9.12 Silberschatz, Galvin and Gagne 2009

300 Page Fault (Cont.) Restart instruction block move auto increment/decrement location 9.13 Silberschatz, Galvin and Gagne 2009

301 Steps in Handling a Page Fault 9.14 Silberschatz, Galvin and Gagne 2009

302 Performance of Demand Paging Page Fault Rate 0 ≤ p ≤ 1.0 if p = 0, no page faults if p = 1, every reference is a fault Effective Access Time (EAT): EAT = (1 − p) × memory access + p × (page fault overhead + swap page out + swap page in + restart overhead) 9.15 Silberschatz, Galvin and Gagne 2009

303 Demand Paging Example Memory access time = 200 nanoseconds Average page-fault service time = 8 milliseconds EAT = (1 − p) × 200 + p × 8,000,000 = 200 + p × 7,999,800 (nanoseconds) If one access out of 1,000 causes a page fault, then EAT = 8.2 microseconds. This is a slowdown by a factor of 40!! 9.16 Silberschatz, Galvin and Gagne 2009
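The same calculation as a sketch, using the slide's 200 ns memory access and 8 ms fault-service time as default parameters (the function name is mine):

```python
def eat_ns(p, mem_ns=200, fault_ns=8_000_000):
    """Effective access time in ns: (1 - p) * memory access
    + p * page-fault service time."""
    return (1 - p) * mem_ns + p * fault_ns
```

eat_ns(0.001), one fault per 1,000 accesses, gives 8,199.8 ns ≈ 8.2 µs: roughly the 40× slowdown quoted on the slide.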

304 Process Creation Virtual memory allows other benefits during process creation: - Copy-on-Write - Memory-Mapped Files (later) 9.17 Silberschatz, Galvin and Gagne 2009

305 Copy-on-Write Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory If either process modifies a shared page, only then is the page copied COW allows more efficient process creation as only modified pages are copied Free pages are allocated from a pool of zeroed-out pages 9.18 Silberschatz, Galvin and Gagne 2009

306 Before Process 1 Modifies Page C 9.19 Silberschatz, Galvin and Gagne 2009

307 After Process 1 Modifies Page C 9.20 Silberschatz, Galvin and Gagne 2009

308 What happens if there is no free frame? Page replacement find some page in memory, but not really in use, swap it out algorithm performance want an algorithm which will result in minimum number of page faults Same page may be brought into memory several times 9.21 Silberschatz, Galvin and Gagne 2009

309 Page Replacement Prevent over-allocation of memory by modifying page-fault service routine to include page replacement Use modify (dirty) bit to reduce overhead of page transfers only modified pages are written to disk Page replacement completes separation between logical memory and physical memory large virtual memory can be provided on a smaller physical memory 9.22 Silberschatz, Galvin and Gagne 2009

310 Need For Page Replacement 9.23 Silberschatz, Galvin and Gagne 2009

311 Basic Page Replacement 1. Find the location of the desired page on disk 2. Find a free frame: - If there is a free frame, use it - If there is no free frame, use a page replacement algorithm to select a victim frame 3. Bring the desired page into the (newly) free frame; update the page and frame tables 4. Restart the process 9.24 Silberschatz, Galvin and Gagne 2009

312 Page Replacement 9.25 Silberschatz, Galvin and Gagne 2009

313 Page Replacement Algorithms Want lowest page-fault rate Evaluate algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string In all our examples, the reference string is 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 9.26 Silberschatz, Galvin and Gagne 2009

314 Graph of Page Faults Versus The Number of Frames 9.27 Silberschatz, Galvin and Gagne 2009

315 First-In-First-Out (FIFO) Algorithm Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 3 frames (3 pages can be in memory at a time per process): 9 page faults 4 frames: 10 page faults Belady's Anomaly: more frames ⇒ more page faults 9.28 Silberschatz, Galvin and Gagne 2009
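A small FIFO simulator reproduces Belady's anomaly on the slide's reference string (the simulator itself is an illustrative sketch):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement with nframes frames."""
    frames, queue, faults = set(), deque(), 0
    for r in refs:
        if r not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.remove(queue.popleft())   # evict the oldest page
            frames.add(r)
            queue.append(r)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```

fifo_faults(refs, 3) returns 9 while fifo_faults(refs, 4) returns 10: giving the process an extra frame increases the fault count.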

316 FIFO Page Replacement 9.29 Silberschatz, Galvin and Gagne 2009

317 FIFO Illustrating Belady s Anomaly 9.30 Silberschatz, Galvin and Gagne 2009

318 Optimal Algorithm Replace page that will not be used for longest period of time 4 frames example: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: 6 page faults How do you know this? Used for measuring how well your algorithm performs 9.31 Silberschatz, Galvin and Gagne 2009

319 Optimal Page Replacement 9.32 Silberschatz, Galvin and Gagne 2009

320 Least Recently Used (LRU) Algorithm Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 Counter implementation Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter When a page needs to be changed, look at the counters to determine which to change 9.33 Silberschatz, Galvin and Gagne 2009
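LRU can be sketched with a list kept in recency order, a simple stand-in for the counter and stack implementations described on the slides:

```python
def lru_faults(refs, nframes):
    """Count page faults for LRU replacement with nframes frames."""
    frames, faults = [], 0        # ordered from least to most recently used
    for r in refs:
        if r in frames:
            frames.remove(r)      # will be re-appended as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)     # evict the least recently used page
        frames.append(r)
    return faults
```

On the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 4 frames, this yields 8 faults.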

321 LRU Page Replacement 9.34 Silberschatz, Galvin and Gagne 2009

322 LRU Algorithm (Cont.) Stack implementation: keep a stack of page numbers in a doubly linked form: Page referenced: move it to the top; requires 6 pointers to be changed No search for replacement 9.35 Silberschatz, Galvin and Gagne 2009

323 Use Of A Stack to Record The Most Recent Page References 9.36 Silberschatz, Galvin and Gagne 2009

324 LRU Approximation Algorithms Reference bit With each page associate a bit, initially = 0 When page is referenced bit set to 1 Replace the one which is 0 (if one exists) We do not know the order, however Second chance Need reference bit Clock replacement If page to be replaced (in clock order) has reference bit = 1 then: set reference bit 0 leave page in memory replace next page (in clock order), subject to same rules 9.37 Silberschatz, Galvin and Gagne 2009
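One eviction step of the second-chance (clock) scheme above can be sketched as follows; the arrays and function name are illustrative, not from the slides:

```python
def clock_evict(frames, ref_bits, hand):
    """Advance the clock hand until a page with reference bit 0 is found.
    Pages with bit 1 get a second chance (bit cleared, page kept).
    Returns (victim frame index, new hand position)."""
    while True:
        if ref_bits[hand] == 0:
            victim = hand
            return victim, (hand + 1) % len(frames)
        ref_bits[hand] = 0                     # second chance: clear the bit
        hand = (hand + 1) % len(frames)
```

With reference bits [1, 1, 0] and the hand at index 0, the first two pages get their second chance (bits cleared) and the page at index 2 becomes the victim.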

325 Second-Chance (clock) Page-Replacement Algorithm 9.38 Silberschatz, Galvin and Gagne 2009

326 Counting Algorithms Keep a counter of the number of references that have been made to each page LFU Algorithm: replaces page with smallest count MFU Algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used 9.39 Silberschatz, Galvin and Gagne 2009

327 Allocation of Frames Each process needs minimum number of pages Example: IBM 370 needs 6 pages to handle the SS MOVE instruction: instruction is 6 bytes, might span 2 pages 2 pages to handle from 2 pages to handle to Two major allocation schemes fixed allocation priority allocation 9.40 Silberschatz, Galvin and Gagne 2009

328 Fixed Allocation Equal allocation: For example, if there are 100 frames and 5 processes, give each process 20 frames. Proportional allocation: Allocate according to the size of the process: s_i = size of process p_i, S = Σ s_i, m = total number of frames, a_i = allocation for p_i = (s_i / S) × m Example: m = 64, s_1 = 10, s_2 = 127, a_1 = (10/137) × 64 ≈ 5, a_2 = (127/137) × 64 ≈ 59 9.41 Silberschatz, Galvin and Gagne 2009
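The proportional-allocation formula a_i = (s_i / S) × m as a sketch; rounding to the nearest whole frame is my assumption, since the slides leave rounding implicit:

```python
def proportional(sizes, m):
    """Allocate m frames to processes in proportion to their sizes:
    a_i = (s_i / S) * m, where S is the total size."""
    S = sum(sizes)
    return [round(s / S * m) for s in sizes]
```

For two processes of sizes 10 and 127 sharing 64 frames, this gives 5 and 59 frames respectively.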

329 Priority Allocation Use a proportional allocation scheme using priorities rather than size If process P i generates a page fault, select for replacement one of its frames select for replacement a frame from a process with lower priority number 9.42 Silberschatz, Galvin and Gagne 2009

330 Global vs. Local Allocation Global replacement process selects a replacement frame from the set of all frames; one process can take a frame from another Local replacement each process selects from only its own set of allocated frames 9.43 Silberschatz, Galvin and Gagne 2009

331 Thrashing If a process does not have enough pages, the page-fault rate is very high. This leads to: low CPU utilization operating system thinks that it needs to increase the degree of multiprogramming another process added to the system Thrashing a process is busy swapping pages in and out 9.44 Silberschatz, Galvin and Gagne 2009

332 Thrashing (Cont.) 9.45 Silberschatz, Galvin and Gagne 2009

333 Demand Paging and Thrashing Why does demand paging work? Locality model Process migrates from one locality to another Localities may overlap Why does thrashing occur? size of locality > total memory size 9.46 Silberschatz, Galvin and Gagne 2009

334 Locality In A Memory-Reference Pattern 9.47 Silberschatz, Galvin and Gagne 2009

335 Working-Set Model Δ ≡ working-set window ≡ a fixed number of page references Example: 10,000 instructions WSS_i (working set of Process P_i) = total number of pages referenced in the most recent Δ (varies in time): if Δ too small, will not encompass entire locality if Δ too large, will encompass several localities if Δ = ∞, will encompass entire program D = Σ WSS_i ≡ total demand frames if D > m ⇒ Thrashing Policy: if D > m, then suspend one of the processes 9.48 Silberschatz, Galvin and Gagne 2009
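The working set at time t is simply the set of distinct pages in the last Δ references. A minimal sketch (function name is illustrative):

```python
def working_set(refs, t, delta):
    """Pages referenced in the most recent delta references ending at time t."""
    return set(refs[max(0, t - delta + 1): t + 1])
```

Too small a Δ misses part of the current locality; Δ = ∞ would return every page the program has ever touched.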

336 Working-set model 9.49 Silberschatz, Galvin and Gagne 2009

337 Keeping Track of the Working Set Approximate with interval timer + a reference bit Example: Δ = 10,000 Timer interrupts after every 5000 time units Keep in memory 2 bits for each page Whenever a timer interrupts, copy and set the values of all reference bits to 0 If one of the bits in memory = 1 ⇒ page in working set Why is this not completely accurate? Improvement: 10 bits and interrupt every 1000 time units 9.50 Silberschatz, Galvin and Gagne 2009

338 Page-Fault Frequency Scheme Establish acceptable page-fault rate If actual rate too low, process loses frame If actual rate too high, process gains frame 9.51 Silberschatz, Galvin and Gagne 2009

339 Working Sets and Page Fault Rates 9.52 Silberschatz, Galvin and Gagne 2009

340 Memory-Mapped Files Memory-mapped file I/O allows file I/O to be treated as routine memory access by mapping a disk block to a page in memory A file is initially read using demand paging. A page-sized portion of the file is read from the file system into a physical page. Subsequent reads/writes to/from the file are treated as ordinary memory accesses. Simplifies file access by treating file I/O through memory rather than read() write() system calls Also allows several processes to map the same file allowing the pages in memory to be shared 9.53 Silberschatz, Galvin and Gagne 2009

341 Memory Mapped Files 9.54 Silberschatz, Galvin and Gagne 2009

342 Memory-Mapped Shared Memory in Windows 9.55 Silberschatz, Galvin and Gagne 2009

343 Allocating Kernel Memory Treated differently from user memory Often allocated from a free-memory pool Kernel requests memory for structures of varying sizes Some kernel memory needs to be contiguous 9.56 Silberschatz, Galvin and Gagne 2009

344 Buddy System Allocates memory from fixed-size segment consisting of physicallycontiguous pages Memory allocated using power-of-2 allocator Satisfies requests in units sized as power of 2 Request rounded up to next highest power of 2 When smaller allocation needed than is available, current chunk split into two buddies of next-lower power of 2 Continue until appropriate sized chunk available 9.57 Silberschatz, Galvin and Gagne 2009
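The buddy allocator's power-of-2 rounding can be sketched in a few lines (the function name is illustrative):

```python
def buddy_request_size(n):
    """Round a request up to the next power of two, as the buddy
    allocator's power-of-2 sizing requires."""
    size = 1
    while size < n:
        size *= 2
    return size
```

For example, a 21 KB request rounds up to a 32 KB chunk, which a 256 KB segment reaches by splitting 256 → 128 → 64 → 32.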

345 Buddy System Allocator 9.58 Silberschatz, Galvin and Gagne 2009

346 Slab Allocator Alternate strategy Slab is one or more physically contiguous pages Cache consists of one or more slabs Single cache for each unique kernel data structure Each cache filled with objects instantiations of the data structure When cache created, filled with objects marked as free When structures stored, objects marked as used If slab is full of used objects, next object allocated from empty slab If no empty slabs, new slab allocated Benefits include no fragmentation, fast memory request satisfaction 9.59 Silberschatz, Galvin and Gagne 2009

347 Slab Allocation 9.60 Silberschatz, Galvin and Gagne 2009

348 Other Issues -- Prepaging Prepaging To reduce the large number of page faults that occur at process startup Prepage all or some of the pages a process will need, before they are referenced But if prepaged pages are unused, I/O and memory were wasted Assume s pages are prepaged and a fraction α of them is used Is the cost of the s × α saved page faults greater or less than the cost of prepaging the s × (1 − α) unnecessary pages? If α is near zero, prepaging loses 9.61 Silberschatz, Galvin and Gagne 2009

349 Other Issues Page Size Page size selection must take into consideration: fragmentation table size I/O overhead locality 9.62 Silberschatz, Galvin and Gagne 2009

350 Other Issues TLB Reach TLB Reach - The amount of memory accessible from the TLB TLB Reach = (TLB Size) X (Page Size) Ideally, the working set of each process is stored in the TLB Otherwise there is a high degree of page faults Increase the Page Size This may lead to an increase in fragmentation as not all applications require a large page size Provide Multiple Page Sizes This allows applications that require larger page sizes the opportunity to use them without an increase in fragmentation 9.63 Silberschatz, Galvin and Gagne 2009

351 Other Issues -- Program Structure Program structure int data[128][128]; Each row is stored in one page Program 1: for (j = 0; j < 128; j++) for (i = 0; i < 128; i++) data[i][j] = 0; 128 x 128 = 16,384 page faults Program 2: for (i = 0; i < 128; i++) for (j = 0; j < 128; j++) data[i][j] = 0; 128 page faults 9.64 Silberschatz, Galvin and Gagne 2009
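The fault counts on the slide can be checked with a tiny model. Its assumptions: each of the 128 rows lives on its own page, and the process has so few frames that a page is always evicted before the loop returns to it, so every page switch is a fault:

```python
def faults(row_outer):
    """Count page switches for a 128x128 sweep of data[i][j], where row i
    lives on page i. row_outer=True sweeps row by row (Program 2 order);
    False sweeps column by column (Program 1 order)."""
    last_page, count = None, 0
    for a in range(128):
        for b in range(128):
            i = a if row_outer else b        # which row (hence page) is touched
            if i != last_page:
                count += 1                   # page switch = fault in this model
            last_page = i
    return count
```

Column-by-column traversal (Program 1) switches pages on every access, 16,384 times; row-by-row traversal (Program 2) switches only 128 times.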

352 Other Issues I/O interlock I/O Interlock Pages must sometimes be locked into memory Consider I/O - Pages that are used for copying a file from a device must be locked from being selected for eviction by a page replacement algorithm 9.65 Silberschatz, Galvin and Gagne 2009

353 Reason Why Frames Used For I/O Must Be In Memory 9.66 Silberschatz, Galvin and Gagne 2009

354 Operating System Examples Windows XP Solaris 9.67 Silberschatz, Galvin and Gagne 2009

355 Windows XP Uses demand paging with clustering. Clustering brings in pages surrounding the faulting page Processes are assigned working set minimum and working set maximum Working set minimum is the minimum number of pages the process is guaranteed to have in memory A process may be assigned as many pages up to its working set maximum When the amount of free memory in the system falls below a threshold, automatic working set trimming is performed to restore the amount of free memory Working set trimming removes pages from processes that have pages in excess of their working set minimum 9.68 Silberschatz, Galvin and Gagne 2009

356 Solaris Maintains a list of free pages to assign to faulting processes Lotsfree threshold parameter (amount of free memory) to begin paging Desfree threshold parameter to increase paging Minfree threshold parameter to begin swapping Paging is performed by the pageout process Pageout scans pages using a modified clock algorithm Scanrate is the rate at which pages are scanned. This ranges from slowscan to fastscan Pageout is called more frequently depending upon the amount of free memory available 9.69 Silberschatz, Galvin and Gagne 2009

357 Solaris 2 Page Scanner 9.70 Silberschatz, Galvin and Gagne 2009

358 End of Chapter 9, Silberschatz, Galvin and Gagne 2009

359 Chapter 10: File-System Interface, Silberschatz, Galvin and Gagne 2009

360 Chapter 10: File-System Interface File Concept Access Methods Directory Structure File-System Mounting File Sharing Protection 10.2 Silberschatz, Galvin and Gagne 2009

361 Objectives To explain the function of file systems To describe the interfaces to file systems To discuss file-system design tradeoffs, including access methods, file sharing, file locking, and directory structures To explore file-system protection 10.3 Silberschatz, Galvin and Gagne 2009

362 File Concept Contiguous logical address space Types: Data numeric character binary Program 10.4 Silberschatz, Galvin and Gagne 2009

363 File Structure None - sequence of words, bytes Simple record structure Lines Fixed length Variable length Complex Structures Formatted document Relocatable load file Can simulate last two with first method by inserting appropriate control characters Who decides: Operating system Program 10.5 Silberschatz, Galvin and Gagne 2009

364 File Attributes Name only information kept in human-readable form Identifier unique tag (number) identifies file within file system Type needed for systems that support different types Location pointer to file location on device Size current file size Protection controls who can do reading, writing, executing Time, date, and user identification data for protection, security, and usage monitoring Information about files are kept in the directory structure, which is maintained on the disk 10.6 Silberschatz, Galvin and Gagne 2009

365 File Operations File is an abstract data type Create Write Read Reposition within file Delete Truncate Open(Fi) search the directory structure on disk for entry Fi, and move the content of the entry to memory Close(Fi) move the content of entry Fi in memory to the directory structure on disk 10.7 Silberschatz, Galvin and Gagne 2009

366 Open Files Several pieces of data are needed to manage open files: File pointer: pointer to last read/write location, per process that has the file open File-open count: counter of number of times a file is open to allow removal of data from open-file table when the last process closes it Disk location of the file: cache of data access information Access rights: per-process access mode information 10.8 Silberschatz, Galvin and Gagne 2009

367 Open File Locking Provided by some operating systems and file systems Mediates access to a file Mandatory or advisory: Mandatory access is denied depending on locks held and requested Advisory processes can find status of locks and decide what to do 10.9 Silberschatz, Galvin and Gagne 2009

368 File Locking Example Java API
import java.io.*;
import java.nio.channels.*;
public class LockingExample {
    public static final boolean EXCLUSIVE = false;
    public static final boolean SHARED = true;
    public static void main(String args[]) throws IOException {
        FileLock sharedLock = null;
        FileLock exclusiveLock = null;
        try {
            RandomAccessFile raf = new RandomAccessFile("file.txt", "rw");
            // get the channel for the file
            FileChannel ch = raf.getChannel();
            // this locks the first half of the file - exclusive
            exclusiveLock = ch.lock(0, raf.length()/2, EXCLUSIVE);
            /** Now modify the data . . . */
            // release the lock
            exclusiveLock.release();
Silberschatz, Galvin and Gagne 2009

369 File Locking Example Java API (cont)
            // this locks the second half of the file - shared
            sharedLock = ch.lock(raf.length()/2 + 1, raf.length(), SHARED);
            /** Now read the data . . . */
            // release the lock
            sharedLock.release();
        } catch (java.io.IOException ioe) {
            System.err.println(ioe);
        } finally {
            if (exclusiveLock != null)
                exclusiveLock.release();
            if (sharedLock != null)
                sharedLock.release();
        }
    }
}
Silberschatz, Galvin and Gagne 2009

370 File Types Name, Extension Silberschatz, Galvin and Gagne 2009

371 Access Methods Sequential Access: read next; write next; reset; no read after last write (rewrite) Direct Access (n = relative block number): read n; write n; position to n then read next / write next; rewrite n Silberschatz, Galvin and Gagne 2009

372 Sequential-access File Silberschatz, Galvin and Gagne 2009

373 Simulation of Sequential Access on Direct-access File Silberschatz, Galvin and Gagne 2009

374 Example of Index and Relative Files Silberschatz, Galvin and Gagne 2009

375 Directory Structure A collection of nodes containing information about all files Directory Files F 1 F 2 F 3 F 4 F n Both the directory structure and the files reside on disk Backups of these two structures are kept on tapes Silberschatz, Galvin and Gagne 2009

376 Disk Structure Disk can be subdivided into partitions Disks or partitions can be RAID protected against failure Disk or partition can be used raw without a file system, or formatted with a file system Partitions also known as minidisks, slices Entity containing file system known as a volume Each volume containing file system also tracks that file system s info in device directory or volume table of contents As well as general-purpose file systems there are many special-purpose file systems, frequently all within the same operating system or computer Silberschatz, Galvin and Gagne 2009

377 A Typical File-system Organization Silberschatz, Galvin and Gagne 2009

378 Operations Performed on Directory Search for a file Create a file Delete a file List a directory Rename a file Traverse the file system Silberschatz, Galvin and Gagne 2009

379 Organize the Directory (Logically) to Obtain Efficiency - locating a file quickly Naming - convenient to users Two users can have same name for different files The same file can have several different names Grouping - logical grouping of files by properties, (e.g., all Java programs, all games, ) Silberschatz, Galvin and Gagne 2009

380 Single-Level Directory A single directory for all users Naming problem Grouping problem Silberschatz, Galvin and Gagne 2009

381 Two-Level Directory Separate directory for each user Path name Can have the same file name for different users Efficient searching No grouping capability Silberschatz, Galvin and Gagne 2009

382 Tree-Structured Directories Silberschatz, Galvin and Gagne 2009

383 Tree-Structured Directories (Cont) Efficient searching Grouping Capability Current directory (working directory) cd /spell/mail/prog type list Silberschatz, Galvin and Gagne 2009

384 Tree-Structured Directories (Cont) Absolute or relative path name Creating a new file is done in current directory Delete a file rm <file-name> Creating a new subdirectory is done in current directory mkdir <dir-name> Example: if in current directory /mail mkdir count (the subtree mail now contains prog, copy, prt, exp, count) Deleting mail means deleting the entire subtree rooted by mail Silberschatz, Galvin and Gagne 2009

385 Acyclic-Graph Directories Have shared subdirectories and files Silberschatz, Galvin and Gagne 2009

386 Acyclic-Graph Directories (Cont.) Two different names (aliasing) If dict deletes list, a dangling pointer results Solutions: Backpointers, so we can delete all pointers Variable size records a problem Backpointers using a daisy chain organization Entry-hold-count solution New directory entry type Link another name (pointer) to an existing file Resolve the link follow pointer to locate the file Silberschatz, Galvin and Gagne 2009

387 General Graph Directory Silberschatz, Galvin and Gagne 2009

388 General Graph Directory (Cont.) How do we guarantee no cycles? Allow only links to file not subdirectories Garbage collection Every time a new link is added use a cycle detection algorithm to determine whether it is OK Silberschatz, Galvin and Gagne 2009

389 File System Mounting A file system must be mounted before it can be accessed An unmounted file system (i.e. Fig (b)) is mounted at a mount point Silberschatz, Galvin and Gagne 2009

390 (a) Existing. (b) Unmounted Partition Silberschatz, Galvin and Gagne 2009

391 Mount Point Silberschatz, Galvin and Gagne 2009

392 File Sharing Sharing of files on multi-user systems is desirable Sharing may be done through a protection scheme On distributed systems, files may be shared across a network Network File System (NFS) is a common distributed file-sharing method Silberschatz, Galvin and Gagne 2009

393 File Sharing Multiple Users User IDs identify users, allowing permissions and protections to be per-user Group IDs allow users to be in groups, permitting group access rights Silberschatz, Galvin and Gagne 2009

394 File Sharing Remote File Systems Uses networking to allow file system access between systems Manually via programs like FTP Automatically, seamlessly using distributed file systems Semi-automatically via the World Wide Web Client-server model allows clients to mount remote file systems from servers Server can serve multiple clients Client and user-on-client identification is insecure or complicated NFS is standard UNIX client-server file sharing protocol CIFS is standard Windows protocol Standard operating system file calls are translated into remote calls Distributed Information Systems (distributed naming services) such as LDAP, DNS, NIS, Active Directory implement unified access to information needed for remote computing Silberschatz, Galvin and Gagne 2009

395 File Sharing Failure Modes Remote file systems add new failure modes, due to network failure, server failure Recovery from failure can involve state information about status of each remote request Stateless protocols such as NFS include all information in each request, allowing easy recovery but less security Silberschatz, Galvin and Gagne 2009

396 File Sharing Consistency Semantics Consistency semantics specify how multiple users are to access a shared file simultaneously Similar to Ch 7 process synchronization algorithms Tend to be less complex due to disk I/O and network latency (for remote file systems) Andrew File System (AFS) implemented complex remote file sharing semantics Unix file system (UFS) implements: Writes to an open file visible immediately to other users of the same open file Sharing file pointer to allow multiple users to read and write concurrently AFS has session semantics Writes only visible to sessions starting after the file is closed Silberschatz, Galvin and Gagne 2009

397 Protection File owner/creator should be able to control: what can be done by whom Types of access Read Write Execute Append Delete List Silberschatz, Galvin and Gagne 2009

398 Access Lists and Groups Mode of access: read, write, execute Three classes of users: a) owner access 7 = RWX = 111 b) group access 6 = RWX = 110 c) public access 1 = RWX = 001 Ask manager to create a group (unique name), say G, and add some users to the group. For a particular file (say game) or subdirectory, define an appropriate access per owner, group, public: chmod 761 game Attach a group to a file: chgrp G game Silberschatz, Galvin and Gagne 2009

399 Windows XP Access-control List Management Silberschatz, Galvin and Gagne 2009

400 A Sample UNIX Directory Listing Silberschatz, Galvin and Gagne 2009

401 End of Chapter 10, Silberschatz, Galvin and Gagne 2009

402 Chapter 11: File System Implementation, Silberschatz, Galvin and Gagne 2009

403 Chapter 11: File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management Efficiency and Performance Recovery Log-Structured File Systems NFS Example: WAFL File System 11.2 Silberschatz, Galvin and Gagne 2009

404 Objectives To describe the details of implementing local file systems and directory structures To describe the implementation of remote file systems To discuss block allocation and free-block algorithms and trade-offs 11.3 Silberschatz, Galvin and Gagne 2009

405 File-System Structure File structure Logical storage unit Collection of related information File system resides on secondary storage (disks) File system organized into layers File control block storage structure consisting of information about a file 11.4 Silberschatz, Galvin and Gagne 2009

406 Layered File System 11.5 Silberschatz, Galvin and Gagne 2009

407 A Typical File Control Block 11.6 Silberschatz, Galvin and Gagne 2009

408 In-Memory File System Structures The following figure illustrates the necessary file system structures provided by the operating systems. Figure 12-3(a) refers to opening a file. Figure 12-3(b) refers to reading a file Silberschatz, Galvin and Gagne 2009

409 In-Memory File System Structures 11.8 Silberschatz, Galvin and Gagne 2009

410 Virtual File Systems Virtual File Systems (VFS) provide an object-oriented way of implementing file systems. VFS allows the same system call interface (the API) to be used for different types of file systems. The API is to the VFS interface, rather than any specific type of file system Silberschatz, Galvin and Gagne 2009

411 Schematic View of Virtual File System Silberschatz, Galvin and Gagne 2009

412 Directory Implementation Linear list of file names with pointer to the data blocks. simple to program time-consuming to execute Hash Table linear list with hash data structure. decreases directory search time collisions situations where two file names hash to the same location fixed size Silberschatz, Galvin and Gagne 2009

413 Allocation Methods An allocation method refers to how disk blocks are allocated for files: Contiguous allocation Linked allocation Indexed allocation Silberschatz, Galvin and Gagne 2009

414 Contiguous Allocation Each file occupies a set of contiguous blocks on the disk Simple only starting location (block #) and length (number of blocks) are required Random access Wasteful of space (dynamic storage-allocation problem) Files cannot grow Silberschatz, Galvin and Gagne 2009

415 Contiguous Allocation Mapping from logical address LA to physical (512-word blocks): LA / 512 yields quotient Q and remainder R Block to be accessed = Q + starting address Displacement into block = R Silberschatz, Galvin and Gagne 2009

416 Contiguous Allocation of Disk Space Silberschatz, Galvin and Gagne 2009

417 Extent-Based Systems Many newer file systems (e.g., Veritas File System) use a modified contiguous allocation scheme Extent-based file systems allocate disk blocks in extents An extent is a contiguous region of disk blocks Extents are allocated for file allocation A file consists of one or more extents Silberschatz, Galvin and Gagne 2009

418 Linked Allocation Each file is a linked list of disk blocks: blocks may be scattered anywhere on the disk. block = pointer Silberschatz, Galvin and Gagne 2009

419 Linked Allocation (Cont.) Simple need only starting address Free-space management system no waste of space No random access Mapping (each 512-word block holds 511 data words plus a 1-word pointer to the next block): LA / 511 yields quotient Q and remainder R Block to be accessed is the Qth block in the linked chain of blocks representing the file Displacement into block = R + 1 (skipping the pointer word) File-allocation table (FAT) disk-space allocation used by MS-DOS and OS/2 Silberschatz, Galvin and Gagne 2009

420 Linked Allocation Silberschatz, Galvin and Gagne 2009

421 File-Allocation Table Silberschatz, Galvin and Gagne 2009

422 Indexed Allocation Brings all pointers together into the index block. Logical view. index table Silberschatz, Galvin and Gagne 2009

423 Example of Indexed Allocation Silberschatz, Galvin and Gagne 2009

424 Indexed Allocation (Cont.) Need index table Random access Dynamic access without external fragmentation, but have overhead of index block Mapping from logical to physical in a file of maximum size 256K words and block size 512 words: we need only 1 block for the index table LA / 512 yields quotient Q and remainder R Q = displacement into index table R = displacement into block Silberschatz, Galvin and Gagne 2009

425 Indexed Allocation Mapping (Cont.) Mapping from logical to physical in a file of unbounded length (block size of 512 words) Linked scheme: link blocks of index table (no limit on size) LA / (512 x 511) yields Q1 and R1 Q1 = block of index table R1 is used as follows: R1 / 512 yields Q2 and R2 Q2 = displacement into block of index table R2 = displacement into block of file Silberschatz, Galvin and Gagne 2009

426 Indexed Allocation Mapping (Cont.) Two-level index (maximum file size is 512^3 words) LA / (512 x 512) yields Q1 and R1 Q1 = displacement into outer-index R1 is used as follows: R1 / 512 yields Q2 and R2 Q2 = displacement into block of index table R2 = displacement into block of file Silberschatz, Galvin and Gagne 2009

427 Indexed Allocation Mapping (Cont.) outer-index index table file Silberschatz, Galvin and Gagne 2009

428 Combined Scheme: UNIX (4K bytes per block) Silberschatz, Galvin and Gagne 2009

429 Free-Space Management Bit vector (n blocks, numbered 0 .. n-1): bit[i] = 1 means block[i] free; bit[i] = 0 means block[i] occupied Block number calculation for the first free block: (number of bits per word) * (number of 0-value words) + offset of first 1 bit Silberschatz, Galvin and Gagne 2009

430 Free-Space Management (Cont.) Bit map requires extra space Example: block size = 2^12 bytes, disk size = 2^30 bytes (1 gigabyte), n = 2^30 / 2^12 = 2^18 bits (or 32K bytes) Easy to get contiguous files Linked list (free list) Cannot get contiguous space easily No waste of space Grouping Counting Silberschatz, Galvin and Gagne 2009

431 Free-Space Management (Cont.) Need to protect: Pointer to free list Bit map Must be kept on disk Copy in memory and disk may differ Cannot allow for block[i] to have a situation where bit[i] = 1 in memory and bit[i] = 0 on disk Solution: Set bit[i] = 1 in disk Allocate block[i] Set bit[i] = 1 in memory Silberschatz, Galvin and Gagne 2009

432 Directory Implementation Linear list of file names with pointer to the data blocks simple to program time-consuming to execute Hash Table linear list with hash data structure decreases directory search time collisions situations where two file names hash to the same location fixed size Silberschatz, Galvin and Gagne 2009

433 Linked Free Space List on Disk Silberschatz, Galvin and Gagne 2009

434 Efficiency and Performance Efficiency dependent on: disk allocation and directory algorithms types of data kept in file s directory entry Performance disk cache separate section of main memory for frequently used blocks free-behind and read-ahead techniques to optimize sequential access improve PC performance by dedicating section of memory as virtual disk, or RAM disk Silberschatz, Galvin and Gagne 2009

435 Page Cache A page cache caches pages rather than disk blocks using virtual memory techniques Memory-mapped I/O uses a page cache Routine I/O through the file system uses the buffer (disk) cache This leads to the following figure Silberschatz, Galvin and Gagne 2009

436 I/O Without a Unified Buffer Cache Silberschatz, Galvin and Gagne 2009

437 Unified Buffer Cache A unified buffer cache uses the same page cache to cache both memory-mapped pages and ordinary file system I/O Silberschatz, Galvin and Gagne 2009

438 I/O Using a Unified Buffer Cache Silberschatz, Galvin and Gagne 2009

439 Recovery Consistency checking compares data in directory structure with data blocks on disk, and tries to fix inconsistencies Use system programs to back up data from disk to another storage device (floppy disk, magnetic tape, other magnetic disk, optical) Recover lost file or disk by restoring data from backup Silberschatz, Galvin and Gagne 2009

440 Log Structured File Systems Log structured (or journaling) file systems record each update to the file system as a transaction All transactions are written to a log A transaction is considered committed once it is written to the log However, the file system may not yet be updated The transactions in the log are asynchronously written to the file system When the file system is modified, the transaction is removed from the log If the file system crashes, all remaining transactions in the log must still be performed Silberschatz, Galvin and Gagne 2009

441 The Sun Network File System (NFS) An implementation and a specification of a software system for accessing remote files across LANs (or WANs) The implementation is part of the Solaris and SunOS operating systems running on Sun workstations using an unreliable datagram protocol (UDP/IP) and Ethernet Silberschatz, Galvin and Gagne 2009

442 NFS (Cont.) Interconnected workstations viewed as a set of independent machines with independent file systems, which allows sharing among these file systems in a transparent manner A remote directory is mounted over a local file system directory The mounted directory looks like an integral subtree of the local file system, replacing the subtree descending from the local directory Specification of the remote directory for the mount operation is nontransparent; the host name of the remote directory has to be provided Files in the remote directory can then be accessed in a transparent manner Subject to access-rights accreditation, potentially any file system (or directory within a file system), can be mounted remotely on top of any local directory Silberschatz, Galvin and Gagne 2009

443 NFS (Cont.) NFS is designed to operate in a heterogeneous environment of different machines, operating systems, and network architectures; the NFS specification is independent of these media This independence is achieved through the use of RPC primitives built on top of an External Data Representation (XDR) protocol used between two implementation-independent interfaces The NFS specification distinguishes between the services provided by a mount mechanism and the actual remote-file-access services Silberschatz, Galvin and Gagne 2009

444 Three Independent File Systems Silberschatz, Galvin and Gagne 2009

445 Mounting in NFS Mounts Cascading mounts Silberschatz, Galvin and Gagne 2009

446 NFS Mount Protocol Establishes initial logical connection between server and client Mount operation includes name of remote directory to be mounted and name of server machine storing it Mount request is mapped to corresponding RPC and forwarded to mount server running on server machine Export list specifies local file systems that server exports for mounting, along with names of machines that are permitted to mount them Following a mount request that conforms to its export list, the server returns a file handle a key for further accesses File handle a file-system identifier, and an inode number to identify the mounted directory within the exported file system The mount operation changes only the user s view and does not affect the server side Silberschatz, Galvin and Gagne 2009

447 NFS Protocol Provides a set of remote procedure calls for remote file operations. The procedures support the following operations: searching for a file within a directory reading a set of directory entries manipulating links and directories accessing file attributes reading and writing files NFS servers are stateless; each request has to provide a full set of arguments (NFS V4 is just becoming available; very different, stateful) Modified data must be committed to the server's disk before results are returned to the client (lose advantages of caching) The NFS protocol does not provide concurrency-control mechanisms Silberschatz, Galvin and Gagne 2009

448 Three Major Layers of NFS Architecture UNIX file-system interface (based on the open, read, write, and close calls, and file descriptors) Virtual File System (VFS) layer distinguishes local files from remote ones, and local files are further distinguished according to their file-system types The VFS activates file-system-specific operations to handle local requests according to their file-system types Calls the NFS protocol procedures for remote requests NFS service layer bottom layer of the architecture Implements the NFS protocol Silberschatz, Galvin and Gagne 2009

449 Schematic View of NFS Architecture Silberschatz, Galvin and Gagne 2009

450 NFS Path-Name Translation Performed by breaking the path into component names and performing a separate NFS lookup call for every pair of component name and directory vnode To make lookup faster, a directory name lookup cache on the client s side holds the vnodes for remote directory names Silberschatz, Galvin and Gagne 2009

451 NFS Remote Operations Nearly one-to-one correspondence between regular UNIX system calls and the NFS protocol RPCs (except opening and closing files) NFS adheres to the remote-service paradigm, but employs buffering and caching techniques for the sake of performance File-blocks cache when a file is opened, the kernel checks with the remote server whether to fetch or revalidate the cached attributes Cached file blocks are used only if the corresponding cached attributes are up to date File-attribute cache the attribute cache is updated whenever new attributes arrive from the server Clients do not free delayed-write blocks until the server confirms that the data have been written to disk Silberschatz, Galvin and Gagne 2009

452 Example: WAFL File System Used on Network Appliance Filers distributed file system appliances Write-anywhere file layout Serves up NFS, CIFS, http, ftp Random I/O optimized, write optimized NVRAM for write caching Similar to Berkeley Fast File System, with extensive modifications Silberschatz, Galvin and Gagne 2009

453 The WAFL File Layout Silberschatz, Galvin and Gagne 2009

454 Snapshots in WAFL Silberschatz, Galvin and Gagne 2009

455 Silberschatz, Galvin and Gagne 2009

456 End of Chapter 11, Silberschatz, Galvin and Gagne 2009

457 Chapter 15: Security, Silberschatz, Galvin and Gagne 2009

458 Chapter 15: Security The Security Problem Program Threats System and Network Threats Cryptography as a Security Tool User Authentication Implementing Security Defenses Firewalling to Protect Systems and Networks Computer-Security Classifications An Example: Windows XP 15.2 Silberschatz, Galvin and Gagne 2009

459 Objectives To discuss security threats and attacks To explain the fundamentals of encryption, authentication, and hashing To examine the uses of cryptography in computing To describe the various countermeasures to security attacks 15.3 Silberschatz, Galvin and Gagne 2009

460 The Security Problem Security must consider external environment of the system, and protect the system resources Intruders (crackers) attempt to breach security Threat is potential security violation Attack is attempt to breach security Attack can be accidental or malicious Easier to protect against accidental than malicious misuse 15.4 Silberschatz, Galvin and Gagne 2009

461 Security Violations Categories Breach of confidentiality Breach of integrity Breach of availability Theft of service Denial of service Methods Masquerading (breach authentication) Replay attack Message modification Man-in-the-middle attack Session hijacking 15.5 Silberschatz, Galvin and Gagne 2009

462 Standard Security Attacks 15.6 Silberschatz, Galvin and Gagne 2009

463 Security Measure Levels Security must occur at four levels to be effective: Physical Human Avoid social engineering, phishing, dumpster diving Operating System Network Security is as weak as the weakest link 15.7 Silberschatz, Galvin and Gagne 2009

464 Program Threats Trojan Horse Code segment that misuses its environment Exploits mechanisms for allowing programs written by users to be executed by other users Spyware, pop-up browser windows, covert channels Trap Door Specific user identifier or password that circumvents normal security procedures Could be included in a compiler Logic Bomb Program that initiates a security incident under certain circumstances Stack and Buffer Overflow Exploits a bug in a program (overflow either the stack or memory buffers) 15.8 Silberschatz, Galvin and Gagne 2009

465 C Program with Buffer-overflow Condition
#include <stdio.h>
#include <string.h>
#define BUFFER_SIZE 256
int main(int argc, char *argv[])
{
    char buffer[BUFFER_SIZE];
    if (argc < 2)
        return -1;
    else {
        strcpy(buffer, argv[1]);
        return 0;
    }
}
15.9 Silberschatz, Galvin and Gagne 2009

466 Layout of Typical Stack Frame Silberschatz, Galvin and Gagne 2009

467 Modified Shell Code
#include <stdio.h>
#include <unistd.h>
int main(int argc, char *argv[])
{
    /* execvp takes a NULL-terminated argument vector */
    char *args[] = { "/bin/sh", NULL };
    execvp(args[0], args);
    return 0;
}
Silberschatz, Galvin and Gagne 2009

468 Hypothetical Stack Frame Before attack After attack Silberschatz, Galvin and Gagne 2009

469 Program Threats (Cont.) Viruses Code fragment embedded in legitimate program Very specific to CPU architecture, operating system, applications Usually borne via e-mail or as a macro Visual Basic macro to reformat hard drive:
Sub AutoOpen()
Dim oFS
Set oFS = CreateObject("Scripting.FileSystemObject")
vs = Shell("c:command.com /k format c:", vbHide)
End Sub
Silberschatz, Galvin and Gagne 2009

470 Program Threats (Cont.) Virus dropper inserts virus onto the system Many categories of viruses, literally many thousands of viruses File Boot Macro Source code Polymorphic Encrypted Stealth Tunneling Multipartite Armored Silberschatz, Galvin and Gagne 2009

471 A Boot-sector Computer Virus Silberschatz, Galvin and Gagne 2009

472 System and Network Threats Worms use spawn mechanism; standalone program Internet worm Exploited UNIX networking features (remote access) and bugs in finger and sendmail programs Grappling hook program uploaded main worm program Port scanning Automated attempt to connect to a range of ports on one or a range of IP addresses Denial of Service Overload the targeted computer preventing it from doing any useful work Distributed denial-of-service (DDOS) come from multiple sites at once Silberschatz, Galvin and Gagne 2009

473 The Morris Internet Worm Silberschatz, Galvin and Gagne 2009

474 Cryptography as a Security Tool Broadest security tool available Source and destination of messages cannot be trusted without cryptography Means to constrain potential senders (sources) and / or receivers (destinations) of messages Based on secrets (keys) Silberschatz, Galvin and Gagne 2009

475 Secure Communication over Insecure Medium Silberschatz, Galvin and Gagne 2009

476 Encryption Encryption algorithm consists of: A set K of keys A set M of messages A set C of ciphertexts (encrypted messages) A function E : K → (M → C). That is, for each k ∈ K, E(k) is a function for generating ciphertexts from messages. Both E and E(k) for any k should be efficiently computable functions A function D : K → (C → M). That is, for each k ∈ K, D(k) is a function for generating messages from ciphertexts. Both D and D(k) for any k should be efficiently computable functions An encryption algorithm must provide this essential property: Given a ciphertext c ∈ C, a computer can compute m such that E(k)(m) = c only if it possesses D(k). Thus, a computer holding D(k) can decrypt ciphertexts to the plaintexts used to produce them, but a computer not holding D(k) cannot decrypt ciphertexts. Since ciphertexts are generally exposed (for example, sent on the network), it is important that it be infeasible to derive D(k) from the ciphertexts Silberschatz, Galvin and Gagne 2009

477 Symmetric Encryption Same key used to encrypt and decrypt E(k) can be derived from D(k), and vice versa DES is most commonly used symmetric block-encryption algorithm (created by US Govt) Encrypts a block of data at a time Triple-DES considered more secure Advanced Encryption Standard (AES), Twofish up and coming RC4 is most common symmetric stream cipher, but known to have vulnerabilities Encrypts/decrypts a stream of bytes (e.g., wireless transmission) Key is an input to a pseudo-random-bit generator Generates an infinite keystream Silberschatz, Galvin and Gagne 2009

478 Asymmetric Encryption Public-key encryption based on each user having two keys: public key published key used to encrypt data private key key known only to individual user used to decrypt data Must be an encryption scheme that can be made public without making it easy to figure out the decryption scheme Most common is RSA block cipher Efficient algorithm for testing whether or not a number is prime No efficient algorithm is known for finding the prime factors of a number Silberschatz, Galvin and Gagne 2009

479 Asymmetric Encryption (Cont.)
- Formally, it is computationally infeasible to derive D(kd, N) from E(ke, N), so E(ke, N) need not be kept secret and can be widely disseminated
  - E(ke, N) (or just ke) is the public key
  - D(kd, N) (or just kd) is the private key
- N is the product of two large, randomly chosen prime numbers p and q (for example, p and q are 512 bits each)
- The encryption algorithm is E(ke, N)(m) = m^ke mod N, where ke satisfies ke*kd mod (p-1)(q-1) = 1
- The decryption algorithm is then D(kd, N)(c) = c^kd mod N

480 Asymmetric Encryption Example
- For example, take p = 7 and q = 13
- We then calculate N = 7 * 13 = 91 and (p-1)(q-1) = 72
- We next select ke relatively prime to 72 and less than 72, yielding 5
- Finally, we calculate kd such that ke*kd mod 72 = 1, yielding 29
- We now have our keys:
  - Public key: (ke, N) = (5, 91)
  - Private key: (kd, N) = (29, 91)
- Encrypting the message 69 with the public key results in the ciphertext 62
- The ciphertext can be decoded with the private key
- The public key can be distributed in cleartext to anyone who wants to communicate with the holder of the corresponding private key
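The worked example above checks out directly with Python's built-in three-argument pow, which computes modular exponentiation:

```python
p, q = 7, 13
N = p * q                       # 91
phi = (p - 1) * (q - 1)         # 72
ke, kd = 5, 29
assert (ke * kd) % phi == 1     # 5 * 29 = 145 = 2*72 + 1

m = 69
c = pow(m, ke, N)               # encrypt with the public key: m^ke mod N
assert c == 62
assert pow(c, kd, N) == m       # decrypt with the private key: c^kd mod N
```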

481 Encryption and Decryption using RSA Asymmetric Cryptography

482 Cryptography (Cont.)
- Note that symmetric cryptography is based on transformations, asymmetric on mathematical functions
- Asymmetric is much more compute intensive
  - Typically not used for bulk data encryption

483 Authentication
- Constrains the set of potential senders of a message
- Complementary to, and sometimes redundant with, encryption
- Can also prove a message unmodified
- Algorithm components:
  - A set K of keys
  - A set M of messages
  - A set A of authenticators
  - A function S : K -> (M -> A). That is, for each k in K, S(k) is a function for generating authenticators from messages. Both S and S(k) for any k should be efficiently computable functions.
  - A function V : K -> (M x A -> {true, false}). That is, for each k in K, V(k) is a function for verifying authenticators on messages. Both V and V(k) for any k should be efficiently computable functions.

484 Authentication (Cont.)
- For a message m, a computer can generate an authenticator a in A such that V(k)(m, a) = true only if it possesses S(k)
- Thus, a computer holding S(k) can generate authenticators on messages so that any other computer possessing V(k) can verify them
- A computer not holding S(k) cannot generate authenticators on messages that can be verified using V(k)
- Since authenticators are generally exposed (for example, they are sent on the network with the messages themselves), it must not be feasible to derive S(k) from the authenticators

485 Authentication - Hash Functions
- Basis of authentication
- Creates a small, fixed-size block of data (message digest, hash value) from a message m
- The hash function H must be collision resistant on m
  - It must be infeasible to find an m' != m such that H(m') = H(m)
- If H(m) = H(m'), then m = m': the message has not been modified
- Common message-digest functions include MD5, which produces a 128-bit hash, and SHA-1, which outputs a 160-bit hash
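The digest sizes quoted above can be checked with Python's hashlib, which provides both functions:

```python
import hashlib

m = b"The message has not been modified"
md5_digest = hashlib.md5(m).digest()
sha1_digest = hashlib.sha1(m).digest()

assert len(md5_digest) * 8 == 128    # MD5: 128-bit hash
assert len(sha1_digest) * 8 == 160   # SHA-1: 160-bit hash

# A single changed character produces a completely different digest,
# which is what lets a digest detect modification:
assert hashlib.sha1(b"The message has not been modified!").digest() != sha1_digest
```

(Both MD5 and SHA-1 are now considered broken with respect to collision resistance; they appear here because the slide names them.)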

486 Authentication - MAC
- Symmetric encryption is used in the message-authentication code (MAC) authentication algorithm
- Simple example: the MAC defines S(k)(m) = f(k, H(m))
  - where f is a function that is one-way on its first argument: k cannot be derived from f(k, H(m))
  - Because of the collision resistance of the hash function, we are reasonably assured that no other message could create the same MAC
- A suitable verification algorithm is V(k)(m, a) = (f(k, H(m)) = a)
- Note that k is needed to compute both S(k) and V(k), so anyone able to compute one can compute the other
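The S/V pair above can be sketched with Python's hmac module; HMAC is a standard construction of the f(k, H(m)) idea, shown here with SHA-1 to match the slide's hash examples:

```python
import hmac, hashlib

def S(k: bytes):
    """S(k) : m -> authenticator, implemented as HMAC-SHA1."""
    return lambda m: hmac.new(k, m, hashlib.sha1).digest()

def V(k: bytes):
    """V(k) : (m, a) -> true/false. The same key k computes both S(k)
    and V(k), so anyone who can verify can also sign."""
    return lambda m, a: hmac.compare_digest(S(k)(m), a)

key = b"shared secret key"
msg = b"pay $100 to alice"
tag = S(key)(msg)
assert V(key)(msg, tag)                          # genuine message verifies
assert not V(key)(b"pay $999 to mallory", tag)   # tampering is detected
```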

487 Authentication - Digital Signature
- Based on asymmetric keys and a digital-signature algorithm
- The authenticators produced are digital signatures
- In a digital-signature algorithm, it is computationally infeasible to derive S(ks) from V(kv)
  - V is a one-way function
  - Thus, kv is the public key and ks is the private key
- Consider the RSA digital-signature algorithm
  - Similar to the RSA encryption algorithm, but the key use is reversed
  - Digital signature of a message: S(ks)(m) = H(m)^ks mod N
  - The key ks again is a pair (d, N), where N is the product of two large, randomly chosen prime numbers p and q
  - The verification algorithm is V(kv)(m, a) = (a^kv mod N = H(m)), where kv satisfies kv*ks mod (p-1)(q-1) = 1
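The reversed key use can be sketched by reusing the small keys from the RSA example slide (N = 91, exponents 5 and 29): the private exponent now signs and the public one verifies. We pretend H(m) = 69 here; a real scheme would apply an actual hash function:

```python
N = 91
ks, kv = 29, 5           # private (signing) and public (verification) exponents

def sign(h: int) -> int:
    return pow(h, ks, N)           # S(ks)(m) = H(m)^ks mod N

def verify(h: int, a: int) -> bool:
    return pow(a, kv, N) == h      # V(kv)(m, a) = (a^kv mod N == H(m))

h = 69                   # stand-in for H(m)
sig = sign(h)
assert verify(h, sig)              # genuine signature verifies
assert not verify(68, sig)         # any other digest is rejected
```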

488 Authentication (Cont.)
- Why use authentication if it is a subset of encryption?
  - Fewer computations (except for RSA digital signatures)
  - The authenticator is usually shorter than the message
  - Sometimes we want authentication but not confidentiality (signed patches, etc.)
  - Can be the basis for non-repudiation

489 Key Distribution
- Delivery of the symmetric key is a huge challenge
  - Sometimes done out-of-band
- Asymmetric keys can proliferate, stored on a key ring
- Even asymmetric key distribution needs care: the man-in-the-middle attack

490 Man-in-the-middle Attack on Asymmetric Cryptography

491 Digital Certificates
- Proof of who or what owns a public key
- The public key is digitally signed by a trusted party
- The trusted party receives proof of identification from an entity and certifies that the public key belongs to that entity
- Certificate authorities are the trusted parties; their public keys are included with web browser distributions
  - They vouch for other authorities by digitally signing their keys, and so on

492 Encryption Example - SSL
- Insertion of cryptography at one layer of the ISO network model (the transport layer)
- SSL - Secure Socket Layer (also called TLS)
- A cryptographic protocol that limits two computers to exchanging messages only with each other
  - Very complicated, with many variations
- Used between web servers and browsers for secure communication (e.g., credit card numbers)
- The server is verified with a certificate, assuring the client that it is talking to the correct server
- Asymmetric cryptography is used to establish a secure session key (symmetric encryption) for the bulk of the communication during the session
- Communication between the two computers then uses symmetric-key cryptography
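The flow above maps directly onto Python's ssl module, sketched here: create_default_context() loads trusted certificate-authority keys and turns on certificate and hostname verification of the server, and the wrap_socket handshake uses asymmetric cryptography to negotiate the symmetric session key:

```python
import socket, ssl

def open_tls_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    """Connect to host:port and perform the TLS handshake, verifying the
    server's certificate against the system's trusted CAs."""
    ctx = ssl.create_default_context()      # CA certs loaded, verification on
    sock = socket.create_connection((host, port))
    return ctx.wrap_socket(sock, server_hostname=host)

# Even without a network, the context shows the verification defaults:
ctx = ssl.create_default_context()
assert ctx.check_hostname                   # server name must match the cert
assert ctx.verify_mode == ssl.CERT_REQUIRED # server must present a cert
```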

493 User Authentication
- It is crucial to identify the user correctly, as protection systems depend on the user ID
- User identity is most often established through passwords, which can be considered a special case of either keys or capabilities
- Authentication can also include something the user has and/or a user attribute
- Passwords must be kept secret
  - Frequent change of passwords
  - Use of non-guessable passwords
  - Log all invalid access attempts
- Passwords may also be encrypted or allowed to be used only once
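One common way to keep passwords "secret" even if the password file leaks is to store only a salted one-way hash, sketched below. The salt makes identical passwords hash differently; a real system would use a deliberately slow key-derivation function (scrypt, bcrypt) rather than the plain SHA-256 used here for brevity:

```python
import hashlib, hmac, os

def store(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def check(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-hash the candidate password and compare in constant time."""
    candidate = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, digest)

salt, digest = store("correct horse battery staple")
assert check("correct horse battery staple", salt, digest)
assert not check("password123", salt, digest)
```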

494 Implementing Security Defenses
- Defense in depth is the most common security theory: multiple layers of security
- A security policy describes what is being secured
- Vulnerability assessment compares the real state of the system/network to the security policy
- Intrusion detection endeavors to detect attempted or successful intrusions
  - Signature-based detection spots known bad patterns
  - Anomaly detection spots differences from normal behavior
    - Can detect zero-day attacks
  - False positives and false negatives are a problem
- Virus protection
- Auditing, accounting, and logging of all or specific system or network activities
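Signature-based detection, as described above, amounts to scanning input for known bad patterns. A toy sketch (the "signatures" here are invented for illustration):

```python
# Hypothetical signature database: byte pattern -> attack name.
SIGNATURES = {
    b"/etc/passwd": "password-file probe",
    b"DROP TABLE": "SQL injection attempt",
    b"\x90\x90\x90\x90": "NOP sled (shellcode)",
}

def scan(data: bytes) -> list[str]:
    """Return the names of all known signatures found in the data."""
    return [name for pattern, name in SIGNATURES.items() if pattern in data]

assert scan(b"GET /etc/passwd HTTP/1.0") == ["password-file probe"]
assert scan(b"GET /index.html HTTP/1.0") == []  # a zero-day slips past:
                                                # this is where anomaly
                                                # detection helps
```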

495 Firewalling to Protect Systems and Networks
- A network firewall is placed between trusted and untrusted hosts
  - The firewall limits network access between these two security domains
- Firewalls can be tunneled or spoofed
  - Tunneling allows a disallowed protocol to travel within an allowed protocol (e.g., telnet inside of HTTP)
  - Firewall rules are typically based on host name or IP address, which can be spoofed
- A personal firewall is a software layer on a given host
  - Can monitor/limit traffic to and from the host
- An application proxy firewall understands application protocols and can control them (e.g., SMTP)
- A system-call firewall monitors all important system calls and applies rules to them (e.g., this program can execute that system call)
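The address/port-based filtering described above can be sketched as a first-match rule list with a default deny. The rule values are made up for illustration; note that because matching keys off the source address, a spoofed address defeats it, exactly as the slide warns:

```python
from ipaddress import ip_address, ip_network

# Hypothetical rule table: (action, source network, destination port).
# A port of None matches any port. First matching rule wins.
RULES = [
    ("allow", ip_network("10.0.0.0/8"), 22),    # SSH from the trusted LAN
    ("allow", ip_network("0.0.0.0/0"), 443),    # HTTPS from anywhere
    ("deny",  ip_network("0.0.0.0/0"), None),   # default deny
]

def allowed(src: str, dst_port: int) -> bool:
    for action, net, port in RULES:
        if ip_address(src) in net and port in (None, dst_port):
            return action == "allow"
    return False

assert allowed("10.1.2.3", 22)           # LAN host may SSH in
assert not allowed("198.51.100.7", 22)   # outside host may not
assert allowed("198.51.100.7", 443)      # but HTTPS is open to all
```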

496 Network Security Through Domain Separation Via Firewall


Processes. CSE 2431: Introduction to Operating Systems Reading: Chap. 3, [OSC]

Processes. CSE 2431: Introduction to Operating Systems Reading: Chap. 3, [OSC] Processes CSE 2431: Introduction to Operating Systems Reading: Chap. 3, [OSC] 1 Outline What Is A Process? Process States & PCB Process Memory Layout Process Scheduling Context Switch Process Operations

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Fall 2017 Lecture 11 Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Multilevel Feedback Queue: Q0, Q1,

More information

Processes and Threads

Processes and Threads Processes and Threads Giuseppe Anastasi g.anastasi@iet.unipi.it Pervasive Computing & Networking Lab. () Dept. of Information Engineering, University of Pisa Based on original slides by Silberschatz, Galvin

More information

CPSC 341 OS & Networks. Processes. Dr. Yingwu Zhu

CPSC 341 OS & Networks. Processes. Dr. Yingwu Zhu CPSC 341 OS & Networks Processes Dr. Yingwu Zhu Process Concept Process a program in execution What is not a process? -- program on a disk A process is an active object, but a program is just a file It

More information

Chapter 5: Process Synchronization. Operating System Concepts Essentials 2 nd Edition

Chapter 5: Process Synchronization. Operating System Concepts Essentials 2 nd Edition Chapter 5: Process Synchronization Silberschatz, Galvin and Gagne 2013 Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Mutex Locks

More information

CSCE 313 Introduction to Computer Systems. Instructor: Dezhen Song

CSCE 313 Introduction to Computer Systems. Instructor: Dezhen Song CSCE 313 Introduction to Computer Systems Instructor: Dezhen Song Programs, Processes, and Threads Programs and Processes Threads Programs, Processes, and Threads Programs and Processes Threads Processes

More information

CSCE 313: Intro to Computer Systems

CSCE 313: Intro to Computer Systems CSCE 313 Introduction to Computer Systems Instructor: Dr. Guofei Gu http://courses.cse.tamu.edu/guofei/csce313/ Programs, Processes, and Threads Programs and Processes Threads 1 Programs, Processes, and

More information

Maximum CPU utilization obtained with multiprogramming. CPU I/O Burst Cycle Process execution consists of a cycle of CPU execution and I/O wait

Maximum CPU utilization obtained with multiprogramming. CPU I/O Burst Cycle Process execution consists of a cycle of CPU execution and I/O wait Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time Scheduling Thread Scheduling Operating Systems Examples Java Thread Scheduling Algorithm Evaluation CPU

More information

1. a. Show that the four necessary conditions for deadlock indeed hold in this example.

1. a. Show that the four necessary conditions for deadlock indeed hold in this example. Tutorial 7 (Deadlocks) 1. a. Show that the four necessary conditions for deadlock indeed hold in this example. b. State a simple rule for avoiding deadlocks in this system. a. The four necessary conditions

More information

CS 471 Operating Systems. Yue Cheng. George Mason University Fall 2017

CS 471 Operating Systems. Yue Cheng. George Mason University Fall 2017 CS 471 Operating Systems Yue Cheng George Mason University Fall 2017 Outline o Process concept o Process creation o Process states and scheduling o Preemption and context switch o Inter-process communication

More information

4.8 Summary. Practice Exercises

4.8 Summary. Practice Exercises Practice Exercises 191 structures of the parent process. A new task is also created when the clone() system call is made. However, rather than copying all data structures, the new task points to the data

More information

CHAPTER 6: PROCESS SYNCHRONIZATION

CHAPTER 6: PROCESS SYNCHRONIZATION CHAPTER 6: PROCESS SYNCHRONIZATION The slides do not contain all the information and cannot be treated as a study material for Operating System. Please refer the text book for exams. TOPICS Background

More information

Chapter 5: Process Synchronization. Operating System Concepts 9 th Edition

Chapter 5: Process Synchronization. Operating System Concepts 9 th Edition Chapter 5: Process Synchronization Silberschatz, Galvin and Gagne 2013 Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Mutex Locks

More information

UNIT 2 Basic Concepts of CPU Scheduling. UNIT -02/Lecture 01

UNIT 2 Basic Concepts of CPU Scheduling. UNIT -02/Lecture 01 1 UNIT 2 Basic Concepts of CPU Scheduling UNIT -02/Lecture 01 Process Concept An operating system executes a variety of programs: **Batch system jobs **Time-shared systems user programs or tasks **Textbook

More information

UNIT - II PROCESS MANAGEMENT

UNIT - II PROCESS MANAGEMENT UNIT - II PROCESS MANAGEMENT Processes Process Concept A process is an instance of a program in execution. An operating system executes a variety of programs: o Batch system jobs o Time-shared systems

More information

PROCESS MANAGEMENT. Operating Systems 2015 Spring by Euiseong Seo

PROCESS MANAGEMENT. Operating Systems 2015 Spring by Euiseong Seo PROCESS MANAGEMENT Operating Systems 2015 Spring by Euiseong Seo Today s Topics Process Concept Process Scheduling Operations on Processes Interprocess Communication Examples of IPC Systems Communication

More information

Chapter 5 CPU scheduling

Chapter 5 CPU scheduling Chapter 5 CPU scheduling Contents Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time Scheduling Thread Scheduling Operating Systems Examples Java Thread Scheduling

More information

UNIT:2. Process Management

UNIT:2. Process Management 1 UNIT:2 Process Management SYLLABUS 2.1 Process and Process management i. Process model overview ii. Programmers view of process iii. Process states 2.2 Process and Processor Scheduling i Scheduling Criteria

More information

Module 6: Process Synchronization. Operating System Concepts with Java 8 th Edition

Module 6: Process Synchronization. Operating System Concepts with Java 8 th Edition Module 6: Process Synchronization 6.1 Silberschatz, Galvin and Gagne 2009 Module 6: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores

More information

( D ) 4. Which is not able to solve the race condition? (A) Test and Set Lock (B) Semaphore (C) Monitor (D) Shared memory

( D ) 4. Which is not able to solve the race condition? (A) Test and Set Lock (B) Semaphore (C) Monitor (D) Shared memory CS 540 - Operating Systems - Final Exam - Name: Date: Wenesday, May 12, 2004 Part 1: (78 points - 3 points for each problem) ( C ) 1. In UNIX a utility which reads commands from a terminal is called: (A)

More information

Threads. What is a thread? Motivation. Single and Multithreaded Processes. Benefits

Threads. What is a thread? Motivation. Single and Multithreaded Processes. Benefits CS307 What is a thread? Threads A thread is a basic unit of CPU utilization contains a thread ID, a program counter, a register set, and a stack shares with other threads belonging to the same process

More information

Processes, PCB, Context Switch

Processes, PCB, Context Switch THE HONG KONG POLYTECHNIC UNIVERSITY Department of Electronic and Information Engineering EIE 272 CAOS Operating Systems Part II Processes, PCB, Context Switch Instructor Dr. M. Sakalli enmsaka@eie.polyu.edu.hk

More information

Chapter 6: CPU Scheduling

Chapter 6: CPU Scheduling Chapter 6: CPU Scheduling Silberschatz, Galvin and Gagne Histogram of CPU-burst Times 6.2 Silberschatz, Galvin and Gagne Alternating Sequence of CPU And I/O Bursts 6.3 Silberschatz, Galvin and Gagne CPU

More information

ALL the assignments (A1, A2, A3) and Projects (P0, P1, P2) we have done so far.

ALL the assignments (A1, A2, A3) and Projects (P0, P1, P2) we have done so far. Midterm Exam Reviews ALL the assignments (A1, A2, A3) and Projects (P0, P1, P2) we have done so far. Particular attentions on the following: System call, system kernel Thread/process, thread vs process

More information

2. PROCESS. Operating System Concepts with Java 8th Edition Silberschatz, Galvin and Gagn

2. PROCESS. Operating System Concepts with Java 8th Edition Silberschatz, Galvin and Gagn 2. PROCESS Operating System Concepts with Java 8th Edition Silberschatz, Galvin and Gagn SPOILER http://io9.com/if-your-brain-were-a-computer-howmuch-storage-space-w-509687776 2.5 petabytes ~ record 3

More information

Chapter 6: CPU Scheduling. Operating System Concepts 9 th Edition

Chapter 6: CPU Scheduling. Operating System Concepts 9 th Edition Chapter 6: CPU Scheduling Silberschatz, Galvin and Gagne 2013 Chapter 6: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Thread Scheduling Multiple-Processor Scheduling Real-Time

More information

Process. Program Vs. process. During execution, the process may be in one of the following states

Process. Program Vs. process. During execution, the process may be in one of the following states What is a process? What is process scheduling? What are the common operations on processes? How to conduct process-level communication? How to conduct client-server communication? Process is a program

More information

Operating System Design

Operating System Design Operating System Design Processes Operations Inter Process Communication (IPC) Neda Nasiriani Fall 2018 1 Process 2 Process Lifecycle 3 What information is needed? If you want to design a scheduler to

More information

Chapter 5: Process Synchronization. Operating System Concepts 9 th Edition

Chapter 5: Process Synchronization. Operating System Concepts 9 th Edition Chapter 5: Process Synchronization Silberschatz, Galvin and Gagne 2013 Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Mutex Locks

More information