CSE 4/521 Introduction to Operating Systems Lecture 27 (Final Exam Review) Summer 2018
Overview Objective: Revise topics and questions for the final exam. 1. Main Memory 2. Virtual Memory 3. Mass Storage 4. File System Implementation 5. I/O System 2
Overview 1. Main Memory 2. Virtual Memory 3. Mass Storage 4. File System Implementation 5. I/O System 3
Main Memory Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table 4
Background Logical address: generated by the CPU; also referred to as a virtual address. Physical address: the address seen by the memory unit. Logical address space is the set of all logical addresses generated by a program. Physical address space is the set of all physical addresses corresponding to those logical addresses. A pair of base and limit registers defines the logical address space 5
Main Memory Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table 6
Swapping A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution 7
Main Memory Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table 8
Contiguous Memory Allocation (1/2) External fragmentation: total memory space exists to satisfy a request, but it is not contiguous. Internal fragmentation: allocated memory may be slightly larger than the requested memory; this size difference is wasted memory internal to a partition 9
Contiguous Memory Allocation (2/2) Multiple-partition allocation First-fit: allocate the first hole that is big enough. Best-fit: allocate the smallest hole that is big enough; must search the entire list. Produces the smallest leftover hole. Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole 10
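The three fit strategies can be sketched in a few lines of Python; the hole sizes and request below are made-up numbers for illustration:

```python
def allocate(holes, request, strategy):
    """Return the index of the hole chosen for `request` units, or None if no hole fits."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]   # lowest index that fits
    if strategy == "best":
        return min(candidates)[1]                       # smallest adequate hole
    if strategy == "worst":
        return max(candidates)[1]                       # largest hole
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]
print(allocate(holes, 212, "first"))  # 1 (500 is the first hole >= 212)
print(allocate(holes, 212, "best"))   # 3 (300 leaves the smallest leftover hole)
print(allocate(holes, 212, "worst"))  # 4 (600 leaves the largest leftover hole)
```

Note that best-fit and worst-fit must scan every hole, while first-fit can stop at the first match.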
Main Memory Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table 11
Segmentation Logical View of Segmentation Segmentation Hardware [Figure: four segments in user space mapped into physical memory space] 12
Main Memory Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table 13
Paging (1/2) Address generated by the CPU is divided into: Page number (p): used as an index into a page table, which contains the base address of each page in physical memory. Page offset (d): combined with the base address to define the physical memory address that is sent to the memory unit. For a given logical address space of size 2^m and page size 2^n, the page number occupies the high m - n bits and the page offset the low n bits 14
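The split into page number and offset is just integer division by the page size; a minimal sketch, assuming n = 10 (1-KB pages):

```python
PAGE_SIZE = 2 ** 10   # page size 2^n with n = 10, i.e. 1-KB pages

def split(logical_address):
    page_number = logical_address // PAGE_SIZE   # high m - n bits
    offset = logical_address % PAGE_SIZE         # low n bits
    return page_number, offset

print(split(3085))  # (3, 13), since 3085 = 3 * 1024 + 13
```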
Paging (2/2) Paging Paging with Translation Look-aside Buffer (TLB) 15
Main Memory Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table 16
Structure of the Page Table 1. Hierarchical Page Table 2. Hashed Page Table 3. Inverted Page Table 17
Questions 1. Name two differences between logical and physical addresses. 2. Compare contiguous memory allocation, pure segmentation, and pure paging with respect to: external fragmentation, internal fragmentation, and the ability to share code across processes. 3. Assuming a 1-KB page size, what are the page numbers and offsets for the addresses: 3085, 42095, 215201, 650000, 2000001 18
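Question 3 can be checked with Python's divmod; with 1-KB pages the offset is the low 10 bits:

```python
PAGE_SIZE = 1024   # 1-KB pages

for addr in (3085, 42095, 215201, 650000, 2000001):
    page, offset = divmod(addr, PAGE_SIZE)
    print(f"{addr}: page {page}, offset {offset}")
# 3085: page 3, offset 13
# 42095: page 41, offset 111
# 215201: page 210, offset 161
# 650000: page 634, offset 784
# 2000001: page 1953, offset 129
```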
Overview 1. Main Memory 2. Virtual Memory 3. Mass Storage 4. File System Implementation 5. I/O System 19
Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Algorithms Allocation of Frames Thrashing 20
Background Virtual Memory larger than Physical Memory 21
Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Algorithms Allocation of Frames Thrashing 22
Demand Paging (1/2) Bring in a page only when it is needed In virtual memory, page table looks like: 23
Demand Paging (2/2) Steps in handling a page fault Practice numerical problems of Demand Paging for exam 24
Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Algorithms Allocation of Frames Thrashing 25
Copy-on-Write Before Process 1 modifies page C After Process 1 modifies page C 26
Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Algorithms Allocation of Frames Thrashing 27
Page Replacement Algorithms (1/2) FIFO: the first page in is the first page out. Optimal algorithm: replace the page that will not be used for the longest period of time 28
Page Replacement Algorithms (2/2) LRU Algorithm Replace page that has not been used for longest period of time Others: Second-Chance (clock) Page-Replacement Algorithm Enhanced Second-Chance Algorithm Counting Algorithms Page-Buffer Algorithms 29
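FIFO and LRU differ only in whether a hit refreshes a page's position in the eviction order; a small simulator (the reference string is the classic 20-reference trace from the dinosaur book, with 3 frames):

```python
from collections import OrderedDict

def count_faults(refs, frames, policy):
    """Simulate FIFO or LRU page replacement; return the number of page faults."""
    memory = OrderedDict()                 # page -> None, eviction order = dict order
    faults = 0
    for page in refs:
        if page in memory:
            if policy == "LRU":
                memory.move_to_end(page)   # a hit refreshes recency under LRU only
            continue
        faults += 1
        if len(memory) == frames:
            memory.popitem(last=False)     # evict oldest by arrival (FIFO) or use (LRU)
        memory[page] = None
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(count_faults(refs, 3, "FIFO"))  # 15
print(count_faults(refs, 3, "LRU"))   # 12
```

LRU's fewer faults on this trace illustrate why it approximates the optimal algorithm better than FIFO.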
Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Algorithms Allocation of Frames Thrashing 30
Allocation of Frames Fixed allocation Equal allocation: allocate frames equally among processes. Proportional allocation: allocate according to the size of the process. Priority allocation: a frame-allocation scheme using priorities rather than size. Global vs. local allocation Global replacement: a process selects a replacement frame from the set of all frames. Local replacement: a process selects a frame from only its own set of allocated frames 31
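Proportional allocation gives process i roughly (s_i / S) * m frames for m total frames; a sketch (handing any rounding leftover to the largest process is an assumption, not a fixed rule):

```python
def proportional_allocation(total_frames, sizes):
    """Allocate floor(s_i / S * m) frames per process, leftovers to the largest process."""
    total = sum(sizes)
    alloc = [s * total_frames // total for s in sizes]
    alloc[sizes.index(max(sizes))] += total_frames - sum(alloc)  # rounding leftovers
    return alloc

# 62 frames split between a 10-page and a 127-page process
print(proportional_allocation(62, [10, 127]))  # [4, 58]
```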
Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Algorithms Allocation of Frames Thrashing 32
Thrashing Thrashing: a process is busy swapping pages in and out rather than executing. Leads to: low CPU utilization 33
Questions 1. We have an OS that uses base and limit registers, but we have modified the OS to provide a page table. Can the page table be set up to simulate base and limit registers? 2. Consider a system that allocates pages of different sizes to its processes. What modifications to the virtual memory system are needed to provide this functionality? 3. What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate the problem? 4. All homework problems. 34
Overview 1. Main Memory 2. Virtual Memory 3. Mass Storage 4. File System Implementation 5. I/O System 35
Mass Storage Overview of Mass-Storage Structure Disk Structure Disk Scheduling RAID Structure Stable-Storage Implementation 36
Overview of Mass-Storage Structure Access latency = average access time = average seek time + average rotational latency Average I/O time = average access time + (amount to transfer / transfer rate) + controller overhead 37
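The formulas above can be evaluated directly; the numbers below (5 ms seek, 7200 RPM, 4-KB transfer at 1 Gb/s, 0.1 ms controller overhead) are illustrative assumptions:

```python
def avg_io_time_ms(seek_ms, rpm, transfer_bytes, rate_bytes_per_s, controller_ms):
    """Average I/O time = seek + rotational latency + transfer + controller overhead."""
    rotational_ms = (60_000 / rpm) / 2                   # half a revolution on average
    transfer_ms = transfer_bytes / rate_bytes_per_s * 1000
    return seek_ms + rotational_ms + transfer_ms + controller_ms

# 5 ms seek, 7200 RPM, 4-KB block, 1 Gb/s = 125 MB/s, 0.1 ms overhead
print(round(avg_io_time_ms(5, 7200, 4096, 125_000_000, 0.1), 2))  # 9.3
```

Note that seek plus rotational latency dominates; the 4-KB transfer itself takes only about 0.03 ms.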
Mass Storage Overview of Mass-Storage Structure Disk Structure Disk Scheduling RAID Structure Stable-Storage Implementation 38
Disk Structure Disk drives are addressed as large 1-dimensional arrays of logical blocks, where the logical block is the smallest unit of transfer Sector 0 is the first sector of the first track on the outermost cylinder; mapping proceeds in order through that track, then through the rest of the tracks in that cylinder, and then through the rest of the cylinders from outermost to innermost 39
Mass Storage Overview of Mass-Storage Structure Disk Structure Disk Scheduling RAID Structure Stable-Storage Implementation 40
Disk Scheduling Objective: Minimize seek time First Come First Serve (FCFS) Shortest Seek Time First (SSTF) SCAN and C-SCAN LOOK and C-LOOK 41
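SSTF is easy to simulate: always service the pending request closest to the current head position. The queue and starting cylinder below are the standard example used with these algorithms:

```python
def sstf(start, requests):
    """Shortest Seek Time First: return (service order, total head movement)."""
    pos, pending, order, moved = start, list(requests), [], 0
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))   # closest pending cylinder
        moved += abs(nxt - pos)
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order, moved

# Head at cylinder 53
order, moved = sstf(53, [98, 183, 37, 122, 14, 124, 65, 67])
print(order)  # [65, 67, 37, 14, 98, 122, 124, 183]
print(moved)  # 236
```

The same greedy choice that keeps seeks short is what allows starvation: a distant request can wait indefinitely while closer ones keep arriving.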
Mass Storage Overview of Mass-Storage Structure Disk Structure Disk Scheduling RAID Structure Stable-Storage Implementation 42
RAID Structure 43
Mass Storage Overview of Mass-Storage Structure Disk Structure Disk Scheduling RAID Structure Stable-Storage Implementation 44
Stable-Storage Implementation Write-ahead log scheme: 1. Write the information to the first physical block. 2. When the first write completes successfully, write the same information to the second physical block. 3. Declare the operation complete only after the second write completes successfully 45
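The ordered two-copy write can be sketched as follows; using two files to stand in for the two physical blocks, and os.fsync to force each write to the device before starting the next, are illustrative assumptions:

```python
import os

def stable_write(path1, path2, data):
    """Write `data` to the first copy; only after it is durable, write the second."""
    for path in (path1, path2):
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # do not proceed until this copy is on the device
```

During recovery, if the two copies disagree, the second copy was interrupted mid-write and the first (completed) copy is authoritative.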
Questions 1. None of the disk-scheduling disciplines, except FCFS, is truly fair (starvation may occur). a. Explain why. b. Describe a way to modify SCAN to ensure fairness. 2. Compare the performance of write operations achieved by RAID level 5 with that of RAID level 1. 3. Why is rotational latency usually not considered in disk scheduling? How would you modify SSTF, SCAN, and C-SCAN to include rotational latency when it is high? 46
Overview 1. Main Memory 2. Virtual Memory 3. Mass Storage 4. File System Implementation 5. I/O System 47
File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management 48
File-System Structure 49
File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management 50
File-System Implementation On-disk data structures: 1. Boot control block 2. Volume control block 3. Directory structure 4. Per File FCB In-memory data structures: 1. Mount table 2. Directory structure 3. Per-process open-file table 4. System-wide open-file table 51
File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management 52
Directory Implementation Linear list: a list of file names with pointers to the data blocks. Hash table: a linear list with a hash data structure to decrease directory search time 53
File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management 54
Allocation Methods Contiguous Linked Indexed 55
File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management 56
Free-Space Management Bit vector (blocks 0, 1, 2, ..., n-1): bit[i] = 1 means block[i] is free; bit[i] = 0 means block[i] is occupied. Linked list Grouping Counting Space maps 57
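The bit-vector scheme maps directly onto an integer bit map; a minimal sketch where bit i = 1 marks block i free:

```python
class BitVector:
    """Free-space bit map: bit i = 1 means block i is free, 0 means occupied."""

    def __init__(self, n_blocks):
        self.bits = (1 << n_blocks) - 1                 # all blocks start free

    def allocate(self):
        """Return the lowest-numbered free block and mark it occupied, or None."""
        if self.bits == 0:
            return None
        i = (self.bits & -self.bits).bit_length() - 1   # index of lowest set bit
        self.bits &= ~(1 << i)
        return i

    def free(self, i):
        self.bits |= 1 << i

bv = BitVector(8)
print(bv.allocate())  # 0
print(bv.allocate())  # 1
bv.free(0)
print(bv.allocate())  # 0 again, since block 0 was freed
```

Finding the first free block is cheap here because it reduces to finding the first set bit, which is why hardware bit-scan instructions make bit vectors attractive.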
Questions 1. Consider a system that supports the strategies of contiguous, linked, and indexed allocation. What criteria should be used in deciding which strategy is best utilized for a particular file? 2. What are the advantages of the variant of linked allocation that uses a FAT to chain together the blocks of a file? 3. All homework questions. 58
Overview 1. Main Memory 2. Virtual Memory 3. Mass Storage 4. File System Implementation 5. I/O System 59
I/O System Application I/O Interface Kernel I/O Subsystem Transforming I/O Requests to Hardware Operations Performance 60
Application I/O Interface Kernel I/O Structure Characteristics of I/O Devices 61
I/O System Application I/O Interface Kernel I/O Subsystem Transforming I/O Requests to Hardware Operations Performance 62
Kernel I/O Subsystem Scheduling: order I/O requests via per-device queues. Buffering: store data in memory while transferring between devices. Caching: a faster device holding a copy of data. Spooling: hold output for a device, e.g. printers. Device reservation: provides exclusive access to a device. Error handling: recover from errors 63
I/O System Application I/O Interface Kernel I/O Subsystem Transforming I/O Requests to Hardware Operations Performance 64
Transforming I/O Requests to Hardware Operations 65
I/O System Application I/O Interface Kernel I/O Subsystem Transforming I/O Requests to Hardware Operations Performance 66
Performance Reduce the number of context switches Reduce data copying Reduce interrupts by using large transfers, smart controllers, and polling Use DMA 67
Questions 1. Why might a system use interrupt-driven I/O to manage a single serial port and polling I/O to manage a front-end processor, such as a terminal concentrator? 2. In each of the scenarios, would you use buffering, spooling, caching or a combination? A mouse used with a graphical user interface. A tape drive on a multitasking OS. A disk drive containing user files A graphics card with direct bus connection, accessible through memory-mapped I/O 68
Good Luck!!!!! Credit to all dinosaurs that made the course fun to learn. 69