CS140 Final Review. Winter 2014


Administrivia: Friday, March 21, 12:15-3:15pm. Open book; covers all 17 lectures (including topics already on the midterm). 50% of grade based on exams, using this quantity: max(midterm > 0 ? final : 0, (midterm + final) / 2)

Topics: Threads & Processes; Concurrency & Synchronization; Scheduling; Virtual Memory; I/O; Disks, File systems, Network file systems; Protection & Security; Virtual machines. Suggestions: you should know Pintos material well by now, so study material not covered in Pintos (equal representation on exam).

Peterson's Algorithm

/* Set all fields to zero on initialization */
struct peterson {
    int wants[2];   /* wants[i] = 1 means thread i wants or has lock */
    int not_turn;   /* not this thread's turn if other one wants lock */
};

/* i is thread number, which must be 0 or 1 */
void peterson_lock (struct peterson *p, int i)
{
    p->wants[i] = 1;
    p->not_turn = i;
    while (p->wants[1-i] && p->not_turn == i)
        ;
}

/* i is thread number, which must be 0 or 1 */
void peterson_unlock (struct peterson *p, int i)
{
    p->wants[i] = 0;
}

By declaring a global struct peterson globallock;, the two threads can then implement critical sections as follows:

peterson_lock (&globallock, i);
critical_section_for_thread_i ();
peterson_unlock (&globallock, i);

/* Set turn to zero on initialization */
struct mylock {
    int not_turn;
};

/* i is thread number, which must be 0 or 1 */
void my_lock (struct mylock *p, int i)
{
    while (p->not_turn == i)
        ;
}

/* i is thread number, which must be 0 or 1 */
void my_unlock (struct mylock *p, int i)
{
    p->not_turn = i;
}

Does this fail to guarantee any of the following? (Circle all that apply) A) mutual exclusion. B) progress. C) bounded waiting.

Does not guarantee: A. Mutual exclusion B. Progress C. Bounded waiting D. Both A and B E. Both A and C

Answer: Fails to guarantee progress. Explanation: If thread 0 spends all of its time in the remainder section (i.e., does not enter the critical section), then thread 1 will never get into the critical section.

void loop (void)
{
    char *p, *begin = sbrk (0);     /* end of the heap */
    char *end = begin + 0x20000000; /* grow heap by 512 MB */
    brk (end);
    for (p = begin; ; p += 4096) {
        if (p >= end)
            p = begin;
        *p += 1;                    /* modify first byte of page */
    }
}

Page size = 4096 bytes, 256 MB physical memory.

Which eviction policy minimizes the expected number of page faults? A. LRU B. MRU C. Random D. LRU & Random (tie) E. About the same

Answer: B (MRU). Explanation: Because the function is looping over memory, the most recently accessed page is the one that will be accessed furthest in the future, and thus the best one to evict. With MRU, roughly the first half of the area being scanned will be brought into memory and stay there, while the second half of the loop will be all page faults. In contrast, LRU will fault on every iteration of the loop, and random will perform between the other two.

Suppose you run a single virtual machine with 1,024 MB of physical memory on top of a VMM, yet the underlying hardware only has 768 MB of machine memory available. The guest OS uses LRU. Will the system have better performance if the VMM uses LRU page replacement or random page replacement (keeping in mind that the guest OS always does LRU)? Choose the best answer: A) Performance will be better if the VMM uses LRU replacement. B) Performance will be better if the VMM uses random replacement. C) Expected performance will be the same whether the VMM uses LRU or random replacement.

Which policy gets better performance? A. LRU B. Random C. Equivalent

Answer: B. Explanation: With LRU replacement, the VMM will constantly page out the least recently used machine page, but this is the virtual page least recently used by an application in the guest. The guest kernel, doing its own LRU, will make use of that physical page while paging it out (if it is dirty), and then immediately recycle the frame to satisfy its own memory pressure. Thus, the least recently used physical page is the one most likely to be touched by the guest kernel as soon as it needs to do a page eviction, and is therefore a good candidate not to page out.

Suppose you have a single-core machine with a Unix-like OS offering two thread packages: libuthreads.a is a user-level implementation of threads, in which each process has only one kernel thread (which runs all application-level threads). Hence, all thread scheduling decisions are made by the user-level thread library. (This is what we called n : 1 threads in class.) libkthreads.a uses kernel-level threads to implement application-level threads, so that each kernel-level thread runs only one application-level thread and all scheduling decisions are made in the kernel. (This is what we called 1 : 1 threads in class.) You decide to run a fine-grained simulation of 100 particles. Each particle has its own thread and, once per period, updates its position based on the positions of all other particles, requiring 200 instructions. The simulation does not perform any I/O. With which of the two thread packages would you expect your simulation to run faster? Circle the best answer. A. The simulation will run faster with libuthreads.a (n : 1 threads). B. The simulation will run faster with libkthreads.a (1 : 1 threads). C. The choice of thread package will only affect performance minimally, and it could go either way.

Which approach is best? A. User Library B. Kernel Library C. Minimal Difference

Answer: libuthreads.a (n : 1 threads). Explanation: The program requires switching threads every few hundred instructions. With 1 : 1 threads, such switches require a system call to yield the CPU. By contrast, with n : 1 threads, a yield is just a function call. Function calls are much faster than system calls. Moreover, the system call overhead will likely be much more than a few hundred instructions.

Now you decide to run a web server on a big machine with 10 disks in it. The server gets concurrent requests from dozens of clients for random files. Because the 10 disks are storing so much data, the overwhelming majority of requests miss in the buffer cache and require reading data from disk (which takes an average of 10 milliseconds). In order to process requests concurrently, the web server handles each request in a separate thread. Which thread package would allow the server to handle more requests per second? Circle the best answer. A. The server will handle more requests with libuthreads.a (n : 1 threads). B. The server will handle more requests with libkthreads.a (1 : 1 threads). C. The choice of thread package will only affect performance minimally, and it could go either way.

Which approach is best? A. User Library B. Kernel Library C. Minimal Difference

Answer: libkthreads.a (1 : 1 threads). To get high throughput you want to maximize utilization of the CPU and disks. However, an n : 1 thread package will put all threads to sleep when any thread blocks on a disk read. By contrast, 1 : 1 threads can have all 10 disks and the CPU busy at the same time.

A hacker gains access to your Linux filesystem. Can the attacker use the files on your laptop to recover your login username/password? If so, describe how; if not, explain why not. A. Yes, the attacker can figure out your password from the contents of files on your hard disk. B. No, with overwhelming probability the attacker will not be able to recover your password despite having the contents of your hard disk.

Can the attacker recover your password? A. Yes B. No

Answer: No. Explanation: The laptop stores a hash of the password (H(salt, password)), not the actual password. The attacker can guess the password by computing H(salt, guess) and seeing if the hash matches the one stored in your file system, just as login verifies your password when you log in. However, even if the attacker can guess a billion (~2^30) passwords per second (a highly unlikely rate), it would still take an expected 2^49 seconds (~18 million years) to guess the right one.

Consider the following program:

#define FILE "tmp"

int main (int argc, char **argv)
{
    int i, fd;
    for (i = 0; i < 100; i++) {
        fd = open (FILE, O_CREAT | O_WRONLY | O_TRUNC, 0666); /* create FILE */
        close (fd);
        unlink (FILE); /* delete FILE */
    }
    return 0;
}

Q: Is the total number of disk writes required by this program (including writes delayed until after the program exits) likely to be more on XFS or on FFS with soft updates? Circle the best answer. A) The program requires more disk writes with XFS. B) The program requires more disk writes with FFS + soft updates. C) The program should require a comparable number of writes in both cases.

Which FS has more disk writes? A. XFS B. FFS + Soft Updates C. About the same

Answer: More with XFS. Explanation: With XFS, all metadata changes (such as creating and deleting files) must be written to the log. Thus, the program will require 200 log entries to be written. By contrast, soft updates allow the file system to recognize when one operation is simply undoing another operation that has not yet been committed to disk. Thus, FFS + soft updates doesn't require any disk writes for each iteration of the loop. (In both cases, the modification time of the directory will eventually have to be updated, but this only has to happen once for the whole execution of the program.)

VMware ESX saves memory by identifying multiple copies of the same physical page and mapping them all copy-on-write to a single machine page. This technique saves memory when, for example, two virtual machines are running the same guest operating system, as the pages containing kernel code will be identical across the two VMs. Could this technique ever save memory when a single virtual machine is running (assuming the VM is configured to see more physical memory than there is machine memory)? Circle the best answer: A) Yes B) No

Could this save memory? A. Yes B. No

Answer: Yes. Explanation: If you copy a file, both the old and new file blocks may be stored in the buffer cache. Another example is that many OSes zero pages in the idle loop, so as to have many zero-filled pages on hand to map into BSS and stack segments of new processes.

In class we described three file descriptor structures: (a) Indexed files. (b) Linked files. (c) Contiguous (extent-based) allocation. Each of the structures has its advantages and disadvantages depending on the goals for the file system and the expected file access pattern.

Sequential Access to Large Files A. Extent, Link, Index B. Extent, Index, Link C. Link, Extent, Index D. Link, Index, Extent E. Index, Extent, Link F. Index, Link, Extent

Answer: extent, link, index. Explanation: It is easy to see that extent is the best structure for sequential access to very large files, since with extents files are contiguously allocated and the next block to read is physically the next one on the disk: no seek time to find the next block, and each block will be read sequentially as the disk head moves. Both link and index require some lookup operation in order to know where the next block is. However, indexed allocation (a) performs worse, since for very large files it may require multiple disk accesses to the indirect blocks.

Random Access to Large Files A. Extent, Link, Index B. Extent, Index, Link C. Link, Extent, Index D. Link, Index, Extent E. Index, Extent, Link F. Index, Link, Extent

Answer: Extent, Index, Link. Extent is still the best structure here: you just need to use an offset. Index will probably need to look at some levels of indirect blocks in order to find the right block to access (we are dealing with very large files). Link is absolutely the worst structure: in order to find a random block, we need to traverse a linked list of blocks, which takes time linear in the offset.

Disk Utilization A. Extent, Link, Index B. Extent, Index, Link C. Link, Extent, Index D. Link, Index, Extent E. Index, Extent, Link F. Index, Link, Extent

Answer: Link, Index, Extent. Extent can suffer heavily from external fragmentation, especially for large files, so it is not the best structure for getting the most bytes onto the disk, since lots of space will be wasted. However, for small files and large block sizes, (c) might prove better than (a) and (b). The (a) and (b) structures are generally more suitable for this question. The metadata overhead for (b) is likely to be smaller than that for (a), since it only needs a pointer to the next allocated block rather than an entire index block (or blocks) which may or may not be fully used.

The End