Second Midterm Exam March 21, 2017 CS162 Operating Systems


University of California, Berkeley
College of Engineering, Computer Science Division (EECS)
Spring 2017, Ion Stoica

Second Midterm Exam
March 21, 2017
CS162 Operating Systems

Your Name:
SID AND 162 Login:
TA Name:
Discussion Section Time:

General Information:
This is a closed-book examination; one 2-sided page of handwritten notes is allowed. You have 80 minutes to answer as many questions as possible. The number in parentheses at the beginning of each question indicates the number of points for that question. You should read all of the questions before starting the exam, as some of the questions are substantially more time consuming. Write all of your answers directly on this paper. Make your answers as concise as possible. If there is something in a question that you believe is open to interpretation, then please ask us about it!

Good Luck!!

QUESTION   POINTS ASSIGNED   POINTS OBTAINED
1          24
2          20
3          20
4          22
5          14
TOTAL      100

Page 1/15

P1 (24 points total) True/False. CIRCLE YOUR ANSWER. For each question: 1 point for the correct true/false answer, 2 points for the explanation. An explanation cannot exceed 2 sentences.

a) A multilevel page table hierarchy will always take less storage space than a single-level page table, given the same virtual address space size.

False. When all pages in the virtual address space are allocated, a single-level page table takes less space: all of its PTEs are in use, and while the same holds for a multilevel hierarchy, the hierarchy additionally needs storage for the first-level page table.

b) Forcing all threads to request resources in the same order (e.g., resource A before B, B before C, and so on) will prevent deadlock.

True. Imposing a strict order on resource acquisition prevents a cycle in the wait-for graph, and such a cycle is a necessary condition for deadlock.

c) Increasing the size of a cache always decreases the number of total cache misses, all things held constant (replacement policy, workload, associativity).

False. Recall Belady's anomaly: under a FIFO replacement policy, a larger cache can incur more misses on the same workload.

d) Adding a TLB may make process context switching slower.

True. TLB entries are specific to a process's address space, so the TLB must be flushed on a context switch, which increases the context-switch time.

e) It is not possible to share memory between two processes when using multiple-level page tables.

False. Two PTEs in the page tables of the two processes can point to the same physical page.

f) Round-robin scheduling always has a higher average response time than shortest-job-first.

False. If all jobs have the same size and the time quantum is larger than the job size, then RR and SJF have the same average response time.

g) One way to respond to thrashing is to kill a process.

True. By killing a process we free its memory, giving the other processes a chance to fit their working sets in memory.

h) With uniprogramming, applications can access any physical address.

True. With uniprogramming there is a single application running on the machine and no memory protection.

P2 (20 points) Demand Paging: Consider the following sequence of page accesses: A, B, D, C, B, A, B, A

a) (10 points) Fill in the table below assuming the FIFO, LRU, and MIN page replacement policies. There are 3 frames of physical memory. The top row indicates the frame number, and each entry below contains the page that resides in the corresponding physical frame after each memory reference (we have already filled in the row corresponding to accessing A). For readability purposes, only fill in the table entries that have changed and leave unchanged entries blank. If there are several pages that meet the replacement criteria, break ties using lexicographical order, i.e., A takes priority over B, B over C, and C over D.

Ref        FIFO             LRU              MIN
         F1  F2  F3       F1  F2  F3       F1  F2  F3
A        A                A                A
B            B                B                B
D                D                D                D
C        C                C                        C
B
A            A                    A
B                B
A

# Page Faults: FIFO = 6, LRU = 5, MIN = 4
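The fault counts in the table above can be cross-checked with a small simulator. The sketch below is illustrative and not part of the exam (the function name `page_faults` is ours); it implements all three policies with the lexicographic tie-break stated in the problem:

```python
def page_faults(refs, frames, policy):
    """Count page faults for FIFO, LRU, or MIN on a reference string."""
    resident = []   # pages currently in physical memory
    loaded = {}     # page -> time it was brought in (for FIFO)
    used = {}       # page -> time of most recent use (for LRU)
    faults = 0
    for t, page in enumerate(refs):
        used[page] = t
        if page in resident:
            continue                        # hit: nothing else to do
        faults += 1
        if len(resident) < frames:
            resident.append(page)           # free frame available
        else:
            if policy == "FIFO":
                victim = min(resident, key=lambda p: loaded[p])
            elif policy == "LRU":
                victim = min(resident, key=lambda p: used[p])
            else:  # MIN: evict the page whose next use is farthest away
                future = refs[t + 1:]
                dist = {p: future.index(p) if p in future else len(future)
                        for p in resident}
                # lexicographic tie-break, as the problem specifies
                victim = sorted(resident, key=lambda p: (-dist[p], p))[0]
            resident[resident.index(victim)] = page
        loaded[page] = t

    return faults

# page_faults("ABDCBABA", 3, "FIFO") -> 6
# page_faults("ABDCBABA", 3, "LRU")  -> 5
# page_faults("ABDCBABA", 3, "MIN")  -> 4
```

Running it on the access sequence A, B, D, C, B, A, B, A with 3 frames reproduces the 6 / 5 / 4 fault counts in the table.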

b) (4 points) Give a sequence of page accesses in which LRU will generate a page fault on every access.

With only 3 frames of physical memory, the pattern A, B, C, D, A, B, C, D, ... generates a page fault on every access: each reference evicts exactly the page that will be needed next.

c) (3 points) Demand paging can be thought of as using main memory as a cache for disk. Fill in the properties of this cache:
a. Associativity: fully associative
b. Write-through or write-back? Write-back
c. Block size (assume 32-bit addresses, 4-byte words, and 4KB pages)? 4KB, i.e., one page

d) (3 points) The Nth-chance replacement algorithm relies on a parameter N. Why might one choose a large N value? Why might one choose a small N value?

Choose a large N value for a better approximation of LRU. Choose a small N value for efficiency: with a large N, evicting a page can take a long time because the clock hand may need to sweep around many times.

P3 (20 points) Caching: Assume an 8KB cache with 32B blocks, on a machine that uses 32-bit virtual and physical addresses.

a) (6 points) Specify the size and name of each field (i.e., cache tag, cache index, and byte select / offset) in the physical address for the following cache types:

a. Direct mapped (8KB / 32B = 256 blocks, so 256 sets):
   Tag = 19 bits [31:13], Index = 8 bits [12:5], Offset = 5 bits [4:0]

b. Fully associative (a block can go anywhere, so there is no index):
   Tag = 27 bits [31:5], Offset = 5 bits [4:0]

c. Four-way set associative (256 blocks / 4 ways = 64 sets):
   Tag = 21 bits [31:11], Index = 6 bits [10:5], Offset = 5 bits [4:0]
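The three field layouts follow mechanically from the cache geometry. As an illustrative sketch (the helper name `cache_fields` is ours; power-of-two sizes are assumed), the widths can be computed like this:

```python
def cache_fields(cache_bytes, block_bytes, ways, addr_bits=32):
    """Return (tag_bits, index_bits, offset_bits) for a set-associative cache."""
    offset_bits = (block_bytes - 1).bit_length()     # log2(block size)
    num_sets = cache_bytes // block_bytes // ways    # blocks per way
    index_bits = (num_sets - 1).bit_length()         # log2(number of sets)
    return addr_bits - index_bits - offset_bits, index_bits, offset_bits

# cache_fields(8192, 32, 1)   -> (19, 8, 5)  direct mapped
# cache_fields(8192, 32, 256) -> (27, 0, 5)  fully associative (no index)
# cache_fields(8192, 32, 4)   -> (21, 6, 5)  four-way set associative
```

For the fully associative case the number of ways equals the number of blocks, so the index collapses to 0 bits, matching the tag/offset-only layout above.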

b) (11 points) You've finished implementing your cache, which ended up being a 2-way set associative cache that uses an LRU replacement policy. To test it out, you try the following sequence of reads and writes:

read from 0x705F3140
write 0x1 to 0x705F3140
write 0x2 to 0x705F3150
write 0x2 to 0x705F3148
write 0x3 to 0x707A2150
write 0x3 to 0x035F2154
read from 0x705F3140

Recall that caching happens at the block level, i.e., the unit of transfer between cache and main memory is one block. Also, assume that a write leads to caching the data, the same as a read. Initially, assume the cache is empty. Please answer the following questions:

a. (5 points) How many misses does the above access pattern exhibit?

For a 2-way set associative cache (8KB / 32B = 256 blocks, 128 sets) we have:
Tag = 20 bits [31:12], Index = 7 bits [11:5], Offset = 5 bits [4:0]

These are the tag/index/offset for each address:
0x705F3140: (tag = 0x705F3, index = 0xA, offset = 0x0)
0x705F3150: (tag = 0x705F3, index = 0xA, offset = 0x10)
0x705F3148: (tag = 0x705F3, index = 0xA, offset = 0x8)
0x707A2150: (tag = 0x707A2, index = 0xA, offset = 0x10)
0x035F2154: (tag = 0x035F2, index = 0xA, offset = 0x14)

Since all addresses have the same index, all accesses fall in the same associative set.

read from 0x705F3140    # cache block (tag = 0x705F3, index = 0xA)
write 0x1 to 0x705F3140 # hit block (tag = 0x705F3, index = 0xA)
write 0x2 to 0x705F3150 # hit block (tag = 0x705F3, index = 0xA)
write 0x2 to 0x705F3148 # hit block (tag = 0x705F3, index = 0xA)
write 0x3 to 0x707A2150 # cache block (tag = 0x707A2, index = 0xA)
write 0x3 to 0x035F2154 # cache block (tag = 0x035F2, index = 0xA)
                        # evict block (tag = 0x705F3, index = 0xA) per LRU
read from 0x705F3140    # cache block (tag = 0x705F3, index = 0xA)
                        # evict block (tag = 0x707A2, index = 0xA) per LRU
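The hand trace can be double-checked with a minimal simulator of this 2-way set-associative LRU cache (an illustrative sketch, not the exam's reference solution; writes allocate the block, matching the problem's assumption, and dirty evictions are tracked for the write-back question):

```python
def run_trace(trace):
    """Simulate an 8KB, 2-way, 32B-block LRU cache; trace is (op, addr) pairs."""
    sets = {}                          # index -> list of [tag, dirty], LRU first
    misses = writebacks = 0
    for op, addr in trace:
        index = (addr >> 5) & 0x7F     # 7-bit index above the 5-bit offset
        tag = addr >> 12               # remaining 20 bits
        ways = sets.setdefault(index, [])
        for line in ways:
            if line[0] == tag:         # hit: move line to MRU position
                ways.remove(line)
                ways.append(line)
                break
        else:                          # miss: allocate the block
            misses += 1
            if len(ways) == 2:         # set full: evict the LRU line
                victim = ways.pop(0)
                if victim[1]:
                    writebacks += 1    # dirty eviction writes to memory
            ways.append([tag, False])
        if op == "w":
            ways[-1][1] = True         # mark the accessed (MRU) line dirty

    return misses, writebacks

trace = [("r", 0x705F3140), ("w", 0x705F3140), ("w", 0x705F3150),
         ("w", 0x705F3148), ("w", 0x707A2150), ("w", 0x035F2154),
         ("r", 0x705F3140)]
```

The simulator reports 4 misses and 2 dirty evictions for this trace, agreeing with the analysis.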

Every "cache block" action above is a miss, so we have 4 misses.

b. (3 points) Assume the cache is a write-through cache. How many writes take place between the cache and memory?

Every write results in a write to main memory, so we have 5 writes to memory.

c. (3 points) Assume the cache is a write-back cache. How many writes take place between the cache and memory?

We write to memory only when we evict a dirty block; there are two such evictions above, so we have only 2 writes to memory.

c) (4 points) Now consider that your system has 5ns of latency when accessing the cache and 70ns when accessing main memory. What should your application's average hit rate be in order to have an average memory access latency of 15ns?

Let HR be the hit rate, 0 <= HR <= 1. A miss costs the 5ns cache access plus the 70ns memory access, i.e., 75ns:
HR * 5ns + (1 - HR) * 75ns = 15ns
75 - 70 * HR = 15

70 * HR = 60, so HR = 60/70, i.e., approximately 86%.

P4 (22 points) Address Translation: Consider a computer with 16-bit virtual and physical addresses. Address translation is implemented by a two-level scheme combining segmentation and paging. The page size is 256 bytes.

Virtual address format: 2b (segment #) | 6b (page #) | 8b (offset)

Segment table (Base Address specifies the address of the page table associated with the segment, and Limit specifies the total number of bytes in the segment):

Segment ID   Base Address   Limit
0x0          0x4000         0x1000
0x1          0xC000         0x2000
0x2          0x8000         0x0200

Page table at 0x4000 (segment 0):
Page #: 0x20, 0x30, 0x31

Page table at 0x8000 (segment 2):
Page #: 0xC0, 0xC1

Page table at 0xC000 (segment 1):

Page #: 0x40, 0x44

a) (6 points) What is the physical address corresponding to virtual address 0x4125?

0x4125 = 0100 0001 0010 0101
Segment # = 01, so the page table is at 0xC000.
Page # = 000001, so the physical page # = 0x44.
The physical address is then 0x4425.

b) (12 points) Consider the following assembly code computing a string's length:

0x8150          .data
        str:    .asciiz "Hello World"
0x01F4  main:   li $t1, 0        # $t1 is the counter; set it to 0
0x01F8          la $t0, str      # load address (la) of str into $t0
0x01FC  cnt:    lb $t2, 0($t0)   # load next byte into $t2 from address in $t0
0x0200          beqz $t2, end    # if $t2 == 0 then goto label end
0x0204          add $t0, $t0, 1  # else increment the address
0x0208          add $t1, $t1, 1  # and increment the counter
0x020C          j cnt            # goto cnt
0x0210  end:

Fill in the following table after executing each of the first 8 instructions in the above code. The program counter (PC) is initialized to 0x01F4, i.e., the execution starts with instruction li $t1, 0. (Only entries that change are shown.)

Instruction        Physical address   $t0       $t1   $t2
li $t1, 0          0x30F4                       0
la $t0, str        0x30F8             0x8150
lb $t2, 0($t0)     0x30FC                             'H'
beqz $t2, end      0x3100
add $t0, $t0, 1    0x3104             0x8151
add $t1, $t1, 1    0x3108                       1
j cnt              0x310C
lb $t2, 0($t0)     0x30FC                             'e'

c) (4 points) Assume that instead of "Hello World" we have a 200-byte long string. Are there any changes we need to make to the segment or page tables to support the 200-byte string? If yes, use one sentence to describe the change. (The code is unchanged and addresses for each instruction and data remain the same.)

The string starts at 0x8150 = 1000 0001 0101 0000.
Segment # = 10, so the page table address = 0x8000.
Page # = 1, so the physical page # = 0xC1.
Since the offset is 0x50 (i.e., 80) and the page size is 256 bytes, the 200-byte string does not fit in the page, so we need to allocate an extra physical page.

Note: we did not give any credit for answers stating that you need to increase the segment limit, since the segment limit is large enough: here the limit refers to the number of pages in the segment and not to bytes.
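The two-level translation used throughout this problem can be written out directly. The sketch below is illustrative (the names `translate`, `PAGE_TABLES`, and `SEGMENT_BASE` are ours, and segment-limit checking is omitted for brevity); it encodes the segment and page tables given above:

```python
# P4's 16-bit virtual addresses: 2-bit segment #, 6-bit page #, 8-bit offset.
PAGE_TABLES = {                    # page-table base -> list of physical page #s
    0x4000: [0x20, 0x30, 0x31],    # segment 0
    0xC000: [0x40, 0x44],          # segment 1
    0x8000: [0xC0, 0xC1],          # segment 2
}
SEGMENT_BASE = {0: 0x4000, 1: 0xC000, 2: 0x8000}   # segment # -> table base

def translate(va):
    """Translate a 16-bit virtual address to a physical address (no limit check)."""
    seg = va >> 14                 # top 2 bits select the segment
    page = (va >> 8) & 0x3F        # next 6 bits index the page table
    offset = va & 0xFF             # low 8 bits pass through (256-byte pages)
    phys_page = PAGE_TABLES[SEGMENT_BASE[seg]][page]
    return (phys_page << 8) | offset

# translate(0x4125) -> 0x4425   (part a)
# translate(0x01F4) -> 0x30F4   (first instruction fetch in part b)
```

Applying it to 0x4125 reproduces part (a)'s answer 0x4425, and applying it to the instruction addresses 0x01F4 through 0x0210 reproduces the physical-address column in part (b).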

P5 (14 points) Banker's Algorithm:

a) (6 points) Suppose that we have the following resources: A, B, C and threads T1, T2, T3, T4. The total number of instances for each resource is:

Total:  A = 11, B = 21, C = 19

Further, assume that the threads have the following maximum requirements and current allocations:

            Current Allocation   Maximum Allocation
Thread ID     A    B    C          A    B    C
T1            2    3   10          4   10   19
T2            1    6    3          5    9    5
T3            4    7    3          9   13    6
T4            2    3    1          4    5    3

Is the system in a safe state? If yes, show a non-blocking sequence of thread executions. Otherwise, provide a proof that the system is unsafe. Show your work and justify each step of your answer.

Yes. The allocated totals are (9, 19, 17), so Available = (2, 2, 2), and the remaining needs (Max minus Allocation) are T1 = (2, 7, 9), T2 = (4, 3, 2), T3 = (5, 6, 3), T4 = (2, 2, 2). Only T4's need fits in (2, 2, 2); after T4 finishes, Available = (4, 5, 3), which satisfies T2; after T2, Available = (5, 11, 6), which satisfies T3; after T3, Available = (9, 18, 9), which satisfies T1. A safe sequence is therefore: T4 -> T2 -> T3 -> T1.
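The safety check above can be run mechanically. The sketch below is illustrative (the function name `safe_sequence` is ours; among eligible threads it picks the lowest-numbered one, which happens to reproduce the sequence given in the solution):

```python
def safe_sequence(total, alloc, maxneed):
    """Banker's safety check: return a safe thread order, or None if unsafe."""
    avail = [t - sum(a[i] for a in alloc.values())
             for i, t in enumerate(total)]                 # free instances
    need = {tid: [m - a for m, a in zip(maxneed[tid], alloc[tid])]
            for tid in alloc}                              # Max - Allocation
    order, remaining = [], set(alloc)
    while remaining:
        runnable = [tid for tid in sorted(remaining)
                    if all(n <= v for n, v in zip(need[tid], avail))]
        if not runnable:
            return None                 # unsafe: no thread can run to completion
        tid = runnable[0]
        avail = [v + a for v, a in zip(avail, alloc[tid])] # release resources
        order.append(tid)
        remaining.remove(tid)

    return order

alloc   = {"T1": (2, 3, 10), "T2": (1, 6, 3),
           "T3": (4, 7, 3),  "T4": (2, 3, 1)}
maxneed = {"T1": (4, 10, 19), "T2": (5, 9, 5),
           "T3": (9, 13, 6),  "T4": (4, 5, 3)}

# safe_sequence((11, 21, 19), alloc, maxneed) -> ["T4", "T2", "T3", "T1"]
# safe_sequence((11, 20, 19), alloc, maxneed) -> None  (part b's variant)
```

With the given totals it finds the safe sequence T4, T2, T3, T1; reducing B's total to 20 (as in part b) leaves no runnable thread at the first step, so it reports the state unsafe.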

b) (4 points) Repeat question (a) if the total number of B instances is 20 instead of 21.

No, there is no safe sequence: Available becomes (2, 1, 2), and every thread still needs at least 2 more instances of B, so no thread can run to completion.

c) (4 points) Give one reason, using no more than two sentences, why you might decide not to use Banker's algorithm to avoid deadlock.

Banker's algorithm is slow: it needs to re-evaluate safety on every request. It is also hard to accurately quantify how many resources a system has, or how many each process will actually need.
