CS162 Operating Systems and Systems Programming Lecture 14. Caching (Finished), Demand Paging


CS162 Operating Systems and Systems Programming
Lecture 14: Caching (Finished), Demand Paging
October 11th, 2017
Neeraja J. Yadwadkar
http://cs162.eecs.berkeley.edu

Recall: Caching Concept
Cache: a repository for copies that can be accessed more quickly than the original
- Make the frequent case fast and the infrequent case less dominant
- Caching underlies many techniques used today to make computers fast
- Can cache: memory locations, address translations, pages, file blocks, file names, network routes, etc.
Lec 14.2

Recall: In Machine Structures (e.g., 61C), caching is the key to memory system performance
[Figure: a processor accessing main memory (DRAM) directly sees a 100 ns access time; inserting a second-level cache (SRAM, 10 ns) between them hides most of that latency]
Average Access Time = (Hit Rate x Hit Time) + (Miss Rate x Miss Time), where Hit Rate + Miss Rate = 1
- Hit Rate = 90% => Avg. Access Time = (0.9 x 10) + (0.1 x 100) = 19 ns
- Hit Rate = 99% => Avg. Access Time = (0.99 x 10) + (0.01 x 100) = 10.9 ns
Lec 14.3
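To make the arithmetic concrete, here is a minimal sketch of the AMAT formula (the function name amat and the example numbers from the slide are purely illustrative):

```c
#include <stdio.h>

/* Average Memory Access Time: hit_rate is a fraction in [0, 1];
 * hit and miss times are in nanoseconds. */
double amat(double hit_rate, double hit_time_ns, double miss_time_ns) {
    return hit_rate * hit_time_ns + (1.0 - hit_rate) * miss_time_ns;
}

int main(void) {
    printf("90%% hit rate: %.1f ns\n", amat(0.90, 10.0, 100.0)); /* 19.0 */
    printf("99%% hit rate: %.1f ns\n", amat(0.99, 10.0, 100.0)); /* 10.9 */
    return 0;
}
```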

Caching Applied to Address Translation
[Figure: the CPU issues a virtual address; on a TLB hit the cached physical address goes straight to physical memory, on a miss the MMU translates first; the data read or write itself then proceeds untranslated]
The question is one of page locality: does it exist?
- Instruction accesses spend a lot of time on the same page (since accesses are sequential)
- Stack accesses have definite locality of reference
- Data accesses have less page locality, but still some
Can we have a TLB hierarchy? Sure: multiple levels at different sizes/speeds
Lec 14.4

Recall: What Actually Happens on a TLB Miss?
Software-traversed page tables (like MIPS):
- On a TLB miss, the processor receives a TLB fault
- The kernel traverses the page table to find the PTE
» If the PTE is valid, it fills the TLB and returns from the fault
» If the PTE is marked invalid, it internally calls the page fault handler
Hardware-traversed page tables:
- On a TLB miss, hardware in the MMU looks at the current page table to fill the TLB (may walk multiple levels)
» If the PTE is valid, the hardware fills the TLB and the processor never knows
» If the PTE is marked invalid, this causes a page fault, after which the kernel decides what to do
Most chipsets provide hardware traversal
Modern operating systems tend to have more TLB faults since they use translation for many things. Examples:
» shared segments
» user-level portions of an operating system
Lec 14.5
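For the software-traversed case, the refill path can be sketched as below. This is a minimal illustration of a MIPS-style handler; pte_t, current_page_table, tlb_fill, and page_fault_handler are hypothetical names, not a real kernel API:

```c
#define PTE_VALID 0x1u

typedef unsigned int pte_t;

extern pte_t *current_page_table;               /* flat table, for simplicity    */
extern void tlb_fill(unsigned vpn, pte_t pte);  /* write one entry into the TLB  */
extern void page_fault_handler(unsigned vpn);   /* slow path: page not in memory */

/* Invoked on a TLB fault; returning from the exception
 * restarts the faulting instruction. */
void tlb_miss_handler(unsigned faulting_vaddr) {
    unsigned vpn = faulting_vaddr >> 12;        /* 4 KB pages: drop the offset */
    pte_t pte = current_page_table[vpn];

    if (pte & PTE_VALID)
        tlb_fill(vpn, pte);      /* fast path: load TLB and return        */
    else
        page_fault_handler(vpn); /* invalid PTE: escalate to a page fault */
}
```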

Transparent Exceptions: TLB/Page Fault (1/2)
[Figure: timeline of two faulting instructions; each traps from user mode into the OS, which either software-loads the TLB or fetches the page and then loads the TLB, before restarting the instruction]
How do we transparently restart faulting instructions? (Consider a load or store that gets a TLB or page fault)
- Could we just skip the faulting instruction?
» No: we need to perform the load or store after reconnecting the physical page
Lec 14.6

Transparent Exceptions: TLB/Page Fault (2/2)
[Figure: same timeline as the previous slide]
Hardware must help out by saving:
- The faulting instruction and partial state
» Need to know which instruction caused the fault
» Is a single PC sufficient to identify the faulting position?
- Processor state: sufficient to restart the user thread
» Save/restore registers, stack, etc.
What if an instruction has side effects?
Lec 14.7

Consider Weird Things That Can Happen
What if an instruction has side effects? Options:
» Unwind side effects (easy to restart)
» Finish off side effects (messy!)
Example 1: mov (sp)+, 10
» What if the page fault occurs on the write through the stack pointer?
» Did sp get incremented before or after the page fault?
Example 2: strcpy (r1), (r2)
» Source and destination overlap: can't unwind in principle!
» IBM S/370 and VAX solution: execute the instruction twice, once read-only
What about RISC processors? For instance, delayed branches?
» Example: bne somewhere / ld r1, (sp)
» Precise exception state consists of two PCs: PC and nPC (next PC)
Delayed exceptions:
» Example: div r1, r2, r3 / ld r1, (sp)
» What if it takes many cycles to discover the divide by zero, but the load has already caused a page fault?
Lec 14.8

Precise Exceptions
Precise ⇒ the state of the machine is preserved as if the program executed up to the offending instruction
- All previous instructions completed
- The offending instruction and all following instructions act as if they have not even started
- The same system code will work on different implementations
- Difficult in the presence of pipelining, out-of-order execution, ...
- MIPS takes this position
Imprecise ⇒ system software has to figure out what is where and put it all back together
- Performance goals often lead to forsaking precise interrupts
» system software developers, users, markets, etc. usually wish they had not done this
- Modern techniques for out-of-order execution and branch prediction help implement precise interrupts
Lec 14.9

Recall: TLB Organization
[Figure: CPU → TLB → Cache → Memory]
The TLB needs to be really fast
- It is on the critical path of memory access
» In the simplest view: before the cache
» Thus, it adds to access time (reducing cache speed)
- Seems to argue for direct-mapped or low associativity
However, it needs to have very few conflicts!
- With a TLB, the miss time is extremely high!
- This argues that the cost of a conflict (miss time) is much higher than the slightly increased cost of access (hit time)
- Thrashing: continuous conflicts between accesses
What if we use the low-order bits of the page number as the index into the TLB?
» The first pages of code, data, and stack may map to the same entry
» Need 3-way associativity at least?
What if we use the high-order bits as the index?
» TLB mostly unused for small programs
Lec 14.10
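To see why low-order-bit indexing invites conflicts, here is a tiny sketch; the 4 KB page size, the 64-entry direct-mapped TLB, and the example addresses are all illustrative assumptions:

```c
#include <stdio.h>

#define PAGE_SHIFT  12   /* 4 KB pages (assumption)                */
#define TLB_ENTRIES 64   /* direct-mapped, 64 entries (assumption) */

/* Index = low-order bits of the virtual page number. */
unsigned tlb_index(unsigned vaddr) {
    return (vaddr >> PAGE_SHIFT) % TLB_ENTRIES;
}

int main(void) {
    /* Regions whose starting VPNs share their low-order bits all
     * land in the same entry; here all three map to entry 0. */
    printf("code  0x00400000 -> entry %u\n", tlb_index(0x00400000));
    printf("data  0x10000000 -> entry %u\n", tlb_index(0x10000000));
    printf("stack 0x7ffc0000 -> entry %u\n", tlb_index(0x7ffc0000));
    return 0;
}
```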

Reducing Translation Time Further
As described, the TLB lookup is in series with the cache lookup:
[Figure: virtual address = virtual page no. + 10-bit offset; the TLB lookup produces a valid bit, access rights, and the physical page no., which is concatenated with the offset to form the physical address]
Machines with TLBs go one step further: they overlap the TLB lookup with the cache access
- This works because the offset is available early
Lec 14.11

Overlapping TLB & Cache Access (1/2)
Main idea:
- The offset in the virtual address exactly covers the cache index and byte select
- Thus we can select the cached byte(s) in parallel with performing the address translation
[Figure: virtual address = virtual page # | offset; physical address = tag/page # | index | byte]
Lec 14.12

Overlapping TLB & Cache Access (2/2)
Here is how this might work with a 4K cache:
[Figure: a 32-bit address split as a 20-bit page #, a 10-bit index, and a 2-bit displacement (plus "00" byte alignment); the TLB does an associative lookup on the page # while the 4K cache (1K lines of 4 bytes) is indexed in parallel; the frame number (FN) from the TLB is compared against the cache tag to produce hit/miss]
What if the cache size is increased to 8 KB?
- Overlap is not complete
- Need to do something else. See CS152/252
Another option: virtual caches
- Tags in the cache are virtual addresses
- Translation only happens on cache misses
Lec 14.13
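To make the bit arithmetic concrete, here is a small sketch with the field widths from the slide (the struct and function names are made up): the cache index and byte select fall entirely within the page offset, so they can be extracted before the TLB finishes translating:

```c
#include <stdio.h>

/* 32-bit address, 4 KB pages: 20-bit page #, 12-bit offset.
 * 4 KB direct-mapped cache with 4-byte lines: 10-bit index + 2-bit
 * byte select = 12 bits = the page offset, so the cache can be
 * indexed in parallel with the TLB lookup. */
typedef struct {
    unsigned page_num;  /* bits 31..12: translated by the TLB    */
    unsigned index;     /* bits 11..2:  cache line index         */
    unsigned byte_sel;  /* bits  1..0:  byte within the 4 B line */
} addr_fields;

addr_fields split(unsigned vaddr) {
    addr_fields f;
    f.page_num = vaddr >> 12;
    f.index    = (vaddr >> 2) & 0x3FF;
    f.byte_sel = vaddr & 0x3;
    return f;
}

int main(void) {
    addr_fields f = split(0xDEADBEEF);
    printf("page#=0x%05X index=0x%03X byte=%u\n",
           f.page_num, f.index, f.byte_sel);
    return 0;
}
```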

Putting Everything Together: Address Translation
[Figure: virtual address = P1 index | P2 index | offset; PageTablePtr locates the 1st-level page table, whose entry (selected by P1) points at a 2nd-level page table; the entry selected by P2 yields the physical page #, which combines with the offset to form the physical address into physical memory]
Lec 14.14
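The walk the figure depicts can be sketched in C. This is a minimal illustration assuming a 10/10/12 address split (as on the x86 example later in the lecture); pte_t, PTE_VALID, and translate are made-up names, permission checks are omitted, and physical addresses are treated as directly addressable for simplicity:

```c
#include <stdint.h>
#include <stdbool.h>

#define PTE_VALID 0x1u

typedef uint32_t pte_t;

/* Walk a two-level page table; returns false if either level's
 * entry is invalid (i.e., the access would page-fault). */
bool translate(pte_t *level1, uint32_t vaddr, uint32_t *paddr) {
    uint32_t p1     = (vaddr >> 22) & 0x3FFu;  /* 1st-level index */
    uint32_t p2     = (vaddr >> 12) & 0x3FFu;  /* 2nd-level index */
    uint32_t offset = vaddr & 0xFFFu;          /* 12-bit offset   */

    pte_t pde = level1[p1];
    if (!(pde & PTE_VALID)) return false;      /* page fault */

    pte_t *level2 = (pte_t *)(uintptr_t)(pde & ~0xFFFu);
    pte_t pte = level2[p2];
    if (!(pte & PTE_VALID)) return false;      /* page fault */

    *paddr = (pte & ~0xFFFu) | offset;         /* frame # | offset */
    return true;
}
```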

Putting Everything Together: TLB
[Figure: the same two-level translation as the previous slide, with a TLB added; the TLB caches virtual page # → physical page # mappings, so a hit skips both page table lookups]
Lec 14.15

Putting Everything Together: Cache
[Figure: the same as the previous slide, now with a physically indexed cache; the physical address is split into tag, index, and byte to locate the block in the cache]
Lec 14.16

Next Up: What Happens When There Is a Page Fault?
[Figure: the process issues an instruction with a virtual address; the MMU splits it into page # and offset and consults the page table (PT); if the entry is valid, frame # + offset form the physical address; otherwise an exception (page fault) goes to the operating system's page fault handler, which loads the page from disk, updates the PT entry, and has the scheduler retry the instruction]
Lec 14.17

BREAK Lec 14.18

Where Are All the Places That Caching Arises in OSes?
Direct use of caching techniques:
- TLB (cache of PTEs)
- Paged virtual memory (memory as cache for disk)
- File systems (cache disk blocks in memory)
- DNS (cache hostname => IP address translations)
- Web proxies (cache recently accessed pages)
Which pages to keep in memory?
- The all-important policy aspect of virtual memory
- Will spend a bit more time on this in a moment
Lec 14.19

Impact of Caches on Operating Systems (1/2)
Indirect: dealing with cache effects (e.g., syncing state across levels)
- Maintaining the correctness of various caches
- E.g., TLB consistency:
» With the PT across context switches?
» Across updates to the PT?
Process scheduling:
- Which and how many processes are active? Priorities?
- Large memory footprints versus small ones?
- Shared pages mapped into the VAS of multiple processes?
Lec 14.20

Impact of Caches on Operating Systems (2/2)
Impact of thread scheduling on cache performance:
- Rapid interleaving of threads (small quantum) may degrade cache performance
» Increases average memory access time (AMAT)!
Designing operating system data structures for cache performance
Lec 14.21

Working Set Model
As a program executes, it transitions through a sequence of working sets consisting of varying-sized subsets of the address space
[Figure: scatter of memory references, address vs. time, showing clustered phases]
Lec 14.22

Cache Behavior Under the WS Model
[Figure: hit rate (0 to 1) vs. cache size; the hit rate jumps up once the new working set fits]
- Amortized by the fraction of time the working set is active
- Transitions from one WS to the next
- Capacity, conflict, compulsory misses
- Applicable to memory caches and pages. Others?
Lec 14.23

Another Model of Locality: Zipf
P_access(rank) = 1/rank^a
[Figure: popularity (% of accesses) falls off sharply with rank for a = 1, while the estimated hit rate of a cache grows slowly with the number of top-ranked items cached]
- Likelihood of accessing an item of rank r is ∝ 1/r^a
- Although it is rare to access items below the top few, there are so many of them that this yields a heavy-tailed distribution
- Substantial value from even a tiny cache
- Substantial misses from even a very large cache
Lec 14.24
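Both of the last two bullets follow from the shape of the harmonic series. Here is a small sketch under the slide's assumptions (a = 1, and a cache holding the k most popular of n items):

```c
#include <stdio.h>

/* Under Zipf with exponent a = 1, P(rank r) is proportional to 1/r.
 * A cache holding the k most popular of n items therefore has
 * hit rate = (sum of 1/r for r <= k) / (sum of 1/r for r <= n). */
double zipf_hit_rate(int cache_size, int num_items) {
    double top = 0.0, total = 0.0;
    for (int r = 1; r <= num_items; r++) {
        total += 1.0 / r;
        if (r <= cache_size)
            top += 1.0 / r;
    }
    return top / total;
}

int main(void) {
    int n = 1000000;
    /* A tiny cache already captures roughly 20% of accesses... */
    printf("k = 10:     hit rate %.2f\n", zipf_hit_rate(10, n));
    /* ...but caching 10% of all items still misses roughly 16%. */
    printf("k = 100000: hit rate %.2f\n", zipf_hit_rate(100000, n));
    return 0;
}
```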

Demand Paging
Modern programs require a lot of physical memory
- Memory per system is growing faster than 25%-30% per year
But they don't use all their memory all of the time
- 90-10 rule: programs spend 90% of their time in 10% of their code
- Wasteful to require all of a user's code to be in memory
Solution: use main memory as a cache for disk
[Figure: the memory hierarchy from the processor (datapath/control, on-chip cache) through the second-level cache (SRAM), main memory (DRAM), secondary storage (disk), and tertiary storage (tape); each level caches the one below it]
Lec 14.25

Illusion of Infinite Memory (1/2)
[Figure: the TLB and page table map 4 GB of virtual memory onto 512 MB of physical memory, backed by a 500 GB disk]
Disk is larger than physical memory ⇒
- In-use virtual memory can be bigger than physical memory
- Combined memory of running processes can be much larger than physical memory
» More programs fit into memory, allowing more concurrency
Lec 14.26

Illusion of Infinite Memory (2/2)
[Figure: same as the previous slide]
Principle: transparent level of indirection (the page table)
- Supports flexible placement of physical data
» Data could be on disk or somewhere across the network
- Variable location of data is transparent to the user program
» Performance issue, not a correctness issue
Lec 14.27

Since Demand Paging is Caching, We Must Ask:
- What is the block size? 1 page
- What is the organization of this cache (i.e., direct-mapped, set-associative, fully associative)? Fully associative: arbitrary virtual → physical mapping
- How do we find a page in the cache when we look for it? First check the TLB, then do a page-table traversal
- What is the page replacement policy? (e.g., LRU, random) This requires more explanation (kinda LRU)
- What happens on a miss? Go to the lower level to fill the miss (i.e., disk)
- What happens on a write? (write-through, write-back) Definitely write-back: we need a dirty bit!
Lec 14.28

Recall: What is in a Page Table Entry?
What is in a page table entry (PTE)?
- Pointer to the next-level page table or to the actual page
- Permission bits: valid, read-only, read-write, write-only
Example: Intel x86 architecture PTE
- Address is the same format as the previous slide (10, 10, 12-bit offset)
- Intermediate page tables are called directories

Bits 31-12: Page Frame Number (physical page number)
Bits 11-9: Free (for OS use)
Bit 8: 0
Bit 7: L
Bit 6: D
Bit 5: A
Bit 4: PCD
Bit 3: PWT
Bit 2: U
Bit 1: W
Bit 0: P

- P: Present (same as the valid bit in other architectures)
- W: Writeable
- U: User accessible
- PWT: Page write transparent: external cache write-through
- PCD: Page cache disabled (page cannot be cached)
- A: Accessed: page has been accessed recently
- D: Dirty (PTE only): page has been modified recently
- L: L = 1 ⇒ 4 MB page (directory only); the bottom 22 bits of the virtual address serve as the offset
Lec 14.29
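A small decoding sketch using the bit layout above (the masks follow the 32-bit x86 PTE format; the print_pte helper and its example value are just illustrations):

```c
#include <stdint.h>
#include <stdio.h>

/* Bit masks for the 32-bit x86 PTE layout shown above. */
#define PTE_P   (1u << 0)        /* Present              */
#define PTE_W   (1u << 1)        /* Writeable            */
#define PTE_U   (1u << 2)        /* User accessible      */
#define PTE_PWT (1u << 3)        /* Write-through        */
#define PTE_PCD (1u << 4)        /* Cache disabled       */
#define PTE_A   (1u << 5)        /* Accessed             */
#define PTE_D   (1u << 6)        /* Dirty                */
#define PTE_L   (1u << 7)        /* 4 MB page (dir only) */
#define PTE_PFN_MASK 0xFFFFF000u /* frame #: bits 31-12  */

void print_pte(uint32_t pte) {
    printf("frame 0x%05X:%s%s%s%s\n",
           (pte & PTE_PFN_MASK) >> 12,
           (pte & PTE_P) ? " present"  : " not-present",
           (pte & PTE_W) ? " writable" : "",
           (pte & PTE_U) ? " user"     : "",
           (pte & PTE_D) ? " dirty"    : "");
}

int main(void) {
    print_pte(0x12345067); /* frame 0x12345 with P, W, U, A, D set */
    return 0;
}
```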

Demand Paging Mechanisms
The PTE helps us implement demand paging
- Valid ⇒ page is in memory, PTE points at the physical page
- Not valid ⇒ page is not in memory; use the info in the PTE to find it on disk when necessary
Suppose the user references a page with an invalid PTE?
- The Memory Management Unit (MMU) traps to the OS
» The resulting trap is a page fault
What does the OS do on a page fault? (see the sketch below)
» Choose an old page to replace
» If the old page was modified (D = 1), write its contents back to disk
» Change its PTE and any cached TLB entry to be invalid
» Load the new page into memory from disk
» Update the page table entry, invalidate the TLB for the new entry
» Continue the thread from the original faulting location
The TLB entry for the new page will be loaded when the thread continues!
While pulling pages off disk for one process, the OS runs another process from the ready queue
» The suspended process sits on a wait queue
Lec 14.30
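Putting those steps together, here is a heavily simplified sketch of the handler's control flow; every helper (choose_victim, write_to_disk, read_from_disk, tlb_invalidate, resume_faulting_thread) is a hypothetical stand-in for real kernel machinery:

```c
#define PTE_VALID 0x1u
#define PTE_DIRTY 0x2u

typedef struct {
    unsigned flags;       /* valid, dirty, ...            */
    unsigned frame;       /* physical frame # when valid  */
    unsigned disk_block;  /* where the page lives on disk */
} pte_t;

extern pte_t *choose_victim(void);  /* replacement policy lives here */
extern void write_to_disk(unsigned frame, unsigned block);
extern void read_from_disk(unsigned frame, unsigned block);
extern void tlb_invalidate(pte_t *pte);
extern void resume_faulting_thread(void);

void page_fault_handler(pte_t *faulting_pte) {
    pte_t *victim = choose_victim();          /* 1. pick an old page     */

    if (victim->flags & PTE_DIRTY)            /* 2. write back if D = 1  */
        write_to_disk(victim->frame, victim->disk_block);

    victim->flags &= ~PTE_VALID;              /* 3. invalidate PTE + TLB */
    tlb_invalidate(victim);

    unsigned frame = victim->frame;           /* reuse the freed frame   */
    read_from_disk(frame, faulting_pte->disk_block); /* 4. load new page;
                                                 the OS runs another
                                                 process during the I/O  */
    faulting_pte->frame = frame;              /* 5. update the PTE       */
    faulting_pte->flags |= PTE_VALID;

    resume_faulting_thread();                 /* 6. restart the thread   */
}
```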

Summary: Steps in Handling a Page Fault
[Figure only: the standard diagram of the steps in handling a page fault]
Lec 14.31

Summary
A cache of translations called a Translation Lookaside Buffer (TLB)
- Relatively small number of PTEs and optional process IDs (< 512)
- Fully associative (since conflict misses are expensive)
- On a TLB miss, the page table must be traversed; if the located PTE is invalid, cause a page fault
- On a change in the page table, TLB entries must be invalidated
- The TLB is logically in front of the cache (hence the need to overlap it with the cache access)
A precise exception specifies a single instruction for which:
- All previous instructions have completed (committed state)
- Neither the following instructions nor the instruction itself have started
Can manage caches in hardware or software or both
- The goal is the highest hit rate, even if it means more complex cache management
Lec 14.32