CS252 Spring 2017 Graduate Computer Architecture. Lecture 17: Virtual Memory and Caches


CS252 Spring 2017 Graduate Computer Architecture
Lecture 17: Virtual Memory and Caches
Lisa Wu, Krste Asanovic
http://inst.eecs.berkeley.edu/~cs252/sp17
WU UCB CS252 SP17

Last Time in Lecture 16
Memory Management:
- Address translation: base and bound, segmentation, paging
- (Hierarchical) page tables, TLB
- Address protection
- Page fault handler

Address Translation in CPU Pipeline
[Pipeline diagram: PC → Inst TLB → Inst. Cache → Decode → Execute → Data TLB → Data Cache → Writeback; either TLB access can raise a TLB miss, page fault, or protection violation]
Need to cope with the additional latency of the TLB:
- slow down the clock?
- pipeline the TLB and cache access?
- virtual-address caches
- parallel TLB/cache access

Virtual-Address Caches
[Diagram: conventional organization, CPU → VA → TLB → PA → physical cache → primary memory]
Alternative: place the cache before the TLB
[Diagram: CPU → VA → virtual cache, with the TLB translating VA → PA only on a miss to primary memory (StrongARM)]
- one-step process in case of a hit (+)
- protection (-)
- cache needs to be flushed on a context switch unless address-space identifiers (ASIDs), called process identifiers (PIDs), are included in the tags (-)
- aliasing problems due to the sharing of pages (-)
- I/O uses physical addresses (-)

Processes and Context Switches
- The ARMv7 MMU uses ASIDs to distinguish between individual tasks. Linux assigns an 8-bit value (a maximum of 256 ASIDs) to each task.
- Linux keeps a bitmap of the ASIDs currently in use. When a process exits, the entries associated with that particular ASID are invalidated in the BP (branch predictor), caches, and TLBs.
- If all ASIDs are taken when a new process starts, all entries in the BP, caches, and TLBs are invalidated. The ASIDs are then allocated again on a first-come, first-served basis, as if no tasks were running before.
- All threads within a process share the same ASID.
- The OS stores one PTBR (Page Table Base Register) in the PCB (Process Control Block) and switches the PTBR on a context switch.
- On a context switch: x86 flushes the entire TLB; MIPS and SPARC use a PID in TLB entries.
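The ASID recycling policy described above can be sketched as follows. The class and method names are hypothetical, but the policy matches the slide: 8-bit ASIDs, a bitmap of live values, and a global flush plus fresh reallocation when the ASID space is exhausted.

```python
# Sketch of the ASID allocation policy described above (hypothetical names).
# Assumes 8-bit ASIDs (256 values) and a flush-everything fallback, as on
# ARMv7 Linux per the slide; this is illustrative, not kernel code.

NUM_ASIDS = 256  # 8-bit ASID field

class AsidAllocator:
    def __init__(self):
        self.in_use = [False] * NUM_ASIDS  # bitmap of live ASIDs
        self.flushes = 0                   # count of global BP/cache/TLB flushes

    def allocate(self):
        """Hand out a free ASID; flush everything and restart if none remain."""
        for asid, used in enumerate(self.in_use):
            if not used:
                self.in_use[asid] = True
                return asid
        # All 256 ASIDs taken: invalidate BP, caches, and TLBs, then
        # reallocate first-come, first-served as if no tasks were running.
        self.flushes += 1
        self.in_use = [False] * NUM_ASIDS
        self.in_use[0] = True
        return 0

    def release(self, asid):
        """Process exit: invalidate entries tagged with this ASID, free it."""
        self.in_use[asid] = False
```

Since all threads of a process share one ASID, only process creation and exit would touch such an allocator; context switches merely change the current ASID.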

[Figure: Miss rate vs. virtually addressed cache size (Agarwal 1987; H&P 5th edition)]

Virtually Addressed Cache (Virtual Index/Virtual Tag)
[Pipeline diagram: the PC and data addresses index the instruction and data caches directly with virtual addresses; on a miss, the instruction or data TLB translates to a physical address ("translate on miss"), with a hardware page-table walker and memory controller accessing main memory (DRAM)]

Aliasing in Virtual-Address Caches
[Diagram: in the page table, two virtual pages VA1 and VA2 map to the same physical page at PA; the virtual cache holds a first copy of the data tagged with VA1 and a second copy tagged with VA2]
Two virtual pages share one physical page, so a virtual cache can have two copies of the same physical data. Writes to one copy are not visible to reads of the other!
General solution: prevent aliases from coexisting in the cache.
Software (i.e., OS) solution for direct-mapped caches: the VAs of shared pages must agree in the cache index bits; this ensures that all VAs accessing the same PA will conflict in a direct-mapped cache (early SPARCs).
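The OS page-coloring rule above can be illustrated numerically. In a direct-mapped virtual cache, two virtual mappings of one physical page are safe only if they select the same cache set. The parameters below (4 KiB pages, a 16 KiB direct-mapped cache, 32-byte blocks) are illustrative, not from the slide.

```python
# Illustration of the direct-mapped page-coloring rule above: two VAs
# mapping to one PA are alias-safe only if they agree in the cache index
# bits, i.e. they select the same set. Parameters are examples only.

PAGE_SIZE = 4096        # 4 KiB pages
BLOCK_SIZE = 32         # 32-byte cache blocks
NUM_SETS = 512          # 16 KiB direct-mapped cache / 32-byte blocks

def cache_set(va):
    """Set index of a direct-mapped cache: the bits above the block offset."""
    return (va // BLOCK_SIZE) % NUM_SETS

def aliases_conflict(va1, va2):
    """True if two VAs for the same physical page land in the same set,
    so at most one copy can live in the cache at a time."""
    return cache_set(va1) == cache_set(va2)
```

Because this cache (16 KiB) spans four pages, the OS must align shared mappings to 16 KiB rather than just 4 KiB: mappings at `0x0000` and `0x4000` conflict as required, while one at `0x1000` would create an undetected alias.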

Concurrent Access to TLB & Cache (Virtual Index/Physical Tag)
[Diagram: the VA splits into a VPN and a k-bit page offset; the low L + b bits form the virtual index into a direct-mapped cache of 2^L blocks of 2^b bytes each, while the TLB translates VPN → PPN in parallel; the PPN is then compared with the physical tag for a hit]
Index L is available without consulting the TLB → cache and TLB accesses can begin simultaneously!
Tag comparison is made after both accesses are completed.
Cases: L + b = k, L + b < k, L + b > k.
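The condition that makes this concurrent access work is L + b ≤ k: the entire cache index comes from the untranslated page-offset bits, so the virtual and physical indices are identical. A small numeric check of that condition, with illustrative parameters:

```python
# Checks the virtual-index/physical-tag condition above: the cache and
# TLB can be accessed in parallel only when the index + block-offset
# bits (L + b) fit within the page offset (k bits).

def vipt_parallel_ok(cache_bytes, block_bytes, page_bytes, ways=1):
    """True if the index of a set-associative cache is covered by the
    page offset, so the virtual and physical indices coincide."""
    sets = cache_bytes // (block_bytes * ways)
    index_plus_offset_bits = (sets * block_bytes).bit_length() - 1  # L + b
    page_offset_bits = page_bytes.bit_length() - 1                  # k
    return index_plus_offset_bits <= page_offset_bits
```

This is also why growing L1 beyond the page size pushes designs toward higher associativity: with 4 KiB pages, a 16 KiB direct-mapped cache fails the check, but a 16 KiB 4-way cache passes, since associativity reduces the number of sets and hence the index width.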

Virtual-Index Physical-Tag Caches: Associative Organization
[Diagram: the virtual index (L = k - b bits) selects a set across 2^a direct-mapped banks of 2^L blocks each; the TLB translates VPN → PPN, and the PPN is compared against the 2^a physical tags]
After the PPN is known, 2^a physical tags are compared.
How does this scheme scale to larger caches?

Concurrent Access to TLB & Large L1
The problem with L1 > page size:
[Diagram: the virtual index now needs a bits from the VPN in addition to the page offset; two aliases VA1 and VA2 carrying the same PPN can index different sets of the direct-mapped, physically tagged L1]
Can VA1 and VA2 both map to PA?

A Solution via Second-Level Cache
[Diagram: CPU with register file and split L1 instruction and L1 data caches, backed by a unified L2 cache and main memory]
Usually a common L2 cache backs up both the L1 instruction and L1 data caches.
L2 is inclusive of both the instruction and data caches: inclusive means L2 has a copy of any line in either L1.

VA Anti-Aliasing Using L2 [MIPS R10000, 1996]
[Diagram: a direct-mapped, physically tagged L1 is indexed with a virtual index whose a VPN-derived bits are also stored in the L2 tag; the TLB translates VPN → PPN, backed by a direct-mapped L2]
Suppose VA1 and VA2 both map to PA, and VA1 is already in L1 and L2 (VA1 ≠ VA2). After VA2 is resolved to PA, a collision will be detected in L2. VA1 will be purged from L1 and L2, and VA2 will be loaded ⇒ no aliasing!
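The purge-on-collision behavior above can be sketched as follows. Each L2 line remembers the virtual-index bits (the "color") it was loaded under; an access to the same physical address under a different color is detected as an alias and the stale copies are purged. The names and the two-bit color width are hypothetical.

```python
# Sketch of the R10000-style L2 anti-aliasing check above (hypothetical
# names): L2 tags carry the a virtual-index bits of the VA that loaded
# the line, so an alias under a different color is caught on the L2 probe.

PAGE = 4096

def color(va, color_bits=2):
    """The a virtual-index bits just above the page offset (a = 2 here)."""
    return (va // PAGE) % (1 << color_bits)

class L2AntiAlias:
    def __init__(self):
        self.lines = {}   # physical address -> color it was cached under
        self.purges = 0   # times a stale alias was purged from L1 and L2

    def access(self, va, pa):
        c = color(va)
        if pa in self.lines and self.lines[pa] != c:
            self.purges += 1        # VA1's copies purged from L1 and L2
        self.lines[pa] = c          # (re)load the line under the new color
```

The key property is that at most one color per physical line can be resident, so the aliasing hazard of the previous slide cannot arise.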

Anti-Aliasing Using L2 for a Virtually Addressed L1
[Diagram: the L1 VA cache uses a virtual index & tag; the physically addressed L2 (physical index & tag) stores the virtual tag of each line; L2 contains L1]
A physically addressed L2 can also be used to avoid aliases in a virtually addressed L1.

Atlas Revisited
- One PAR (page address register) for each physical page (frame).
- The PARs contain the VPNs of the pages resident in primary memory.
- Advantage: the size is proportional to the size of primary memory.
- What is the disadvantage?
[Diagram: the PPN indexes the array of PARs, each holding a VPN]
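The disadvantage the slide asks about is lookup cost: since the PARs are indexed by PPN rather than VPN, translating a virtual address means searching every PAR for a matching VPN. A minimal sketch with an illustrative frame count:

```python
# Sketch of Atlas-style PARs (an inverted page table): one register per
# physical frame, holding the VPN resident there. Storage scales with
# physical memory (the advantage), but translation requires searching
# all PARs for the VPN (the disadvantage). Sizes are illustrative.

NUM_FRAMES = 8                      # physical frames, one PAR each

pars = [None] * NUM_FRAMES          # pars[ppn] = VPN resident in frame ppn

def translate(vpn):
    """Search the PARs for vpn (done associatively in Atlas hardware)."""
    for ppn, resident_vpn in enumerate(pars):
        if resident_vpn == vpn:
            return ppn
    return None                     # page fault: vpn is not resident
```

The hashed page table on the next slide is one way to approximate this associative search without comparing against every frame.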

Hashed Page Table: Approximating Associative Addressing
[Diagram: the virtual address is (PID, VPN, offset d); hash(PID, VPN) + base of table gives the PA of a PTE in the hashed page table in primary memory; entries hold (VPN, PID, PPN) or (VPN, PID, DPN)]
- The hashed page table is typically 2 to 3 times larger than the number of PPNs, to reduce the collision probability.
- It can also contain DPNs (disk page numbers) for some non-resident pages (not common).
- If a translation cannot be resolved in this table, then the software consults a data structure that has an entry for every existing page (e.g., a full page table).
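The lookup described above can be sketched as follows. The hash function and table sizing are illustrative; as on the slide, the table is oversized relative to the number of physical pages, and unresolved lookups fall back to a complete software structure (a plain dict here stands in for the full page table).

```python
# Sketch of the hashed page table lookup above (illustrative hash and
# sizes): hash(PID, VPN) selects a slot; a tag mismatch or empty slot
# sends software to a full structure with an entry for every page.

TABLE_SIZE = 3 * 256        # ~2-3x the number of PPNs, to reduce collisions

hashed_pt = [None] * TABLE_SIZE      # each slot: (vpn, pid, ppn) or None
full_pt = {}                         # software structure with every page

def slot_of(pid, vpn):
    return (pid * 31 + vpn) % TABLE_SIZE   # illustrative hash function

def insert(pid, vpn, ppn):
    full_pt[(pid, vpn)] = ppn
    hashed_pt[slot_of(pid, vpn)] = (vpn, pid, ppn)

def translate(pid, vpn):
    entry = hashed_pt[slot_of(pid, vpn)]
    if entry is not None and entry[:2] == (vpn, pid):
        return entry[2]                    # resolved in the hashed table
    return full_pt.get((pid, vpn))         # software consults the full table
```

The PowerPC design on the next slide refines this idea in hardware: eight PTEs per slot searched sequentially, plus an alternate hash before falling back to software.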

PowerPC: Hashed Page Table
[Diagram: the VPN of the 80-bit VA (VPN, offset d) is hashed; hash + base of table gives the PA of a slot in the page table in primary memory]
- Each hash-table slot has 8 PTEs <VPN, PPN> that are searched sequentially.
- If the first hash slot fails, an alternate hash function is used to look in another slot; all these steps are done in hardware!
- The hashed table is typically 2 to 3 times larger than the number of physical pages.
- The full backup page table is managed in software.

VM features track historical uses:
- Bare machine, only physical addresses
  - One program owned the entire machine
- Batch-style multiprogramming
  - Several programs sharing the CPU while waiting for I/O
  - Base & bound: translation and protection between programs (supports swapping entire programs, but not demand-paged virtual memory)
  - Problem with external fragmentation (holes in memory); needed occasional memory defragmentation as new jobs arrived
- Time sharing
  - More interactive programs, waiting for the user. Also, more jobs/second.
  - Motivated the move to fixed-size page translation and protection, with no external fragmentation (but now internal fragmentation, wasted bytes in a page)
  - Motivated the adoption of virtual memory to allow more jobs to share limited physical memory resources while holding the working set in memory
- Virtual machine monitors
  - Run multiple operating systems on one machine
  - Idea from 1970s IBM mainframes, now common on laptops (e.g., run Windows on top of Mac OS X)
  - Hardware support for two levels of translation/protection: guest OS virtual → guest OS physical → host machine physical

Virtual Memory Use Today - 1
- Servers/desktops/laptops/smartphones have full demand-paged virtual memory
  - Portability between machines with different memory sizes
  - Protection between multiple users or multiple tasks
  - Share small physical memory among active tasks
  - Simplifies implementation of some OS features
- Vector supercomputers have translation and protection but rarely complete demand paging (older Crays: base & bound; Japanese machines & Cray X1/X2: pages)
  - Don't waste expensive CPU time thrashing to disk (make jobs fit in memory)
  - Mostly run in batch mode (run a set of jobs that fits in memory)
  - Difficult to implement restartable vector instructions

Virtual Memory Use Today - 2
- Most embedded processors and DSPs provide physical addressing only
  - Can't afford the area/speed/power budget for virtual memory support
  - Often there is no secondary storage to swap to!
  - Programs are custom-written for the particular memory configuration in the product
  - Difficult to implement restartable instructions for exposed architectures

Acknowledgements
This course is partly inspired by previous MIT 6.823 and Berkeley CS252 computer architecture courses created by my collaborators and colleagues:
- Arvind (MIT)
- Joel Emer (Intel/MIT)
- James Hoe (CMU)
- John Kubiatowicz (UCB)
- David Patterson (UCB)