CS 61C: Great Ideas in Computer Architecture. Virtual Memory


CS 61C: Great Ideas in Computer Architecture Virtual Memory Instructor: Justin Hsia 7/30/2012 Summer 2012 Lecture #24 1

Review of Last Lecture (1/2) Multiple instruction issue increases max speedup, but higher penalty for a stall Superscalar because can achieve CPI < 1 Requires a significant amount of extra hardware Employ more aggressive scheduling techniques to increase performance Register renaming Speculation (guessing) Out of order execution 7/30/2012 Summer 2012 Lecture #24 2

Agenda Virtual Memory Page Tables Administrivia Translation Lookaside Buffer (TLB) VM Performance 7/30/2012 Summer 2012 Lecture #24 3

Next Up: Virtual Memory. Earlier: Caches. Memory Hierarchy (upper levels faster, lower levels larger): Regs, L1 Cache, L2 Cache, Memory, Disk, Tape; units of transfer between adjacent levels: instructions/operands, blocks, blocks, pages, files. 7/30/2012 Summer 2012 Lecture #24 4

Memory Hierarchy Requirements Principle of Locality Allows caches to offer (close to) speed of cache memory with size of DRAM memory Can we use this at the next level to give speed of DRAM memory with size of Disk memory? What other things do we need from our memory system? 7/30/2012 Summer 2012 Lecture #24 5

Memory Hierarchy Requirements Allow multiple processes to simultaneously occupy memory and provide protection: don't let programs read from or write to each other's memories. Give each program the illusion that it has its own private address space (via translation). Suppose code starts at address 0x00400000; then different processes each think their code resides at the same address. Each program must have a different view of memory. 7/30/2012 Summer 2012 Lecture #24 6

Virtual Memory Next level in the memory hierarchy Provides illusion of very large main memory Working set of pages residing in main memory (subset of all pages residing on disk) Main goal: Avoid reaching all the way back to disk as much as possible Additional goals: Let OS share memory among many programs and protect them from each other Each process thinks it has all the memory to itself 7/30/2012 Summer 2012 Lecture #24 7

Virtual to Physical Address Translation Program operates in its virtual address space: a Virtual Address (VA) from an instruction fetch, load, or store goes through the HW mapping to a Physical Address (PA), which is used to access physical memory (including caches). Each program operates in its own virtual address space and thinks it is the only program running. Each is protected from the others. OS can decide where each goes in memory. Hardware provides the virtual-to-physical mapping. 7/30/2012 Summer 2012 Lecture #24 8

VM Analogy (1/2) Trying to find a book in the UCB system Book title is like virtual address (VA) What you want/are requesting Book call number is like physical address (PA) Where it is actually located Card catalogue is like a page table (PT) Maps from book title to call number Does not contain the actual data you want The catalogue itself takes up space in the library 7/30/2012 Summer 2012 Lecture #24 9

VM Analogy (2/2) Indication of current location within the library system is like valid bit Valid if in current library (main memory) vs. invalid if in another branch (disk) Found on the card in the card catalogue Availability/terms of use like access rights What you are allowed to do with the book (ability to check out, duration, etc.) Also found on the card in the card catalogue 7/30/2012 Summer 2012 Lecture #24 10

Agenda Virtual Memory Page Tables Administrivia Translation Lookaside Buffer (TLB) VM Performance VM Wrap up 7/30/2012 Summer 2012 Lecture #24 11

First Attempt: Base and Bound Registers. Physical memory from address 0 upward holds the OS, then User A, User B, and User C, each occupying the region from its $base up to $base + $bound. There is enough space left for User D, but it is discontinuous (the fragmentation problem). Want: discontinuous mapping and process size >> memory. Need to use indirection. 7/30/2012 Summer 2012 Lecture #24 12

Mapping VM to PM Divide memory into equal-sized chunks called pages (about 4 KiB to 8 KiB). Any chunk (page) of Virtual Memory can be assigned to any chunk of Physical Memory. Figure: a virtual address space (code, static data, heap, and stack, starting at address 0) mapped page by page into a 64 MB physical memory, also starting at address 0. 7/30/2012 Summer 2012 Lecture #24 13

Paging Organization Here assume page size is 4 KiB. Page is both the unit of mapping and the unit of transfer between disk and physical memory. Physical Memory: pages 0 through 7 of 4 KiB each, at physical addresses 0x0000, 0x1000, ..., 0x7000. Virtual Memory: pages 0 through 31 of 4 KiB each, at virtual addresses 0x00000, 0x01000, 0x02000, ..., 0x1F000. An address translation map connects virtual pages to physical pages. 7/30/2012 Summer 2012 Lecture #24 14

Virtual Memory Mapping Function How large is main memory? Disk? Don't know! They are designed to be interchangeable components, so we need a system that works regardless of sizes. Use a lookup table (page table) to deal with the arbitrary mapping. The lookup table has one entry per page of VM (not all entries will be used/valid). Size of PM will affect the size of each stored translation. 7/30/2012 Summer 2012 Lecture #24 15

Address Mapping Pages are aligned in memory: the starting (boundary) address of each page has the same lowest bits. Page size is the same in VM and PM, so denote the lowest O = log2(page size in bytes) bits as the page offset. Use the remaining upper address bits in the mapping; they tell you which page you want (similar to a cache Tag). A virtual address is [Virtual Page # | Page Offset] and a physical address is [Physical Page # | Page Offset]: the page numbers are not necessarily the same size, but the page offsets are the same size. 7/30/2012 Summer 2012 Lecture #24 16
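As a concrete illustration of this split, here is a minimal C sketch assuming 4 KiB pages (so 12 offset bits); the address value is made up for the example:

#include <stdio.h>

#define PAGE_SIZE        4096u   /* assumed 4 KiB pages          */
#define PAGE_OFFSET_BITS 12u     /* log2(4096) = 12 offset bits  */

int main(void) {
    unsigned va     = 0x00403ABCu;              /* example virtual address       */
    unsigned vpn    = va >> PAGE_OFFSET_BITS;   /* upper bits: which page        */
    unsigned offset = va & (PAGE_SIZE - 1u);    /* lower bits: byte within page  */
    printf("VA 0x%08X -> VPN 0x%X, offset 0x%X\n", va, vpn, offset);
    return 0;
}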

Address Mapping: Page Table Page Table functionality: Incoming request is a Virtual Address (VA); we want a Physical Address (PA). Physical Offset = Virtual Offset (page aligned), so just swap the Virtual Page Number (VPN) for the Physical Page Number (PPN). Implementation? Use the VPN as an index into the PT. Each entry stores the PPN and management bits (Valid, Access Rights). The PT does NOT store actual data (the data sits in PM). 7/30/2012 Summer 2012 Lecture #24 17

Page Table Layout Virtual Address = [VPN | offset]. Each page table entry holds V (valid), AR (access rights), and PPN. Steps: 1) Index into the PT using the VPN. 2) Check the Valid and Access Rights bits. 3) Combine the PPN with the offset to form the Physical Address. 4) Use the PA to access memory. 7/30/2012 Summer 2012 Lecture #24 18
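A minimal C sketch of these four steps, assuming 4 KiB pages and a flat page table stored as an array indexed by VPN (the struct layout and field names here are illustrative, not the exact hardware format):

#include <stdbool.h>
#include <stdio.h>

#define OFFSET_BITS 12u                    /* assume 4 KiB pages */

typedef struct {
    bool     valid;          /* V: is the page resident in physical memory?  */
    unsigned access_rights;  /* AR: bitmask of allowed operations            */
    unsigned ppn;            /* physical page number                         */
} pte_t;

/* Translate va to a physical address using a flat page table (array indexed
   by VPN). Returns false on a page fault or an access-rights violation.     */
bool translate(const pte_t *page_table, unsigned va, unsigned required_rights,
               unsigned *pa_out) {
    unsigned vpn    = va >> OFFSET_BITS;                 /* 1) index PT with the VPN         */
    unsigned offset = va & ((1u << OFFSET_BITS) - 1u);
    pte_t    pte    = page_table[vpn];
    if (!pte.valid)                                      /* 2) check Valid...                */
        return false;                                    /*    (page fault: OS fetches page) */
    if ((pte.access_rights & required_rights) != required_rights)
        return false;                                    /*    ...and Access Rights          */
    *pa_out = (pte.ppn << OFFSET_BITS) | offset;         /* 3) combine PPN and offset        */
    return true;                                         /* 4) caller uses PA to access memory */
}

int main(void) {
    pte_t pt[16] = {0};
    pt[2] = (pte_t){ true, 0x1u, 9u };    /* map virtual page 2 -> physical page 9, read allowed */
    unsigned pa;
    if (translate(pt, 0x2ABCu, 0x1u, &pa))
        printf("VA 0x2ABC -> PA 0x%X\n", pa);            /* prints PA 0x9ABC */
    return 0;
}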

Page Table Entry Format Contains either the PPN or an indication that the page is not in main memory. Valid = valid page table entry: 1 means the virtual page is in physical memory, 0 means the OS needs to fetch the page from disk. Access Rights are checked on every access to see if it is allowed (provides protection): Read Only (can read, but not write, the page), Read/Write (read or write data on the page), Executable (can fetch instructions from the page). 7/30/2012 Summer 2012 Lecture #24 19

Page Tables (1/2) A page table (PT) contains the mapping of virtual addresses to physical addresses. Page tables are located in main memory. Why? Too large to fit in registers (2^20 entries for 4 KiB pages), faster to access than disk, and can be shared by multiple processors. The OS maintains the PTs: each process has its own page table. State of a process is PC, all registers, and PT. OS stores the address of the PT of the current process in the Page Table Base Register. 7/30/2012 Summer 2012 Lecture #24 20
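The 2^20 figure assumes 32-bit virtual addresses; a quick sketch of the arithmetic in C, with a 4-byte page table entry assumed purely for illustration:

#include <stdio.h>

int main(void) {
    unsigned va_bits = 32, offset_bits = 12;   /* assume 32-bit VAs and 4 KiB pages */
    unsigned long long entries = 1ull << (va_bits - offset_bits);   /* 2^20 entries */
    unsigned long long bytes   = entries * 4ull;     /* assume 4-byte entries        */
    printf("%llu entries, %llu KiB per page table\n", entries, bytes >> 10);
    return 0;
}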

Page Tables (2/2) Solves fragmentation problem: all pages are the same size, so can utilize all available slots OS must reserve swap space on disk for each process Running programs requires hard drive space! To grow a process, ask Operating System If unused pages in PM, OS uses them first If not, OS swaps some old pages (LRU) to disk 7/30/2012 Summer 2012 Lecture #24 21

Paging/Virtual Memory with Multiple Processes User A and User B each have their own virtual memory (code, static data, heap, and stack, starting at address 0) and their own page table (Page Table A and Page Table B), both mapping into the same 64 MB physical memory. 7/30/2012 Summer 2012 Lecture #24 22

Review: Paging Terminology Programs use virtual addresses (VAs) Space of all virtual addresses called virtual memory (VM) Divided into pages indexed by virtual page number (VPN) Main memory indexed by physical addresses (PAs) Space of all physical addresses called physical memory (PM) Divided into pages indexed by physical page number (PPN) 7/30/2012 Summer 2012 Lecture #24 23

Question: How many bits wide are the following fields? 16 KiB pages, 40-bit virtual addresses, 64 GiB physical memory. Fields: VPN and PPN. (Answer choices: candidate bit widths for the VPN and PPN.)
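One way to check the widths implied by the given parameters (16 KiB pages give a 14-bit offset; 64 GiB = 2^36 bytes of physical memory), as a small C calculation:

#include <stdio.h>

int main(void) {
    int offset_bits = 14;   /* 16 KiB pages = 2^14 bytes      */
    int va_bits     = 40;   /* 40-bit virtual addresses       */
    int pa_bits     = 36;   /* 64 GiB physical memory = 2^36  */
    printf("VPN = %d bits, PPN = %d bits\n",
           va_bits - offset_bits,     /* 40 - 14 = 26 */
           pa_bits - offset_bits);    /* 36 - 14 = 22 */
    return 0;
}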

Agenda Virtual Memory Page Tables Administrivia Translation Lookaside Buffer (TLB) VM Performance VM Wrap up 7/30/2012 Summer 2012 Lecture #24 25

Administrivia (1/2) Project 3 (individual) due Sunday 8/5. Final Review: Friday 8/3, 3-6pm in 306 Soda. Final: Thurs 8/9, 9am-12pm, 245 Li Ka Shing. Focus on 2nd half material, though midterm material is still fair game. MIPS Green Sheet provided again. Two-sided handwritten cheat sheet: can use the back side of your midterm cheat sheet! 7/30/2012 Summer 2012 Lecture #24 26

Administrivia (2/2) http://www.geekosystem.com/engineering-professor-meme/2/ 7/30/2012 Summer 2012 Lecture #24 27

Agenda Virtual Memory Page Tables Administrivia Translation Lookaside Buffer (TLB) VM Performance VM Wrap up 7/30/2012 Summer 2012 Lecture #24 28

Retrieving Data from Memory Each user (process) has its own virtual address space and its own page table (PT) stored in physical memory. To retrieve data at a virtual address (VA1 for User 1, VA2 for User 2): 1) access the page table for address translation, then 2) access the correct physical address. Requires two accesses of physical memory! 7/30/2012 Summer 2012 Lecture #24 29

Virtual Memory Problem 2 physical memory accesses per data access = SLOW! Since there is locality in pages of data, there must be locality in the translations of those pages. Build a separate cache for the Page Table; for historical reasons, this cache is called a Translation Lookaside Buffer (TLB). Notice that what is stored in the TLB is NOT data, but the VPN-to-PPN translations. 7/30/2012 Summer 2012 Lecture #24 30

TLBs vs. Caches A data/instruction cache (D$/I$) takes a memory address and returns the data at that address; on a miss it accesses the next cache level or main memory. A TLB takes a VPN and returns a PPN; on a miss it accesses the Page Table in main memory. TLBs are usually small, typically 16-512 entries. TLB access time is comparable to the cache (<< main memory). TLBs can have associativity; usually fully/highly associative. 7/30/2012 Summer 2012 Lecture #24 31

Where Are TLBs Located? Which should we check first: Cache or TLB? Can the cache hold the requested data if the corresponding page is not in physical memory? No. With the TLB first, does the cache receive a VA or a PA? A PA. Flow: the CPU sends the VA to the TLB; on a TLB hit, the PA goes to the cache (cache hit returns data, cache miss goes to main memory); on a TLB miss, the Page Table in main memory is consulted. Notice that it is now the TLB that does translation, not the Page Table! 7/30/2012 Summer 2012 Lecture #24 32

Address Translation Using TLB The Virtual Address is split into [TLB Tag | TLB Index | Page Offset], where the VPN = [TLB Tag | TLB Index] and the TLB Tag is used just like a tag in a cache. The TLB returns the PPN, giving the Physical Address [PPN | Page Offset], which the data cache then splits its own way into [Tag | Index | Offset]. The PA is split two different ways! Note: the TIO breakdowns for the VA and PA are unrelated. 7/30/2012 Summer 2012 Lecture #24 33

Typical TLB Entry Format Fields: Valid (1 bit), Dirty (1 bit), Ref (1 bit), Access Rights (2 bits), TLB Tag, PPN. Valid and Access Rights: same usage as previously discussed for page tables. Dirty: we basically always use write-back, so this indicates whether or not to write the page to disk when replaced. Ref: used to implement LRU; set when the page is accessed, cleared periodically by the OS; if Ref = 1, then the page was referenced recently. TLB Tag: the upper bits of the VPN that are not used as the TLB index. 7/30/2012 Summer 2012 Lecture #24 34

Question: How many bits wide are the following? 16 KiB pages, 40-bit virtual addresses, 64 GiB physical memory, 2-way set associative TLB with 512 entries. TLB entry format: Valid (1), Dirty (1), Ref (1), Access Rights (2), TLB Tag, PPN. Fields asked about: TLB Tag, TLB Index, and the full TLB Entry. Answer choices (Tag / Index / Entry): 12 / 14 / 38, 18 / 8 / 45, 14 / 12 / 40.
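Continuing with the same page and memory sizes, a small C sketch of the arithmetic for a 512-entry, 2-way set associative TLB, assuming the 5 management bits (Valid, Dirty, Ref, 2-bit Access Rights) shown in the entry format above:

#include <stdio.h>

int main(void) {
    int vpn_bits   = 40 - 14;          /* VPN width from the previous question: 26       */
    int ppn_bits   = 36 - 14;          /* PPN width: 22                                  */
    int sets       = 512 / 2;          /* 512 entries, 2-way set associative => 256 sets */
    int index_bits = 8;                /* log2(256)                                      */
    int tag_bits   = vpn_bits - index_bits;                  /* 26 - 8 = 18              */
    int entry_bits = 1 + 1 + 1 + 2 + tag_bits + ppn_bits;    /* V + D + Ref + AR + Tag + PPN = 45 */
    printf("%d sets: tag = %d, index = %d, entry = %d bits\n",
           sets, tag_bits, index_bits, entry_bits);
    return 0;
}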

Fetching Data on a Memory Read 1) Check TLB (input: VPN, output: PPN) TLB Hit: Fetch translation, return PPN TLB Miss: Check page table (in memory) Page Table Hit: Load page table entry into TLB Page Table Miss (Page Fault): Fetch page from disk to memory, update corresponding page table entry, then load entry into TLB 2) Check cache (input: PPN, output: data) Cache Hit: Return data value to processor Cache Miss: Fetch data value from memory, store it in cache, return it to processor 7/30/2012 Summer 2012 Lecture #24 36
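A compact C sketch of this read flow; the tiny fully associative TLB, flat page table, and round-robin replacement here are simplified stand-ins for illustration (the cache lookup and page-fault handling are only noted in comments):

#include <stdbool.h>
#include <stdio.h>

#define OFFSET_BITS 12u     /* assume 4 KiB pages                       */
#define TLB_ENTRIES 4       /* tiny fully associative TLB, illustrative */
#define NUM_VPAGES  16

typedef struct { bool valid; unsigned vpn, ppn; } tlb_entry_t;
typedef struct { bool valid; unsigned ppn;      } pte_t;

static tlb_entry_t tlb[TLB_ENTRIES];
static pte_t       page_table[NUM_VPAGES];

static bool tlb_lookup(unsigned vpn, unsigned *ppn) {
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn) { *ppn = tlb[i].ppn; return true; }
    return false;
}

static void tlb_fill(unsigned vpn, unsigned ppn) {
    static int next = 0;                          /* trivial round-robin replacement */
    tlb[next] = (tlb_entry_t){ true, vpn, ppn };
    next = (next + 1) % TLB_ENTRIES;
}

/* Step 1: check the TLB; on a miss, walk the page table and load the entry
   into the TLB (a page fault would hand control to the OS, not modeled here).
   Step 2 (checking the cache, then memory, with the PA) is left to the caller. */
static bool read_translate(unsigned va, unsigned *pa_out) {
    unsigned vpn = va >> OFFSET_BITS, ppn;
    if (!tlb_lookup(vpn, &ppn)) {                 /* TLB miss                        */
        if (!page_table[vpn].valid)               /* page table miss = page fault    */
            return false;
        ppn = page_table[vpn].ppn;                /* page table hit                  */
        tlb_fill(vpn, ppn);                       /* load translation into the TLB   */
    }
    *pa_out = (ppn << OFFSET_BITS) | (va & ((1u << OFFSET_BITS) - 1u));
    return true;
}

int main(void) {
    page_table[3] = (pte_t){ true, 5 };           /* virtual page 3 -> physical page 5 */
    unsigned pa;
    if (read_translate(0x3ABCu, &pa)) printf("VA 0x3ABC -> PA 0x%X\n", pa);  /* 0x5ABC */
    if (!read_translate(0x7123u, &pa)) printf("VA 0x7123 -> page fault\n");
    return 0;
}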

Page Faults Load the page off the disk into a free page of memory Switch to some other process while we wait Interrupt thrown when page loaded and the process' page table is updated When we switch back to the task, the desired data will be in memory If memory full, replace page (LRU), writing back if necessary, and update both page tables Continuous swapping between disk and memory called thrashing 7/30/2012 Summer 2012 Lecture #24 37

Performance Metrics VM performance also uses Hit/Miss Rates and Miss Penalties TLB Miss Rate: Fraction of TLB accesses that result in a TLB Miss Page Table Miss Rate: Fraction of PT accesses that result in a page fault Caching performance definitions remain the same Somewhat independent, as TLB will always pass PA to cache regardless of TLB hit or miss 7/30/2012 Summer 2012 Lecture #24 38

Data Fetch Scenarios Are the following scenarios for a single data access possible? TLB Miss, Page Fault: Yes. TLB Hit, Page Table Hit: No. TLB Miss, Cache Hit: Yes. Page Table Hit, Cache Miss: Yes. Page Fault, Cache Hit: No. 7/30/2012 Summer 2012 Lecture #24 39

Question: A program tries to load a word at address X that causes a TLB miss but not a page fault. Are the following statements TRUE or FALSE? 1) The page table does not contain a valid mapping for the virtual page corresponding to address X. 2) The word that the program is trying to load is present in physical memory. (Answer choices: TRUE/FALSE combinations for statements 1 and 2.)

Updating Scenarios Using V = valid, D = dirty, R = ref to mean that the field is set to the shown value for any entry in either the PT or TLB: which of the following scenarios for a single data access are possible? Read, D = 1: No. Write, R = 1: Yes. Read, V = 0: Yes. Write, D = 0: No. 7/30/2012 Summer 2012 Lecture #24 41

Question: Assume the page table entry in question is present in the TLB and we are using a uniprocessor system. Are the following statements TRUE or FALSE? 1) The valid bit for that page must be the same in the PT and TLB. 2) The dirty bit for that page must be the same in the PT and TLB. (Answer choices: TRUE/FALSE combinations for statements 1 and 2.)

Get To Know Your Staff Category: Wishlist 7/30/2012 Summer 2012 Lecture #24 43

Agenda Virtual Memory Page Tables Administrivia Translation Lookaside Buffer (TLB) VM Performance VM Wrap up 7/30/2012 Summer 2012 Lecture #24 44

VM Performance Virtual Memory is the level of the memory hierarchy that sits below main memory TLB comes before cache, but affects transfer of data from disk to main memory Previously we assumed main memory was lowest level, now we just have to account for disk accesses Same CPI, AMAT equations apply, but now treat main memory like a mid level cache 7/30/2012 Summer 2012 Lecture #24 45

Typical Performance Stats Caching (CPU - cache - primary memory) vs. demand paging (CPU - primary memory - secondary memory): cache entry vs. page frame; cache block (~32 bytes) vs. page (~4 KiB); cache miss rate (1% to 20%) vs. page miss rate (< 0.001%); cache hit (~1 cycle) vs. page hit (~100 cycles); cache miss (~100 cycles) vs. page miss (~5M cycles). 7/30/2012 Summer 2012 Lecture #24 46

Impact of Paging on AMAT (1/2) Memory Parameters: L1 cache hit = 1 clock cycle, hit 95% of accesses; L2 cache hit = 10 clock cycles, hit 60% of L1 misses; DRAM = 200 clock cycles (~100 nanoseconds); Disk = 20,000,000 clock cycles (~10 milliseconds). Average Memory Access Time (no paging): 1 + 5% × 10 + 5% × 40% × 200 = 5.5 clock cycles. Average Memory Access Time (with paging): 5.5 (AMAT with no paging) + ? 7/30/2012 Summer 2012 Lecture #24 47

Impact of Paging on AMAT (2/2) Average Memory Access Time (with paging) = 5.5 + 5% × 40% × (1 - HR_Mem) × 20,000,000. AMAT if HR_Mem = 99%? 5.5 + 0.02 × 0.01 × 20,000,000 = 4005.5 (~728x slower); 1 in 20,000 memory accesses goes to disk: a 10 sec program takes 2 hours! AMAT if HR_Mem = 99.9%? 5.5 + 0.02 × 0.001 × 20,000,000 = 405.5. AMAT if HR_Mem = 99.9999%? 5.5 + 0.02 × 0.000001 × 20,000,000 = 5.9. 7/30/2012 Summer 2012 Lecture #24 48
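The same arithmetic as a small C program using the slide's parameters, so the three cases can be recomputed directly:

#include <stdio.h>

int main(void) {
    /* Slide parameters: 5% L1 miss rate, 40% L2 (local) miss rate,
       DRAM = 200 cycles, disk = 20,000,000 cycles.                  */
    double amat_no_paging = 1 + 0.05 * 10 + 0.05 * 0.40 * 200;   /* = 5.5 cycles */
    double hr_mem[] = { 0.99, 0.999, 0.999999 };
    for (int i = 0; i < 3; i++) {
        double amat = amat_no_paging + 0.05 * 0.40 * (1 - hr_mem[i]) * 20000000.0;
        printf("HR_Mem = %-9g AMAT = %6.1f cycles\n", hr_mem[i], amat);
    }
    return 0;
}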

Impact of TLBs on Performance Each TLB miss to the Page Table costs about as much as an L1 cache miss. TLB Reach: the amount of virtual address space that can be simultaneously mapped by the TLB. A TLB typically has 128 entries of page size 4-8 KiB: 128 × 4 KiB = 512 KiB = just 0.5 MiB. What can you do to have better performance? Multi-level TLBs (conceptually the same as multi-level caches), or variable page size (segments) with special, situationally used superpages (not covered here). 7/30/2012 Summer 2012 Lecture #24 49
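A quick sketch of TLB reach in C for a 128-entry TLB; the 4 MiB superpage size is only an illustrative assumption to show why larger pages extend reach:

#include <stdio.h>

int main(void) {
    unsigned long long entries  = 128;
    unsigned long long reach_4k = entries * 4 * 1024;            /* 128 x 4 KiB = 512 KiB  */
    unsigned long long reach_4m = entries * 4 * 1024 * 1024;     /* 128 x 4 MiB superpages */
    printf("TLB reach: %llu KiB with 4 KiB pages, %llu MiB with 4 MiB pages\n",
           reach_4k >> 10, reach_4m >> 20);
    return 0;
}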

Agenda Virtual Memory Page Tables Administrivia Translation Lookaside Buffer (TLB) VM Performance VM Wrap up 7/30/2012 Summer 2012 Lecture #24 50

Virtual Memory Motivation Memory as cache for disk (reduce disk accesses) Disk is so slow it significantly affects performance Paging maximizes memory usage with large, evenly sized pages that can go anywhere Allows processor to run multiple processes simultaneously Gives each process illusion of its own (large) VM Each process uses standard set of VAs Access rights provide protection 7/30/2012 Summer 2012 Lecture #24 51

Paging Summary Paging requires address translation Can run programs larger than main memory Hides variable machine configurations (RAM/HD) Solves fragmentation problem Address mappings stored in page tables in memory Additional memory access mitigated with TLB Check TLB, then Page Table (if necessary), then Cache 7/30/2012 Summer 2012 Lecture #24 52

Hardware/Software Support for Memory Protection Different tasks can share parts of their virtual address spaces But need to protect against errant access Requires OS assistance Hardware support for OS protection Privileged supervisor mode (a.k.a. kernel mode) Privileged instructions Page tables and other state information only accessible in supervisor mode System call exception (e.g. syscall in MIPS) 7/30/2012 Summer 2012 Lecture #24 53

Context Switching How does a single processor run many programs at once? Context switch: Changing of internal state of processor (switching between processes) Save register values (and PC) and change value in Page Table Base register What happens to the TLB? Current entries are for different process Set all entries to invalid on context switch 7/30/2012 Summer 2012 Lecture #24 54

Virtual Memory Summary User program view: contiguous memory, starting from some set VA, infinitely large, is the only running program. Reality: non-contiguous memory, starting wherever available memory is, finite size, many programs running simultaneously. Virtual memory provides: illusion of contiguous memory, all programs starting at the same set address, illusion of ~infinite memory (2^32 or 2^64 bytes), protection, and sharing. Implementation: divide memory into chunks (pages); OS controls the page table that maps virtual into physical addresses; memory acts as a cache for disk; the TLB is a cache for the page table. 7/30/2012 Summer 2012 Lecture #24 55