CS152 Computer Architecture and Engineering Lecture 18: Virtual Memory


CS152 Computer Architecture and Engineering
Lecture 18: Virtual Memory
March 22, 1995
Dave Patterson (patterson@cs) and Shing Kong (shingkong@eng.sun.com)
Slides available on http://http.cs.berkeley.edu/~patterson

Review: The Principle of Locality

[Figure: probability of reference plotted across the address space, 0 to 2^n]

The Principle of Locality: programs access a relatively small portion of the address space at any instant of time. Example: 90% of time in 10% of the code.
Two Different Types of Locality:
Temporal Locality (Locality in Time): if an item is referenced, it will tend to be referenced again soon.
Spatial Locality (Locality in Space): if an item is referenced, items whose addresses are close by tend to be referenced soon.
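As a minimal illustration (not from the slides; the array size and loop are arbitrary choices), the sketch below shows both kinds of locality in a single C loop: the sequential sweep over a[] is spatial locality, and the repeated use of sum is temporal locality.

```c
#include <stdio.h>

int main(void) {
    int a[1024];
    long sum = 0;
    for (int i = 0; i < 1024; i++)
        a[i] = i;               /* sequential writes: spatial locality   */
    for (int i = 0; i < 1024; i++)
        sum += a[i];            /* a[i], a[i+1] adjacent: spatial;       */
                                /* sum touched every iteration: temporal */
    printf("%ld\n", sum);
    return 0;
}
```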

Review: The Need to Make a Decision!

Direct Mapped Cache: each memory location can be mapped to only 1 cache location, so there is no decision to make :-) the current item replaces the previous item in that cache location.
N-way Set Associative Cache: each memory location has a choice of N cache locations.
Fully Associative Cache: each memory location can be placed in ANY cache location.
Cache miss in an N-way Set Associative or Fully Associative Cache: bring in the new block from memory and throw out a cache block to make room for it. Damn! We need to decide which block to throw out!

Review Summary

The Principle of Locality: programs access a relatively small portion of the address space at any instant of time.
- Temporal Locality: locality in time
- Spatial Locality: locality in space
Three Major Categories of Cache Misses:
- Compulsory Misses: sad facts of life. Example: cold start misses.
- Conflict Misses: increase cache size and/or associativity. Nightmare scenario: the ping-pong effect!
- Capacity Misses: increase cache size.
Write Policy:
- Write Through: needs a write buffer. Nightmare: write buffer saturation.
- Write Back: control can be complex.

Review: Levels of the Memory Hierarchy

Level         Capacity     Access Time    Cost                  Staging/Xfer Unit
Registers     100s Bytes   <10s ns                              instr. operands: prog/compiler, 1-8 bytes
Cache         K Bytes      10-100 ns      $.01-.001/bit         blocks: cache cntl, 8-128 bytes
Main Memory   M Bytes      100 ns-1 us    $.01-.001             pages: OS, 512-4K bytes
Disk          G Bytes      ms             10^-3 - 10^-4 cents   files: user/operator, MBytes
Tape          infinite     sec-min        10^-6

Upper levels are smaller and faster; lower levels are larger.

Outline of Today's Lecture

Recap of Memory Hierarchy & Introduction to Cache (5 min)
Virtual Memory
Questions and Administrative Matters (3 min)
Page Tables and TLB (25 min)
Break (5 minutes)
Protection (20 min)
Summary (5 min)

Virtual Memory

Provides the illusion of very large memory: the sum of the memory of many jobs can be greater than physical memory, and the address space of each job can be larger than physical memory.
Allows available (fast and expensive) physical memory to be very well utilized.
Simplifies memory management (the main reason today).
Exploits the memory hierarchy to keep average access time low.
Involves at least two storage levels: main and secondary.
Virtual Address -- the address used by the programmer.
Virtual Address Space -- the collection of such addresses.
Memory Address -- the address of a word in physical memory; also known as the physical address or real address.

Basic Issues in VM System Design

What size of information blocks are transferred from secondary to main storage?
If a block is brought into main memory (M) and M is full, some region of M must be released to make room for the new block --> replacement policy.
Which region of M is to hold the new block --> placement policy.
A missing item is fetched from secondary memory only on the occurrence of a fault --> fetch/load policy.
[Figure: reg <-> cache <-> mem <-> disk]

Paging Organization: virtual and physical address spaces are partitioned into blocks of equal size: page frames (physical) and pages (virtual).

Address Map

V = {0, 1, ..., n - 1}  virtual address space
M = {0, 1, ..., m - 1}  physical address space, with n > m

MAP: V --> M U {0}  address mapping function
MAP(a) = a'  if data at virtual address a is present at physical address a' in M
MAP(a) = 0   if data at virtual address a is not present in M

[Figure: the processor issues virtual address a into name space V; the address translation mechanism either produces physical address a' into main memory, or raises a missing-item fault, and the fault handler (the OS) transfers the item from secondary memory.]

Paging Organization (example: 1K pages)

Physical memory: frames 0, 1, ..., 7 at PA 0, 1024, ..., 7168 (1K each).
Virtual memory: pages 0, 1, ..., 31 at VA 0, 1024, ..., 31744 (1K each).
The page is the unit of mapping and also the unit of transfer from virtual to physical memory.

Address Mapping: the VA splits into a page number and a 10-bit displacement. The Page Table Base Register plus the page number index into the page table, which is itself located in physical memory. Each entry holds a valid bit (V), access rights, and the frame's physical address; frame address + displacement gives the physical memory address (actually, concatenation is more likely).
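A minimal sketch of this MAP function in C, assuming the 1K pages of the running example; the names PageTableEntry and page_table are illustrative, not from any real system, and the fault case simply returns 0 as in the slide's definition.

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_BITS 10                  /* 1 KB pages: 10-bit displacement */
#define PAGE_SIZE (1u << PAGE_BITS)

typedef struct {
    bool     valid;   /* V bit: is the page resident in physical memory? */
    uint32_t frame;   /* physical frame number */
} PageTableEntry;

extern PageTableEntry page_table[];   /* table located in physical memory */

/* MAP(a): returns the physical address a', or 0 on a missing-item fault. */
uint32_t map(uint32_t va) {
    uint32_t page = va >> PAGE_BITS;       /* page number indexes the table */
    uint32_t disp = va & (PAGE_SIZE - 1);  /* displacement within the page  */
    PageTableEntry pte = page_table[page];
    if (!pte.valid)
        return 0;  /* fault: OS must fetch the page from secondary memory */
    return (pte.frame << PAGE_BITS) | disp;  /* concatenation, per the slide */
}
```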

Address Mapping Algorithm

If V = 1 then the page is in main memory at the frame address stored in the table,
else the address locates the page in secondary memory.
Access Rights: R = read-only, R/W = read/write, X = execute-only.
If the kind of access is not compatible with the specified access rights, then protection_violation_fault.
If the valid bit is not set, then page_fault.
Protection Fault: an access-rights violation; causes a trap to a hardware, microcode, or software fault handler.
Page Fault: the page is not resident in physical memory; also causes a trap, usually accompanied by a context switch: the current process is suspended while the page is fetched from secondary storage.
(A code sketch of this algorithm follows the next slide.)
E.g., VAX 11/780: each process sees a 4 gigabyte (2^32 byte) virtual address space, 1/2 for user regions and 1/2 for a system-wide name space shared by all processes; the page size is 512 bytes.

Optimal Page Size

Choose the page size that minimizes fragmentation: a large page size makes internal fragmentation more severe, BUT a small page size increases the number of pages per name space => larger page tables.
In general, the trend is toward larger page sizes because:
-- memories get larger as the price of RAM drops
-- the gap between processor speed and disk speed grows wider
-- programmers desire larger virtual address spaces
Most machines are at 4K byte pages today, with page sizes likely to increase.
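Picking up the Address Mapping Algorithm above, here is a hedged C sketch of the valid-bit and access-rights checks. The enum names, table layout, and fault stubs are assumptions for illustration; in hardware the faults are traps, not function calls.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef enum { ACC_READ, ACC_WRITE, ACC_EXEC } AccessKind;
typedef enum { RIGHTS_R, RIGHTS_RW, RIGHTS_X } AccessRights;  /* R, R/W, X */

typedef struct {
    bool         valid;   /* V bit */
    AccessRights rights;
    uint32_t     frame;
} Pte;

/* Stubs standing in for traps to the fault handler. */
static void page_fault(void)                 { exit(1); }
static void protection_violation_fault(void) { exit(1); }

static bool allowed(AccessRights r, AccessKind k) {
    switch (r) {
    case RIGHTS_R:  return k == ACC_READ;
    case RIGHTS_RW: return k == ACC_READ || k == ACC_WRITE;
    case RIGHTS_X:  return k == ACC_EXEC;
    }
    return false;
}

uint32_t translate(const Pte *table, uint32_t page, uint32_t disp, AccessKind k) {
    Pte pte = table[page];
    if (!pte.valid)
        page_fault();                   /* V = 0: page not resident      */
    if (!allowed(pte.rights, k))
        protection_violation_fault();   /* access incompatible w/ rights */
    return (pte.frame << 10) | disp;    /* 1 KB pages as in the example  */
}
```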

Fragmentation & Relocation

Fragmentation is when areas of memory space become unavailable for some reason.
Relocation: move a program or data to a new region of the address space (possibly fixing up all the pointers).
External Fragmentation: space left between blocks.
Internal Fragmentation: a program is not an integral number of pages, so part of the last page frame is "wasted" (obviously less of an issue as physical memories get larger).

Fragmentation (cont.)

Table Fragmentation occurs when page tables become very large because of large virtual address spaces; direct-mapped page tables can take up a sizable chunk of memory.
EX: VAX Architecture. A virtual address is XX | Page Number (21 bits) | Disp (9 bits), where the XX region bits select:
00  P0 region of user process
01  P1 region of user process
10  system name space
NOTE: this implies that a page table could require up to 2^21 entries, each on the order of 4 bytes long (8 MBytes).
Alternatives:
(1) Hardware associative mapping: requires one entry per page frame (O(M)) rather than per page (O(N)).
(2) A "software" approach based on a hash table (an inverted page table): each entry holds Present, Access, Page#, and Phys Addr fields, and the page-number field of the virtual address is located by an associative or hashed lookup.
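A sketch of the inverted-page-table idea in C: one entry per physical frame (O(M)), probed through a hash of the virtual page number. The hash function, the anchor table, and the collision chaining are assumptions for illustration; real designs vary.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_FRAMES 4096

typedef struct {
    bool     valid;
    uint32_t vpn;    /* virtual page number currently owning this frame   */
    int32_t  next;   /* next frame on the hash-collision chain, or -1     */
} InvEntry;

extern InvEntry inv_table[NUM_FRAMES];     /* one entry per page frame    */
extern int32_t  hash_anchor[NUM_FRAMES];   /* hash bucket -> first frame  */

/* Returns the frame holding vpn, or -1 on a miss (i.e., a page fault). */
int32_t inv_lookup(uint32_t vpn) {
    for (int32_t f = hash_anchor[vpn % NUM_FRAMES]; f != -1; f = inv_table[f].next)
        if (inv_table[f].valid && inv_table[f].vpn == vpn)
            return f;
    return -1;
}
```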

Questions and Administrative Matters (5 Minutes)

No lecture next Friday, March 24 (belated President's Day).
Exercises due by Friday vs. Tuesday.

Page Replacement Algorithms

Just like cache block replacement!
First-In/First-Out (FIFO):
-- in response to a page fault, replace the page that has been in memory for the longest period of time
-- does not make use of the principle of locality: an old but frequently referenced page could be replaced
-- easy to implement: maintain a history thread through the page table entries; no need to track past reference history
-- usually exhibits the worst behavior!
Least Recently Used (LRU):
-- selects the least recently used page for replacement
-- requires knowledge about past references and is more difficult to implement (thread through the page table entries from most recently referenced to least recently referenced; when a page is referenced it is placed at the head of the list; the end of the list is the page to replace)
-- good performance; recognizes the principle of locality
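A minimal sketch of both policies over frame indices, assuming a tiny fixed frame count. FIFO needs only a rotating pointer; for LRU this sketch uses per-frame timestamps rather than the linked-list threading the slide describes, which is an implementation substitution on my part.

```c
#include <stdint.h>

#define NFRAMES 8

/* FIFO: replace whichever frame was filled longest ago, regardless of use. */
static int fifo_next = 0;

int fifo_victim(void) {
    int victim = fifo_next;
    fifo_next = (fifo_next + 1) % NFRAMES;
    return victim;
}

/* LRU via timestamps: record each reference, evict the oldest. */
static uint64_t last_use[NFRAMES];

void lru_touch(int frame, uint64_t now) { last_use[frame] = now; }

int lru_victim(void) {
    int victim = 0;
    for (int f = 1; f < NFRAMES; f++)
        if (last_use[f] < last_use[victim])
            victim = f;                 /* least recently referenced frame */
    return victim;
}
```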

Page Replacement (Continued)

Not Recently Used (NRU): associated with each page is a reference flag such that
ref flag = 1 if the page has been referenced in the recent past
ref flag = 0 otherwise
-- if replacement is necessary, choose any page frame whose reference bit is 0: this is a page that has not been referenced in the recent past
-- clock implementation of NRU: the page table entries are arranged in a circle with a last-replaced pointer (lrp); if replacement is to take place, advance the lrp to the next entry (mod table size) until one with a 0 ref bit is found; this is the target for replacement. As a side effect, all examined PTEs have their reference bits set to zero. (A code sketch follows after the next slide.)
An optimization is to search for a page that is both not recently referenced AND not dirty.

Demand Paging and Prepaging

Fetch Policy: when is the page brought into memory?
If pages are loaded solely in response to page faults, then the policy is demand paging.
An alternative is prepaging: anticipate future references and load such pages before their actual use.
+ reduces page transfer overhead
- removes pages already in page frames, which could adversely affect the page fault rate
- predicting future references is usually difficult
Most systems implement demand paging without prepaging. (One way to obtain the effect of prepaging behavior is to increase the page size.)
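Returning to the clock implementation of NRU above, here is a self-contained C sketch; the frame count and the bare ref-bit array are assumptions standing in for the real page table entries.

```c
#include <stdbool.h>

#define NFRAMES 8

static bool ref_bit[NFRAMES];  /* set by hardware on each reference */
static int  lrp = 0;           /* last replaced pointer             */

/* Advance lrp (mod table size) until a frame with ref bit 0 is found;
 * as a side effect, every examined frame has its ref bit cleared. */
int clock_victim(void) {
    for (;;) {
        lrp = (lrp + 1) % NFRAMES;
        if (!ref_bit[lrp])
            return lrp;          /* not referenced recently: replace it */
        ref_bit[lrp] = false;    /* give it a second chance             */
    }
}
```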

2-Level Page Table

[Figure: root page tables (Seg 0 ... Seg 255, 256 four-byte entries each, 256K bytes total, kept in physical memory) hold PAs of second-level page tables (P0 ... P255, 1K four-byte entries each, 1 MByte total, but allocated in system virtual address space), which hold PAs of 4K data pages (D0 ... D1023, allocated in user virtual space).]
2^8 x 2^8 x 2^10 x 2^12 = 2^38 bytes of addressable space.

Virtual Address and a Cache

[Figure: the CPU issues a VA; translation produces a PA; on a hit the cache returns data, on a miss the access goes to main memory.]
In the segment + page approach OR the 2-level page table approach, it takes two memory accesses to translate a VA to a PA.
This makes cache access very expensive, and this is the "innermost loop" that you want to go as fast as possible.
ASIDE: why access the cache with the PA at all? VA caches have a problem: the synonym problem. Two different virtual addresses can map to the same physical address => two different cache entries holding data for the same physical address! On an update, you must update all cache entries with the same physical address, or memory becomes inconsistent. Determining this requires significant hardware: essentially an associative lookup on the physical address tags to see if you have multiple hits.
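To make the "two memory accesses per translation" concrete, here is a hedged sketch of a two-level walk. It assumes a 32-bit VA split 10/10/12 with 4 KB pages (a common layout, not the segment-based scheme in the figure above), and it uses C pointers where real hardware would follow physical addresses.

```c
#include <stdint.h>

typedef struct { uint32_t frame; } Entry;

uint32_t walk2(Entry **root, uint32_t va) {
    uint32_t l1  = (va >> 22) & 0x3FF;  /* index into the root table      */
    uint32_t l2  = (va >> 12) & 0x3FF;  /* index into second-level table  */
    uint32_t off =  va        & 0xFFF;  /* page offset                    */
    Entry *second = root[l1];           /* memory access #1               */
    Entry  pte    = second[l2];         /* memory access #2               */
    return (pte.frame << 12) | off;     /* concatenate frame and offset   */
}
```

These two loads are exactly why a TLB (next slide) matters: without one, every cache access would pay for a full walk first.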

TLBs

A way to speed up translation is to use a special cache of recently used page table entries -- this has many names, but the most frequently used is Translation Lookaside Buffer, or TLB.
A TLB entry holds: Virtual Address | Physical Address | Dirty | Ref | Valid | Access.
TLB access time is comparable to, though shorter than, cache access time (and still much less than main memory access time).

Translation Look-Aside Buffers

Just like any other cache, the TLB can be organized as fully associative, set associative, or direct mapped.
TLBs are usually small, typically no more than 128-256 entries even on high-end machines. This permits fully associative lookup on these machines. Most mid-range machines use small n-way set associative organizations.
[Figure: the CPU presents a VA to the TLB; on a TLB hit the PA goes straight to the cache; on a TLB miss, translation runs first. Rough timings: TLB lookup 1/2 t, cache hit t, main memory 20 t.]
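A sketch of a small fully associative TLB in C, with the entry format reduced to valid/VPN/PFN (the dirty, ref, and access bits are omitted here to keep it short; the sizes and names are illustrative).

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 64

typedef struct {
    bool     valid;
    uint32_t vpn;   /* virtual page number: the tag */
    uint32_t pfn;   /* physical frame number        */
} TlbEntry;

static TlbEntry tlb[TLB_ENTRIES];

/* Fully associative lookup: compare the VPN against every entry.
 * Hardware does all comparisons in parallel; C has to loop. */
bool tlb_lookup(uint32_t vpn, uint32_t *pfn_out) {
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *pfn_out = tlb[i].pfn;
            return true;    /* TLB hit: no page table access needed */
        }
    }
    return false;           /* TLB miss: walk the page table        */
}
```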

Reducing Translation Time

Machines with TLBs go one step further to reduce the number of cycles per cache access: they overlap the cache access with the TLB access.
This works because the high-order bits of the VA are used to look in the TLB while the low-order bits are used as the index into the cache.
DS3100: 32-bit VA, 4 KB pages, 20-bit virtual page tag, 32-bit PA. TLB fully associative, 64 entries (tag, page, dirty, valid), with support for a table walk on a TLB miss (rather than full hardware replacement).

Overlapped Cache & TLB Access

[Figure: the 32-bit VA splits into a 20-bit page # and a 12-bit disp; the page # drives the TLB associative lookup while the 10-bit cache index (plus 2-bit "00" byte offset; 1K entries of 4 bytes) drives the cache lookup in parallel; the PA from the TLB is then compared against the cache tag.]
IF cache hit AND (cache tag = PA) THEN deliver data to the CPU
ELSE IF [cache miss OR (cache tag != PA)] AND TLB hit THEN access memory with the PA from the TLB
ELSE do standard VA translation

Problems With Overlapped TLB Access

Overlapped access only works as long as the address bits used to index into the cache do not change as the result of VA translation.
This usually limits things to small caches, large page sizes, or highly set-associative caches if you want a large cache.
Example: suppose everything stays the same except that the cache is increased to 8K bytes instead of 4K. The cache index grows to 11 bits, and its top bit now falls in the virtual page number (20-bit page #, 12-bit disp): that bit is changed by VA translation, yet it is needed for the cache lookup.
Solutions:
go to 8K byte page sizes, or
go to a 2-way set associative cache (two 1K-entry banks; this would allow you to continue to use a 10-bit index).

SS-20 Review

4-way associative 16 KB Dcache = 4 * 1 page (4 KB)
5-way associative 20 KB Icache = 5 * 1 page (4 KB)
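The underlying rule can be stated as one inequality: overlap works when every cache-index bit lies within the untranslated page offset, i.e. cache size <= page size * associativity. A tiny C sketch of the check, with the example cases from this slide (the function name is mine):

```c
#include <stdbool.h>
#include <stdio.h>

bool can_overlap(unsigned cache_bytes, unsigned page_bytes, unsigned ways) {
    return cache_bytes <= page_bytes * ways;
}

int main(void) {
    printf("%d\n", can_overlap(4096, 4096, 1)); /* 1: 4 KB direct mapped, OK        */
    printf("%d\n", can_overlap(8192, 4096, 1)); /* 0: index bit 12 gets translated  */
    printf("%d\n", can_overlap(8192, 4096, 2)); /* 1: 2-way keeps a 10-bit index    */
    return 0;
}
```

The SS-20 caches above sit exactly at this limit: 16 KB = 4 ways x 4 KB pages, and 20 KB = 5 ways x 4 KB pages.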

Break (5 Minutes)

Virtual Memory

Virtual address (2^32 or 2^64) to physical address (2^28) mapping.
Virtual memory in cache terms: what corresponds to a cache block? To a cache miss?
How is virtual memory different from caches? What controls replacement, the size (transfer unit), and the use of the lower level.
The 4 Qs for VM:
Q1: Where can a block be placed in the upper level? Fully associative, set associative, or direct mapped.
Q2: How is a block found if it is in the upper level? For the cache it was tag/block.
Q3: Which block should be replaced on a miss? For the cache it was random or LRU.
Q4: What happens on a write? Write back or write through (with a write buffer).

More on Selecting a Page Size

Reasons for a larger page size:
-- Page table size is inversely proportional to the page size; memory is therefore saved.
-- Fast cache hit time is easy when cache size <= page size; a bigger page makes a bigger one-page cache feasible.
-- Transferring larger pages to or from secondary storage, possibly over a network, is more efficient.
-- The number of TLB entries is restricted by clock cycle time, so a larger page size maps more memory, thereby reducing TLB misses.
Reasons for a smaller page size:
-- Don't waste storage; data must be contiguous within a page.
-- Quicker process start for small processes?
Hybrid solution: multiple page sizes. Alpha: 8 KB, 64 KB, 512 KB, 4 MB pages.

Segmentation (see x86)

An alternative to paging (often combined with paging).
Segments are allocated for each program module and may be different sizes; the segment is the unit of transfer between physical memory and disk.
[Figure: BR + seg # index the Segment Table; each entry holds a Presence bit, access rights, the segment length, and Addr = the start address of the segment; the physical address is formed as start address + displacement.]
Faults:
missing segment (Present = 0)
overflow (displacement exceeds segment length)
protection violation (access incompatible with segment protection)
Segment-based addressing is sometimes used to implement capabilities, i.e., hardware support for sophisticated protection mechanisms.
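A hedged C sketch of segment-based translation as just described: the struct fields mirror the segment-table entry on the slide, the access-rights check is elided for brevity, and the fault stubs stand in for hardware traps.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    bool     present;  /* Presence bit                                 */
    uint32_t length;   /* segment length, for the overflow check       */
    uint32_t base;     /* Addr = start address of segment in phys mem  */
} SegEntry;

static void missing_segment_fault(void) { exit(1); }
static void overflow_fault(void)        { exit(1); }

uint32_t seg_translate(const SegEntry *seg_table, uint32_t seg, uint32_t disp) {
    SegEntry e = seg_table[seg];
    if (!e.present)
        missing_segment_fault();        /* Present = 0                   */
    if (disp >= e.length)
        overflow_fault();               /* displacement exceeds length   */
    return e.base + disp;  /* addition, not concatenation as with paging */
}
```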

Segment Based Addressing

Two serious drawbacks:
(1) storage allocation with variable-sized blocks (best fit vs. first fit vs. buddy system)
(2) external fragmentation: physical memory allocated in such a fashion that all remaining pieces are too small to be allocated to any segment. Solved by expensive run-time memory compaction.
The best of both worlds: paged segmentation schemes.
virtual address = seg # | page # | displacement
Used by IBM: 4K byte pages, 16 x 1 MByte or 64 x 64 KByte segments.

Example of Fast Translation: Translation Buffer

A cache of translated addresses. Alpha 21064 TLB: 32 entries, fully associative.
[Figure: the page-frame address <30> is compared against the V <1>, R <2>, W <2>, Tag <30> fields of all 32 entries; a 32:1 MUX selects the matching physical address <21>, which supplies the high-order 21 bits of the 34-bit physical address; the page offset <13> supplies the low-order 13 bits.]

Alpha VM Mapping

The 64-bit address is divided into 3 segments:
seg0 (bit 63 = 0): user code/heap
seg1 (bit 63 = 1, bit 62 = 1): user stack
kseg (bit 63 = 1, bit 62 = 0): kernel segment for the OS
3-level page table, each level one page in size. The Alpha uses only 43 unique bits of VA (a future minimum page size of up to 64 KB => 55 bits of VA).
[Figure: the seg0/seg1 selector sign-extends the address with 000...0 or 111...1; the Page Table Base Register plus the level-1 index selects an L1 page table entry, which points to an L2 page table, which points to an L3 page table, whose entry supplies the physical page-frame number; the page offset completes the physical address into main memory.]
PTE bits: valid, kernel & user read & write enable. (No reference, use, or dirty bit -- what do you do?)

Alpha 21064: Separate Instruction & Data TLBs and Caches

TLBs are fully associative; the caches are 8 KB and direct mapped.
[Figure: the Alpha 21064 memory hierarchy: the ITLB and 8 KB ICache (256 blocks, 21-bit tags, instruction prefetch stream buffer) feed the PC side; the DTLB and 8 KB DCache (256 blocks, delayed write buffer, critical 8 bytes first) serve data; behind them sit a direct-mapped 2 MB L2 cache (65,536 blocks, victim buffer, write buffer) and a 256-bit path to main memory built from 4 64-bit modules, backed by magnetic disk. Page-frame address <30>, physical address tag <21>, page offset <13>.]

Virtual Memory in Historical Perspective

Since VM was invented, DRAMs have grown 64,000 times larger. Today systems rarely have many page faults. Should we drop VM, then?

Conclusion #1

Virtual memory was invented as another level of the hierarchy. It was controversial at the time: can software automatically manage 64 KB across many programs? DRAM growth removed the controversy.
Today VM allows many processes to share a single memory without having to swap all processes to disk; protection is now more important.
(Multi-level) page tables map virtual addresses to physical addresses. TLBs are important for fast translation, and TLB misses are significant in performance.

Conclusion #2

The theory of algorithms & compilers has been based on the number of operations. Compilers remove operations and simplify them: integer adds << integer multiplies << FP adds << FP multiplies. Advanced pipelines mean these operations now take similar time.
As clock rates get higher and pipelines get longer, instructions take less time, but DRAMs get only slightly faster (although much larger). Today, time is a function of (ops, cache misses).
Given the importance of caches, what does this mean for compilers? Data structures? Algorithms?