CS162 Operating Systems and Systems Programming Lecture 13: Caches and TLBs


CS162 Operating Systems and Systems Programming
Lecture 13: Caches and TLBs
March 12, 2008
Prof. Anthony D. Joseph
http://inst.eecs.berkeley.edu/~cs162

Review: Multi-level Translation
- What about a tree of tables?
» Lowest level is a page table: memory still allocated with a bitmap
» Higher levels often segmented
» Could have any number of levels
- Example (top-level segment table): the virtual address is split into Seg # and Page #; the selected segment entry (base, limit, valid) points to a page table whose entries carry permissions (V,R; V,R,W; or N), and the matching entry supplies the physical page number that is combined with the offset to form the physical address after a permission check
[Figure: segment table (Base0/Limit0/V through Base7/Limit7/V) indexing per-segment page tables; invalid entries or failed permission checks raise an access error.]
- What must be saved/restored on a context switch?
» Contents of the top-level segment registers (for this example)
» Pointer to the top-level table (page table)

Review: Two-level Page Table
- Virtual address split 10 bits / 10 bits / 12 bits: P1 index, P2 index, page offset
- Tree of page tables: tables are fixed size (1024 entries of 4 bytes each); pages are 4 KB
[Figure: PageTablePtr points at the top-level table; the P1 index selects a second-level table, the P2 index selects the PTE, and the 12-bit offset completes the physical address. A walk sketch in C appears at the end of this page.]
- On a context switch: save the single PageTablePtr register
- Sometimes the top-level page tables are called directories (Intel)
- Each entry is called a (surprise!) Page Table Entry (PTE)

Review: What is in a PTE?
- What is in a Page Table Entry (PTE)?
» Pointer to the next-level page table or to the actual page
» Permission bits: valid, read-only, read-write, write-only
- Example: Intel x86 architecture PTE
» Address format same as the previous slide (10, 10, 12-bit offset)
» Intermediate page tables are called Directories
» Bits 31-12: Page Frame Number (physical page number); bits 11-9: free for the OS; bits 8-0: 0, L, D, A, PCD, PWT, U, W, P
» P: Present (same as the valid bit in other architectures)
» W: Writeable
» U: User accessible
» PWT: Page write transparent: the external cache is write-through
» PCD: Page cache disabled (page cannot be cached)
» A: Accessed: page has been accessed recently
» D: Dirty (PTE only): page has been modified recently
» L: L = 1 means a 4 MB page (directory entry only); the bottom 22 bits of the virtual address then serve as the offset
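To make the 10/10/12 walk concrete, here is a minimal C sketch of the lookup. It is a toy model, not exact x86 semantics: the directory and tables are plain in-memory arrays, an entry's upper bits hold the next level's host address, and the names (`pt_walk`, `PTE_P`) are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

#define PTE_P 0x1u                      /* Present bit, as in the x86 PTE slide */

/* Split a 32-bit virtual address into 10-bit P1 index,
   10-bit P2 index, and 12-bit page offset. */
static inline uint32_t p1_index(uint32_t va) { return va >> 22; }
static inline uint32_t p2_index(uint32_t va) { return (va >> 12) & 0x3FFu; }
static inline uint32_t pg_offset(uint32_t va) { return va & 0xFFFu; }

/* Toy two-level walk: each level is an array of 1024 4-byte entries.
   Returns false to signal a page fault. */
bool pt_walk(const uint32_t *page_table_ptr, uint32_t va, uint32_t *pa) {
    uint32_t pde = page_table_ptr[p1_index(va)];
    if (!(pde & PTE_P))
        return false;                               /* directory entry invalid */
    const uint32_t *pt = (const uint32_t *)(uintptr_t)(pde & ~0xFFFu);
    uint32_t pte = pt[p2_index(va)];
    if (!(pte & PTE_P))
        return false;                               /* page not present */
    *pa = (pte & ~0xFFFu) | pg_offset(va);          /* frame | offset */
    return true;
}
```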

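The x86 flag positions above can be written down as masks. A sketch: the names follow the slide, and the bit values assume the layout just described.

```c
#include <stdint.h>

/* Layout from the slide: bits 31-12 PFN, 11-9 free for the OS, then flags. */
enum {
    X86_PTE_P   = 1u << 0,   /* Present (valid bit)               */
    X86_PTE_W   = 1u << 1,   /* Writeable                         */
    X86_PTE_U   = 1u << 2,   /* User accessible                   */
    X86_PTE_PWT = 1u << 3,   /* Write-through external cache      */
    X86_PTE_PCD = 1u << 4,   /* Page caching disabled             */
    X86_PTE_A   = 1u << 5,   /* Accessed recently                 */
    X86_PTE_D   = 1u << 6,   /* Dirty: modified recently          */
    X86_PTE_L   = 1u << 7,   /* 4 MB page (directory entry only)  */
};

/* The Page Frame Number lives in the top 20 bits. */
static inline uint32_t pte_frame(uint32_t pte) { return pte & 0xFFFFF000u; }
```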
Goals for Today
- Caching
- Translation Look-aside Buffers (TLBs)
Note: some slides and/or pictures in the following are adapted from slides (c) 2005 Silberschatz, Galvin, and Gagne.

Caching Concept
- Cache: a repository for copies that can be accessed more quickly than the original
» Make the frequent case fast and the infrequent case less dominant
- Caching underlies many of the techniques that are used today to make computers fast
» Can cache: memory locations, address translations, pages, file blocks, file names, network routes, etc.
- Only good if:
» The frequent case is frequent enough, and
» The infrequent case is not too expensive
- Important measure: Average Access Time = (Hit Rate x Hit Time) + (Miss Rate x Miss Time)

Why Bother with Caching?
[Figure: processor-DRAM gap (latency), 1980-2000, log scale. CPU performance ("Moore's Law", really Joy's Law) grows ~60%/yr (2x every 1.5 yr); DRAM ("Less' Law"?) grows ~9%/yr (2x every 10 yrs); the processor-memory performance gap grows ~50%/yr.]

Another Major Reason to Deal with Caching
- Cannot afford to translate on every access
» At least three DRAM accesses per actual DRAM access
» Or perhaps I/O if the page table is partially on disk!
- Even worse: what if we are using caching to make memory access faster than DRAM access?
- Solution? Cache translations!
» Translation cache: TLB ("Translation Lookaside Buffer")
[Figure: the same multi-level translation datapath as before (segment table plus page table, with permission checks), now with the translation result cached.]
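As a quick check of the average-access-time formula above, a small sketch; the hit rate and latencies are made-up illustrative numbers, not figures from the lecture.

```c
#include <stdio.h>

/* Average access time = hit rate * hit time + miss rate * miss time. */
static double avg_access_time(double hit_rate, double hit_ns, double miss_ns) {
    return hit_rate * hit_ns + (1.0 - hit_rate) * miss_ns;
}

int main(void) {
    /* Hypothetical numbers: 98% hits at 1 ns, 100 ns on a miss. */
    printf("AMAT = %.2f ns\n", avg_access_time(0.98, 1.0, 100.0));
    return 0;
}
```

With these numbers, AMAT = 0.98 x 1 + 0.02 x 100 = 2.98 ns: even a 2% miss rate triples the effective access time, which is why both the frequent and infrequent cases matter.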

Why Does Caching Help? Locality!
[Figure: the probability of reference is concentrated in small regions of the address space (0 to n-1) at any moment; an upper level near the processor holds block X, the lower level holds block Y.]
- Temporal Locality (locality in time): keep recently accessed data items closer to the processor
- Spatial Locality (locality in space): move contiguous blocks to the upper levels
(For a concrete code illustration, see the traversal sketch at the end of this page.)

Administrivia
- Project #2 code deadline is next Thu (3/20)
- Having Eclipse startup problems?
» The fix is to delete your ~/.eclipse folder: rm -rf ~/.eclipse
» Then restart Eclipse to recreate your config; you don't have to delete your workspace
- Please use the CS162 newsgroup for faster response: EECS email is significantly delayed this week
- Midterm #2 re-grade requests are due by Fri 3/14, 5pm
» Talk with us if your grade is 1-2 std devs below the mean
- Attend a CSUA Unix session to better understand Unix
» CSUA holds them towards the beginning of each semester

Hierarchy of a Modern Computer System
- Take advantage of the principle of locality to:
» Present as much memory as in the cheapest technology
» Provide access at the speed offered by the fastest technology

Level                         Speed (ns)                  Size (bytes)
Registers                     1s                          100s
Cache (on-chip / L2 SRAM)     10s-100s                    Ks-Ms
Main memory (DRAM)            100s                        Ms-Gs
Secondary storage (disk)      10,000,000s (10s ms)        Gs-Ts
Tertiary storage (tape)       10,000,000,000s (10s sec)   Ts-Ps

Jim Gray's Storage Latency Analogy: How Far Away is the Data?
- Registers: 1 tick (my head: 1 min)
- On-chip cache: 2 ticks (this room)
- On-board cache: 10 ticks (this lecture hall: 10 min)
- Main memory: 100 ticks (Sacramento: 1.5 hr)
- Disk: 10^6 ticks (Pluto: 2 years)
- Tape / optical robot: 10^9 ticks (Andromeda: 2,000 years)
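To connect the locality idea to real code: row-major traversal of a C matrix touches consecutive addresses (spatial locality), while column-major traversal strides a full row between accesses. A small sketch, with sizes chosen only for illustration:

```c
#define R 1024
#define C 1024
static int a[R][C];

/* Row-major order: consecutive addresses, so one cache miss
   brings in a whole block of upcoming elements (spatial locality). */
long sum_row_major(void) {
    long s = 0;
    for (int i = 0; i < R; i++)
        for (int j = 0; j < C; j++)
            s += a[i][j];
    return s;
}

/* Column-major order: each access strides C ints ahead,
   so nearly every access can miss. */
long sum_col_major(void) {
    long s = 0;
    for (int j = 0; j < C; j++)
        for (int i = 0; i < R; i++)
            s += a[i][j];
    return s;
}
```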

A Summary on Sources of Cache Misses
- Compulsory (cold start or process migration; first reference): the first access to a block
» Cold fact of life: not a whole lot you can do about it
» Note: if you are going to run billions of instructions, compulsory misses are insignificant
- Capacity: the cache cannot contain all blocks accessed by the program
» Solution: increase cache size
- Conflict (collision): multiple memory locations mapped to the same cache location
» Solution 1: increase cache size
» Solution 2: increase associativity
- Coherence (invalidation): another process (e.g., I/O) updates memory

How is a Block Found in a Cache?
- The address is divided into Tag | Index | Byte (data) Select
- Index is used to look up candidates in the cache
» The index identifies the set
- Tag is used to identify the actual copy
» If no candidates match, then declare a cache miss
- The block is the minimum quantum of caching
- The data select field is used to select data within the block
» Many caching applications don't have a data select field (block offset)

Review: Direct Mapped Cache
- Direct mapped 2^N-byte cache:
» The lowest L bits are the Byte Select (block size = 2^L)
» The middle M bits are the Cache Index (cache lines = 2^M)
» The uppermost (32 - (M + L)) bits are the Tag
- Example: 1 KB direct-mapped cache with 32 B blocks
» Index chooses a potential block, the Tag is checked to verify the block, and Byte Select chooses the byte within the block (see the field-extraction sketch at the end of this page)
[Figure: 32-bit address split as Tag (bits 31-10, e.g. 0x50), Cache Index (bits 9-5, e.g. 0x01), Byte Select (bits 4-0, e.g. 0x00); each cache line holds a valid bit, a tag, and 32 bytes of data (bytes 0-1023 across 32 lines).]

Review: Set Associative Cache
- N-way set associative: N entries per Cache Index
» N direct-mapped caches operating in parallel
- Example: two-way set associative cache
» The Cache Index selects a set from the cache
» The two tags in the set are compared to the incoming tag in parallel
» Data is selected based on the tag comparison result
[Figure: two tag comparators (Compare 0 / Compare 1) drive select lines Sel0/Sel1 into a mux; the OR of the compare outputs is the hit signal.]
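For the 1 KB, 32 B-block example above (L = 5, M = 5), the field extraction and hit check might look like the following sketch; the structure names are illustrative.

```c
#include <stdint.h>

#define BYTE_BITS  5u            /* 32 B blocks => L = 5            */
#define INDEX_BITS 5u            /* 1 KB / 32 B = 32 lines => M = 5 */

static inline uint32_t byte_sel(uint32_t addr) { return addr & 0x1Fu; }
static inline uint32_t line_idx(uint32_t addr) { return (addr >> BYTE_BITS) & 0x1Fu; }
static inline uint32_t tag_of(uint32_t addr)   { return addr >> (BYTE_BITS + INDEX_BITS); }

struct line { uint32_t tag; int valid; uint8_t data[32]; };

/* Direct mapped: exactly one candidate line, verified by its tag. */
int dm_hit(const struct line cache[32], uint32_t addr) {
    const struct line *ln = &cache[line_idx(addr)];
    return ln->valid && ln->tag == tag_of(addr);
}
```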

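A matching sketch for the two-way case: the hardware compares both ways of the selected set in parallel, which in C becomes a short loop. Again the structures are illustrative.

```c
#include <stdint.h>

struct way { uint32_t tag; int valid; uint8_t data[32]; };
struct set { struct way w[2]; };        /* two-way set associative */

/* Returns the matching way (Sel0/Sel1 in the figure), or -1 on a miss. */
int sa_lookup(const struct set *s, uint32_t tag) {
    for (int i = 0; i < 2; i++)
        if (s->w[i].valid && s->w[i].tag == tag)
            return i;
    return -1;
}
```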
Review: Fully Associative Cache
- Fully associative: every block can hold any line
» The address does not include a cache index
» Compare the Tags of all cache entries in parallel
- Example: block size = 32 B
» We need N 27-bit comparators
» Still have a byte select to choose from within the block
[Figure: address split as Tag (bits 31-5, 27 bits long) and Byte Select (bits 4-0, e.g. 0x01); every entry has a valid bit, a tag, and 32 bytes of data.]

Review: Where Does a Block Get Placed in a Cache?
- Example: block 12 placed in an 8-block cache (32-block address space); a small arithmetic sketch of these rules appears at the end of this page
» Direct mapped: block 12 can go only into block 4 (12 mod 8)
» Set associative (2-way, 4 sets): block 12 can go anywhere in set 0 (12 mod 4)
» Fully associative: block 12 can go anywhere

Review: Which Block Should Be Replaced on a Miss?
- Easy for direct mapped: only one possibility
- Set associative or fully associative: Random or LRU (Least Recently Used)
- Miss rates, LRU vs. Random:

Cache Size   2-way LRU   2-way Random   4-way LRU   4-way Random   8-way LRU   8-way Random
16 KB        5.2%        5.7%           4.7%        5.3%           4.4%        5.0%
64 KB        1.9%        2.0%           1.5%        1.7%           1.4%        1.5%
256 KB       1.15%       1.17%          1.13%       1.13%          1.12%       1.12%

Review: What Happens on a Write?
- Write through: the information is written both to the block in the cache and to the block in the lower-level memory
- Write back: the information is written only to the block in the cache
» The modified cache block is written to main memory only when it is replaced
» Question: is the block clean or dirty?
- Pros and cons of each?
» WT: PRO: read misses cannot result in writes; CON: the processor is held up on writes unless writes are buffered
» WB: PRO: repeated writes are not sent to DRAM, so the processor is not held up on writes; CON: more complex, and a read miss may require writeback of dirty data
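The mod arithmetic in the placement example generalizes directly; a sketch:

```c
/* Direct mapped: memory block b has exactly one slot.  12 % 8 == 4. */
int dm_slot(int b, int nblocks) { return b % nblocks; }

/* Set associative: b may go in any way of set (b mod nsets).
   With 4 sets of 2 ways, 12 % 4 == 0. */
int sa_set(int b, int nsets) { return b % nsets; }

/* Fully associative: b may go in any of the nblocks slots; no index at all. */
```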

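A sketch of the write-back bookkeeping just described: a store touches only the cached copy and sets the dirty bit, and the write to lower-level memory happens at eviction. `mem_read` and `mem_write` are assumed stand-ins for the lower level, not a real API.

```c
#include <stdint.h>

struct wb_line { uint32_t tag; int valid, dirty; uint8_t data[32]; };

void mem_write(uint32_t tag, const uint8_t *data);  /* assumed lower level */
void mem_read(uint32_t tag, uint8_t *data);

/* Write back: a store modifies only the cached copy and marks it dirty. */
void wb_store(struct wb_line *ln, uint32_t off, uint8_t v) {
    ln->data[off] = v;
    ln->dirty = 1;
}

/* On replacement, a dirty victim must be written back first:
   this is the "read miss may require a writeback" cost. */
void wb_evict_and_refill(struct wb_line *ln, uint32_t new_tag) {
    if (ln->valid && ln->dirty)
        mem_write(ln->tag, ln->data);
    mem_read(new_tag, ln->data);
    ln->tag = new_tag;
    ln->valid = 1;
    ln->dirty = 0;
}
```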
Caching Applied to Address Translation
[Figure: the CPU issues a virtual address; if the translation is cached in the TLB, the physical address comes straight out; otherwise the MMU translates via the page tables. Data reads and writes then proceed untranslated (physically addressed).]
- The question is one of page locality: does it exist?
» Instruction accesses spend a lot of time on the same page (since accesses are sequential)
» Stack accesses have definite locality of reference
» Data accesses have less page locality, but still some
- Can we have a TLB hierarchy? Sure: multiple levels at different sizes/speeds

What Actually Happens on a TLB Miss?
- Hardware-traversed page tables:
» On a TLB miss, hardware in the MMU looks at the current page table to fill the TLB (it may walk multiple levels)
» If the PTE is valid, the hardware fills the TLB and the processor never knows
» If the PTE is marked invalid, it causes a Page Fault, after which the kernel decides what to do
- Software-traversed page tables (like MIPS); a simplified handler sketch appears at the end of this page:
» On a TLB miss, the processor receives a TLB fault
» The kernel traverses the page table to find the PTE
» If the PTE is valid, it fills the TLB and returns from the fault
» If the PTE is marked invalid, it internally calls the Page Fault handler
- Most chip sets provide hardware traversal
- Modern operating systems tend to have more TLB faults since they use translation for many things
» Examples: shared segments; user-level portions of an operating system

What Happens on a Context Switch?
- Need to do something, since TLBs map virtual addresses to physical addresses
» The address space just changed, so TLB entries are no longer valid!
- Options?
» Invalidate the TLB: simple but might be expensive (what if we are switching frequently between processes?)
» Include a ProcessID in the TLB: this is an architectural solution; it needs hardware
- What if the translation tables change? For example, to move a page from memory to disk or vice versa
» Must invalidate the TLB entry! Otherwise we might think the page is still in memory!
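The software-traversed path can be sketched as a simplified simulation. None of this is real MIPS code: the one-level `page_table`, the `tlb` array, and the round-robin replacement are all toy assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 64              /* R3000-sized, fully associative */
#define PTE_VALID   0x1u

struct tlb_entry { uint32_t vpn, pfn; bool valid; };
static struct tlb_entry tlb[TLB_ENTRIES];

extern uint32_t page_table[];       /* toy one-level page table */
void page_fault(uint32_t vpn);      /* kernel decides what to do */

/* Kernel entry point on a TLB fault: refill the TLB from the page
   table, or escalate to the page-fault handler. */
void tlb_miss_handler(uint32_t vpn) {
    uint32_t pte = page_table[vpn];
    if (!(pte & PTE_VALID)) {
        page_fault(vpn);            /* invalid PTE */
        return;
    }
    static unsigned next;           /* trivial round-robin replacement */
    tlb[next++ % TLB_ENTRIES] = (struct tlb_entry){ vpn, pte >> 12, true };
    /* return from fault: the processor retries the faulting access */
}
```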

What TLB Organization Makes Sense?
- The TLB sits between the CPU and the cache, and needs to be really fast
» It is on the critical path of memory access: in the simplest view, before the cache
» Thus it adds to access time (reducing cache speed)
» This seems to argue for direct mapped or low associativity
- However, it needs to have very few conflicts!
» With a TLB, the miss time is extremely high!
» This argues that the cost of a conflict (miss time) is much higher than the slightly increased cost of access (hit time)
- Thrashing: continuous conflicts between accesses
» What if we use the low-order bits of the page number as the index into the TLB? The first pages of code, data, and stack may map to the same entry; need 3-way associativity at least?
» What if we use the high-order bits as the index? The TLB is mostly unused for small programs

TLB Organization: Include Protection
- How big does the TLB actually have to be?
» Usually small: 128-512 entries
» Not very big; can support higher associativity
- The TLB is usually organized as a fully-associative cache
» Lookup is by virtual address; returns physical address + other info
- What happens when fully associative is too slow?
» Put a small (4-16 entry) direct-mapped cache in front, called a "TLB slice"
- Example TLB contents for the MIPS R3000:

Virtual Address   Physical Address   Dirty   Ref   Valid   Access   ASID
0xFA00            0x0003             Y       N     Y       R/W      34
0x0040            0x0010             N       Y     Y       R        0
0x0041            0x0011             N       Y     Y       R        0

Example: R3000 Pipeline Includes TLB Stages
- MIPS R3000 pipeline: Inst Fetch, Dcd/Reg, ALU/E.A., Memory, Write Reg
» TLB then I-Cache during instruction fetch; E.A. then TLB then D-Cache in the memory stage
- TLB: 64 entries, on-chip, fully associative, with a software TLB fault handler
- Virtual address space: ASID (6 bits), virtual page number (20 bits), offset (12 bits); the top bits select a region (a decoding sketch appears at the end of this page)
» 0xx: user segment (caching based on PT/TLB entry)
» 100: kernel physical space, cached
» 101: kernel physical space, uncached
» 11x: kernel virtual space
- Allows context switching among 64 user processes without a TLB flush

Reducing Translation Time Further
- As described, the TLB lookup is in series with the cache lookup
[Figure: virtual address = virtual page number + offset; the TLB lookup checks the valid bit and access rights and yields the physical page number; physical address = physical page number concatenated with the offset.]
- Machines with TLBs go one step further: they overlap the TLB lookup with the cache access
» Works because the offset is available early
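The R3000 region split above (top bits of the virtual address) can be decoded as in this sketch, based only on the table on the slide; the enum names are mine.

```c
#include <stdint.h>

enum r3000_region {
    USER_SEGMENT,            /* 0xx: caching per PT/TLB entry   */
    KERNEL_PHYS_CACHED,      /* 100: kernel physical, cached    */
    KERNEL_PHYS_UNCACHED,    /* 101: kernel physical, uncached  */
    KERNEL_VIRTUAL           /* 11x: kernel virtual space       */
};

enum r3000_region classify(uint32_t va) {
    uint32_t top3 = va >> 29;                 /* top three address bits */
    if ((top3 & 0x4u) == 0) return USER_SEGMENT;
    if (top3 == 0x4u)       return KERNEL_PHYS_CACHED;
    if (top3 == 0x5u)       return KERNEL_PHYS_UNCACHED;
    return KERNEL_VIRTUAL;
}
```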

Overlapping TLB & Cache Access
- Here is how this might work with a 4K cache: the 32-bit virtual address splits into a 20-bit page number and a 12-bit displacement. The 4K cache (1K four-byte entries) is indexed with 10 bits plus a 2-bit word offset ("00"), all taken from the untranslated displacement, so the cache index lookup proceeds in parallel with the associative TLB lookup; the frame number (FN) returned by the TLB is then compared against the cache tag to produce hit/miss. (A bit-width check appears as a sketch at the end of this page.)
- What if the cache size is increased to 8 KB?
» The overlap is not complete; need to do something else (see CS152/252)
- Another option: virtual caches
» Tags in the cache are virtual addresses
» Translation only happens on cache misses

Summary #1/2
- The Principle of Locality: a program is likely to access a relatively small portion of the address space at any instant of time
» Temporal Locality: locality in time
» Spatial Locality: locality in space
- Three (+1) major categories of cache misses:
» Compulsory misses: sad facts of life; example: cold-start misses
» Conflict misses: increase cache size and/or associativity
» Capacity misses: increase cache size
» Coherence misses: caused by external processors or I/O devices
- Cache organizations:
» Direct mapped: single block per set
» Set associative: more than one block per set
» Fully associative: all entries equivalent

Summary #2/2: Translation Caching (TLB)
- PTE: Page Table Entries
» Include the physical page number
» Control info (valid bit, writeable, dirty, user, etc.)
- A cache of translations called a Translation Lookaside Buffer (TLB)
» Relatively small number of entries (< 512)
» Fully associative (since conflict misses are expensive)
» TLB entries contain the PTE and an optional process ID
- On a TLB miss, the page table must be traversed
» If the located PTE is invalid, cause a Page Fault
- On a context switch or a change in the page table, TLB entries must be invalidated somehow
- The TLB is logically in front of the cache
» Thus it needs to be overlapped with cache access to be really fast
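The 4K-vs-8K point reduces to a bit-width check: the overlap is safe only while the bytes indexed within one cache way fit inside a page, so that the index and block-offset bits come entirely from the untranslated page offset. A sketch:

```c
#include <stdio.h>

/* Overlap is safe iff (cache size / associativity) <= page size,
   i.e. index + block-offset bits lie within the page offset. */
int overlap_ok(unsigned cache_bytes, unsigned assoc, unsigned page_bytes) {
    return cache_bytes / assoc <= page_bytes;
}

int main(void) {
    printf("4 KB direct mapped, 4 KB pages: %d\n", overlap_ok(4096, 1, 4096)); /* 1 */
    printf("8 KB direct mapped, 4 KB pages: %d\n", overlap_ok(8192, 1, 4096)); /* 0 */
    printf("8 KB two-way,       4 KB pages: %d\n", overlap_ok(8192, 2, 4096)); /* 1 */
    return 0;
}
```

This is why the slide's 8 KB case breaks the trick for a direct-mapped cache, while raising associativity (one of the "something else" options) can restore it.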