CSE 410 Computer Systems, Hal Perkins, Spring 2010. Lecture 12: More About Caches


Reading: Computer Organization and Design, Section 5.1 (Introduction), Section 5.2 (The Basics of Caches), and Section 5.3 (Measuring and Improving Cache Performance).

How big is the cache?

Suppose we have a byte-addressable machine with 16-bit addresses and a cache with the following characteristics: it is direct-mapped, each block holds one byte, and the cache index is the four least significant bits. Two questions: How many blocks does the cache hold? How many bits of storage are required to build the cache (e.g., for the data array, tags, etc.)?
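The two questions can be answered with a few lines of arithmetic. Here is a minimal C sketch, not from the slides, that assumes one valid bit and eight data bits per cache entry:

    #include <stdio.h>

    int main(void) {
        int addr_bits   = 16;   /* byte-addressable, 16-bit addresses */
        int index_bits  = 4;    /* the four least significant bits */
        int offset_bits = 0;    /* one-byte blocks: no offset field */
        int data_bits   = 8;    /* one byte of data per block */

        int blocks    = 1 << index_bits;                      /* 2^4 = 16 blocks */
        int tag_bits  = addr_bits - index_bits - offset_bits; /* 16 - 4 = 12 */
        int per_entry = 1 /* valid bit */ + tag_bits + data_bits;  /* 21 bits */

        printf("%d blocks, %d bits of storage\n", blocks, blocks * per_entry);
        /* prints: 16 blocks, 336 bits of storage */
        return 0;
    }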

More cache organizations

Now we'll explore some alternate cache organizations. How can we take advantage of spatial locality too? How can we reduce the number of potential conflicts? First, a brief discussion about cache performance.

Memory System Performance

To examine the performance of a memory system, we need to focus on a couple of important factors. How long does it take to send data from the cache to the CPU? How long does it take to copy data from memory into the cache? How often do we have to access main memory?

There are names for all of these variables. The hit time is how long it takes for data to be sent from the cache to the processor. This is usually fast, on the order of 1-3 clock cycles. The miss penalty is the time to copy data from main memory to the cache. This often requires dozens of clock cycles (at least). The miss rate is the percentage of misses.

[Figure: the CPU, a little static RAM (the cache), and lots of dynamic RAM (main memory).]

Average memory access time

The average memory access time, or AMAT, can then be computed:

AMAT = Hit time + (Miss rate x Miss penalty)

This is just averaging the amount of time for cache hits and the amount of time for cache misses. How can we improve the average memory access time of a system? Obviously, a lower AMAT is better. Miss penalties are usually much greater than hit times, so the best way to lower AMAT is to reduce the miss penalty or the miss rate. However, AMAT should only be used as a general guideline. Remember that execution time is still the best performance metric.

Performance example

Assume data accesses only. The cache hit ratio is 97% and the hit time is one cycle, but the miss penalty is 20 cycles.

AMAT = Hit time + (Miss rate x Miss penalty) = 1 cycle + (3% x 20 cycles) = 1.6 cycles

If the cache were perfect and never missed, the AMAT would be one cycle. But even with just a 3% miss rate, the AMAT here increases 1.6 times! How can we reduce the miss rate?
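Since AMAT is just a weighted average, it is easy to sanity-check in code. A minimal C sketch with the example's numbers plugged in (the amat helper is illustrative, not part of the lecture):

    #include <stdio.h>

    /* AMAT = hit time + miss rate * miss penalty */
    static double amat(double hit_time, double miss_rate, double miss_penalty) {
        return hit_time + miss_rate * miss_penalty;
    }

    int main(void) {
        /* 97% hit ratio -> 3% miss rate; 1-cycle hits, 20-cycle miss penalty */
        printf("AMAT = %.1f cycles\n", amat(1.0, 0.03, 20.0));  /* 1.6 */
        return 0;
    }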

Spatial locality

One-byte cache blocks don't take advantage of spatial locality, which predicts that an access to one address will be followed by an access to a nearby address. What can we do?

Spatial locality

What we can do is make the cache block size larger than one byte. Here we use two-byte blocks, so we can load the cache with two bytes at a time. If we read from address 12, the data in addresses 12 and 13 would both be copied to cache block 2.

[Figure: a 16-byte memory (addresses 0-15) next to a four-block cache (indices 0-3).]

Block addresses

Now how can we figure out where data should be placed in the cache? It's time for block addresses! If the cache block size is 2^n bytes, we can conceptually split the main memory into 2^n-byte chunks too. To determine the block address of a byte address i, you can do the integer division i / 2^n.

Our example has two-byte cache blocks, so we can think of a 16-byte main memory as an 8-block main memory instead. For instance, memory addresses 12 and 13 both correspond to block address 6, since 12 / 2 = 6 and 13 / 2 = 6.

[Figure: byte addresses 0-15 grouped in pairs into block addresses 0-7.]

Cache mapping

Once you know the block address, you can map it to the cache as before: find the remainder when the block address is divided by the number of cache blocks. In our example, memory block 6 belongs in cache block 2, since 6 mod 4 = 2. This corresponds to placing data from memory byte addresses 12 and 13 into cache block 2.

[Figure: byte addresses 0-15, their block addresses 0-7, and the four-block cache with indices 0-3.]
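The divide-then-mod mapping is straightforward to express in code. A minimal C sketch of the running example, with illustrative variable names:

    #include <stdio.h>

    int main(void) {
        int block_size = 2;   /* bytes per block */
        int num_blocks = 4;   /* blocks in the cache */

        for (int addr = 12; addr <= 13; addr++) {
            int block_addr = addr / block_size;        /* 12/2 = 13/2 = 6 */
            int index      = block_addr % num_blocks;  /* 6 mod 4 = 2 */
            printf("byte address %d -> memory block %d -> cache block %d\n",
                   addr, block_addr, index);
        }
        return 0;
    }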

Data placement within a block

When we access one byte of data in memory, we'll copy its entire block into the cache, to hopefully take advantage of spatial locality. In our example, if a program reads from byte address 12 we'll load all of memory block 6 (both addresses 12 and 13) into cache block 2.

Note that byte address 13 corresponds to the same memory block address! So a read from address 13 will also cause memory block 6 (addresses 12 and 13) to be loaded into cache block 2. To make things simpler, byte i of a memory block is always stored in byte i of the corresponding cache block.

[Figure: byte addresses 12 and 13 stored as byte 0 and byte 1 of cache block 2.]

Locating data in the cache

Let's say we have a cache with 2^k blocks, each containing 2^n bytes. We can determine where a byte of data belongs in this cache by looking at its address in main memory. k bits of the address will select one of the 2^k cache blocks. The lowest n bits are now a block offset that decides which of the 2^n bytes in the cache block will store the data.

m-bit Address = [(m - k - n)-bit Tag | k-bit Index | n-bit Block Offset]

Our example used a 4-block cache with 2 bytes per block. Thus, memory address 13 (1101) would be stored in byte 1 of cache block 2.

4-bit Address = [1-bit Tag | 2-bit Index | 1-bit Block Offset]

A picture (4-bit addresses!)

[Figure: a direct-mapped cache datapath for 4-bit addresses. The index bits select a (valid, tag, data) entry; a comparator checks the stored tag against the address tag, and combined with the valid bit produces the hit signal; a mux uses the block offset to select the byte of data.]

An exercise

For the addresses below, what byte is read from the cache (or is there a miss)?

[Figure: the same 4-bit-address cache, now filled with sample contents: the data bytes 0xCA, 0xFE, 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED along with valid bits and tags, plus the list of addresses to look up.]

Using arithmetic

An equivalent way to find the right location within the cache is to use arithmetic again.

m-bit Address = [(m - k - n)-bit Tag | k-bit Index | n-bit Block Offset]

We can find the index in two steps, as outlined earlier. Do integer division of the address by 2^n to find the block address. Then mod the block address with 2^k to find the index. The block offset is just the memory address mod 2^n.

For example, we can find address 13 in a 4-block, 2-byte-per-block cache. The block address is 13 / 2 = 6, so the index is then 6 mod 4 = 2. The block offset would be 13 mod 2 = 1.
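The arithmetic here and the bit-field view from two slides ago always agree, since dividing by 2^n is a right shift and taking mod 2^k keeps the low k bits. A small C sketch, not lecture code, that computes the pieces both ways for address 13 (setting n = 2 and k = 10 instead reproduces the larger example on the next slides):

    #include <stdio.h>

    int main(void) {
        unsigned addr = 13;   /* 1101 in binary */
        unsigned n = 1;       /* block offset bits (2-byte blocks) */
        unsigned k = 2;       /* index bits (4 blocks) */

        /* Bit-field view, as the hardware sees it */
        unsigned offset = addr & ((1u << n) - 1);         /* 13 & 1  = 1 */
        unsigned index  = (addr >> n) & ((1u << k) - 1);  /* 6 & 3   = 2 */
        unsigned tag    = addr >> (n + k);                /* 13 >> 3 = 1 */

        /* Arithmetic view from the slide gives the same answers:
         * block address = 13 / 2 = 6, index = 6 mod 4 = 2, offset = 13 mod 2 = 1 */
        printf("tag=%u index=%u offset=%u\n", tag, index, offset);
        return 0;
    }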

A diagram of a larger example cache

Here is a cache with 1,024 blocks of 4 bytes each, and 32-bit memory addresses.

[Figure: the direct-mapped datapath scaled up. The 32-bit address splits into a 20-bit tag, a 10-bit index, and a 2-bit block offset; the index selects one of 1,024 (valid, tag, data) entries, the tag comparison produces the hit signal, and a mux uses the block offset to select one of the four data bytes.]

A larger example cache mapping

Where would the byte from memory address 6146 be stored in this direct-mapped 1,024-block cache with 4-byte blocks? We can determine this with the binary force. 6146 in binary is 00...01 1000 0000 0010.

The lowest 2 bits, 10, mean this is byte 2 of its block. The next 10 bits, 10 0000 0000, are the block's index in the cache (512).

Equivalently, you could use arithmetic instead. The block offset is 6146 mod 4, which equals 2. The block address is 6146 / 4 = 1536, so the index is 1536 mod 1024, or 512.

A larger diagram of a larger example cache mapping

[Figure: the 32-bit address 00...01 1000 0000 0010 flowing through the datapath. The index selects row 512 of the cache, the stored tag is compared with the address tag 00...01, and the block offset 10 selects byte 2 of the data.]

What goes in the rest of that cache block?

The other three bytes of that cache block come from the same memory block, whose addresses must all have the same index (512) and the same tag (00...01).

[Figure: the same datapath, highlighting the full four-byte data entry at index 512.]

The rest of that cache block

Again, byte i of a memory block is stored into byte i of the corresponding cache block. In our example, memory block 1536 consists of byte addresses 6144 to 6147. So bytes 0-3 of the cache block would contain data from addresses 6144, 6145, 6146 and 6147 respectively. You can also look at the lowest 2 bits of the memory address to find the block offsets.

Block offset   Memory address           Decimal
00             00...01 1000 0000 0000   6144
01             00...01 1000 0000 0001   6145
10             00...01 1000 0000 0010   6146
11             00...01 1000 0000 0011   6147

Disadvantage of direct mapping

The direct-mapped cache is easy: indices and offsets can be computed with bit operators or simple arithmetic, because each memory address belongs in exactly one block. However, this isn't really flexible. If a program uses addresses 2, 6, 2, 6, 2, ..., then each access will result in a cache miss and a load into cache block 2. This cache has four blocks, but direct mapping might not let us use all of them. This can result in more misses than we might like.

[Figure: a 16-byte memory and a four-block cache; addresses 2 and 6 both map to cache block 2.]
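This conflict pattern is easy to reproduce with a toy simulation. Below is a minimal C sketch of a four-block direct-mapped cache with one-byte blocks; it is illustrative only, and every access in the 2, 6, 2, 6, ... trace misses:

    #include <stdio.h>
    #include <stdbool.h>

    #define BLOCKS 4   /* direct-mapped, one-byte blocks */

    int main(void) {
        bool valid[BLOCKS] = { false };
        int  tags[BLOCKS]  = { 0 };
        int  trace[]       = { 2, 6, 2, 6, 2, 6 };

        for (int i = 0; i < 6; i++) {
            int addr  = trace[i];
            int index = addr % BLOCKS;   /* 2 mod 4 = 6 mod 4 = 2 */
            int tag   = addr / BLOCKS;   /* 0 for address 2, 1 for address 6 */
            if (valid[index] && tags[index] == tag) {
                printf("address %d: hit\n", addr);
            } else {
                printf("address %d: miss, loaded into cache block %d\n",
                       addr, index);
                valid[index] = true;
                tags[index]  = tag;
            }
        }
        return 0;
    }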

A fully associative cache

A fully associative cache permits data to be stored in any cache block, instead of forcing each memory address into one particular block. When data is fetched from memory, it can be placed in any unused block of the cache. This way we'll never have a conflict between two or more memory addresses which map to a single cache block.

In the previous example, we might put memory address 2 in one cache block, and address 6 in another. Then subsequent repeated accesses to 2 and 6 would all be hits instead of misses. If all the blocks are already in use, it's usually best to replace the least recently used one, assuming that if it hasn't been used in a while, it won't be needed again anytime soon.

The price of full associativity

However, a fully associative cache is expensive to implement. Because there is no index field in the address anymore, the entire address must be used as the tag, increasing the total cache size. Data could be anywhere in the cache, so we must check the tag of every cache block. That's a lot of hardware! (Expensive, and starting to get slow.)

Set associativity

An intermediate possibility is a set-associative cache. The cache is divided into groups of blocks, called sets. Each memory address maps to exactly one set in the cache, but data may be placed in any block within that set. Each block in the set has a separate tag. If each set has 2^x blocks, the cache is a 2^x-way associative cache.

Here are several possible organizations of an eight-block cache:
1-way associativity: 8 sets, 1 block each
2-way associativity: 4 sets, 2 blocks each
4-way associativity: 2 sets, 4 blocks each

Locating a set associative block

We can determine where a memory address belongs in an associative cache in a similar way as before. If a cache has 2^s sets and each block has 2^n bytes, the memory address can be partitioned as follows:

m-bit Address = [(m - s - n)-bit Tag | s-bit Index | n-bit Block Offset]

Our arithmetic computations now compute a set index, to select a set within the cache instead of an individual block:

Block Offset = Memory Address mod 2^n
Block Address = Memory Address / 2^n
Set Index = Block Address mod 2^s

The tag is compared to all of the tags in that cache set to see which block (if any) in that set is a match.

Example placement in set-associative caches

Where would data from memory byte address 6195 be placed, assuming the eight-block cache designs below, with 16 bytes per block? 6195 in binary is 00...01 1000 0011 0011. Each block has 16 bytes, so the lowest 4 bits are the block offset. For the 1-way cache, the next three bits (011) are the set index. For the 2-way cache, the next two bits (11) are the set index. For the 4-way cache, the next one bit (1) is the set index. The data may go in any block, shown in green, within the correct set.

[Figure: the three eight-block organizations with the selected set highlighted in each: set 3 of 8 (1-way), set 3 of 4 (2-way), set 1 of 2 (4-way).]
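The set-index arithmetic from the previous slide can be applied to address 6195 directly. A short C sketch, not from the lecture, covering all three organizations:

    #include <stdio.h>

    int main(void) {
        unsigned addr       = 6195;
        unsigned block_size = 16;                 /* 2^4 bytes per block */
        unsigned block_addr = addr / block_size;  /* 6195 / 16 = 387 */
        unsigned offset     = addr % block_size;  /* 6195 mod 16 = 3 */
        unsigned sets[3]    = { 8, 4, 2 };        /* 1-way, 2-way, 4-way */

        printf("block address %u, block offset %u\n", block_addr, offset);
        for (int i = 0; i < 3; i++)
            printf("%u sets -> set index %u\n", sets[i], block_addr % sets[i]);
        /* prints set indices 3, 3, and 1, matching the slide */
        return 0;
    }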

Block replacement

Any empty block in the correct set may be used for storing data. If there are no empty blocks, the cache controller will attempt to replace the least recently used block, just like before. For highly associative caches, it's expensive to keep track of what's really the least recently used block, so some approximations are used. We won't get into the details.

[Figure: the same three organizations of an eight-block cache: 8 sets of 1 block, 4 sets of 2 blocks, and 2 sets of 4 blocks.]

Summary

Larger block sizes can take advantage of spatial locality by loading data from not just one address, but also nearby addresses, into the cache. Associative caches assign each memory address to a particular set within the cache, but not to any specific block within that set. Set sizes range from 1 (direct-mapped) to 2^k (fully associative). Larger sets and higher associativity lead to fewer cache conflicts and lower miss rates, but they also increase the hardware cost. In practice, 2-way through 16-way set-associative caches strike a good balance between lower miss rates and higher costs.

Next, we'll talk more about measuring cache performance, and also discuss the issue of writing data to a cache.