How will execution time grow with SIZE?

    int array[SIZE];
    int A = 0;
    for (int i = 0; i < /* repetition count lost in transcription */; i++) {
        for (int j = 0; j < SIZE; j++) {
            A += array[j];
        }
    }

Plot TIME against SIZE. [The "Actual Data" plot was lost in transcription; only stray axis values survive.]
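Here is a minimal, runnable version of that experiment (a sketch, not the original course code: the array size, the repetition scheme, and the use of clock() are my assumptions). It keeps total work roughly constant for each SIZE and prints the average time per element, which is one way to see the effect the slide's plot shows.

    #include <stdio.h>
    #include <time.h>

    /* Sum an array of `size` ints repeatedly; report average ns per element. */
    static double ns_per_element(const int *array, long size) {
        long reps = (1L << 27) / size;      /* keep total work roughly constant */
        volatile long sum = 0;              /* volatile: don't optimize the loop away */
        clock_t start = clock();
        for (long r = 0; r < reps; r++)
            for (long j = 0; j < size; j++)
                sum += array[j];
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        return 1e9 * secs / ((double)reps * (double)size);
    }

    int main(void) {
        static int array[1 << 23];          /* 32 MB of ints: bigger than typical caches */
        for (long size = 1L << 10; size <= (1L << 23); size <<= 1)
            printf("%9ld ints: %6.2f ns/element\n", size, ns_per_element(array, size));
        return 0;
    }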

Memory Systems and I/O

We've already seen how to make a fast processor. How can we supply the CPU with enough data to keep it busy? Part of CS78 focuses on memory and input/output issues, which are frequently bottlenecks that limit the performance of a system. We start off looking at memory systems and then turn to I/O:
- How caches can dramatically improve the speed of memory accesses.
- How virtual memory provides security and ease of programming.
- How processors, memory and peripheral devices can be connected.

Cache introduction

We'll answer the following questions:
- What are the challenges of building big, fast memory systems?
- What is a cache?
- Why do caches work?
- How are caches organized? Where do we put things, and how do we find them?

Small or slow

Unfortunately there is a tradeoff between speed, cost and capacity:

    Storage        Speed      Cost       Capacity
    Static RAM     Fastest    Expensive  Smallest
    Dynamic RAM    Slower     Cheap      Large
    Hard disks     Slowest    Cheapest   Largest

Fast memory is too expensive to have in abundance. But dynamic memory has a much longer delay than the other functional units in a datapath: if every lw or sw accessed dynamic memory, we'd have to either increase the cycle time or stall frequently. Here are estimates of some memory parameters of the era (most of the digits were lost in transcription, so the figures below are only indicative):

    Storage        Delay                Cost/MB        Capacity
    Static RAM     a few cycles         ~$5            hundreds of KB to a few MB
    Dynamic RAM    ~100 cycles or more  well under $1  hundreds of MB to 4 GB
    Hard disks     millions of cycles   cheapest       hundreds of GB

Introducing caches

Wouldn't it be nice to find a balance between fast and cheap memory? We do this with a cache, a small amount of fast, expensive memory. The cache goes between the processor and the slower, dynamic main memory. It keeps a copy of the most frequently used data from the main memory. Memory access speed increases overall, because the common case is faster:
- Reads and writes to the most frequently used addresses will be serviced by the cache.
- We only need to access the slower main memory for less frequently used data.

[Figure: CPU connected to a little static RAM (the cache), which connects to lots of dynamic RAM.]

The principle of locality

It's usually difficult or impossible to figure out what data will be most frequently accessed before a program actually runs, which makes it hard to know what to store in the small, precious cache memory. But in practice, most programs exhibit locality, which the cache can take advantage of:
- The principle of temporal locality says that if a program accesses one memory address, there is a good chance that it will access the same address again soon.
- The principle of spatial locality says that if a program accesses one memory address, there is a good chance that it will also access other nearby addresses.

Temporal locality in programs

The principle of temporal locality says memory accesses cluster in time. Loops exhibit temporal locality for instructions: the loop body will be executed many times, so the computer will need to access those same few locations of the instruction memory repeatedly. For example (register numbers were lost in transcription and have been reconstructed):

    Loop: lw   $t0, 0($s1)
          add  $t0, $t0, $s2
          sw   $t0, 0($s1)
          addi $s1, $s1, -4
          bne  $s1, $0, Loop

Each instruction will be fetched repeatedly, once on every loop iteration.

Temporal locality in data

Programs often access the same variables over and over, especially within loops. Below, sum and i are repeatedly read and written:

    sum = 0;
    for (i = 0; i < MAX; i++)
        sum = sum + f(i);

Commonly-accessed variables can sometimes be kept in registers, but this is not always possible. There are a limited number of registers, and there are situations where the data must be kept in memory, as is the case with shared or dynamically-allocated memory.

Spatial locality in programs

The principle of spatial locality says that memory references cluster within an address range, as in this stack-frame setup (register numbers and offsets reconstructed):

    sub $sp, $sp, 16
    sw  $ra, 0($sp)
    sw  $s0, 4($sp)
    sw  $a0, 8($sp)
    sw  $a1, 12($sp)

Nearly every program exhibits spatial locality, because instructions are usually executed in sequence: if we execute an instruction at memory location i, then we will probably also execute the next instruction, at memory location i + 1. Code fragments such as loops exhibit both temporal and spatial locality.

Spatial locality in data

Programs often access data that is stored contiguously. Arrays, like a in the code below, are stored in memory contiguously. The individual fields of a record or object like employee are also kept contiguously in memory:

    sum = 0;
    for (i = 0; i < MAX; i++)
        sum = sum + a[i];

    employee.name = "Homer Simpson";
    employee.boss = "Mr. Burns";
    employee.age = 45;

Can data have both spatial and temporal locality? (A sketch exploring this follows the next slide.)

Caches exploit temporal locality

The first time the processor reads from an address in main memory, a copy of that data is also stored in the cache. The next time that same address is read, we can use the copy of the data in the cache instead of accessing the slower dynamic memory. So the first read is a little slower than before, since it goes through both main memory and the cache, but subsequent reads are much faster. This takes advantage of temporal locality: commonly accessed data is stored in the faster cache memory.

[Figure: CPU connected to a little static RAM (cache) and lots of dynamic RAM.]
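Returning to the question above: yes. In the array-sum loop, sum is reused on every iteration (temporal locality) while a[i] walks through consecutive addresses (spatial locality). The sketch below makes the spatial part measurable by touching every element of an array exactly once, either sequentially or with a large stride; the array size and stride are my choices for illustration, not values from the lecture.

    #include <stdio.h>
    #include <time.h>

    #define N (1L << 24)                 /* 16M ints (64 MB), an assumed size */
    static int a[N];

    /* Touch every element exactly once. Stride 1 enjoys spatial locality;
       a large stride makes nearly every access land on a fresh cache line. */
    static double scan_seconds(long stride) {
        volatile long sum = 0;
        clock_t start = clock();
        for (long offset = 0; offset < stride; offset++)
            for (long i = offset; i < N; i += stride)
                sum += a[i];
        return (double)(clock() - start) / CLOCKS_PER_SEC;
    }

    int main(void) {
        printf("stride 1:    %.3f s\n", scan_seconds(1));
        printf("stride 4096: %.3f s\n", scan_seconds(4096));
        return 0;
    }

Both calls do the same number of additions, so any difference in time comes from how well the access pattern matches the cache.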

Caches exploit spatial locality

When the CPU reads location i from memory, a copy of that data is placed in the cache. But instead of just copying the contents of location i, we can copy several values into the cache at once, such as the four words from locations i through i + 3. If the CPU later needs to read from locations i + 1, i + 2 or i + 3, it can access that data from the cache and not the slower main memory. For example, instead of reading just one array element at a time, the cache might actually be loading four array elements at once. Again, the initial load incurs a performance penalty, but we're gambling on spatial locality and the chance that the CPU will need the extra data.

[Figure: CPU connected to a little static RAM (cache) and lots of dynamic RAM.]

Other kinds of caches

The general idea behind caches is used in many other situations. Networks are probably the best example: networks have relatively high latency and low bandwidth, so repeated data transfers are undesirable. Browsers like Firefox and IE store your most recently accessed web pages on your hard disk, administrators can set up a network-wide cache, and companies like Akamai also provide caching services. A few other examples:
- Many processors have a translation lookaside buffer, which is a cache dedicated to virtual memory support.
- Operating systems may store frequently-accessed disk blocks, like directories, in main memory... and that data may then in turn be stored in the CPU cache!

Definitions: hits and misses

A cache hit occurs if the cache contains the requested location. Hits are good, because the cache can return the data much faster than main memory. A cache miss occurs if the cache does not contain the requested location. This is bad, since the CPU must then wait for the slower main memory. There are two measurements of cache performance:
- The hit rate is the percentage of memory accesses that are handled by the cache.
- The miss rate (= 1 - hit rate) is the percentage of accesses that must be handled by the slower main RAM.
Typical caches have a hit rate of 95% or higher, so in fact most memory accesses will be handled by the cache and will be dramatically faster (a rough calculation follows the next slide).

A simple cache design

Caches are divided into blocks, a/k/a cache lines, which may be of various sizes; the number of blocks in a cache is usually a power of 2. For now we'll say that each block contains one byte. This won't take advantage of spatial locality, but we'll do that next time.

[Figure: an example cache with eight blocks, each holding one byte, drawn as a two-column table of block index and 8-bit data.]
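To put numbers on "dramatically faster", here is a standard back-of-the-envelope calculation; the cycle counts are illustrative assumptions, not figures from this lecture. Suppose a hit takes 1 cycle, a miss costs an extra 100 cycles of main-memory access, and the hit rate is 95%. The average access time is then

    0.95 × 1 + 0.05 × (1 + 100) = 6 cycles,

versus 100 cycles if every access went to main memory. The cache makes the common case fast, so the average stays close to the hit time.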

Four important questions

1. When we copy a block of data from main memory to the cache, where exactly should we put it?
2. How can we tell if a word is already in the cache, or if it has to be fetched from main memory first?
3. Eventually, the small cache memory might fill up. To load a new block from main RAM, we'd have to replace one of the existing blocks in the cache... which one?
4. How can write operations be handled by the memory system?

Questions 1 and 2 are related: we have to know where the data is placed if we ever hope to find it again later!

Where should we put data in the cache?

A direct-mapped cache is the simplest approach: each main memory address maps to exactly one cache block. For example, consider a 16-byte main memory and a 4-byte cache (four 1-byte blocks). Memory locations 0, 4, 8 and 12 all map to cache block 0; addresses 1, 5, 9 and 13 map to cache block 1; and so on. How can we compute this mapping?

[Figure: the sixteen memory addresses 0-15 on the left, each connected to one of the four cache indices on the right.]

It's all divisions

One way to figure out which cache block a particular memory address should go to is to use the mod (remainder) operator: if the cache contains 2^k blocks, then the data at memory address i would go to cache block index

    i mod 2^k

For instance, with the four-block cache here, address 14 would map to cache block 14 mod 4 = 2.

...or least-significant bits

An equivalent way to find the placement of a memory address in the cache is to look at the least significant k bits of the address. With our four-byte cache we would inspect the two least significant bits of our memory addresses. Again, you can see that address 14 (1110 in binary) maps to cache block 2 (10 in binary). Taking the least k bits of a binary value is the same as computing that value mod 2^k.
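The equivalence is easy to check in code. In C, i mod 2^k is i % (1 << k), and taking the low k bits is i & ((1 << k) - 1); this little check (my illustration, not course code) prints the mapping for the four-block cache:

    #include <assert.h>
    #include <stdio.h>

    int main(void) {
        const unsigned k = 2;                 /* four blocks = 2^2 */
        const unsigned blocks = 1u << k;
        for (unsigned addr = 0; addr < 16; addr++) {
            unsigned by_mod  = addr % blocks;           /* i mod 2^k */
            unsigned by_bits = addr & (blocks - 1);     /* low k bits of i */
            assert(by_mod == by_bits);                  /* always equal */
            printf("address %2u -> cache block %u\n", addr, by_mod);
        }
        return 0;
    }

For address 14 it prints cache block 2, matching the example above. The bit-mask form is essentially what the hardware does: the low k address bits are wired directly to the cache's index port.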

How can we find data in the cache?

The second question was how to determine whether or not the data we're interested in is already stored in the cache. If we want to read memory address i, we can use the mod trick to determine which cache block would contain i. But other addresses might also map to the same cache block; how can we distinguish between them? For instance, cache block 2 could contain data from addresses 2, 6, 10 or 14.

Adding tags

We need to add tags to the cache, which supply the rest of the address bits to let us distinguish between different memory locations that map to the same cache block.

[Figure: a cache table with Index, Tag and Data columns.]

Figuring out what's in the cache

Now we can tell exactly which addresses of main memory are stored in the cache, by concatenating the cache block tags with the block indices (a one-line code version of this follows below).

[Figure: a table of Index, Tag and Data, with each block's main memory address computed as tag + index, i.e., the tag bits followed by the index bits.]

One more detail: the valid bit

When started, the cache does not contain valid data, i.e., it is empty. To account for this we add a valid bit for each block:
- When the system is initialized, all the valid bits are set to 0.
- When data is loaded into a particular cache block, the corresponding valid bit is set to 1.

[Figure: the same table extended with a Valid bit column; blocks whose valid bit is 0 are marked Invalid.]

So the cache contains more than just copies of the data in memory; it also has bits to help us find data within the cache and verify its validity.
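The "concatenation" is just a shift and an OR. A tiny illustration with k = 2 index bits, matching the four-block example (the particular tag and index values are my choices):

    #include <stdio.h>

    int main(void) {
        const unsigned k = 2;               /* four blocks -> 2 index bits */
        unsigned tag = 3, index = 2;        /* tag 11, index 10 in binary */
        unsigned addr = (tag << k) | index; /* tag bits followed by index bits */
        printf("tag %u + index %u -> address %u\n", tag, index, addr);
        return 0;
    }

This prints address 14: a block at index 2 holding tag 3 is caching memory address 14, the same address used in the mapping example earlier.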

What happens on a cache hit

When the CPU tries to read from memory, the address will be sent to a cache controller. The lowest k bits of the address will index a block in the cache. If the block is valid and the tag matches the upper (m - k) bits of the m-bit address, that data is sent to the CPU.

[Figure: the hit path for an m-bit address: the index bits select one cache row; a comparator checks the stored tag against the address's upper bits; Valid AND (Tag =) produces the Hit signal while the selected data goes to the CPU.]

What happens on a cache miss

The delays that we've been assuming for memories (a few nanoseconds) are really assuming cache hits. If our CPU implementations accessed main memory directly, their cycle times would have to be much larger. Instead we assume that most memory accesses will be cache hits, which allows us to use a shorter cycle time. However, a much slower main memory access is needed on a cache miss. The simplest thing to do is to stall the pipeline until the data from main memory can be fetched (and also copied into the cache).

Loading a block into the cache

After data is read from main memory, putting a copy of that data into the cache is straightforward:
- The lowest k address bits specify a cache block.
- The upper (m - k) address bits are stored in the block's tag field.
- The data is stored in the block's data field.
- The valid bit is set to 1.
(A toy simulator showing this follows below.)

[Figure: an m-bit address split into tag and index; the index selects a cache row, and the tag, data and valid bit are written into it.]

What if the cache fills up?

Our third question was what to do if we run out of space in our cache, or if we need to reuse a block for a different memory address. We answered this question implicitly on the last page! A miss causes a new block to be loaded into the cache, automatically overwriting any previously stored data. This is a least recently used replacement policy, which assumes that older data is less likely to be requested than newer data. We'll see a few other policies next.
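Pulling the last few slides together, here is a toy direct-mapped cache simulator; a sketch under assumed parameters (eight one-byte blocks as in the earlier example, an 8-bit toy address space, and function names of my own), not code from the lecture:

    #include <stdbool.h>
    #include <stdio.h>

    #define K 3                             /* 2^3 = 8 one-byte blocks */
    #define BLOCKS (1u << K)

    struct line {
        bool valid;                         /* 0 until the block holds real data */
        unsigned tag;                       /* the upper (m - k) address bits */
        unsigned char data;                 /* one byte per block, as in the slides */
    };

    static struct line cache[BLOCKS];       /* statics start zeroed: an empty cache */
    static unsigned char memory[256];       /* toy main memory, 8-bit addresses */

    /* Read one byte through the cache; returns true on a hit. */
    static bool cache_read(unsigned addr, unsigned char *out) {
        unsigned index = addr & (BLOCKS - 1);   /* low k bits pick the block */
        unsigned tag   = addr >> K;             /* remaining bits form the tag */
        struct line *l = &cache[index];

        if (l->valid && l->tag == tag) {        /* valid block, matching tag: hit */
            *out = l->data;
            return true;
        }
        l->valid = true;                        /* miss: load from memory, silently */
        l->tag   = tag;                         /* overwriting whatever was there */
        l->data  = memory[addr];
        *out = l->data;
        return false;
    }

    int main(void) {
        unsigned addrs[] = { 14, 14, 6, 14 };   /* 6 and 14 both map to block 6 */
        unsigned char v;
        for (int i = 0; i < 4; i++)
            printf("read %2u: %s\n", addrs[i],
                   cache_read(addrs[i], &v) ? "hit" : "miss");
        return 0;
    }

Reading 14 twice gives a miss and then a hit (temporal locality at work); reading 6 and then 14 again gives two more misses, because the two addresses conflict for the same block and each load overwrites the other, exactly the replacement behavior described above.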

Summary

Basic ideas of caches:
- By taking advantage of spatial and temporal locality, we can use a small amount of fast but expensive memory to dramatically speed up the average memory access time.
- A cache is divided into many blocks, each of which contains a valid bit, a tag for matching memory addresses to cache contents, and the data itself.
Next we'll look at some more advanced cache organizations and see how to measure the performance of memory systems.