Why Care About Memory Hierarchy?


EEC 8 Computer Architecture: Memory Hierarchy Design (I)
Department of Electrical Engineering and Computer Science, Cleveland State University

Why Care About Memory Hierarchy?
The processor-DRAM performance gap grows about 50% per year: processor performance improves roughly 60% per year (Moore's Law, 2X per 1.5 years), while DRAM latency improves only about 9% per year (2X per 10 years).

An Unbalanced System (source: Bob Colwell, ISCA keynote)

Memory Issues
- Latency: time to move through the longest circuit path, from the start of a request to the response
- Bandwidth: number of bits transported at one time
- Capacity: size of the memory
- Energy: cost of accessing memory (to read and write)

Model of Memory Hierarchy
Register file -> L1 data cache / L1 instruction cache (SRAM) -> L2 cache (SRAM) -> main memory (DRAM) -> disk

Levels of the Memory Hierarchy (capacity / access time / cost, with the staging transfer unit between levels):

  Level          Capacity        Access time              Cost                        Staging unit      Managed by
  CPU registers  100s of bytes   < 10s of ns                                          instr. operands   compiler (1-8 bytes)
  Cache          K bytes         10-100 ns                1-0.1 cents/bit             cache lines       cache controller (8-128 bytes)
  Main memory    M bytes         200-500 ns               $0.0001-0.00001 cents/bit   pages             operating system (512 B-4 KB)
  Disk           G bytes         10 ms (10,000,000 ns)    10^-5 - 10^-6 cents/bit     files             user (Mbytes)
  Tape           infinite        sec-min                  10^-8 cents/bit

Upper levels are faster; lower levels are larger. This lecture covers the cache levels.

Topics Covered
- Why caches work: the principle of program locality
- Cache hierarchy and average memory access time (AMAT)
- Types of caches: direct mapped, set-associative, fully associative
- Cache policies: write back vs. write through; write allocate vs. no write allocate

Principle of Locality
Programs access a relatively small portion of the address space at any instant of time. Two types of locality:
- Temporal locality (locality in time): if an address is referenced, it tends to be referenced again soon (e.g., loops, data reuse).
- Spatial locality (locality in space): if an address is referenced, neighboring addresses tend to be referenced soon (e.g., straight-line code, array access).
Traditionally, hardware has relied on locality for speed: locality is a program property that is exploited in machine design.

Example of Locality

    int A[100], B[100], C[100], D;
    for (i = 0; i < 100; i++) {
        C[i] = A[i] * B[i] + D;
    }

(Figure: A, B, C, and D laid out in memory; one fetch brings in a whole cache line, so several consecutive elements such as A[i] and A[i+1] arrive together.)

Modern Memory Hierarchy
By taking advantage of the principle of locality:
- Present the user with as much memory as is available in the cheapest technology.
- Provide access at the speed offered by the fastest technology.
Processor (control, datapath, registers) -> split L1 I/D caches -> second-level cache (SRAM) -> third-level cache (SRAM) -> main memory (DRAM) -> secondary storage (disk) -> tertiary storage (disk/tape)

Example: Intel Core Duo
Each core has its own 8-way L1 instruction and data caches (32 KB, LRU, write-back); both cores share a 2 MB on-die L2 cache (LRU, write-back, higher latency than L1).

Example: Intel Itanium 2
3 MB L3 version: 180 nm process, 421 mm^2 die. 6 MB L3 version: 130 nm process, 374 mm^2 die.

Intel Nehalem
Four cores share an 8 MB L3 cache, laid out as eight 1 MB slices on the die.

Example: STI Cell Processor
Each SPE (synergistic processing element) is 21M transistors (14M in memory arrays; 7M in logic) and has its own local storage.

Cell Synergistic Processing Element
Each SPE contains 128 registers of 128 bits each and a 256 KB, single-ported, ECC-protected local SRAM (a local store, not a cache).

Cache Terminology
- Hit: the data appears in some block in the upper level (e.g., block X).
- Hit rate: the fraction of memory accesses found in the level.
- Hit time: time to access the level (RAM access time + time to determine hit/miss).
- Miss: the data must be retrieved from a block in the lower level (e.g., block Y).
- Miss rate = 1 - hit rate.
- Miss penalty: time to replace a block in the upper level + time to deliver the block to the processor.
Hit time << miss penalty.

Average Memory Access Time
Average memory access time = hit time + miss rate x miss penalty
The miss penalty (time to fetch a block from the lower memory level) has two parts: access time (a function of latency) and transfer time (a function of the bandwidth between levels). Transfers move one cache line/block at a time, at the width of the memory bus.

Memory Hierarchy Performance
AMAT = hit time + miss rate x miss penalty = T_hit(L1) + Miss%(L1) x T(memory)
Example (illustrative numbers): cache hit = 1 cycle, miss rate = 10% = 0.1, miss penalty = 100 cycles, so AMAT = 1 + 0.1 x 100 = 11 cycles. Can we improve it?
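As a quick sanity check, here is a minimal C sketch of the AMAT formula above; the numeric values are the illustrative ones, not the slide's originals:

    #include <stdio.h>

    /* AMAT = hit time + miss rate * miss penalty (times in cycles). */
    static double amat(double hit_time, double miss_rate, double miss_penalty)
    {
        return hit_time + miss_rate * miss_penalty;
    }

    int main(void)
    {
        /* Illustrative values: 1-cycle hit, 10% miss rate, 100-cycle penalty. */
        printf("AMAT = %.1f cycles\n", amat(1.0, 0.10, 100.0));  /* 11.0 */
        return 0;
    }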

Reducing Penalty: Multi-Level Cache
First-level cache -> second-level cache -> third-level cache (on-die) -> main memory (DRAM), each level with its own hit time.

AMAT of a multi-level memory expands recursively, since the miss penalty of one level is the access time of the next:
AMAT = T_hit(L1) + Miss%(L1) x T_miss(L1)
     = T_hit(L1) + Miss%(L1) x [ T_hit(L2) + Miss%(L2) x T_miss(L2) ]
     = T_hit(L1) + Miss%(L1) x [ T_hit(L2) + Miss%(L2) x ( T_hit(L3) + Miss%(L3) x T(memory) ) ]
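The same recursion is easy to express in C; this sketch folds the levels from memory back up to L1 (the level parameters are placeholders, not the slide's values):

    #include <stdio.h>

    /* Per-level hit time (cycles) and local miss rate. */
    typedef struct { double hit_time, miss_rate; } Level;

    /* AMAT = T_hit(L1) + Miss%(L1)*(T_hit(L2) + Miss%(L2)*(... + T(memory))) */
    static double amat_multilevel(const Level *lv, int n, double t_memory)
    {
        double t = t_memory;                 /* penalty seen below the last level */
        for (int i = n - 1; i >= 0; i--)
            t = lv[i].hit_time + lv[i].miss_rate * t;
        return t;
    }

    int main(void)
    {
        Level lv[] = { {1, 0.10}, {10, 0.05}, {20, 0.01} };  /* L1, L2, L3 */
        printf("AMAT = %.2f cycles\n", amat_multilevel(lv, 3, 100.0));  /* 2.11 */
        return 0;
    }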

AMAT Example
AMAT = T_hit(L1) + Miss%(L1) x (T_hit(L2) + Miss%(L2) x (T_hit(L3) + Miss%(L3) x T(memory)))
Given per-level hit times and miss rates for L1, L2, and L3 plus T(memory), plugging into the formula gives an AMAT well below the single-level result: a severalfold speed-up compared with no multi-level hierarchy.

Exercise: assume main memory accesses take a given number of ns and that memory accesses are a given percentage of all instructions. The system uses an L1 data cache with a small miss rate, and the CPU runs at a given clock rate with a given base CPI. Calculate the total CPI. If we then add an L2 cache with a local miss rate of 98% and a given hit time, what is the total CPI in this case?

Solution sketch: at the given clock rate, one cycle is a fixed fraction of a ns, so the memory access time converts to a miss penalty in cycles. Without L2: CPI = base CPI + memory accesses per instruction x L1 miss rate x miss penalty. With L2: CPI = base CPI + memory accesses per instruction x L1 miss rate x (0.02 x L2 hit time in cycles + 0.98 x memory penalty in cycles).

Types of Caches

  Type                     Mapping of data from memory to cache                       Complexity of searching the cache
  Direct mapped (DM)       A memory value can be placed at a single corresponding     Fast indexing mechanism
                           location in the cache
  Set-associative (SA)     A memory value can be placed in any of a set of            Slightly more involved search mechanism
                           locations in the cache
  Fully associative (FA)   A memory value can be placed in any location in the cache  Extensive hardware resources required to search (CAM)

DM and FA can be thought of as special cases of SA: DM is 1-way SA, and FA is all-way SA.

Direct Mapping
(Figure: the tag and index fields select a single data entry holding example values.) Direct mapping: a memory value can only be placed at a single corresponding location in the cache.

Set-Associative Mapping (2-Way)
(Figure: the index selects a set; the value may go in either Way 0 or Way 1.) Set-associative mapping: a memory value can be placed in any location of a set in the cache.

Fully Associative Mapping
(Figure: the tag is compared against every entry.) Fully-associative mapping: a memory value can be placed anywhere in the cache.

Cache: Two Issues
- How do we know if a data item is in the cache?
- If it is, how do we find it?
Our first example uses a block size of one word of data and is "direct mapped": for each item of data at the lower level, there is exactly one location in the cache where it might be, so many items at the lower level share the same location in the upper level.

Direct Mapped Cache
(Figure: a 16-word memory, addresses 0 through F, feeding a 4-entry DM cache; each entry holds one cache line, or block.) Cache location 0 is occupied by data from memory locations 0, 4, 8, and C. Which one should we place in the cache? How can we tell which one is in the cache?

Three (or Four) Cs (Cache Miss Terms)
- Compulsory misses: cold-start misses (caches do not have valid data at the start of the program).
- Capacity misses: reduced by increasing cache size.
- Conflict misses: reduced by increasing cache size and/or associativity; associative caches reduce conflict misses.
- Coherence misses: arise in multiprocessor systems (later lectures), where several processors cache copies of the same memory locations.

Example: DM Cache with 2^M-Byte Lines
- The lowest M bits of the address are the offset (line size = 2^M bytes).
- The next log2(# of sets) bits are the index, which selects a set.
- The remaining upper bits are the tag, stored alongside the valid bit and compared on each access.
(Figure: the address split into tag, index, and offset fields; each cache entry holds a valid bit, a cache tag, and a line of data bytes.)

Direct Mapped Cache
Mapping: the index is the block address modulo the number of blocks in the cache.
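In C, the field extraction is a shift and a couple of masks; this sketch assumes generic OFFSET_BITS/INDEX_BITS parameters rather than any particular slide's cache:

    #include <stdint.h>
    #include <stdio.h>

    #define OFFSET_BITS 5                      /* 32-byte lines (assumed) */
    #define INDEX_BITS  9                      /* 512 sets (assumed)      */

    int main(void)
    {
        uint32_t addr   = 0x12345678;
        uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);
        uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
        uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);
        printf("tag=0x%x index=0x%x offset=0x%x\n", tag, index, offset);
        return 0;
    }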

Direct Mapped Cache for MIPS
(Figure: the 32-bit address splits into tag, index, and byte offset; the index selects an entry whose valid bit and stored tag are checked to generate Hit, while the data field is read out.)

Worked Example: DM$, 8-Entry, 4 B Lines
Reference stream: lw $1, #a; lw $2, #8 [b]; sw $3, #c; sw $4, #88. The cache starts empty (all valid bits 0).

1. lw [a]: with 4-byte blocks, drop the low bits of the address as the byte offset (this only matters for byte-addressable systems). The next log2(8) = 3 bits, i.e., the block address mod 8, form the index. The selected entry is invalid, so this is a miss: fetch the block, set the valid bit, store the tag, and keep a copy of [a] in the data field.
2. lw #8 [b]: maps to a different index; again a miss, so that entry is filled with a copy of [b].
3. sw [c]: maps to the same index as an already-valid entry. It's valid -- how do we tell it's the wrong address? The tags don't match, so it's not what we want to access: the old copy is evicted and [c]'s line takes its place.

Q: What if the machine is only word-addressable? Then there is no byte offset to drop; the low bits of the word address go straight into the index.

Q: What about writing back to memory?
- Write through = write to both memory and the cache.
- Write back = write to the cache block only for the time being, then write to memory when the block is replaced.
With write back, the sw to [c] leaves the cached copy NEW while memory still holds the OLD value. Do we update memory now, or later? Assume later. Eventually sw #88 maps to the same index as [c]'s line. Now what -- how do we know to write back? We need extra state: the dirty bit, which tracks whether the cached data has been written since it was fetched and has therefore not yet reached memory. On the miss, the dirty line [c] is first written back to memory, and only then is the entry refilled for #88.
Write back is more complex: only on a miss with dirty = 1 is the old data written back to maintain it; we cannot instantly write over the cache line the way write-through can. The dirty bit indicates the value hasn't been written to memory yet (write-back method).
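To make the walkthrough concrete, here is a minimal C sketch of an 8-entry direct-mapped, write-allocate, write-back cache with a dirty bit; the trace addresses are assumptions chosen to reproduce the conflict, not the slide's lost operands:

    #include <stdio.h>
    #include <stdbool.h>

    #define NUM_ENTRIES 8      /* 8-entry direct-mapped cache */
    #define LINE_BYTES  4      /* 4-byte lines (assumed)      */

    typedef struct { bool valid, dirty; unsigned tag; } Line;
    static Line cache[NUM_ENTRIES];

    /* One access; is_write marks stores. Write-allocate, write-back. */
    static bool cache_access(unsigned addr, bool is_write)
    {
        unsigned block = addr / LINE_BYTES;      /* drop the byte offset */
        unsigned index = block % NUM_ENTRIES;    /* next log2(8) bits    */
        unsigned tag   = block / NUM_ENTRIES;    /* remaining high bits  */
        Line *l = &cache[index];

        if (l->valid && l->tag == tag) {         /* tags match: hit      */
            if (is_write) l->dirty = true;       /* defer memory update  */
            return true;
        }
        if (l->valid && l->dirty)                /* victim is dirty      */
            printf("  write back tag %u, set %u\n", l->tag, index);
        l->valid = true;                         /* (re)fill the entry   */
        l->dirty = is_write;
        l->tag   = tag;
        return false;
    }

    int main(void)
    {
        /* Assumed addresses; 56 and 88 conflict in set 6 like [c] and #88. */
        unsigned addr[] = { 4, 8, 56, 88 };
        bool is_write[] = { false, false, true, true };
        for (int i = 0; i < 4; i++)
            printf("%s %2u -> %s\n", is_write[i] ? "sw" : "lw", addr[i],
                   cache_access(addr[i], is_write[i]) ? "hit" : "miss");
        return 0;
    }

Running it shows the final store forcing a write-back of the dirty conflicting line, exactly the sequence in the slides.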

DM$: Thoughts
Trade-offs: write-back or write-through? Write-allocate or no-write-allocate? (With no-write-allocate, only processor reads are cached.) How does the tag width change with the number of entries? How does the minimum machine word size impact the tag? What kind of locality are we taking advantage of?

Direct Mapped Cache
Taking advantage of spatial locality: with multi-word blocks, the address splits into tag, index, block offset, and byte offset; the block offset drives a mux that selects the requested word from the line. (Figure: a direct-mapped cache with multi-word lines.)

Hits vs. Misses
- Read hits: this is what we want!
- Read misses: stall the CPU, fetch the block from memory, deliver it to the cache, restart the access.
- Write hits: either replace the data in both cache and memory (write-through), or write the data only into the cache and update memory later (write-back).
- Write misses: read the entire block into the cache, then write the word?

Writes are hard: a read can check the tag and read the data concurrently, but a write is destructive, so it must wait for the tag check and is therefore slower.
Write strategies:
- When to update memory? On every write (write-through), or when a modified block is replaced (write-back).
- What to do on a write miss? Fetch the block into the cache (write allocate), used with write-back; or do not fetch (write no-allocate), used with write-through.
Trade-offs:
- Write-back uses bus bandwidth more efficiently and saves power since it uses the lower-level memory less often; attractive for both multiprocessors and embedded applications.
- Write-through is easier to implement and keeps main memory consistent with the cache, which is also good for multiprocessors since data stays coherent in the memory hierarchy.

Performance
Increasing the block size tends to decrease the miss rate, up to a point. (Figure: miss rate vs. block size for cache sizes from 1 KB to 256 KB; very large blocks in a small cache drive the miss rate back up.)

  Program  Block size (words)  Instruction miss rate  Data miss rate  Effective combined miss rate
  gcc      1                   6.1%                   2.1%            5.4%
  gcc      4                   2.0%                   1.7%            1.9%
  spice    1                   1.2%                   1.3%            1.2%
  spice    4                   0.3%                   0.6%            0.4%

Use split caches because there is more spatial locality in code.

Performance
Simplified model: execution time = (execution cycles + stall cycles) x cycle time, with stall cycles = # of instructions x miss ratio x miss penalty.
Two ways of improving performance: decreasing the miss ratio, and decreasing the miss penalty. What happens if we increase block size?
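The simplified model is plain arithmetic; a short C sketch (all inputs are made-up placeholders, not the slide's numbers):

    #include <stdio.h>

    int main(void)
    {
        /* Placeholder workload parameters. */
        double instructions = 1e9;
        double exec_cpi     = 1.0;     /* base cycles per instruction */
        double miss_ratio   = 0.02;    /* misses per instruction      */
        double miss_penalty = 100.0;   /* cycles per miss             */
        double cycle_time   = 0.5e-9;  /* seconds per cycle (2 GHz)   */

        double exec_cycles  = instructions * exec_cpi;
        double stall_cycles = instructions * miss_ratio * miss_penalty;
        printf("execution time = %.3f s\n",
               (exec_cycles + stall_cycles) * cycle_time);
        return 0;
    }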

Decreasing Miss Ratio with Associativity
(Figure: the same eight blocks organized four ways -- one-way set associative (direct mapped) with eight sets of one tag/data pair; two-way set associative with four sets; four-way set associative with two sets; and eight-way set associative (fully associative) with one set of eight tag/data pairs.)

Compared to direct mapped, give a series of references that:
- results in a lower miss ratio using a 2-way set-associative cache;
- results in a higher miss ratio using a 2-way set-associative cache (assuming a least-recently-used replacement strategy).

An Implementation
(Figure: a 4-way set-associative cache; the index reads out four valid/tag/data entries in parallel, four comparators check the tags, and a 4-to-1 multiplexor selects the hit data.)

Set-Associative Cache
Multiple cache blocks (lines) can be allocated into the same set. When the set is full, some block must be evicted from the cache; the replacement policy needs to consider locality:
- Last-in first-out (LIFO), like a stack
- Random
- First-in first-out (FIFO)
- Least recently used (LRU)

Example of Caches
Given a direct-mapped physical cache of a given size and line size, supporting a given physical address width: what is the tag size? Now change it to a set-associative cache of the same size: what is the tag size? How about with a wider physical address?

Example: Show the cache addressing for a byte-addressable memory with 32-bit addresses. Cache line width W = 16 B. Cache size L = 4096 lines (64 KB). How many tag bits?

Solution: The byte offset in the line is log2 16 = 4 b. The cache line index is log2 4096 = 12 b. This leaves 32 - (12 + 4) = 16 b for the tag.
32-bit address = 16-bit line tag + 12-bit line index + 4-bit byte offset; the index and offset together form the byte address in the cache.

Example: Show the cache addressing scheme for a byte-addressable memory with 32-bit addresses. Cache line width W = 16 B. Set size S = 2 lines. Cache size L = 4096 lines (64 KB).

Solution: The byte offset in the line is log2 16 = 4 b. The cache set index is log2(4096/2) = 11 b. This leaves 32 - (11 + 4) = 17 b for the tag.
32-bit address = 17-bit line tag + 11-bit set index + 4-bit byte offset; the set index is used to read out the two candidate items and their control info, and the tag comparison decides between them.
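These bit-width calculations all follow one formula; a small C sketch (the parameters mirror the two worked examples above, under the reconstructed numbers):

    #include <stdio.h>

    /* Integer log2 for exact powers of two. */
    static int ilog2(unsigned x) { int n = 0; while (x > 1) { x >>= 1; n++; } return n; }

    static void widths(int addr_bits, unsigned line_bytes,
                       unsigned total_lines, unsigned ways)
    {
        int offset = ilog2(line_bytes);
        int index  = ilog2(total_lines / ways);   /* #sets = lines/ways */
        int tag    = addr_bits - index - offset;
        printf("tag=%d index=%d offset=%d\n", tag, index, offset);
    }

    int main(void)
    {
        widths(32, 16, 4096, 1);   /* direct mapped: 16/12/4 */
        widths(32, 16, 4096, 2);   /* 2-way:         17/11/4 */
        return 0;
    }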

A 64 KB four-way set-associative cache is byte-addressable and contains 32 B lines. Memory addresses are 32 b wide. How wide are the tags in this cache?

Solution: Address (32 b) = 5 b byte offset + 9 b set index + 18 b tag, so the tag width is 32 - (9 + 5) = 18 bits.
Set size = 4 x 32 B = 128 B. Number of sets = 2^16 / 2^7 = 2^9 = 512. Line width = 32 B = 2^5 B.

Example: DM Cache
lw from an address ending in FFC8: split the address into tag, index, and offset; the index selects one row of the tag array and the data array. (Figure: tag array and data array of a direct-mapped cache.)

DM Cache Speed Advantage
Tag and data access happen in parallel: the index reads the tag array and the data array at the same time, giving faster cache access.

Associative Caches Reduce Conflict Misses
- Set associative (SA) cache: multiple possible locations in a set.
- Fully associative (FA) cache: any location in the cache.
Hardware and speed overhead: comparators and multiplexors, and data selection only after hit/miss determination (i.e., after the tag comparison).

Set Associative Cache (2-Way)
The cache index selects a set from the cache; the two tags in the set are compared in parallel, and the data is selected based on the tag comparison result. The additional circuitry compared to DM caches makes SA caches slower to access than a DM cache of comparable size. (Figure: two ways of valid/tag/data entries, two comparators, and a select mux producing Hit and the chosen cache line.)

Set-Associative Cache (2-Way)
(Figure: a 32-bit address, e.g., a lw from an address ending in FFC8, split into tag, index, and offset, driving two tag arrays and two data arrays in parallel.)

Fully Associative Cache
(Figure: the tag is compared against every entry's tag simultaneously -- an associative search -- and the matching entry's data is selected by a multiplexor, then rotated and masked by the offset.)
(Second figure: each entry has its own comparator on the tag; read data comes from whichever comparator matches, and write data can go to any entry.)
The additional circuitry compared to DM caches is even more extensive than for SA caches, making FA caches slower to access than either DM or SA caches of comparable size.

Cache Write Policy
- Write through: the value is written to both the cache line and the lower-level memory.
- Write back: the value is written only to the cache line; the modified cache line is written to main memory only when it has to be replaced. Is the cache line clean (holds the same value as memory) or dirty (holds a different value than memory)?

Write-Through Policy
(Figure: the processor writes a value; both the cache line and the memory location are updated.)

Write Buffer
Processor -> cache, with a write buffer between the cache and DRAM.
- The processor writes data into the cache and the write buffer.
- The memory controller writes the contents of the buffer to memory.
- The write buffer is a FIFO structure, typically 4 to 8 entries.
- Desirable: the occurrence of writes stays well below the rate DRAM write cycles can absorb.
- The memory system designer's nightmare: write buffer saturation, i.e., writes arriving as fast as or faster than DRAM can retire them.

Writeback Policy
(Figure: a write miss causes the dirty victim line to be written back to memory before the new line is installed and modified in the cache.)

On Write Miss
- Write allocate: the line is allocated on a write miss, followed by the write-hit actions above; write misses first act like read misses.
- No write allocate: write misses do not modify the cache; the line is only modified in the lower-level memory. Mostly used with write-through caches.

Quick Recap
- Processor-memory performance gap
- Memory hierarchy exploits program locality to reduce AMAT
- Types of caches: direct mapped, set associative, fully associative
- Cache policies: write through vs. write back; write allocate vs. no write allocate

Cache Replacement Policy
- Random: replace a randomly chosen line.
- FIFO: replace the oldest line.
- LRU (least recently used): replace the least recently used line.
- NRU (not recently used): replace one of the lines that is not recently used (used in the Itanium 2 L1 D-cache and its L2 and L3 caches).

LRU Policy
Track recency as a stack from MRU down to LRU; each access moves the touched line to the MRU position:

  initial:   A B C D
  access C:  C A B D
  access D:  D C A B
  access E:  E D C A   (MISS, replacement needed: B evicted)
  access C:  C E D A
  access G:  G C E D   (MISS, replacement needed: A evicted)
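A direct transcription of that recency stack into C (4 ways, labels as in the table; a sketch, not production replacement logic):

    #include <stdio.h>
    #include <string.h>

    #define WAYS 4

    /* stack[0] is MRU, stack[WAYS-1] is LRU. */
    static char stack[WAYS + 1] = "ABCD";

    /* Access line c: move it (or, on a miss, the evicted LRU slot) to MRU. */
    static void lru_access(char c)
    {
        int pos = WAYS - 1;                     /* default: replace LRU */
        for (int i = 0; i < WAYS; i++)
            if (stack[i] == c) { pos = i; break; }
        if (stack[pos] != c) printf("MISS, replacing %c\n", stack[pos]);
        memmove(&stack[1], &stack[0], pos);     /* shift entries down   */
        stack[0] = c;
        printf("access %c -> %s\n", c, stack);
    }

    int main(void)
    {
        const char *trace = "CDECG";            /* the table's accesses */
        for (int i = 0; trace[i]; i++) lru_access(trace[i]);
        return 0;
    }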

LRU From a Hardware Perspective
(Figure: a 4-way set A/B/C/D with an LRU state machine updated on every access.) The LRU update logic increases cache access times, and additional hardware bits are needed for the LRU state machine.

LRU Algorithms
- True LRU: expensive in terms of speed and hardware, since the order in which all N lines were last accessed must be remembered: N! orderings, hence O(log2 N!) = O(N log N) LRU bits. For 2 ways there are 2 orderings (AB, BA = 2!); for 3 ways, 6 orderings (ABC, ACB, BAC, BCA, CAB, CBA = 3!).
- Pseudo-LRU: O(N) bits; approximates the LRU policy with a binary tree.

Pseudo-LRU Algorithm (4-Way SA)
Tree-based, O(N): 3 bits for a 4-way set. The cache ways are the leaves of the tree, and ways are combined pairwise as we proceed toward the root: one bit (L0) picks between the AB and CD halves, one bit (L1) between ways A and B, and one bit (L2) between ways C and D.
- LRU update: on a hit, the bits along the accessed way's path are set to point away from it, toward the less recently used half. For example, a hit in way B updates L0 to point at the CD half and L1 to point at way A.
- Replacement decision: follow the bits from the root; whichever leaf they point to is the way to replace.
Pseudo-LRU needs less hardware than true LRU and is faster than LRU.
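A minimal C sketch of tree pseudo-LRU for one 4-way set; the bit-encoding convention (each bit points toward the less recently used half) is an assumption, since the slide's truth tables did not survive transcription:

    #include <stdio.h>
    #include <stdint.h>

    /* 3 PLRU bits: l0 = AB/CD, l1 = A/B, l2 = C/D (0 = left, 1 = right). */
    typedef struct { uint8_t l0, l1, l2; } Plru4;

    /* On an access to way w (0=A..3=D), point every bit on w's path away. */
    static void plru_update(Plru4 *s, int w)
    {
        if (w < 2) { s->l0 = 1; s->l1 = (w == 0); }   /* LRU half now CD */
        else       { s->l0 = 0; s->l2 = (w == 2); }   /* LRU half now AB */
    }

    /* Follow the bits from the root to pick the victim way. */
    static int plru_victim(const Plru4 *s)
    {
        return (s->l0 == 0) ? (s->l1 ? 1 : 0) : (s->l2 ? 3 : 2);
    }

    int main(void)
    {
        Plru4 s = {0, 0, 0};
        int trace[] = {1, 2, 3};                /* touch B, C, D */
        for (int i = 0; i < 3; i++) plru_update(&s, trace[i]);
        printf("victim: way %c\n", "ABCD"[plru_victim(&s)]);  /* A */
        return 0;
    }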

Not Recently Used (NRU)
Use R(eferenced) and M(odified) bits to classify lines into four classes:
- C0: R=0, M=0 (not referenced, not modified)
- C1: R=0, M=1
- C2: R=1, M=0
- C3: R=1, M=1 (referenced and modified)
Choose the victim from the lowest-numbered nonempty class (prefer C0, then C1, C2, C3), and periodically clear the R and M bits.

Reducing Miss Rate
Enlarge the cache. If the cache size is fixed, increase associativity or increase the line size. Does increasing line size always work? No: past a point it increases cache pollution, and the miss rate climbs again. (Figure: miss rate vs. block size for cache sizes from 1 KB to 256 KB.)

Reduce Miss Rate/Penalty: Way Prediction
Best of both worlds: the speed of a DM cache with the reduced conflict misses of an SA cache. Extra bits predict the way of the next access.
Alpha 21264 way prediction (next-line predictor): if the prediction is correct, the I-cache delivers at its fast latency; if incorrect, the fetch takes extra cycles, and the branch predictor can override the decision of the way predictor. (Figure: the line predictor supplies the next fetch offset and way for the 2-way I-cache.)
Note: the Alpha documentation advocates aligning branch targets on an octaword (16-byte) boundary.

Reduce Miss Rate: Code Optimization
Misses occur when sequentially accessed array elements come from different cache lines. Code optimizations need no hardware change; they rely on programmers or compilers. Examples:
- Loop interchange: in nested loops, the outer loop becomes the inner loop and vice versa.
- Loop blocking: partition a large array into smaller blocks so that the accessed array elements fit in the cache, enhancing cache reuse.

Loop Interchange
With row-major ordering, the before version strides through memory column by column. What is the worst that could happen? (Hint: a DM cache where every column step maps to a different line.)

    /* Before */
    for (j = 0; j < 100; j++)
        for (i = 0; i < 5000; i++)
            x[i][j] = 2 * x[i][j];

    /* After: improved cache efficiency */
    for (i = 0; i < 5000; i++)
        for (j = 0; j < 100; j++)
            x[i][j] = 2 * x[i][j];

Is this always a safe transformation? Does it always lead to higher efficiency?

Loop Blocking

    /* Before */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            r = 0;
            for (k = 0; k < N; k++)
                r += y[i][k] * z[k][j];
            x[i][j] = r;
        }

(Figure: the access patterns of y[i][k], z[k][j], and x[i][j]; for large N this does not exploit locality, since rows of y and columns of z are evicted before they are reused.)

Loop Blocking
Partition the loop's iteration space into many smaller chunks and ensure that the data stays in the cache until it is reused. (Figure: the same arrays accessed in B x B blocks.)

    /* After */
    for (jj = 0; jj < N; jj += B)
        for (kk = 0; kk < N; kk += B)
            for (i = 0; i < N; i++)
                for (j = jj; j < min(jj + B, N); j++) {
                    r = 0;
                    for (k = kk; k < min(kk + B, N); k++)
                        r += y[i][k] * z[k][j];
                    x[i][j] = x[i][j] + r;
                }

Other Miss Penalty Reduction Techniques
- Critical word first and early restart: send the requested data in the leading-edge transfer; the trailing-edge transfer continues in the background.
- Give priority to read misses over writes: use a write buffer (write-through) and a writeback buffer (write-back).
- Combining writes: a combining write buffer; Intel's WC (write-combining) memory type.
- Victim caches and assist caches.
- Non-blocking caches.
- Data prefetch mechanisms.

Write Combining Buffer
Without combining, writes to neighboring addresses (e.g., several words within one line) each occupy a separate buffer entry and initiate separate writes back to the lower-level memory. A write-combining (WC) buffer merges writes to neighboring addresses into a single entry, so one single write goes back to the lower-level memory.

WC Memory Type
Intel (starting in the P6 family) supports the USWC (or WC) memory type: uncacheable, speculative write combining. Individual writes are expensive in terms of time, so several individual writes are combined into one bursty write. This is effective for video memory data: an algorithm writing one byte at a time has its writes combined into a single wider write, and ordering is not important.
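A toy C sketch of the merging idea: writes whose addresses fall in the same aligned line share one buffer entry (the entry count and line size are assumptions):

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define LINE_BYTES 32   /* assumed combining granularity */
    #define ENTRIES    4    /* assumed buffer depth          */

    typedef struct {
        bool     valid;
        uint64_t line;                   /* index of the aligned line     */
        uint32_t byte_mask;              /* which bytes have been written */
    } WCEntry;

    static WCEntry buf[ENTRIES];

    /* Record a 1-byte write; merge into an existing entry when possible. */
    static void wc_write(uint64_t addr)
    {
        uint64_t line = addr / LINE_BYTES;
        for (int i = 0; i < ENTRIES; i++)
            if (buf[i].valid && buf[i].line == line) {
                buf[i].byte_mask |= 1u << (addr % LINE_BYTES);
                return;                          /* combined: no new entry */
            }
        for (int i = 0; i < ENTRIES; i++)        /* allocate a free entry  */
            if (!buf[i].valid) {
                buf[i] = (WCEntry){ true, line, 1u << (addr % LINE_BYTES) };
                return;
            }
        /* A real buffer would flush an entry here as one burst write. */
    }

    int main(void)
    {
        for (uint64_t a = 0x100; a < 0x120; a++)  /* 32 byte-writes...    */
            wc_write(a);
        printf("entries used: %d\n",              /* ...one entry, 1 burst */
               buf[0].valid + buf[1].valid + buf[2].valid + buf[3].valid);
        return 0;
    }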

Memory Hierarchy. Maurizio Palesi. Maurizio Palesi 1

Memory Hierarchy. Maurizio Palesi. Maurizio Palesi 1 Memory Hierarchy Maurizio Palesi Maurizio Palesi 1 References John L. Hennessy and David A. Patterson, Computer Architecture a Quantitative Approach, second edition, Morgan Kaufmann Chapter 5 Maurizio

More information

COSC 6385 Computer Architecture. - Memory Hierarchies (I)

COSC 6385 Computer Architecture. - Memory Hierarchies (I) COSC 6385 Computer Architecture - Hierarchies (I) Fall 2007 Slides are based on a lecture by David Culler, University of California, Berkley http//www.eecs.berkeley.edu/~culler/courses/cs252-s05 Recap

More information

Modern Computer Architecture

Modern Computer Architecture Modern Computer Architecture Lecture3 Review of Memory Hierarchy Hongbin Sun 国家集成电路人才培养基地 Xi an Jiaotong University Performance 1000 Recap: Who Cares About the Memory Hierarchy? Processor-DRAM Memory Gap

More information

COSC 6385 Computer Architecture - Memory Hierarchies (I)

COSC 6385 Computer Architecture - Memory Hierarchies (I) COSC 6385 Computer Architecture - Memory Hierarchies (I) Edgar Gabriel Spring 2018 Some slides are based on a lecture by David Culler, University of California, Berkley http//www.eecs.berkeley.edu/~culler/courses/cs252-s05

More information

Memory Hierarchy. Maurizio Palesi. Maurizio Palesi 1

Memory Hierarchy. Maurizio Palesi. Maurizio Palesi 1 Memory Hierarchy Maurizio Palesi Maurizio Palesi 1 References John L. Hennessy and David A. Patterson, Computer Architecture a Quantitative Approach, second edition, Morgan Kaufmann Chapter 5 Maurizio

More information

EECS151/251A Spring 2018 Digital Design and Integrated Circuits. Instructors: John Wawrzynek and Nick Weaver. Lecture 19: Caches EE141

EECS151/251A Spring 2018 Digital Design and Integrated Circuits. Instructors: John Wawrzynek and Nick Weaver. Lecture 19: Caches EE141 EECS151/251A Spring 2018 Digital Design and Integrated Circuits Instructors: John Wawrzynek and Nick Weaver Lecture 19: Caches Cache Introduction 40% of this ARM CPU is devoted to SRAM cache. But the role

More information

CPE 631 Lecture 04: CPU Caches

CPE 631 Lecture 04: CPU Caches Lecture 04 CPU Caches Electrical and Computer Engineering University of Alabama in Huntsville Outline Memory Hierarchy Four Questions for Memory Hierarchy Cache Performance 26/01/2004 UAH- 2 1 Processor-DR

More information

CPU issues address (and data for write) Memory returns data (or acknowledgment for write)

CPU issues address (and data for write) Memory returns data (or acknowledgment for write) The Main Memory Unit CPU and memory unit interface Address Data Control CPU Memory CPU issues address (and data for write) Memory returns data (or acknowledgment for write) Memories: Design Objectives

More information

Advanced Computer Architecture

Advanced Computer Architecture ECE 563 Advanced Computer Architecture Fall 2009 Lecture 3: Memory Hierarchy Review: Caches 563 L03.1 Fall 2010 Since 1980, CPU has outpaced DRAM... Four-issue 2GHz superscalar accessing 100ns DRAM could

More information

Why memory hierarchy? Memory hierarchy. Memory hierarchy goals. CS2410: Computer Architecture. L1 cache design. Sangyeun Cho

Why memory hierarchy? Memory hierarchy. Memory hierarchy goals. CS2410: Computer Architecture. L1 cache design. Sangyeun Cho Why memory hierarchy? L1 cache design Sangyeun Cho Computer Science Department Memory hierarchy Memory hierarchy goals Smaller Faster More expensive per byte CPU Regs L1 cache L2 cache SRAM SRAM To provide

More information

The Memory Hierarchy & Cache Review of Memory Hierarchy & Cache Basics (from 350):

The Memory Hierarchy & Cache Review of Memory Hierarchy & Cache Basics (from 350): The Memory Hierarchy & Cache Review of Memory Hierarchy & Cache Basics (from 350): Motivation for The Memory Hierarchy: { CPU/Memory Performance Gap The Principle Of Locality Cache $$$$$ Cache Basics:

More information

EEC 170 Computer Architecture Fall Cache Introduction Review. Review: The Memory Hierarchy. The Memory Hierarchy: Why Does it Work?

EEC 170 Computer Architecture Fall Cache Introduction Review. Review: The Memory Hierarchy. The Memory Hierarchy: Why Does it Work? EEC 17 Computer Architecture Fall 25 Introduction Review Review: The Hierarchy Take advantage of the principle of locality to present the user with as much memory as is available in the cheapest technology

More information

Lecture 12. Memory Design & Caches, part 2. Christos Kozyrakis Stanford University

Lecture 12. Memory Design & Caches, part 2. Christos Kozyrakis Stanford University Lecture 12 Memory Design & Caches, part 2 Christos Kozyrakis Stanford University http://eeclass.stanford.edu/ee108b 1 Announcements HW3 is due today PA2 is available on-line today Part 1 is due on 2/27

More information

Computer Architecture Spring 2016

Computer Architecture Spring 2016 Computer Architecture Spring 2016 Lecture 02: Introduction II Shuai Wang Department of Computer Science and Technology Nanjing University Pipeline Hazards Major hurdle to pipelining: hazards prevent the

More information

Page 1. Multilevel Memories (Improving performance using a little cash )

Page 1. Multilevel Memories (Improving performance using a little cash ) Page 1 Multilevel Memories (Improving performance using a little cash ) 1 Page 2 CPU-Memory Bottleneck CPU Memory Performance of high-speed computers is usually limited by memory bandwidth & latency Latency

More information

CS152 Computer Architecture and Engineering Lecture 17: Cache System

CS152 Computer Architecture and Engineering Lecture 17: Cache System CS152 Computer Architecture and Engineering Lecture 17 System March 17, 1995 Dave Patterson (patterson@cs) and Shing Kong (shing.kong@eng.sun.com) Slides available on http//http.cs.berkeley.edu/~patterson

More information

Lecture 11. Virtual Memory Review: Memory Hierarchy

Lecture 11. Virtual Memory Review: Memory Hierarchy Lecture 11 Virtual Memory Review: Memory Hierarchy 1 Administration Homework 4 -Due 12/21 HW 4 Use your favorite language to write a cache simulator. Input: address trace, cache size, block size, associativity

More information

CSF Cache Introduction. [Adapted from Computer Organization and Design, Patterson & Hennessy, 2005]

CSF Cache Introduction. [Adapted from Computer Organization and Design, Patterson & Hennessy, 2005] CSF Cache Introduction [Adapted from Computer Organization and Design, Patterson & Hennessy, 2005] Review: The Memory Hierarchy Take advantage of the principle of locality to present the user with as much

More information

Topics. Digital Systems Architecture EECE EECE Need More Cache?

Topics. Digital Systems Architecture EECE EECE Need More Cache? Digital Systems Architecture EECE 33-0 EECE 9-0 Need More Cache? Dr. William H. Robinson March, 00 http://eecs.vanderbilt.edu/courses/eece33/ Topics Cache: a safe place for hiding or storing things. Webster

More information

EE 4683/5683: COMPUTER ARCHITECTURE

EE 4683/5683: COMPUTER ARCHITECTURE EE 4683/5683: COMPUTER ARCHITECTURE Lecture 6A: Cache Design Avinash Kodi, kodi@ohioedu Agenda 2 Review: Memory Hierarchy Review: Cache Organization Direct-mapped Set- Associative Fully-Associative 1 Major

More information

Chapter 5 Large and Fast: Exploiting Memory Hierarchy (Part 1)

Chapter 5 Large and Fast: Exploiting Memory Hierarchy (Part 1) Department of Electr rical Eng ineering, Chapter 5 Large and Fast: Exploiting Memory Hierarchy (Part 1) 王振傑 (Chen-Chieh Wang) ccwang@mail.ee.ncku.edu.tw ncku edu Depar rtment of Electr rical Engineering,

More information

Chapter 5. Large and Fast: Exploiting Memory Hierarchy

Chapter 5. Large and Fast: Exploiting Memory Hierarchy Chapter 5 Large and Fast: Exploiting Memory Hierarchy Processor-Memory Performance Gap 10000 µproc 55%/year (2X/1.5yr) Performance 1000 100 10 1 1980 1983 1986 1989 Moore s Law Processor-Memory Performance

More information

Let!s go back to a course goal... Let!s go back to a course goal... Question? Lecture 22 Introduction to Memory Hierarchies

Let!s go back to a course goal... Let!s go back to a course goal... Question? Lecture 22 Introduction to Memory Hierarchies 1 Lecture 22 Introduction to Memory Hierarchies Let!s go back to a course goal... At the end of the semester, you should be able to......describe the fundamental components required in a single core of

More information

CISC 662 Graduate Computer Architecture Lecture 16 - Cache and virtual memory review

CISC 662 Graduate Computer Architecture Lecture 16 - Cache and virtual memory review CISC 662 Graduate Computer Architecture Lecture 6 - Cache and virtual memory review Michela Taufer http://www.cis.udel.edu/~taufer/teaching/cis662f07 Powerpoint Lecture Notes from John Hennessy and David

More information

Memory Hierarchy. ENG3380 Computer Organization and Architecture Cache Memory Part II. Topics. References. Memory Hierarchy

Memory Hierarchy. ENG3380 Computer Organization and Architecture Cache Memory Part II. Topics. References. Memory Hierarchy ENG338 Computer Organization and Architecture Part II Winter 217 S. Areibi School of Engineering University of Guelph Hierarchy Topics Hierarchy Locality Motivation Principles Elements of Design: Addresses

More information

ארכיטקטורת יחידת עיבוד מרכזי ת

ארכיטקטורת יחידת עיבוד מרכזי ת ארכיטקטורת יחידת עיבוד מרכזי ת (36113741) תשס"ג סמסטר א' July 2, 2008 Hugo Guterman (hugo@ee.bgu.ac.il) Arch. CPU L8 Cache Intr. 1/77 Memory Hierarchy Arch. CPU L8 Cache Intr. 2/77 Why hierarchy works

More information

Chapter 5. Large and Fast: Exploiting Memory Hierarchy

Chapter 5. Large and Fast: Exploiting Memory Hierarchy Chapter 5 Large and Fast: Exploiting Memory Hierarchy Processor-Memory Performance Gap 10000 µproc 55%/year (2X/1.5yr) Performance 1000 100 10 1 1980 1983 1986 1989 Moore s Law Processor-Memory Performance

More information

CS3350B Computer Architecture

CS3350B Computer Architecture CS335B Computer Architecture Winter 25 Lecture 32: Exploiting Memory Hierarchy: How? Marc Moreno Maza wwwcsduwoca/courses/cs335b [Adapted from lectures on Computer Organization and Design, Patterson &

More information

Lecture 7 - Memory Hierarchy-II

Lecture 7 - Memory Hierarchy-II CS 152 Computer Architecture and Engineering Lecture 7 - Memory Hierarchy-II John Wawrzynek Electrical Engineering and Computer Sciences University of California at Berkeley http://www.eecs.berkeley.edu/~johnw

More information

LECTURE 10: Improving Memory Access: Direct and Spatial caches

LECTURE 10: Improving Memory Access: Direct and Spatial caches EECS 318 CAD Computer Aided Design LECTURE 10: Improving Memory Access: Direct and Spatial caches Instructor: Francis G. Wolff wolff@eecs.cwru.edu Case Western Reserve University This presentation uses

More information

CSE 431 Computer Architecture Fall Chapter 5A: Exploiting the Memory Hierarchy, Part 1

CSE 431 Computer Architecture Fall Chapter 5A: Exploiting the Memory Hierarchy, Part 1 CSE 431 Computer Architecture Fall 2008 Chapter 5A: Exploiting the Memory Hierarchy, Part 1 Mary Jane Irwin ( www.cse.psu.edu/~mji ) [Adapted from Computer Organization and Design, 4 th Edition, Patterson

More information

Memory. Lecture 22 CS301

Memory. Lecture 22 CS301 Memory Lecture 22 CS301 Administrative Daily Review of today s lecture w Due tomorrow (11/13) at 8am HW #8 due today at 5pm Program #2 due Friday, 11/16 at 11:59pm Test #2 Wednesday Pipelined Machine Fetch

More information

ECE ECE4680

ECE ECE4680 ECE468. -4-7 The otivation for s System ECE468 Computer Organization and Architecture DRA Hierarchy System otivation Large memories (DRA) are slow Small memories (SRA) are fast ake the average access time

More information

EITF20: Computer Architecture Part 5.1.1: Virtual Memory

EITF20: Computer Architecture Part 5.1.1: Virtual Memory EITF20: Computer Architecture Part 5.1.1: Virtual Memory Liang Liu liang.liu@eit.lth.se 1 Outline Reiteration Cache optimization Virtual memory Case study AMD Opteron Summary 2 Memory hierarchy 3 Cache

More information

Chapter 5. Topics in Memory Hierachy. Computer Architectures. Tien-Fu Chen. National Chung Cheng Univ.

Chapter 5. Topics in Memory Hierachy. Computer Architectures. Tien-Fu Chen. National Chung Cheng Univ. Computer Architectures Chapter 5 Tien-Fu Chen National Chung Cheng Univ. Chap5-0 Topics in Memory Hierachy! Memory Hierachy Features: temporal & spatial locality Common: Faster -> more expensive -> smaller!

More information

Locality. Cache. Direct Mapped Cache. Direct Mapped Cache

Locality. Cache. Direct Mapped Cache. Direct Mapped Cache Locality A principle that makes having a memory hierarchy a good idea If an item is referenced, temporal locality: it will tend to be referenced again soon spatial locality: nearby items will tend to be

More information

COSC4201. Chapter 5. Memory Hierarchy Design. Prof. Mokhtar Aboelaze York University

COSC4201. Chapter 5. Memory Hierarchy Design. Prof. Mokhtar Aboelaze York University COSC4201 Chapter 5 Memory Hierarchy Design Prof. Mokhtar Aboelaze York University 1 Memory Hierarchy The gap between CPU performance and main memory has been widening with higher performance CPUs creating

More information

Chapter Seven. SRAM: value is stored on a pair of inverting gates very fast but takes up more space than DRAM (4 to 6 transistors)

Chapter Seven. SRAM: value is stored on a pair of inverting gates very fast but takes up more space than DRAM (4 to 6 transistors) Chapter Seven emories: Review SRA: value is stored on a pair of inverting gates very fast but takes up more space than DRA (4 to transistors) DRA: value is stored as a charge on capacitor (must be refreshed)

More information

Chapter Seven. Memories: Review. Exploiting Memory Hierarchy CACHE MEMORY AND VIRTUAL MEMORY

Chapter Seven. Memories: Review. Exploiting Memory Hierarchy CACHE MEMORY AND VIRTUAL MEMORY Chapter Seven CACHE MEMORY AND VIRTUAL MEMORY 1 Memories: Review SRAM: value is stored on a pair of inverting gates very fast but takes up more space than DRAM (4 to 6 transistors) DRAM: value is stored

More information

Chapter 5. Large and Fast: Exploiting Memory Hierarchy

Chapter 5. Large and Fast: Exploiting Memory Hierarchy Chapter 5 Large and Fast: Exploiting Memory Hierarchy Memory Technology Static RAM (SRAM) 0.5ns 2.5ns, $2000 $5000 per GB Dynamic RAM (DRAM) 50ns 70ns, $20 $75 per GB Magnetic disk 5ms 20ms, $0.20 $2 per

More information

Lecture 17 Introduction to Memory Hierarchies" Why it s important " Fundamental lesson(s)" Suggested reading:" (HP Chapter

Lecture 17 Introduction to Memory Hierarchies Why it s important  Fundamental lesson(s) Suggested reading: (HP Chapter Processor components" Multicore processors and programming" Processor comparison" vs." Lecture 17 Introduction to Memory Hierarchies" CSE 30321" Suggested reading:" (HP Chapter 5.1-5.2)" Writing more "

More information

Course Administration

Course Administration Spring 207 EE 363: Computer Organization Chapter 5: Large and Fast: Exploiting Memory Hierarchy - Avinash Kodi Department of Electrical Engineering & Computer Science Ohio University, Athens, Ohio 4570

More information

Advanced Memory Organizations

Advanced Memory Organizations CSE 3421: Introduction to Computer Architecture Advanced Memory Organizations Study: 5.1, 5.2, 5.3, 5.4 (only parts) Gojko Babić 03-29-2018 1 Growth in Performance of DRAM & CPU Huge mismatch between CPU

More information

CS161 Design and Architecture of Computer Systems. Cache $$$$$

CS161 Design and Architecture of Computer Systems. Cache $$$$$ CS161 Design and Architecture of Computer Systems Cache $$$$$ Memory Systems! How can we supply the CPU with enough data to keep it busy?! We will focus on memory issues,! which are frequently bottlenecks

More information

14:332:331. Week 13 Basics of Cache

14:332:331. Week 13 Basics of Cache 14:332:331 Computer Architecture and Assembly Language Spring 2006 Week 13 Basics of Cache [Adapted from Dave Patterson s UCB CS152 slides and Mary Jane Irwin s PSU CSE331 slides] 331 Week131 Spring 2006

More information

Memory Technology. Caches 1. Static RAM (SRAM) Dynamic RAM (DRAM) Magnetic disk. Ideal memory. 0.5ns 2.5ns, $2000 $5000 per GB

Memory Technology. Caches 1. Static RAM (SRAM) Dynamic RAM (DRAM) Magnetic disk. Ideal memory. 0.5ns 2.5ns, $2000 $5000 per GB Memory Technology Caches 1 Static RAM (SRAM) 0.5ns 2.5ns, $2000 $5000 per GB Dynamic RAM (DRAM) 50ns 70ns, $20 $75 per GB Magnetic disk 5ms 20ms, $0.20 $2 per GB Ideal memory Average access time similar

More information

CS 61C: Great Ideas in Computer Architecture. Direct Mapped Caches

CS 61C: Great Ideas in Computer Architecture. Direct Mapped Caches CS 61C: Great Ideas in Computer Architecture Direct Mapped Caches Instructor: Justin Hsia 7/05/2012 Summer 2012 Lecture #11 1 Review of Last Lecture Floating point (single and double precision) approximates

More information

Lecture 9: Improving Cache Performance: Reduce miss rate Reduce miss penalty Reduce hit time

Lecture 9: Improving Cache Performance: Reduce miss rate Reduce miss penalty Reduce hit time Lecture 9: Improving Cache Performance: Reduce miss rate Reduce miss penalty Reduce hit time Review ABC of Cache: Associativity Block size Capacity Cache organization Direct-mapped cache : A =, S = C/B

More information

Question?! Processor comparison!

Question?! Processor comparison! 1! 2! Suggested Readings!! Readings!! H&P: Chapter 5.1-5.2!! (Over the next 2 lectures)! Lecture 18" Introduction to Memory Hierarchies! 3! Processor components! Multicore processors and programming! Question?!

More information

Introduction to cache memories

Introduction to cache memories Course on: Advanced Computer Architectures Introduction to cache memories Prof. Cristina Silvano Politecnico di Milano email: cristina.silvano@polimi.it 1 Summary Summary Main goal Spatial and temporal

More information

Caches Part 1. Instructor: Sören Schwertfeger. School of Information Science and Technology SIST

Caches Part 1. Instructor: Sören Schwertfeger.   School of Information Science and Technology SIST CS 110 Computer Architecture Caches Part 1 Instructor: Sören Schwertfeger http://shtech.org/courses/ca/ School of Information Science and Technology SIST ShanghaiTech University Slides based on UC Berkley's

More information

Lecture 16: Memory Hierarchy Misses, 3 Cs and 7 Ways to Reduce Misses. Professor Randy H. Katz Computer Science 252 Fall 1995

Lecture 16: Memory Hierarchy Misses, 3 Cs and 7 Ways to Reduce Misses. Professor Randy H. Katz Computer Science 252 Fall 1995 Lecture 16: Memory Hierarchy Misses, 3 Cs and 7 Ways to Reduce Misses Professor Randy H. Katz Computer Science 252 Fall 1995 Review: Who Cares About the Memory Hierarchy? Processor Only Thus Far in Course:

More information

Memory Hierarchies. Instructor: Dmitri A. Gusev. Fall Lecture 10, October 8, CS 502: Computers and Communications Technology

Memory Hierarchies. Instructor: Dmitri A. Gusev. Fall Lecture 10, October 8, CS 502: Computers and Communications Technology Memory Hierarchies Instructor: Dmitri A. Gusev Fall 2007 CS 502: Computers and Communications Technology Lecture 10, October 8, 2007 Memories SRAM: value is stored on a pair of inverting gates very fast

More information

CMSC 611: Advanced Computer Architecture. Cache and Memory

CMSC 611: Advanced Computer Architecture. Cache and Memory CMSC 611: Advanced Computer Architecture Cache and Memory Classification of Cache Misses Compulsory The first access to a block is never in the cache. Also called cold start misses or first reference misses.

More information

Handout 4 Memory Hierarchy

Handout 4 Memory Hierarchy Handout 4 Memory Hierarchy Outline Memory hierarchy Locality Cache design Virtual address spaces Page table layout TLB design options (MMU Sub-system) Conclusion 2012/11/7 2 Since 1980, CPU has outpaced

More information

Memory Hierarchy Computing Systems & Performance MSc Informatics Eng. Memory Hierarchy (most slides are borrowed)

Memory Hierarchy Computing Systems & Performance MSc Informatics Eng. Memory Hierarchy (most slides are borrowed) Computing Systems & Performance Memory Hierarchy MSc Informatics Eng. 2012/13 A.J.Proença Memory Hierarchy (most slides are borrowed) AJProença, Computer Systems & Performance, MEI, UMinho, 2012/13 1 2

More information

The Memory Hierarchy & Cache

The Memory Hierarchy & Cache Removing The Ideal Memory Assumption: The Memory Hierarchy & Cache The impact of real memory on CPU Performance. Main memory basic properties: Memory Types: DRAM vs. SRAM The Motivation for The Memory

More information

COEN-4730 Computer Architecture Lecture 3 Review of Caches and Virtual Memory

COEN-4730 Computer Architecture Lecture 3 Review of Caches and Virtual Memory 1 COEN-4730 Computer Architecture Lecture 3 Review of Caches and Virtual Memory Cristinel Ababei Dept. of Electrical and Computer Engineering Marquette University Credits: Slides adapted from presentations

More information

Donn Morrison Department of Computer Science. TDT4255 Memory hierarchies

Donn Morrison Department of Computer Science. TDT4255 Memory hierarchies TDT4255 Lecture 10: Memory hierarchies Donn Morrison Department of Computer Science 2 Outline Chapter 5 - Memory hierarchies (5.1-5.5) Temporal and spacial locality Hits and misses Direct-mapped, set associative,

More information

CS 61C: Great Ideas in Computer Architecture (Machine Structures) Caches Part 2

CS 61C: Great Ideas in Computer Architecture (Machine Structures) Caches Part 2 CS 61C: Great Ideas in Computer Architecture (Machine Structures) Caches Part 2 Instructors: John Wawrzynek & Vladimir Stojanovic http://insteecsberkeleyedu/~cs61c/ Typical Memory Hierarchy Datapath On-Chip

More information

Performance! (1/latency)! 1000! 100! 10! Capacity Access Time Cost. CPU Registers 100s Bytes <10s ns. Cache K Bytes ns 1-0.

Performance! (1/latency)! 1000! 100! 10! Capacity Access Time Cost. CPU Registers 100s Bytes <10s ns. Cache K Bytes ns 1-0. Since 1980, CPU has outpaced DRAM... EEL 5764: Graduate Computer Architecture Appendix C Hierarchy Review Ann Gordon-Ross Electrical and Computer Engineering University of Florida http://www.ann.ece.ufl.edu/

More information

14:332:331. Week 13 Basics of Cache

14:332:331. Week 13 Basics of Cache 14:332:331 Computer Architecture and Assembly Language Fall 2003 Week 13 Basics of Cache [Adapted from Dave Patterson s UCB CS152 slides and Mary Jane Irwin s PSU CSE331 slides] 331 Lec20.1 Fall 2003 Head

More information

EITF20: Computer Architecture Part4.1.1: Cache - 2

EITF20: Computer Architecture Part4.1.1: Cache - 2 EITF20: Computer Architecture Part4.1.1: Cache - 2 Liang Liu liang.liu@eit.lth.se 1 Outline Reiteration Cache performance optimization Bandwidth increase Reduce hit time Reduce miss penalty Reduce miss

More information

Memory Hierarchy. Reading. Sections 5.1, 5.2, 5.3, 5.4, 5.8 (some elements), 5.9 (2) Lecture notes from MKP, H. H. Lee and S.

Memory Hierarchy. Reading. Sections 5.1, 5.2, 5.3, 5.4, 5.8 (some elements), 5.9 (2) Lecture notes from MKP, H. H. Lee and S. Memory Hierarchy Lecture notes from MKP, H. H. Lee and S. Yalamanchili Sections 5.1, 5.2, 5.3, 5.4, 5.8 (some elements), 5.9 Reading (2) 1 SRAM: Value is stored on a pair of inerting gates Very fast but

More information

Caches and Memory Hierarchy: Review. UCSB CS240A, Winter 2016

Caches and Memory Hierarchy: Review. UCSB CS240A, Winter 2016 Caches and Memory Hierarchy: Review UCSB CS240A, Winter 2016 1 Motivation Most applications in a single processor runs at only 10-20% of the processor peak Most of the single processor performance loss

More information

Memory Hierarchy Computing Systems & Performance MSc Informatics Eng. Memory Hierarchy (most slides are borrowed)

Memory Hierarchy Computing Systems & Performance MSc Informatics Eng. Memory Hierarchy (most slides are borrowed) Computing Systems & Performance Memory Hierarchy MSc Informatics Eng. 2011/12 A.J.Proença Memory Hierarchy (most slides are borrowed) AJProença, Computer Systems & Performance, MEI, UMinho, 2011/12 1 2

More information

Lecture 16: Memory Hierarchy Misses, 3 Cs and 7 Ways to Reduce Misses Professor Randy H. Katz Computer Science 252 Spring 1996

Lecture 16: Memory Hierarchy Misses, 3 Cs and 7 Ways to Reduce Misses Professor Randy H. Katz Computer Science 252 Spring 1996 Lecture 16: Memory Hierarchy Misses, 3 Cs and 7 Ways to Reduce Misses Professor Randy H. Katz Computer Science 252 Spring 1996 RHK.S96 1 Review: Who Cares About the Memory Hierarchy? Processor Only Thus

More information

Chapter 5. Large and Fast: Exploiting Memory Hierarchy

Chapter 5. Large and Fast: Exploiting Memory Hierarchy Chapter 5 Large and Fast: Exploiting Memory Hierarchy Review: Major Components of a Computer Processor Devices Control Memory Input Datapath Output Secondary Memory (Disk) Main Memory Cache Performance

More information

Memory Hierarchies 2009 DAT105

Memory Hierarchies 2009 DAT105 Memory Hierarchies Cache performance issues (5.1) Virtual memory (C.4) Cache performance improvement techniques (5.2) Hit-time improvement techniques Miss-rate improvement techniques Miss-penalty improvement

More information

Some material adapted from Mohamed Younis, UMBC CMSC 611 Spr 2003 course slides Some material adapted from Hennessy & Patterson / 2003 Elsevier

Some material adapted from Mohamed Younis, UMBC CMSC 611 Spr 2003 course slides Some material adapted from Hennessy & Patterson / 2003 Elsevier Some material adapted from Mohamed Younis, UMBC CMSC 611 Spr 2003 course slides Some material adapted from Hennessy & Patterson / 2003 Elsevier Science CPUtime = IC CPI Execution + Memory accesses Instruction

More information

LECTURE 11. Memory Hierarchy

LECTURE 11. Memory Hierarchy LECTURE 11 Memory Hierarchy MEMORY HIERARCHY When it comes to memory, there are two universally desirable properties: Large Size: ideally, we want to never have to worry about running out of memory. Speed

More information

Announcements. ! Previous lecture. Caches. Inf3 Computer Architecture

Announcements. ! Previous lecture. Caches. Inf3 Computer Architecture Announcements! Previous lecture Caches Inf3 Computer Architecture - 2016-2017 1 Recap: Memory Hierarchy Issues! Block size: smallest unit that is managed at each level E.g., 64B for cache lines, 4KB for

More information

Computer Architecture Spring 2016

Computer Architecture Spring 2016 Computer Architecture Spring 2016 Lecture 08: Caches III Shuai Wang Department of Computer Science and Technology Nanjing University Improve Cache Performance Average memory access time (AMAT): AMAT =

More information

Caches and Memory Hierarchy: Review. UCSB CS240A, Fall 2017

Caches and Memory Hierarchy: Review. UCSB CS240A, Fall 2017 Caches and Memory Hierarchy: Review UCSB CS24A, Fall 27 Motivation Most applications in a single processor runs at only - 2% of the processor peak Most of the single processor performance loss is in the

More information

Chapter 5. Large and Fast: Exploiting Memory Hierarchy

Chapter 5. Large and Fast: Exploiting Memory Hierarchy Chapter 5 Large and Fast: Exploiting Memory Hierarchy Principle of Locality Programs access a small proportion of their address space at any time Temporal locality Items accessed recently are likely to

More information

Aleksandar Milenkovich 1

Aleksandar Milenkovich 1 Lecture 05 CPU Caches Outline Memory Hierarchy Four Questions for Memory Hierarchy Cache Performance Aleksandar Milenkovic, milenka@ece.uah.edu Electrical and Computer Engineering University of Alabama

More information

Improving Cache Performance. Reducing Misses. How To Reduce Misses? 3Cs Absolute Miss Rate. 1. Reduce the miss rate, Classifying Misses: 3 Cs

Improving Cache Performance. Reducing Misses. How To Reduce Misses? 3Cs Absolute Miss Rate. 1. Reduce the miss rate, Classifying Misses: 3 Cs Improving Cache Performance 1. Reduce the miss rate, 2. Reduce the miss penalty, or 3. Reduce the time to hit in the. Reducing Misses Classifying Misses: 3 Cs! Compulsory The first access to a block is

More information

Computer Organization and Structure. Bing-Yu Chen National Taiwan University

Computer Organization and Structure. Bing-Yu Chen National Taiwan University Computer Organization and Structure Bing-Yu Chen National Taiwan University Large and Fast: Exploiting Memory Hierarchy The Basic of Caches Measuring & Improving Cache Performance Virtual Memory A Common

More information

Chapter 5A. Large and Fast: Exploiting Memory Hierarchy

Chapter 5A. Large and Fast: Exploiting Memory Hierarchy Chapter 5A Large and Fast: Exploiting Memory Hierarchy Memory Technology Static RAM (SRAM) Fast, expensive Dynamic RAM (DRAM) In between Magnetic disk Slow, inexpensive Ideal memory Access time of SRAM

More information

CS 152 Computer Architecture and Engineering. Lecture 7 - Memory Hierarchy-II

CS 152 Computer Architecture and Engineering. Lecture 7 - Memory Hierarchy-II CS 152 Computer Architecture and Engineering Lecture 7 - Memory Hierarchy-II Krste Asanovic Electrical Engineering and Computer Sciences University of California at Berkeley http://www.eecs.berkeley.edu/~krste

More information

COSC4201. Chapter 4 Cache. Prof. Mokhtar Aboelaze York University Based on Notes By Prof. L. Bhuyan UCR And Prof. M. Shaaban RIT

COSC4201. Chapter 4 Cache. Prof. Mokhtar Aboelaze York University Based on Notes By Prof. L. Bhuyan UCR And Prof. M. Shaaban RIT COSC4201 Chapter 4 Cache Prof. Mokhtar Aboelaze York University Based on Notes By Prof. L. Bhuyan UCR And Prof. M. Shaaban RIT 1 Memory Hierarchy The gap between CPU performance and main memory has been

More information

L2 cache provides additional on-chip caching space. L2 cache captures misses from L1 cache. Summary

L2 cache provides additional on-chip caching space. L2 cache captures misses from L1 cache. Summary HY425 Lecture 13: Improving Cache Performance Dimitrios S. Nikolopoulos University of Crete and FORTH-ICS November 25, 2011 Dimitrios S. Nikolopoulos HY425 Lecture 13: Improving Cache Performance 1 / 40

More information

CS 152 Computer Architecture and Engineering. Lecture 8 - Memory Hierarchy-III

CS 152 Computer Architecture and Engineering. Lecture 8 - Memory Hierarchy-III CS 152 Computer Architecture and Engineering Lecture 8 - Memory Hierarchy-III Krste Asanovic Electrical Engineering and Computer Sciences University of California at Berkeley http://www.eecs.berkeley.edu/~krste

More information

Reducing Hit Times. Critical Influence on cycle-time or CPI. small is always faster and can be put on chip

Reducing Hit Times. Critical Influence on cycle-time or CPI. small is always faster and can be put on chip Reducing Hit Times Critical Influence on cycle-time or CPI Keep L1 small and simple small is always faster and can be put on chip interesting compromise is to keep the tags on chip and the block data off

More information

Textbook: Burdea and Coiffet, Virtual Reality Technology, 2 nd Edition, Wiley, Textbook web site:

Textbook: Burdea and Coiffet, Virtual Reality Technology, 2 nd Edition, Wiley, Textbook web site: Textbook: Burdea and Coiffet, Virtual Reality Technology, 2 nd Edition, Wiley, 2003 Textbook web site: www.vrtechnology.org 1 Textbook web site: www.vrtechnology.org Laboratory Hardware 2 Topics 14:332:331

More information

CS 61C: Great Ideas in Computer Architecture. The Memory Hierarchy, Fully Associative Caches

CS 61C: Great Ideas in Computer Architecture. The Memory Hierarchy, Fully Associative Caches CS 61C: Great Ideas in Computer Architecture The Memory Hierarchy, Fully Associative Caches Instructor: Alan Christopher 7/09/2014 Summer 2014 -- Lecture #10 1 Review of Last Lecture Floating point (single

More information

Memory Technologies. Technology Trends

Memory Technologies. Technology Trends . 5 Technologies Random access technologies Random good access time same for all locations DRAM Dynamic Random Access High density, low power, cheap, but slow Dynamic need to be refreshed regularly SRAM

More information

CSF Improving Cache Performance. [Adapted from Computer Organization and Design, Patterson & Hennessy, 2005]

CSF Improving Cache Performance. [Adapted from Computer Organization and Design, Patterson & Hennessy, 2005] CSF Improving Cache Performance [Adapted from Computer Organization and Design, Patterson & Hennessy, 2005] Review: The Memory Hierarchy Take advantage of the principle of locality to present the user

More information

CMPT 300 Introduction to Operating Systems

CMPT 300 Introduction to Operating Systems CMPT 300 Introduction to Operating Systems Cache 0 Acknowledgement: some slides are taken from CS61C course material at UC Berkeley Agenda Memory Hierarchy Direct Mapped Caches Cache Performance Set Associative

More information

ECE331: Hardware Organization and Design

ECE331: Hardware Organization and Design ECE331: Hardware Organization and Design Lecture 22: Direct Mapped Cache Adapted from Computer Organization and Design, Patterson & Hennessy, UCB Intel 8-core i7-5960x 3 GHz, 8-core, 20 MB of cache, 140

More information

Cache Memory COE 403. Computer Architecture Prof. Muhamed Mudawar. Computer Engineering Department King Fahd University of Petroleum and Minerals

Cache Memory COE 403. Computer Architecture Prof. Muhamed Mudawar. Computer Engineering Department King Fahd University of Petroleum and Minerals Cache Memory COE 403 Computer Architecture Prof. Muhamed Mudawar Computer Engineering Department King Fahd University of Petroleum and Minerals Presentation Outline The Need for Cache Memory The Basics

More information

DECstation 5000 Miss Rates. Cache Performance Measures. Example. Cache Performance Improvements. Types of Cache Misses. Cache Performance Equations

DECstation 5000 Miss Rates. Cache Performance Measures. Example. Cache Performance Improvements. Types of Cache Misses. Cache Performance Equations DECstation 5 Miss Rates Cache Performance Measures % 3 5 5 5 KB KB KB 8 KB 6 KB 3 KB KB 8 KB Cache size Direct-mapped cache with 3-byte blocks Percentage of instruction references is 75% Instr. Cache Data

More information

The Memory Hierarchy. Cache, Main Memory, and Virtual Memory (Part 2)

The Memory Hierarchy. Cache, Main Memory, and Virtual Memory (Part 2) The Memory Hierarchy Cache, Main Memory, and Virtual Memory (Part 2) Lecture for CPSC 5155 Edward Bosworth, Ph.D. Computer Science Department Columbus State University Cache Line Replacement The cache

More information

Memory Hierarchy: The motivation

Memory Hierarchy: The motivation Memory Hierarchy: The motivation The gap between CPU performance and main memory has been widening with higher performance CPUs creating performance bottlenecks for memory access instructions. The memory

More information

Page 1. Memory Hierarchies (Part 2)

Page 1. Memory Hierarchies (Part 2) Memory Hierarchies (Part ) Outline of Lectures on Memory Systems Memory Hierarchies Cache Memory 3 Virtual Memory 4 The future Increasing distance from the processor in access time Review: The Memory Hierarchy

More information

Memory Hierarchy, Fully Associative Caches. Instructor: Nick Riasanovsky

Memory Hierarchy, Fully Associative Caches. Instructor: Nick Riasanovsky Memory Hierarchy, Fully Associative Caches Instructor: Nick Riasanovsky Review Hazards reduce effectiveness of pipelining Cause stalls/bubbles Structural Hazards Conflict in use of datapath component Data

More information

SE-292 High Performance Computing. Memory Hierarchy. R. Govindarajan Memory Hierarchy

SE-292 High Performance Computing. Memory Hierarchy. R. Govindarajan Memory Hierarchy SE-292 High Performance Computing Memory Hierarchy R. Govindarajan govind@serc Memory Hierarchy 2 1 Memory Organization Memory hierarchy CPU registers few in number (typically 16/32/128) subcycle access

More information

CS162 Operating Systems and Systems Programming Lecture 10 Caches and TLBs"

CS162 Operating Systems and Systems Programming Lecture 10 Caches and TLBs CS162 Operating Systems and Systems Programming Lecture 10 Caches and TLBs" October 1, 2012! Prashanth Mohan!! Slides from Anthony Joseph and Ion Stoica! http://inst.eecs.berkeley.edu/~cs162! Caching!

More information

CS 152 Computer Architecture and Engineering. Lecture 7 - Memory Hierarchy-II

CS 152 Computer Architecture and Engineering. Lecture 7 - Memory Hierarchy-II CS 152 Computer Architecture and Engineering Lecture 7 - Memory Hierarchy-II Krste Asanovic Electrical Engineering and Computer Sciences University of California at Berkeley http://www.eecs.berkeley.edu/~krste!

More information

Improving Cache Performance. Dr. Yitzhak Birk Electrical Engineering Department, Technion

Improving Cache Performance. Dr. Yitzhak Birk Electrical Engineering Department, Technion Improving Cache Performance Dr. Yitzhak Birk Electrical Engineering Department, Technion 1 Cache Performance CPU time = (CPU execution clock cycles + Memory stall clock cycles) x clock cycle time Memory

More information