CS 61C: Great Ideas in Computer Architecture (Machine Structures) -- More Cache: Set Associativity


CS 61C: Great Ideas in Computer Architecture (Machine Structures)
More Cache: Set Associativity
Instructors: Randy H. Katz, David A. Patterson
Guest Lecture: Krste Asanovic
http://inst.eecs.berkeley.edu/~cs61c/

New-School Machine Structures (It's a bit more complicated!)
Harness parallelism and achieve high performance at every level:
- Parallel requests, assigned to a computer (e.g., search "Katz")
- Parallel threads, assigned to a core (e.g., lookup, ads)
- Parallel instructions, more than one instruction at a time (e.g., 5 pipelined instructions)
- Parallel data, more than one data item at a time (e.g., add of 4 pairs of words)
- Hardware descriptions, all gates working at the same time
[Figure: the hardware hierarchy from warehouse-scale computer and smart phone down through cores (instruction and functional units), caches and main memory, to logic gates. Today's lecture sits at the cache level.]

Review: Big Ideas of Instruction-Level Parallelism
- Pipelining, hazards, and stalls
- Forwarding and speculation to overcome hazards
- Multiple issue to increase performance; IPC instead of CPI
- Dynamic execution: superscalar in-order issue, branch prediction, register renaming, out-of-order execution, in-order commit; unroll loops in HW, hide cache misses

Agenda
- Cache recap
- Administrivia
- Set-associative caches
- Technology break
- AMAT and multilevel cache review

Recap: Components of a Computer
Processor (control and datapath), memory (cache and main memory), and devices (input, output, and secondary memory/disk).

Recap: Typical Memory Hierarchy
Take advantage of the principle of locality to present the user with as much memory as is available in the cheapest technology, at the speed offered by the fastest technology.
[Figure: on-chip components (control, datapath, register file, ITLB, DTLB, instruction and data caches), a second-level cache (SRAM), main memory (DRAM), and secondary memory (disk). Moving away from the processor, access time in cycles grows, size in bytes grows, and cost per byte falls from highest to lowest.]

Recap: Cache Performance and Average Memory Access Time (AMAT)
CPU time = IC x CPI x CC = IC x (CPI_ideal + Memory-stall cycles) x CC
Memory-stall cycles = Read-stall cycles + Write-stall cycles
Read-stall cycles = reads/program x read miss rate x read miss penalty
Write-stall cycles = (writes/program x write miss rate x write miss penalty) + write buffer stalls
AMAT is the average time to access memory considering both hits and misses:
AMAT = Time for a hit + Miss rate x Miss penalty

Improving Cache Performance
1. Reduce the time to hit in the cache: e.g., smaller cache, direct-mapped cache, special tricks for handling writes.
2. Reduce the miss rate: e.g., bigger cache, larger blocks, more flexible placement (increase associativity).
3. Reduce the miss penalty: e.g., smaller blocks or critical word first in large blocks, special tricks for handling writes, faster/higher-bandwidth memories.
4. Use multiple cache levels.

Sources of Cache Misses: The 3Cs
- Compulsory (cold start or process migration, first reference): the first access to a block is impossible to avoid; small effect for long-running programs. Solution: increase block size (increases miss penalty; very large blocks could increase miss rate).
- Capacity: the cache cannot contain all blocks accessed by the program. Solution: increase cache size (may increase access time).
- Conflict (collision): multiple memory locations mapped to the same cache location. Solution 1: increase cache size. Solution 2: increase associativity (may increase access time).

Reducing Cache Misses
Allow more flexible block placement:
- Direct mapped $: a memory block maps to exactly one cache block.
- Fully associative $: a memory block can be mapped to any cache block.
- Compromise: divide the $ into sets, each consisting of n ways (n-way set associative). A memory block maps to a unique set, determined by the index field, and is placed in any of the n ways of that set.
Calculation: set = (block address) modulo (# sets in the cache)

Alternative Block Placement Schemes
- DM placement: for memory block 12 in an 8-block cache, there is only one cache block where it can be found: (12 modulo 8) = 4.
- SA placement: with four sets x 2 ways (8 cache blocks), memory block 12 goes in set (12 mod 4) = 0, in either element of that set.
- FA placement: memory block 12 can appear in any of the cache blocks.
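To make the modulo arithmetic concrete, here is a minimal C sketch (my own illustration, not from the lecture) that computes where memory block 12 can live in an 8-block cache under each of the three placement schemes just described; the block number and cache sizes are the ones used on the slide.

    #include <stdio.h>

    int main(void) {
        int block_addr = 12;      /* memory block number from the slide */
        int num_blocks = 8;       /* total cache blocks                 */

        /* Direct mapped: one possible cache block, (block address) mod (#blocks) */
        printf("DM: cache block %d\n", block_addr % num_blocks);              /* 4 */

        /* 2-way set associative: 4 sets of 2 ways, (block address) mod (#sets)   */
        int ways = 2, sets = num_blocks / ways;
        printf("SA: set %d (either of its %d ways)\n", block_addr % sets, ways); /* set 0 */

        /* Fully associative: one set, any of the 8 cache blocks                  */
        printf("FA: any of the %d cache blocks\n", num_blocks);
        return 0;
    }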

Agenda
- Cache recap
- Administrivia
- Set-associative caches
- Technology break
- AMAT and multilevel cache review

Administrivia
- Project 4 (next part due Sunday 4/17): design a 16-bit pipelined computer in Logisim.
- Extra credit: fastest matrix multiply.
- Final review: Monday 5/2, 5-8PM, 2050 VLSB.
- Final exam: Monday 5/9, Haas Pavilion.

61C in the News: Jean Bartik, A Pioneer Programmer, Dies at Age 86
Last survivor of the group of women programmers of ENIAC, the first all-electronic digital calculator. It was built to calculate artillery tables, but was completed in 1946, too late for WWII. When the ENIAC was shown off at the University of Pennsylvania in February 1946, it generated headlines in newspapers across the country, but the attention was all on the men and the machine; the women were not even introduced at the event. [NY Times, 4/7/11]

Getting to Know Your Prof
Missing the ales I grew up with in England, I learnt how to make beer. Start from grains, about 5 gallons per batch; I have brewed many batches so far and need thirsty friends to help consume the experiments! Grains are mashed for roughly 45-60 minutes to convert starch to sugar, the extracted wort is boiled for 90 minutes with hops added, and then cooled to add the yeast. Batches ferment for 5-7 days and then condition for weeks to years. In the cellar: Ordinary Bitter, ESB I and II, Belgian Pale I and II, Mixed and Gravenstein ciders, British Barleywine, Belgian Trappist.

Example: 4-Word Direct-Mapped $, Worst-Case Reference String
Consider the main memory word reference string 0 4 0 4 0 4 0 4, starting with an empty cache (all blocks initially marked as not valid). Words 0 and 4 map to the same cache block, so each reference evicts the other: miss, miss, miss, ... 8 requests, 8 misses.
This ping-pong effect is due to conflict misses: two memory locations that map into the same cache block.
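A tiny direct-mapped cache simulation (a sketch of mine, not course code) reproduces the ping-pong behavior for this reference string: every one of the 8 accesses misses.

    #include <stdio.h>

    #define NUM_BLOCKS 4   /* 4-word direct-mapped cache, one word per block */

    int main(void) {
        int tag[NUM_BLOCKS], valid[NUM_BLOCKS] = {0};
        int refs[] = {0, 4, 0, 4, 0, 4, 0, 4};   /* worst-case word reference string */
        int misses = 0;

        for (int i = 0; i < 8; i++) {
            int addr  = refs[i];
            int index = addr % NUM_BLOCKS;       /* which cache block            */
            int t     = addr / NUM_BLOCKS;       /* remaining address bits = tag */
            if (!valid[index] || tag[index] != t) {
                misses++;                        /* miss: fill the block         */
                valid[index] = 1;
                tag[index]   = t;
            }
        }
        printf("8 requests, %d misses\n", misses);  /* prints 8: every access misses */
        return 0;
    }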

Example: 2-Way Set-Associative $
(4 words = 2 sets x 2 ways per set)
Each cache entry holds a valid bit, a tag, and the data (one-word blocks; the two low-order address bits select the byte within the 32-bit word).
Q: How do we find it? Use the next low-order memory address bit to determine which cache set (i.e., block address modulo the number of sets in the cache).
Q: Is it there? Compare all the cache tags in the set to the high-order 3 memory address bits to tell if the memory block is in the cache.

Example: 4-Word 2-Way SA $, Same Reference String
Consider the same main memory word reference string, 0 4 0 4 0 4 0 4, starting with an empty cache (all blocks initially marked as not valid): 0 miss, 4 miss, 0 hit, 4 hit, ... 8 requests, 2 misses.
This solves the ping-pong effect seen in the direct-mapped cache, since two memory locations that map into the same cache set can now co-exist. (A small C simulation of this case appears below.)

Example: Eight-Block Cache with Different Organizations
The total size of the $ in blocks is equal to the number of sets x the associativity. For a fixed $ size, increasing associativity decreases the number of sets while increasing the number of elements per set. With eight blocks, an 8-way set-associative $ is the same as a fully associative $.

Four-Way Set-Associative Cache
2^8 = 256 sets, each with four ways (each way holding one block).
[Figure: the address is split into tag, index, and byte offset; the index selects one of the 256 sets, the four tags in that set are compared in parallel, and a 4-to-1 multiplexor selects the data from the way that hits.]

Range of Set-Associative Caches
For a fixed-size cache, each increase by a factor of two in associativity doubles the number of blocks per set (i.e., the number of ways) and halves the number of sets; it decreases the size of the index by 1 bit and increases the size of the tag by 1 bit.
Address fields: Tag | Index | Block offset | Byte offset
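Here is the small simulation promised above: the same reference string run against a 2-set, 2-way cache with per-set LRU. It is my own sketch, not course code, but it reproduces the slide's count of 2 misses, because 0 and 4 now co-exist in set 0.

    #include <stdio.h>

    #define SETS 2   /* 4-word cache = 2 sets x 2 ways, one word per block */
    #define WAYS 2

    int main(void) {
        int tag[SETS][WAYS], valid[SETS][WAYS] = {{0}}, lru[SETS] = {0};
        int refs[] = {0, 4, 0, 4, 0, 4, 0, 4};
        int misses = 0;

        for (int i = 0; i < 8; i++) {
            int set = refs[i] % SETS;            /* set index */
            int t   = refs[i] / SETS;            /* tag       */
            int hit_way = -1;
            for (int w = 0; w < WAYS; w++)
                if (valid[set][w] && tag[set][w] == t) hit_way = w;
            if (hit_way < 0) {                   /* miss: fill the LRU way */
                hit_way = lru[set];
                valid[set][hit_way] = 1;
                tag[set][hit_way]   = t;
                misses++;
            }
            lru[set] = 1 - hit_way;              /* the other way is now LRU */
        }
        printf("8 requests, %d misses\n", misses);  /* prints 2 */
        return 0;
    }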

Range of Set-Associative Caches (continued)
The tag is used for the tag compare, the index selects the set, the block offset selects the word in the block, and the byte offset selects the byte in the word.
- Decreasing associativity: direct mapped (only one way); smaller tags and only a single comparator.
- Increasing associativity: fully associative (only one set); the tag is all the address bits except the block and byte offsets.

Costs of Set-Associative Caches
When a miss occurs, which way's block is selected for replacement?
- Least Recently Used (LRU): the one that has been unused the longest. The hardware must track when each way's block was used relative to the other blocks in the set. For a 2-way SA $ this is cheap: when a block is referenced, set its bit and reset the other way's bit (i.e., mark which way was last used).
An N-way set-associative cache costs:
- N comparators (delay and area).
- A MUX delay (set selection) before data is available: data arrives only after set selection (and the Hit/Miss decision). In a DM $ the block is available before the Hit/Miss decision; in a set-associative $ it is not possible to just assume a hit, continue, and recover later if it was a miss.

Cache Block Replacement Policies
- Random replacement: hardware randomly selects a cache entry and throws it out.
- Least Recently Used: hardware keeps track of access history and replaces the entry that has not been used for the longest time. For a 2-way set-associative cache, one bit is enough for LRU replacement.
Example of a Simple Pseudo-LRU Implementation: assume 64 fully associative entries and a hardware replacement pointer that points to one cache entry. Whenever an access is made to the entry the pointer points to, move the pointer to the next entry; otherwise, do not move the pointer. (A C sketch of this pointer scheme appears below.)

Benefits of Set-Associative Caches
The choice of DM $ or SA $ depends on the cost of a miss versus the cost of implementation. The largest gains come from going from direct mapped to 2-way (a 20%+ reduction in miss rate).

How to Calculate the 3Cs Using a Cache Simulator
1. Compulsory: set the cache size to infinity and fully associative, and count the number of misses.
2. Capacity: change the cache size from infinity, usually in powers of 2, and count misses for each reduction in size: 16 MB, 8 MB, 4 MB, ..., 128 KB, 64 KB, 16 KB.
3. Conflict: change from fully associative to n-way set associative while counting misses: fully associative, 16-way, 8-way, 4-way, 2-way, 1-way.

3Cs Revisited
[Figure: the three sources of misses across SPEC integer and floating-point benchmarks. Compulsory misses are a tiny fraction, not visible at this scale; capacity misses are a function of cache size; the conflict portion depends on associativity and cache size.]
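The replacement-pointer scheme above can be sketched in a few lines of C. This is my own reading of the slide (in particular, the assumption that the entry under the pointer is the victim on a miss), not an official implementation.

    #include <stdio.h>

    #define ENTRIES 64   /* 64 fully associative entries, as on the slide */

    static int ptr = 0;  /* hardware replacement pointer */

    /* Called on every access that hits entry e: if the pointer happens to
     * point at the entry just used, advance it, so it never rests on the
     * most recently used entry (a "not most recently used" approximation). */
    void on_access(int e) {
        if (e == ptr)
            ptr = (ptr + 1) % ENTRIES;
    }

    /* Assumption: on a miss, the entry under the pointer is the victim;
     * it then becomes the most recently used, so the pointer advances.   */
    int pick_victim(void) {
        int victim = ptr;
        on_access(victim);
        return victim;
    }

    int main(void) {
        on_access(0);                            /* pointer moves off entry 0 */
        printf("victim = %d\n", pick_victim());  /* replaces entry 1          */
        return 0;
    }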

Reduce AMAT
Use multiple levels of cache. As technology advances, there is more room on the IC die for a larger L1$ or for additional levels of cache (e.g., L2$ and L3$). Normally the higher cache levels are unified, holding both instructions and data.

AMAT Revisited
For a 2nd-level cache, the L2 equations are:
AMAT = Hit Time_L1 + Miss Rate_L1 x Miss Penalty_L1
Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2
AMAT = Hit Time_L1 + Miss Rate_L1 x (Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2)
Definitions:
- Local miss rate: misses in this $ divided by the total number of memory accesses to this $ (Miss Rate_L2).
- Global miss rate: misses in this $ divided by the total number of memory accesses generated by the CPU (Miss Rate_L1 x Miss Rate_L2).
The global miss rate is what matters to overall performance; the local miss rate is a factor in evaluating the effectiveness of the L2 cache.

CPI_stalls Calculation
Assume a CPI_ideal of 2, a 100-cycle miss penalty to main memory, a 25-cycle miss penalty to the unified L2$, 36% of instructions are loads/stores, a 2% L1 I$ miss rate, a 4% L1 D$ miss rate, and a 0.5% U(nified)L2$ miss rate:
CPI_stalls = 2 + 0.02x25 + 0.36x0.04x25 + 0.005x100 + 0.36x0.005x100 = 3.54 (vs. 5.44 with no L2$)

Memory Hierarchy with Two Cache Levels
For every 1000 CPU-to-memory references (CPU -> L1$ at 1 cycle -> L2$ at 10 cycles -> MM at 100 cycles), 40 will miss in the L1$ and 20 will miss in the L2$. What are the miss rates? Global vs. local? If there are 1.5 memory references per instruction, how do we normalize these numbers to instructions to get average memory stalls per instruction?

AMAT Calculations: Local vs. Global Miss Rates
Example: for 1000 memory references, 40 miss in the L1$ (miss rate 4%) and 20 miss in the L2$ (global miss rate 2%). L1$ hits take 1 cycle, L2$ hits take 10 cycles, and a miss to MM costs 100 cycles. There are 1.5 memory references per instruction (i.e., 50% ld/st), so 1000 mem refs = 667 instructions, or 1000 instructions = 1500 mem refs.
With L2$:
- Local L2 miss rate = 50% (20/40)
- AMAT = 1 + 4% x (10 + 50% x 100) = 3.4
- Average memory stalls/reference = (3.4 - 1) = 2.4
- Average memory stalls/instruction = 2.4 x 1.5 = 3.6
Without L2$:
- AMAT = 1 + 4% x 100 = 5
- Average memory stalls/reference = (5 - 1) = 4
- Average memory stalls/instruction = 4 x 1.5 = 6
Assuming an ideal CPI of 1, the performance improvement = (6 + 1)/(3.6 + 1) = 1.52, i.e., about 52%.
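The numbers above are easy to check mechanically. The short C program below (mine, not from the lecture) recomputes the AMAT example and the CPI_stalls example from the stated parameters, printing 3.40 and 5.00 for AMAT, 3.60 and 6.00 for stalls per instruction, and 3.54 vs. 5.44 for CPI.

    #include <stdio.h>

    int main(void) {
        /* AMAT example: L1 hit 1 cycle, L2 hit 10 cycles, main memory 100 cycles,
           L1 miss rate 4%, L2 local miss rate 50%, 1.5 memory refs per instruction */
        double l1_hit = 1, l2_hit = 10, mm = 100;
        double l1_mr = 0.04, l2_local_mr = 0.50, refs_per_instr = 1.5;

        double amat_l2   = l1_hit + l1_mr * (l2_hit + l2_local_mr * mm);
        double amat_nol2 = l1_hit + l1_mr * mm;
        printf("AMAT with L2 = %.2f, without L2 = %.2f\n", amat_l2, amat_nol2);
        printf("stalls/instr: %.2f vs %.2f\n",
               (amat_l2 - 1) * refs_per_instr, (amat_nol2 - 1) * refs_per_instr);

        /* CPI_stalls example: CPI_ideal 2, 36% loads/stores, 2% I$ and 4% D$ L1
           miss rates, 0.5% unified L2 miss rate, L2 penalty 25, memory penalty 100 */
        double cpi      = 2 + 0.02*25 + 0.36*0.04*25 + 0.005*100 + 0.36*0.005*100;
        double cpi_nol2 = 2 + 0.02*100 + 0.36*0.04*100;
        printf("CPI_stalls = %.2f (vs %.2f with no L2)\n", cpi, cpi_nol2);
        return 0;
    }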

CPI / Miss Rates / DRAM Access: SPECInt2006
[Figure: measured CPI, cache miss rates, and DRAM accesses across the SPECInt2006 benchmarks.]

Design Considerations
Different design considerations apply to the L1$ and the L2$:
- The L1$ focuses on fast access: minimize hit time to achieve a shorter clock cycle, e.g., a smaller $.
- The L2$ and L3$ focus on a low miss rate to reduce the penalty of long main memory access times: e.g., a larger $ with larger blocks and/or higher associativity.
The miss penalty of the L1$ is significantly reduced by the presence of the L2$, so the L1$ can be smaller and faster even with a higher miss rate. For the L2$, fast hit time is less important than a low miss rate: the L2$ hit time determines the L1$'s miss penalty, and the L2$ local miss rate is much larger than its global miss rate.

Improving Cache Performance (revisited for multilevel caches)
1. Reduce the time to hit in the cache: e.g., smaller cache, direct-mapped cache, special tricks for handling writes.
2. Reduce the miss rate (in the L2$, L3$): e.g., bigger cache, larger blocks, more flexible placement (increase associativity).
3. Reduce the miss penalty (of the L1$): e.g., smaller blocks or critical word first in large blocks, special tricks for handling writes, faster/higher-bandwidth memories.
4. Use multiple cache levels.

Sources of Cache Misses: 3Cs for the L2$, L3$
The same compulsory/capacity/conflict framework applies: increase block size (increases miss penalty; very large blocks could increase miss rate), increase cache size (may increase access time), and increase associativity (may increase access time).

Two Machines' Cache Parameters
Intel Nehalem:
- L1: split I$ and D$, 32KB each per core, 64B blocks; 4-way (I) and 8-way (D) set associative, ~LRU replacement; write-back, write-allocate.
- L2: unified, 256KB per core, 64B blocks; 8-way set associative, ~LRU; write-back.
- L3: unified, 8MB shared by all cores, 64B blocks; 16-way set associative; write-back, write-allocate.
AMD Barcelona:
- L1: split I$ and D$, 64KB each per core, 64B blocks; 2-way set associative, LRU replacement; write-back, write-allocate.
- L2: unified, 512KB per core, 64B blocks; 16-way set associative, ~LRU; write-back.
- L3: unified, 2MB shared by all cores, 64B blocks; 32-way set associative, evicting the block shared by the fewest cores; write-back, write-allocate.

Intel Nehalem Die Photo
[Die photo: 4 cores, each with its own 32KB I$, 32KB D$, and private unified L2$; the cores share one 8MB L3$.]
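As a quick sanity check on the table (using the capacities and associativities as reconstructed above), the number of sets in each cache is capacity / (associativity x block size). This small C program, my own sketch, prints them.

    #include <stdio.h>

    /* sets = capacity / (associativity * block size) */
    static int sets(int size_bytes, int ways, int block_bytes) {
        return size_bytes / (ways * block_bytes);
    }

    int main(void) {
        /* Intel Nehalem, capacities as reconstructed in the table above */
        printf("Nehalem L1 D$: %d sets\n", sets(32*1024, 8, 64));       /*   64 */
        printf("Nehalem L2:    %d sets\n", sets(256*1024, 8, 64));      /*  512 */
        printf("Nehalem L3:    %d sets\n", sets(8*1024*1024, 16, 64));  /* 8192 */
        /* AMD Barcelona */
        printf("Barcelona L1:  %d sets\n", sets(64*1024, 2, 64));       /*  512 */
        printf("Barcelona L2:  %d sets\n", sets(512*1024, 16, 64));     /*  512 */
        printf("Barcelona L3:  %d sets\n", sets(2*1024*1024, 32, 64));  /* 1024 */
        return 0;
    }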

Cache Design Space
Several interacting dimensions: cache size, block size, associativity, replacement policy, write-through vs. write-back, and write allocation. The optimal choice is a compromise: it depends on access characteristics (the workload, and whether the cache is an I-cache or a D-cache) and on technology and cost. Simplicity often wins.
[Figure: the design space sketched along the cache size, associativity, and block size axes, with "good" and "bad" regions traded off against one another.]

Summary
The name of the game is to reduce cache misses: two memory blocks mapping to the same cache block knock each other out as the program bounces from one memory location to the next.
- One way to do it: set associativity. A memory block can map into more than one cache block; N-way means N possible places in the cache to hold the block. An N-way cache of 2^(N+M) blocks has 2^N ways x 2^M sets.
- Multilevel caches: optimize the first level to be fast; optimize the 2nd and 3rd levels to minimize the memory access penalty.
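To connect the design-space dimensions to the address breakdown used throughout the lecture, here is a small C helper (my own sketch; the 64 KiB cache, 16-byte blocks, 4-way associativity, and 32-bit addresses are just example assumptions) that derives the number of sets and the tag/index/offset field widths.

    #include <stdio.h>

    /* integer log2 for power-of-two values */
    static int log2i(int x) { int b = 0; while (x > 1) { x >>= 1; b++; } return b; }

    int main(void) {
        int addr_bits   = 32;          /* assume 32-bit byte addresses */
        int cache_bytes = 64 * 1024;   /* example cache size: 64 KiB   */
        int block_bytes = 16;          /* example block size: 4 words  */
        int ways        = 4;           /* example associativity        */

        int sets        = cache_bytes / (block_bytes * ways);   /* 1024 sets */
        int offset_bits = log2i(block_bytes);                   /*  4 bits   */
        int index_bits  = log2i(sets);                          /* 10 bits   */
        int tag_bits    = addr_bits - index_bits - offset_bits; /* 18 bits   */

        printf("%d sets: tag=%d, index=%d, offset=%d bits\n",
               sets, tag_bits, index_bits, offset_bits);
        /* Doubling 'ways' halves 'sets': the index shrinks by 1 bit and the tag grows by 1. */
        return 0;
    }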
