CSE 502 Graduate Computer Architecture. Lec 7-10 App B: Memory Hierarchy Review


1 CSE 502 Graduate Computer Architecture, Lec 7-10, App B: Memory Hierarchy Review. Larry Wittie, Computer Science, Stony Brook University, http://www.cs.sunysb.edu/~cse502 and ~lw. Slides adapted from David Patterson, UC-Berkeley cs252-s06.

2 Review from Last Lecture: Conclusions for Control and Pipelining. Quantify and summarize performance: ratios to VAX, geometric mean, multiplicative standard deviation. F&P: benchmarks age, disks fail, single-point failures. Control via state machines and microprogramming. Pipelines just overlap tasks, which is easy for independent tasks. Speedup grows with pipeline depth; if the ideal CPI is 1, then

Speedup = (Pipeline depth / (1 + Pipeline stall CPI)) x (Cycle time unpipelined / Cycle time pipelined)

Hazards limit performance on computers by stalling. Structural: need more HW resources. Data (RAW, WAR, WAW): need forwarding and compiler scheduling. Control: delayed branches or (taken/not-taken) branch prediction. Exceptions and interrupts add complexity.
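As a quick worked instance of the speedup formula (the numbers here are hypothetical, chosen for illustration, not taken from the lecture):

```latex
% Assume a 5-stage pipeline, an average of 0.5 stall cycles per
% instruction, and the same cycle time before and after pipelining.
\[
\text{Speedup}
  = \frac{\text{Pipeline depth}}{1 + \text{Pipeline stall CPI}}
    \times \frac{\text{Cycle time}_{\text{unpipelined}}}{\text{Cycle time}_{\text{pipelined}}}
  = \frac{5}{1 + 0.5} \times 1 \approx 3.3
\]
```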

3 Outline: Review; Memory hierarchy; Locality; Cache design; Virtual address spaces; Page table layout; TLB design options; Conclusion.

4 Memory Hierarchy Review

5 Since 1980, CPU has outpaced DRAM... [Figure: performance (1/latency) vs. year. CPU performance grew ~60% per year (2X in 1.5 years); DRAM grew ~9% per year (2X in 10 years); the latency gap grew ~50% per year.] Q. How do architects address this gap? A. Put smaller, faster cache memories between the CPU and DRAM; create a memory hierarchy.

6 Levels of the Memory Hierarchy (upper levels are smaller and faster; lower levels are larger and slower). Capacity and access time by level, with the staging/transfer unit and its manager between levels:

Registers: 100s of bytes, <10s ns. Instructions/operands move in 1-8 byte units, managed by the program/compiler.
Cache: KBytes, access in 10s of ns, cost in cents/bit. Blocks of bytes move under the cache controller.
Main memory: MBytes, 200 ns - 500 ns. Pages of 512-4K bytes move under the OS.
Disk: GBytes, ~10 ms (10,000,000 ns). Files of MBytes move under the user/operator.
Tape: almost infinite capacity, sec-min access.

7 Memory Hierarchy: Apple iMac G5 (2004-5). Goal: the illusion of large, fast, cheap memory: let programs address a memory space that scales to the disk size, at a speed that is usually nearly as fast as register access.

Level (managed by) | Size in bytes | Latency (cycles, time)
Reg (compiler) | 1K | 1 cyc, 0.6 ns
L1 Inst (hardware) | 64K | 3 cyc, 1.9 ns
L1 Data (hardware) | 32K | 3 cyc, 1.9 ns
L2 (hardware) | 512K | 11 cyc, 6.9 ns
DRAM (OS) | 256M | 88 cyc, 55 ns
Disk (OS, hardware, application) | 80G | 10^7 cyc, 12 ms

iMac G5 (1.6 GHz clock, 55 ns DRAM) vs. Apple II (1 MHz clock, 400 ns DRAM): CPU 1600X and DRAM 7.3X faster in 27 years, i.e., CPU 2X every 2.5 years, DRAM 2X every 9.3 years.

8 iMac's PowerPC 970 (G5): all caches on-chip. [Die photo: 1/2 KB of registers, L1 (64K instruction), L1 (32K data), 512K L2.] Original = 64-bit single-core.

9 Cache Terminology to Understand:
1. address trace: sequence of memory (block) addresses in the order accessed by running code.
2. average memory access time (AMAT): the average time in cycles to satisfy a memory reference, including any delays caused by cache misses.
3. block: group of successive memory words that fit in the data section of a cache entry.
4. locality: near repetition of program memory accesses, either in address value or in time.
5-8. cache, instruction cache, data cache, unified cache.
9-11. direct mapped, fully associative, n-way set associative.
12-19. set, memory address, tag field, index field, block offset, block address, write-back, write-through.
20-25. write buffer, write stall, valid bit, dirty bit, write allocate, no-write allocate.
26-30. cache hit, cache miss, hit time, miss rate, miss penalty.
31-35. memory stall cycles, least recently used (LRU), random replacement, memory references per instruction, misses per instruction.
36-38. virtual memory, page, page fault!

10 The Principle of Locality. The Principle of Locality: programs access a relatively small portion of the address space at any instant of time. Two different types of locality: Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon (e.g., loops, reuse). Spatial locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon (e.g., straight-line code, array access). For the last 20 years, HW has relied on locality for speed, using fast cache copies of parts of memory to lower the average memory access time (AMAT) of programs. Locality is a property of programs which is exploited in machine design.
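As a small illustration (my example, not from the slides), the two loops below touch the same array but differ in locality: the row-major loop walks consecutive addresses and exploits spatial locality, while the column-major loop strides a full row apart on every access and caches poorly:

```c
#include <stdio.h>

#define N 1024
static double a[N][N];

/* Row-major traversal: consecutive addresses, good spatial locality. */
double sum_rows(void) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];          /* adjacent elements share cache blocks */
    return s;
}

/* Column-major traversal: stride of N*8 bytes, poor spatial locality. */
double sum_cols(void) {
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];          /* each access likely touches a new block */
    return s;
}

int main(void) {
    printf("%f %f\n", sum_rows(), sum_cols());
    return 0;
}
```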

11 Programs with locality cache well... [Figure: memory address (one dot per access) vs. time, with regions labeled spatial locality, temporal locality, and bad locality behavior. From Donald J. Hatfield and Jeanette Gerald, "Program Restructuring for Virtual Memory," IBM Systems Journal 10(3), 1971.]

12 Memory Hierarchy: Terminology. Hit: the data appears in some block in the upper level (example: block X). Hit rate: the fraction of memory accesses found in the upper level. Hit time: time to access the upper level, which consists of the RAM access time + the time to determine hit/miss. Miss: the data must be retrieved from a block in the lower level (block Y). Miss rate = 1 - (hit rate). Miss penalty: time to replace a block in the upper level + time to deliver the block to the processor. Hit time << miss penalty (= 500 instructions on the 21264!). [Figure: the processor exchanges block X with the upper-level memory and block Y with the large lower-level memory.]

13 Cache Measures. Hit rate: the fraction of accesses, near 100%, found in that level; it is so high that we usually talk about the miss rate = 1 - hit rate. Miss rate fallacy: miss rate is as misleading a proxy for average memory access time as MIPS is for CPU performance.

Average memory access time (AMAT) = Hit time + Miss rate x Miss penalty (in ns or clocks)

Miss penalty: time to supply a missed block from the lower level, including any CPU-visible delays to save replaced write-back data to make room in the upper-level cache {all active caches are full}. Replacement time: time to make upper-level room for the new block. Access time: time to reach the lower level = f(latency to lower level). Transfer time: time to transfer the block = f(bandwidth between upper and lower levels).
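A minimal sketch of the AMAT formula in C (the hit time, miss rate, and penalty below are hypothetical numbers, not from the lecture):

```c
#include <stdio.h>

/* AMAT = hit time + miss rate * miss penalty (all in the same units). */
double amat(double hit_time, double miss_rate, double miss_penalty) {
    return hit_time + miss_rate * miss_penalty;
}

int main(void) {
    /* Hypothetical L1: 1-cycle hit, 5% miss rate, 100-cycle miss penalty. */
    printf("AMAT = %.1f cycles\n", amat(1.0, 0.05, 100.0)); /* 6.0 cycles */
    return 0;
}
```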

14 Four Questions for the Memory Hierarchy (relative to both caches and DRAM for virtual memory). Q1: Where can a block be placed in the upper level? (Block placement) Q2: How is a block found if it is in the upper level? (Block identification) Q3: Which block should be replaced on a miss? (Block replacement) Q4: What happens on a write? (Write strategy)

15 Q1: Where can a block be placed in the upper level? Memory block 12 placed in an 8-block cache: three cache organizations are shown: fully associative, direct mapped, and 2-way set associative (S.A.). S.A. mapping = block number modulo (number of sets). Direct mapped: block 12 can go only in cache block (12 mod 8) = 4. 2-way set associative: block 12 can go in either way of set (12 mod 4) = 0, since 8 blocks / 2 ways = 4 sets. Fully associative: block 12 can go in any of the 8 cache blocks.
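The placement rule is one modulo operation; this sketch (mine, not the lecture's code) reproduces the slide's example for all three organizations:

```c
#include <stdio.h>

/* Which set a memory block maps to, given the number of sets.
 * Direct mapped:      num_sets == num_blocks (1 block per set).
 * n-way set assoc.:   num_sets == num_blocks / n.
 * Fully associative:  num_sets == 1 (a block can go anywhere). */
unsigned set_index(unsigned block_number, unsigned num_sets) {
    return block_number % num_sets;
}

int main(void) {
    /* The slide's example: memory block 12 in an 8-block cache. */
    printf("direct mapped: set %u\n", set_index(12, 8)); /* set 4 */
    printf("2-way assoc.:  set %u\n", set_index(12, 4)); /* set 0 */
    printf("fully assoc.:  set %u\n", set_index(12, 1)); /* set 0 */
    return 0;
}
```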

16 Memory Address Fields to Find a Word in Cache. Figure B.3: The three portions of an address in a set-associative or direct-mapped cache. The tag is used to check all the blocks in the set, and the index is used to select the set. The block offset is the address of the desired data within the block. Fully associative caches have no index field. (© Elsevier Inc.)

17 Q2: How do we tell if a block is in the upper-level cache? (IMPORTANT SLIDE!) Bit fields in the memory address used to access the cache: the block address (tag, then index) followed by the block offset. Example: a (1-way) direct-mapped 16 KB cache with 64-byte blocks; capacity = 256 sets x 1 block/set x 64 bytes/block. The 32-bit address splits into an 18-bit tag, an 8-bit index (256 sets/cache), and 6 offset bits (4 bits: 16 words/block; 2 bits: 4 bytes/word). The index selects the cache set, the location of all blocks that could match; only the tag must then be checked for each block there (no need to check index or offset bits). Each entry holds a valid bit, the 18-bit tag, and 512 bits of data; on a hit, a mux selects the addressed 32-bit word. Increasing associativity shrinks the index and expands the tag. (The analogous virtual-memory split is the virtual page number vs. the offset bits of bytes within a page.)
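Extracting the three fields is just shifts and masks; a sketch for the slide's 16 KB direct-mapped example (the address value is arbitrary):

```c
#include <stdint.h>
#include <stdio.h>

/* Field widths for the slide's example: 16 KB direct-mapped cache,
 * 64-byte blocks, 32-bit addresses. */
#define OFFSET_BITS 6                 /* 64 bytes per block */
#define INDEX_BITS  8                 /* 256 sets           */

static uint32_t block_offset(uint32_t addr) {
    return addr & ((1u << OFFSET_BITS) - 1);
}
static uint32_t cache_index(uint32_t addr) {
    return (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
}
static uint32_t cache_tag(uint32_t addr) {
    return addr >> (OFFSET_BITS + INDEX_BITS);   /* remaining 18 bits */
}

int main(void) {
    uint32_t addr = 0x12345678;       /* arbitrary example address */
    printf("tag=0x%x index=%u offset=%u\n",
           cache_tag(addr), cache_index(addr), block_offset(addr));
    return 0;
}
```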

18 Data Cache Organization: Opteron Microprocessor. Figure B.5: The 64 KB cache is two-way set associative with 64-byte blocks. The 9-bit index selects among 512 sets. The four steps of a read hit, shown as circled numbers in order of occurrence, label this organization. Three bits of the block offset join the index to supply the RAM address to select the proper 8 bytes. Thus, the cache holds two groups of 4096 64-bit words, with each group containing half of the 512 sets. Although not exercised in this example, the line from lower-level memory to the cache is used on a miss to load the cache. The size of the address leaving the processor is 40 bits because it is a physical address, not a virtual address. Figure B.24 on page B-47 explains how the Opteron maps from virtual to physical addresses for a cache access. (© Elsevier Inc.)

19 Miss Rates Versus Cache (Data) Capacity. Figure B.9: Total miss rate (top) and distribution of miss rate (bottom) for each size of cache according to the three C's, for the data in Figure B.8. The top diagram shows the actual data cache miss rates, while the bottom diagram shows the percentage in each category. (Space allows the graphs to show one more cache size than fits in Figure B.8.) (© Elsevier Inc.)

20 Optimum Block Sizes in Small Caches. Figure B.10: Miss rate versus block size for five different-sized caches. Note that the miss rate actually goes up if the block size is too large relative to the cache size. Each line represents a cache of a different size. SPEC2000 traces would take too long if block size were included, so these data are based on SPEC92 on a DECstation 5000 [Gee et al. 1993]. (© Elsevier Inc.)

21 Miss Rates vs Cache Size for Multilevel Caches. Figure B.14: Second-level caches smaller than the sum of the two 64 KB first-level caches make little sense, as reflected in the high miss rates. After 256 KB the single cache is within 10% of the global miss rates. The miss rate of a single-level cache versus size is plotted against the local miss rate and global miss rate of a second-level cache using a 32 KB first-level cache. The L2 caches (unified) were two-way set associative with random replacement. Each had split L1 instruction and data caches that were 64 KB two-way set associative with LRU replacement. The block size for both L1 and L2 caches was 64 bytes. (© Elsevier Inc.)

22 Relative Execution Time by Level-2 Cache Size. Figure B.15: The two bars are for different clock-cycle delays for an L2 cache hit. The reference execution time of 1.00 is for an 8192 KB second-level cache with a 1-clock-cycle latency on a second-level hit. (© Elsevier Inc.)

23 Q3: Which block should be replaced after a miss? (After start-up, the cache is nearly always full.) Easy if direct mapped: there is only 1 block per set index. If set associative or fully associative, must pick a block: Random ("Ran"): easy to implement, but not best. LRU (least recently used): best, but hard to implement if more than 8-way; if only 2-way, a single bit per set suffices. Other LRU approximations beat random.

Miss rates for 3 cache sizes and associativities (LRU vs. random):

Data size | 2-way LRU | 2-way Ran | 4-way LRU | 4-way Ran | 8-way LRU | 8-way Ran
16 KB | 5.2% | 5.7% | 4.7% | 5.3% | 4.4% | 5.0%
64 KB | 1.9% | 2.0% | 1.5% | 1.7% | 1.4% | 1.5%
256 KB | 1.15% | 1.17% | 1.13% | 1.13% | 1.12% | 1.12%

Random picks in big caches give about the same low miss rate as LRU.
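A minimal sketch (not from the slides) of why 2-way LRU needs only one bit per set: the bit simply records which way was used last, so the other way is the victim:

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_SETS 256
static bool way1_is_lru[NUM_SETS];   /* true => way 1 is least recent */

/* Call on every access that hits in `way` (0 or 1) of `set`. */
void lru_touch(unsigned set, unsigned way) {
    way1_is_lru[set] = (way == 0);   /* the other way becomes LRU */
}

/* Call on a miss to choose the victim way in `set`. */
unsigned lru_victim(unsigned set) {
    return way1_is_lru[set] ? 1 : 0;
}

int main(void) {
    lru_touch(7, 1);                 /* hit in way 1 of set 7 */
    printf("victim in set 7: way %u\n", lru_victim(7));   /* way 0 */
    return 0;
}
```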

24 Q4: Write policy: what happens on a write (store)?

Policy | Write-Through | Write-Back
What happens | A data word written to the cache block is also written to the next lower-level memory (e.g., an sw to L1$ also goes to L2$) | The new data word is written only to the cache block; the lower level is updated just before a written block leaves the cache, so its true value is not lost
Debugging | Easier | Harder
Can read misses force writes? | No | Yes (with a write buffer, reads are no longer slowed)
Do repeated writes touch the lower level? | Yes, memory is busier | No

Additional option: let writes to an un-cached address allocate a new cache line ("write-allocate"); otherwise just write through.
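A sketch contrasting the two policies on a store (hypothetical structures and a stand-in lower level, not the lecture's code): write-through pushes every store down immediately, while write-back only sets a dirty bit and pays at eviction:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint8_t lower_level[1 << 16];          /* stand-in for L2/memory */

static void lower_level_write(uint32_t addr, uint8_t byte) {
    lower_level[addr & 0xFFFF] = byte;
}

typedef struct {
    bool    valid;
    bool    dirty;                            /* used only by write-back */
    uint8_t data[64];
} CacheLine;

/* Write-through: update the line and the lower level on every store. */
static void store_write_through(CacheLine *l, uint32_t addr, uint8_t b) {
    l->data[addr & 63] = b;
    lower_level_write(addr, b);               /* memory sees every write */
}

/* Write-back: update only the line; mark it dirty so the true value
 * reaches the lower level when the line is evicted. */
static void store_write_back(CacheLine *l, uint32_t addr, uint8_t b) {
    l->data[addr & 63] = b;
    l->dirty = true;
}

static void evict(CacheLine *l, uint32_t block_addr) {
    if (l->valid && l->dirty)                 /* write-back pays here */
        for (uint32_t i = 0; i < 64; i++)
            lower_level_write(block_addr + i, l->data[i]);
    l->valid = l->dirty = false;
}

int main(void) {
    CacheLine line = { .valid = true };
    store_write_through(&line, 0x100, 0xAB);  /* lower level updated now */
    store_write_back(&line, 0x101, 0xCD);     /* lower level stale until... */
    evict(&line, 0x100);                      /* ...the dirty block leaves */
    printf("%02X %02X\n", lower_level[0x100], lower_level[0x101]);
    return 0;
}
```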

25 Write Buffers for Write-Through Caches. [Figure: Processor -> Cache, with a Write Buffer between the cache and lower-level memory.] The write buffer holds addresses and data awaiting write-through to the lower levels. Q. Why a write buffer? A. So the CPU does not stall on writes. Q. Why a buffer, not just one register? A. Multi-write bursts are common before the write-through completes. Q. Are read-after-write (RAW) hazards an issue for the write buffer? A. Yes! Either drain the buffer before the next read, or check the buffer's addresses on a read miss.

26 Five Basic Cache Optimizations. Reducing miss rate: 1. Larger block size (reduces compulsory, "cold", misses). 2. Larger cache size (reduces capacity misses). 3. Higher associativity (reduces conflict misses). (Multiprocessors also have cache coherence misses; together these are the 4 Cs.) Reducing miss penalty: 4. Multilevel caches { total (global) miss rate = the product of the local miss rates over all levels k = 1 to max }. Reducing hit time (minimizing cache latency): 5. Giving reads priority over writes, since the CPU is waiting: a read completes before earlier writes still sitting in the write buffer.
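The multilevel product rule and its effect on AMAT, as a short numeric sketch (the miss rates and latencies below are hypothetical):

```c
#include <stdio.h>

/* Global miss rate of a multilevel cache is the product of the
 * local miss rates of each level. */
int main(void) {
    double l1_local = 0.05;   /* 5% of CPU references miss in L1   */
    double l2_local = 0.20;   /* 20% of L1 misses also miss in L2  */
    double global   = l1_local * l2_local;       /* 1% of references */

    /* Two-level AMAT: hit times and miss penalties in cycles. */
    double amat = 1.0 + l1_local * (10.0 + l2_local * 200.0); /* 3.5 */
    printf("global miss rate = %.3f, AMAT = %.2f cycles\n", global, amat);
    return 0;
}
```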

27 Outline: Review; Memory hierarchy; Locality; Cache design; Virtual address spaces; Page table layout; TLB design options; Conclusion.

28 The Limits of Physical Addressing. Programs used physical addresses of memory locations: the simple addressing method of archaic pre-1978 computers. [Figure: CPU address lines A0-A31 and data lines D0-D31 wired directly to memory.] All programs shared one address space, the physical address space. Machine-language programs had to be aware of the machine organization and load into memory areas not used by the OS. There was no way to prevent a program from accessing any machine resource in memory.

29 Solution Post-1978: Add a Layer of Indirection. [Figure: the CPU issues virtual addresses; an address-translation box maps them to the physical addresses seen by main memory.] All user programs run in a standardized virtual address space starting at zero. This needs fast(!) address-translation hardware, managed by the operating system (OS), to map each virtual address to physical memory. Hardware supports modern OS features: memory protection, address translation, sharing.

30 Three Advantages of Virtual Memory. Translation: a program can be given a consistent view of memory, even though physical memory is scrambled (pages of programs sit in any order in physical RAM). This makes multithreading reasonable (now used a lot!). Only the most important part of each program (the "working set") must be in physical memory at any one time. Contiguous structures (like stacks) use only as much physical memory as necessary, yet can still grow later as needed, without recopying. Protection (most important now): different threads (or processes) are protected from each other. Different pages can be given special behavior (e.g., read only, invisible to user programs, not cached). Kernel and OS data are protected from access by user programs, which is very important for protection from malicious programs. Sharing: the same physical page can be mapped to multiple users (e.g., shared memory holding a C++ compiler for many users at once).

31 Same-Size VM Pages Fit in Any Order in RAM. Figure B.19: The logical program in its contiguous virtual address space is shown on the left. It consists of four pages, A, B, C, and D. Three of the blocks are located in physical main memory and the other is located on the disk. (© Elsevier Inc.)

32 Page Tables Encode Virtual Address Spaces. [Figure: a page table maps each page of a virtual address space to a frame of the physical memory space.] A virtual address space (V.A.S.) is divided into blocks of memory called pages; a machine usually supports pages of a few sizes (e.g., the MIPS R4000). A page table is indexed by the virtual address. Each valid page table entry codes the physical memory frame address currently holding its page. The OS manages the page table of each virtual address space ID.

33 Details of the Page Table. [Figure: the page table pointer in a base register locates the page table in physical memory; the virtual address splits into a V page number, which indexes the page table, and a 12-bit offset (for 4,096 bytes/page); the entry's valid bit V, access rights, and physical page number combine with the unchanged offset to form the physical address.] The page table maps virtual page numbers to physical frames (PTE = page table entry). Virtual memory treats main memory as a cache for disk. The page table is located in physical memory (V is the valid bit); the byte offset is the same in the virtual and physical addresses.
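A minimal single-level translation sketch (hypothetical sizes and structures, not a real MMU), following the slide's split into a virtual page number and a 12-bit offset:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12                       /* 4,096-byte pages */
#define NUM_PAGES 1024                     /* hypothetical small V.A.S. */

typedef struct {
    uint32_t frame;                        /* physical frame number */
    int      valid;                        /* the V bit */
} PTE;

static PTE page_table[NUM_PAGES];

/* Translate a virtual address; returns 0 and sets *pa on success,
 * -1 on a page fault (V = 0), which the OS would handle. */
int translate(uint32_t va, uint32_t *pa) {
    uint32_t vpn    = va >> PAGE_BITS;     /* virtual page number */
    uint32_t offset = va & ((1u << PAGE_BITS) - 1);
    if (vpn >= NUM_PAGES || !page_table[vpn].valid)
        return -1;                         /* page fault */
    *pa = (page_table[vpn].frame << PAGE_BITS) | offset;
    return 0;
}

int main(void) {
    page_table[3] = (PTE){ .frame = 7, .valid = 1 };
    uint32_t pa;
    if (translate((3u << PAGE_BITS) | 0x123, &pa) == 0)
        printf("PA = 0x%x\n", pa);         /* 0x7123 */
    return 0;
}
```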

34 All page tables may not fit in memory! A table of 4 KB pages for a 32-bit address space (max size 4 GB) has 1M entries (4K x 1M = 4G), and each process needs its own address-space tables! Two-level page tables: the 32-bit virtual address splits into a P1 index, a P2 index, and a page offset. The single top-level table is wired into (always stays in) main memory; only a subset of the 1024 second-level tables are in main memory, and the rest are on disk or unallocated.
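A two-level walk in miniature (my sketch with a 10/10/12 split, one plausible layout for the slide's 32-bit example): only the second-level tables that exist need memory.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* 32-bit address space, 4 KB pages: 10-bit P1, 10-bit P2, 12-bit offset. */
#define P1_BITS  10
#define P2_BITS  10
#define OFF_BITS 12

typedef struct {
    uint32_t frame;
    int      valid;
} PTE;

static PTE *top_level[1 << P1_BITS];       /* wired in main memory */

int translate2(uint32_t va, uint32_t *pa) {
    uint32_t p1  = va >> (P2_BITS + OFF_BITS);
    uint32_t p2  = (va >> OFF_BITS) & ((1u << P2_BITS) - 1);
    uint32_t off = va & ((1u << OFF_BITS) - 1);

    PTE *second = top_level[p1];           /* may be absent entirely */
    if (!second || !second[p2].valid)
        return -1;                         /* page fault */
    *pa = (second[p2].frame << OFF_BITS) | off;
    return 0;
}

int main(void) {
    top_level[1] = calloc(1 << P2_BITS, sizeof(PTE));
    top_level[1][2] = (PTE){ .frame = 0x42, .valid = 1 };
    uint32_t pa, va = (1u << 22) | (2u << 12) | 0x34;
    if (translate2(va, &pa) == 0)
        printf("PA = 0x%x\n", pa);         /* 0x42034 */
    return 0;
}
```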

35 Mapping an Opteron-64 Virtual Address to a Physical Address. Figure B.27: The mapping of an Opteron virtual address. The Opteron virtual memory implementation, with four page-table levels, supports an effective physical address size of 40 bits. Each page table has 512 entries, so each level field is 9 bits wide. The AMD64 architecture document allows the virtual address size to grow from the current 48 bits to 64 bits, and the physical address size to grow from the current 40 bits to 52 bits. How many bytes are in each memory page? (48 bits - 4 x 9 index bits = 12 offset bits, so 4 KB.) (© Elsevier Inc.)

36 Opteron TLB: Control Bits + Tag + Physical Page Number. Figure B.24: Operation of the Opteron data TLB during address translation. The four steps of a TLB hit are shown as circled numbers. This TLB has 40 entries. The protection and access fields of each Opteron page table entry include: V (valid), R/W (read-only/read-write), U/S (user access OK/supervisors only), D (dirty: page modified in DRAM), and A (accessed recently, since the A bit was last zeroed); bits not shown include present-in-memory/on-disk-only, page size (4 KB/4 MB), no-execute (data only, not code), write-through/write-back data caching, and no caching. (© Elsevier Inc.)

37 VM and Disk: Page Replacement Policy. [Figure: a clock sweep over the set of all pages in memory, whose page-table entries hold "dirty" and "used" bits, feeding a free list of free pages.] Dirty bit: the page was written. Used bit: set to 1 on any reference. 1. Tail pointer: clears (=0) the used bits in the page table, saying the page may not be recently used. 2. Head pointer: place a page on the free list if its used ("accessed") bit is still clear (=0); schedule freed pages whose dirty bit is set to be written to disk. The architect's role: support setting the dirty and used bits.
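A minimal sketch of this clock ("second chance") sweep (my code, not the lecture's): pages referenced since the last sweep get their used bit cleared instead of being freed.

```c
#include <stdio.h>

#define NUM_FRAMES 8

static int used[NUM_FRAMES];   /* set to 1 by HW on any reference */
static int dirty[NUM_FRAMES];  /* set to 1 by HW on any write     */
static int hand = 0;           /* the clock pointer               */

/* Returns the next frame to free. */
int clock_victim(void) {
    for (;;) {
        if (used[hand]) {
            used[hand] = 0;                /* second chance */
            hand = (hand + 1) % NUM_FRAMES;
        } else {
            int victim = hand;
            hand = (hand + 1) % NUM_FRAMES;
            if (dirty[victim])
                dirty[victim] = 0;         /* schedule disk write-back first */
            return victim;
        }
    }
}

int main(void) {
    used[0] = used[1] = 1;                 /* frames 0 and 1 recently touched */
    printf("victim: frame %d\n", clock_victim());   /* frame 2 */
    return 0;
}
```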

38 TLB Design Concepts

39 MIPS Address Translation: How does it work? [Figure: the CPU issues virtual addresses; the Translation Look-Aside Buffer (TLB) maps them to the physical addresses seen by memory. The TLB also contains protection bits for each virtual address.] A TLB is a small, very fast, fully associative cache of mappings from virtual to physical addresses. Q. What is the per-process table of mappings that it caches? A. Recently used VA-to-PA page table entries. Fast common case: if the virtual address is in the TLB, the process has permission to read/write it.

40 The TLB Caches Page Table Entries. [Figure: a virtual address (page, offset) indexes the page table for the current address-space ID (ASID); the TLB caches (virtual page, physical frame) pairs, and the physical address is (frame, offset). Physical and virtual pages must be the same size; here, 1,024 bytes per page.] MIPS handles TLB misses in software (random replacement); other machines use hardware. V=0 pages reside on disk or have not yet been allocated to this ASID; the OS handles V=0 as a page fault.
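A sketch of a tiny fully associative TLB (hypothetical sizes, and a sequential loop standing in for the parallel compare that hardware does; this is not the MIPS design itself):

```c
#include <stdint.h>
#include <stdio.h>

#define TLB_ENTRIES 8
#define PAGE_BITS   10                 /* 1,024-byte pages, as above */

typedef struct {
    uint32_t vpn, frame;
    int valid;
} TLBEntry;

static TLBEntry tlb[TLB_ENTRIES];

/* Returns 1 on a TLB hit and fills *pa; 0 means a miss, after which
 * the page table is walked (in SW on MIPS) and the TLB is refilled. */
int tlb_lookup(uint32_t va, uint32_t *pa) {
    uint32_t vpn = va >> PAGE_BITS;
    for (int i = 0; i < TLB_ENTRIES; i++)      /* HW compares all entries */
        if (tlb[i].valid && tlb[i].vpn == vpn) {      /* in parallel      */
            *pa = (tlb[i].frame << PAGE_BITS)
                | (va & ((1u << PAGE_BITS) - 1));
            return 1;
        }
    return 0;                                  /* TLB miss */
}

int main(void) {
    tlb[0] = (TLBEntry){ .vpn = 5, .frame = 9, .valid = 1 };
    uint32_t pa;
    if (tlb_lookup((5u << PAGE_BITS) | 0x2A, &pa))
        printf("hit: PA = 0x%x\n", pa);        /* 0x242A */
    return 0;
}
```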

41 A Memory Access From Virtual Address to L2 Cache Hit. Figure B.17: The overall picture of a hypothetical memory hierarchy going from virtual address to L2 cache access. The page size is 16 KB. The TLB is two-way set associative with 256 (2 x 2^7) entries. The L1 cache is direct-mapped 16 KB (2^8 x 2^6 bytes), and the L2 cache is four-way set associative with a total of 4 MB (4 x 2^14 x 2^6 bytes). Both use 64-byte (2^6) blocks. The virtual address is 64 bits and the physical address is 40 bits. (© Elsevier Inc.)

42 Can TLB translation overlap cache indexing? {Maybe, if the cache is small.} [Figure: the virtual page number goes through the TLB to produce the physical page number, which becomes the cache tag, while the page offset directly supplies the cache index and byte select; the cache then compares the physical tag against the selected block's tag for a hit.] Having the cache index within the page offset works, but Q. What is the downside? A. Inflexibility: the size of the cache is limited by the page size.

43 Problems With Overlapped TLB Access. Overlapped access only works as long as the address bits used to index into the cache do not change as the result of VA translation. This usually limits overlapping to small caches, large page sizes, or highly set-associative caches if a large-capacity cache is needed. Example: suppose everything stays the same except that the cache is increased to 8 KB instead of 4 KB; the high bit of the cache index now falls in the virtual page number rather than the page displacement, so it is changed by VA translation yet is needed for the cache lookup. Solutions: go to 8 KB page sizes; go to a 2-way set-associative (2 x 4K) cache; or have SW guarantee VA[13] = PA[13].

44 Can the CPU use virtual addresses for the cache? [Figure: the CPU indexes a virtual cache directly; only misses go through the TLB to main memory.] If the cache index comes from the virtual address, only cache misses use the TLB! Downside: a subtle, fatal problem. What is it? (Aliasing.) A. The synonym problem: if two address spaces share a physical frame, the data may be in the cache twice, and maintaining consistency is a nightmare.

45 Aside for HW1: Head-to-Head ILP Comparison.

Processor | Microarchitecture | Fetch/Issue/Execute | Funct. units | Clock (GHz) | Transistors | Die size | Power
Intel Pentium 4 Extreme (Prescott) | Speculative, dynamically scheduled; deeply pipelined; SMT (2004) | 3/3/4 | 7 int, 1 FP | - | - | 122 mm^2 | - W
AMD Athlon 64 FX-55/-57 (Opteron) | Speculative, dynamically scheduled (2004/05) | 3/3/4 | 6 int, 3 FP | 2.6 / - | - / 114M | 193/115 mm^2 | - / 104 W
IBM Power5 (1 core only) | Speculative, dynamically scheduled; SMT; 2 cores/chip (2004) | 8/4/8 | 6 int, 2 FP | - | - | 300 mm^2 (est.) | 80 W (est.)
Intel Itanium 2 (Madison) | Statically scheduled, VLIW-style (2004) | 6/5/11 | 9 int, 2 FP | - | - | 423 mm^2 | - W

46 Summary #1/3: The Cache Design Space. Several interacting dimensions: cache size, block size, associativity, replacement policy, write-through vs. write-back, write allocation. [Figure: tradeoff curves of goodness vs. cache size, block size, and associativity, balancing competing factors A and B.] The optimal choice is a compromise: it depends on access characteristics (the workload; use as I-cache, D-cache, or TLB) and on technology and cost. Simplicity often wins.

47 Summary #2/3: Caches. The Principle of Locality: programs access a relatively small portion of the address space at any instant of time (temporal locality: locality in time; spatial locality: locality in space). Three major uniprocessor categories of cache misses: Compulsory misses: sad facts of life (example: cold-start misses). Capacity misses: increase cache size. Conflict misses: increase cache size and/or associativity (nightmare scenario: the ping-pong effect!). Coherence misses (in multiprocessors): avoid frequent use of shared variables in parallel programs. Write policy: write-through vs. write-back. Today CPU time is a function of (ops, cache misses) rather than just f(ops): increasing performance affects compilers, data structures, and algorithms.

48 Summary #3/3: TLB, Virtual Memory. Page tables map virtual addresses to physical addresses; TLBs make translation fast, and TLB misses are significant in processor performance. This decade is a funny time, since most systems cannot access all of the second-level cache without TLB misses! The answer in newer processors is two levels of TLB. Caches, TLBs, and virtual memory can all be understood by examining how they deal with four questions: 1) Where can a block be placed? 2) How is a block found? 3) Which block is replaced on a miss? 4) How are writes handled? Today VM allows many processes to share a single memory without having to swap all processes to disk; today VM protection is more important than its memory-hierarchy benefits, but computers are still insecure. Short in-class open-book quiz on Appendices A-C and Chapter 1 near the start of the next (2/28) class. Bring a calculator. Read Chapter 2 next. (Please put your best e-mail address on your quiz.)

49 Unused Slides, Spring 2012

50 1977: DRAM faster than microprocessors. Apple II (1977) latencies: CPU clock 1000 ns; DRAM access 400 ns. [Photos: Steve Jobs and Steve Wozniak.]
