Lecture 14: Memory Hierarchy
CS422, Spring 2018, Biswa@CSE-IITK
The Ideal World

Instruction supply:
- Zero-cycle latency
- Infinite capacity
- Zero cost
- Perfect control flow

Pipeline (instruction execution):
- No pipeline stalls
- Perfect data flow (register/memory dependencies)
- Zero-cycle interconnect (operand communication)
- Enough functional units
- Zero-latency compute

Data supply:
- Zero-cycle latency
- Infinite capacity
- Infinite bandwidth
- Zero cost
World of Memory Hierarchy. But Why?
Semiconductor Memory
- Semiconductor memory became competitive in the early 1970s; Intel was formed to exploit the market for semiconductor memory.
- Early semiconductor memory was Static RAM (SRAM). SRAM cell internals resemble a latch (cross-coupled inverters).
- The first commercial Dynamic RAM (DRAM) was the Intel 1103: 1 Kbit of storage on a single chip, with charge on a capacitor used to hold each value.
- Semiconductor memory quickly replaced core memory in the 1970s.
One-Transistor DRAM
[Figure: 1-T DRAM cell. The word line drives the access transistor, which connects the storage capacitor (FET gate, trench, or stacked) to the bit line; the capacitor's other plate sits at V_REF.]
DRAM Architecture
[Figure: a 2^N x 2^M cell array. N row-address bits feed a row address decoder driving 2^N word lines (Row 1, Row 2, ...); M column-address bits feed a column decoder and sense amplifiers on the bit lines (Col. 1, Col. 2, ...); each memory cell stores one bit, and data D is read out through the column logic.]
- Bits are stored in 2-dimensional arrays on the chip.
- Modern chips have around 4-8 logical banks per chip; each logical bank is physically implemented as many smaller arrays.
DRAM
- Dynamic random access memory
- Capacitor charge state indicates the stored value: whether the capacitor is charged or discharged indicates a 1 or a 0
- 1 capacitor + 1 access transistor per cell (the row enable signal gates the access transistor onto the bitline)
- The capacitor leaks, so a DRAM cell loses charge over time and needs to be refreshed
SRAM
- Static random access memory
- Two cross-coupled inverters store a single bit; the feedback path enables the stored value to persist in the cell
- 4 transistors for storage + 2 transistors for access (row select gates the cell onto bitline and _bitline)
DRAM vs. SRAM

DRAM:
- Slower access (capacitor)
- Higher density (1T-1C cell)
- Lower cost
- Requires refresh (power, performance, circuitry)

SRAM:
- Faster access (no capacitor)
- Lower density (6T cell)
- Higher cost
- No need for refresh
The Problem?

Bigger is slower:
- SRAM, 512 Bytes, sub-nanosecond
- SRAM, KBytes-MBytes, ~nanoseconds
- DRAM, Gigabytes, ~50 nanoseconds
- Hard disk, Terabytes, ~10 milliseconds

Faster is more expensive (dollars and chip area):
- SRAM, < $10 per Megabyte
- DRAM, < $1 per Megabyte
- Hard disk, < $1 per Gigabyte
(These sample values scale with time.)

Other technologies have their place as well: Flash memory, PC-RAM, MRAM, RRAM (not mature yet).
Why Memory Hierarchy?
We want memory that is both fast and large, but we cannot achieve both with a single level of memory.
Idea: have multiple levels of storage (progressively bigger and slower as the levels get farther from the processor) and ensure that most of the data the processor needs is kept in the fast(er) level(s).
The Memory Wall Problem
[Figure: performance vs. time, 1980-2000, log scale. Processor (µproc) performance grows ~60%/year while DRAM performance grows ~7%/year, so the processor-memory performance gap grows ~50%/year.]
A four-issue 4 GHz superscalar accessing 100 ns DRAM could execute 1600 instructions in the time taken by one memory access!
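A minimal worked version of that calculation, assuming the peak issue rate could be sustained the whole time:

```latex
\underbrace{4\ \tfrac{\text{instructions}}{\text{cycle}}}_{\text{issue width}}
\times
\underbrace{4\times 10^{9}\ \tfrac{\text{cycles}}{\text{s}}}_{\text{clock rate}}
\times
\underbrace{100\times 10^{-9}\ \text{s}}_{\text{DRAM latency}}
= 1600\ \text{instructions}
```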
Size Affects Latency
[Figure: a CPU next to a small memory vs. a CPU next to a big memory.]
- Signals have farther to travel
- Fan-out to more locations
- Motivates 3D stacking
Memory Hierarchy
CPU <-> (A) small, fast memory (RF, SRAM) <-> (B) big, slow memory (DRAM)
The small, fast memory holds frequently used data.
- Capacity: register << SRAM << DRAM
- Latency: register << SRAM << DRAM
- Bandwidth: on-chip >> off-chip
On a data access:
- if the data is in the fast memory: low-latency access (SRAM)
- if the data is not in the fast memory: high-latency access (DRAM)
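The standard way to quantify this trade-off (not on the slide, but the usual metric) is average memory access time; the hierarchy pays off whenever the fast level hits most of the time:

```latex
\text{AMAT} = t_{\text{hit}} + \text{miss rate} \times t_{\text{miss penalty}}
```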
Access Patterns
[Figure: memory address vs. time, one dot per access. From Donald J. Hatfield, Jeanette Gerald: Program Restructuring for Virtual Memory. IBM Systems Journal 10(3): 168-192 (1971).]
Examples
[Figure: address vs. time for a program. Instruction fetches trace out n loop iterations; stack accesses show a subroutine call, argument access, and subroutine return; data accesses show scalar accesses.]
Locality of Reference
- Temporal locality: if a location is referenced, it is likely to be referenced again in the near future.
- Spatial locality: if a location is referenced, it is likely that locations near it will be referenced in the near future.
Both show up in even the simplest loops; see the sketch below.
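A minimal C sketch of the two kinds of locality (the array name and size are made up for illustration):

```c
#include <stddef.h>

#define N 1024

/* Summing an array exhibits both kinds of locality:
 * - spatial: a[i] and a[i+1] are adjacent, so they usually share a
 *   cache block fetched by a single miss;
 * - temporal: 'sum' and the loop counter are touched on every
 *   iteration, so they stay resident in the fastest level. */
long sum_array(const int a[N]) {
    long sum = 0;                  /* reused every iteration: temporal locality */
    for (size_t i = 0; i < N; i++)
        sum += a[i];               /* sequential accesses: spatial locality */
    return sum;
}
```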
Locality, Again
[Figure: the same address-vs-time plot, one dot per access. Horizontal bands of dots mark temporal locality (the same addresses re-referenced over time); tight vertical clusters of nearby addresses mark spatial locality.]
Inside a Cache
[Figure: the processor sends an address to the CACHE, which sits in front of main memory and returns data. The cache holds copies of main-memory locations (e.g., copies of locations 100 and 101). Each cache line stores a tag, the line address (e.g., 100 or 304), alongside a data block made up of several data bytes (e.g., 6848, 416).]
Intel i7
- Private L1 and L2 caches per core; each L2 is 256 KB with a 10-cycle latency
- 8 MB shared L3 with ~40-cycle latency
Cache Events
Look at the processor address and search the cache tags for a match. Then either:
- Found in cache, a.k.a. HIT: return a copy of the data from the cache.
- Not in cache, a.k.a. MISS: read the block of data from main memory, wait, then return the data to the processor and update the cache.
Q: Which line do we replace?
Placement Policy
[Figure: a memory of blocks numbered 0-31 above a cache of 8 blocks, shown three ways: fully associative, 2-way set associative (set numbers 0-3), and direct mapped (blocks 0-7).]
Memory block 12 can be placed:
- Fully associative: anywhere
- (2-way) set associative: anywhere in set 0 (12 mod 4)
- Direct mapped: only into block 4 (12 mod 8)
A sketch of these mappings in code follows below.
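A minimal C sketch of the three placement computations for this 8-block cache (constants chosen to match the slide's example):

```c
#include <stdio.h>

#define NUM_BLOCKS 8   /* cache size in blocks, as on the slide */
#define NUM_SETS   4   /* 2-way set associative: 8 blocks / 2 ways */

int main(void) {
    unsigned block = 12;  /* memory block number from the slide */
    printf("direct mapped   -> cache block %u\n", block % NUM_BLOCKS); /* 12 mod 8 = 4 */
    printf("2-way set assoc -> set %u\n",         block % NUM_SETS);   /* 12 mod 4 = 0 */
    printf("fully assoc     -> any of the %d blocks\n", NUM_BLOCKS);
    return 0;
}
```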
Direct-Mapped Cache
[Figure: the address splits into a tag (t bits), an index (k bits), and a block offset (b bits). The index selects one of 2^k lines, each holding a valid bit, a tag, and a data block; the stored tag is compared with the address tag, and on a match (HIT) the offset selects the data word or byte.]
In reality, the tag store is placed separately. A lookup sketch in code follows below.
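A minimal C sketch of the direct-mapped lookup, with illustrative parameters (64-byte blocks, 128 lines) and made-up names; real hardware does the same field extraction and tag compare in parallel logic:

```c
#include <stdbool.h>
#include <stdint.h>

#define B 6                    /* block offset bits: 64-byte blocks */
#define K 7                    /* index bits: 2^7 = 128 lines */

typedef struct {
    bool     valid;
    uint64_t tag;
    uint8_t  data[1 << B];     /* one data block */
} line_t;

static line_t cache[1 << K];

/* Returns true on a HIT and copies out the requested byte. */
bool lookup(uint64_t addr, uint8_t *out) {
    uint64_t offset = addr & ((1u << B) - 1);         /* low b bits */
    uint64_t index  = (addr >> B) & ((1u << K) - 1);  /* next k bits */
    uint64_t tag    = addr >> (B + K);                /* remaining t bits */

    line_t *l = &cache[index];
    if (l->valid && l->tag == tag) {                  /* tag match: HIT */
        *out = l->data[offset];
        return true;
    }
    return false;                                     /* MISS: fetch from memory */
}
```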
High Bits or Low Bits?
[Figure: the same direct-mapped lookup, but with the index taken from the high-order address bits, i.e., the address split as index | tag | block offset, instead of tag | index | offset.]
With low-bit indexing, consecutive blocks map to different lines, which works well with spatial locality; with high-bit indexing, a large contiguous region maps to the same line and keeps evicting itself. See the sketch below.
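A minimal sketch of the two index extractions, reusing the illustrative b and k parameters from the previous sketch and assuming a 64-bit address:

```c
#include <stdint.h>

#define B 6   /* block offset bits (64-byte blocks) */
#define K 7   /* index bits (128 lines) */

/* Low-bit indexing (tag | index | offset): the index comes from the
 * bits just above the block offset, so blocks i and i+1 land in
 * different lines -- friendly to spatial locality. */
uint64_t index_low(uint64_t addr)  { return (addr >> B) & ((1u << K) - 1); }

/* High-bit indexing (index | tag | offset): the index comes from the
 * top of the address, so any large contiguous region sharing its top
 * k bits collides in a single line. */
uint64_t index_high(uint64_t addr) { return addr >> (64 - K); }
```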
Set-Associative Cache
[Figure: the address again splits into tag (t bits), index (k bits), and block offset (b bits). The index now selects one set containing two ways, each with its own valid bit, tag, and data block; both stored tags are compared with the address tag in parallel, and a match in either way is a HIT that steers out the data word or byte.]
A lookup sketch in code follows below.
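Extending the direct-mapped sketch above to two ways per set (again, names and sizes are illustrative assumptions):

```c
#include <stdbool.h>
#include <stdint.h>

#define B    6               /* block offset bits */
#define K    6               /* index bits: 2^6 = 64 sets */
#define WAYS 2               /* 2-way set associative */

typedef struct {
    bool     valid;
    uint64_t tag;
    uint8_t  data[1 << B];
} line_t;

static line_t cache[1 << K][WAYS];

bool lookup(uint64_t addr, uint8_t *out) {
    uint64_t offset = addr & ((1u << B) - 1);
    uint64_t index  = (addr >> B) & ((1u << K) - 1);
    uint64_t tag    = addr >> (B + K);

    /* Hardware compares all ways in parallel; software just loops. */
    for (int w = 0; w < WAYS; w++) {
        line_t *l = &cache[index][w];
        if (l->valid && l->tag == tag) {
            *out = l->data[offset];
            return true;               /* HIT in way w */
        }
    }
    return false;                      /* MISS */
}
```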
Fully-Associative Cache
[Figure: the address has only a tag (t bits) and a block offset (b bits); there is no index. The address tag is compared against the stored tag of every line in parallel; any match is a HIT and selects that line's data word or byte.]
What's in the Tag Store?
- Valid bit
- Tag
- Replacement policy bits
- Dirty bit? Depends on the write policy: write-back vs. write-through caches
A sketch of one tag-store entry follows below.
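A minimal C sketch of one tag-store entry for a write-back cache with an LRU-style replacement policy (the field layout is an illustrative assumption; real hardware packs these into a handful of bits, and the data block lives in a separate data store):

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t tag;    /* address tag compared on every lookup */
    bool     valid;  /* entry holds a live block */
    bool     dirty;  /* write-back only: block modified since it was filled */
    uint8_t  lru;    /* replacement-policy state (e.g., LRU rank within the set) */
} tag_entry_t;
```

A write-through cache could drop the dirty bit entirely, since memory is always kept up to date.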