William Stallings, Computer Organization and Architecture, 10th Edition. © 2016 Pearson Education, Inc., Hoboken, NJ. All rights reserved.

Chapter 4: Cache Memory

Table 4.1 Key Characteristics of Computer Memory Systems

Location: Internal (e.g., processor registers, cache, main memory); External (e.g., optical disks, magnetic disks, tapes)
Capacity: Number of words; number of bytes
Unit of Transfer: Word; block
Access Method: Sequential; direct; random; associative
Performance: Access time; cycle time; transfer rate
Physical Type: Semiconductor; magnetic; optical; magneto-optical
Physical Characteristics: Volatile/nonvolatile; erasable/nonerasable
Organization: Memory modules

Characteristics of Memory Systems

Location: Refers to whether memory is internal or external to the computer. Internal memory is often equated with main memory, but the processor also requires its own local memory in the form of registers, and cache is another form of internal memory. External memory consists of peripheral storage devices that are accessible to the processor via I/O controllers.

Capacity: Memory capacity is typically expressed in terms of bytes.

Unit of transfer: For internal memory, the unit of transfer is equal to the number of electrical lines into and out of the memory module.

Method of Accessing Units of Data

Sequential access: Memory is organized into units of data called records. Access must be made in a specific linear sequence, so access time is variable.

Direct access: Involves a shared read-write mechanism, but individual blocks or records have a unique address based on physical location. Access time is variable.

Random access: Each addressable location in memory has a unique, physically wired-in addressing mechanism. The time to access a given location is independent of the sequence of prior accesses and is constant; any location can be selected at random and directly addressed and accessed. Main memory and some cache systems are random access.

Associative: A word is retrieved based on a portion of its contents rather than its address. Each location has its own addressing mechanism, and retrieval time is constant, independent of location or prior access patterns. Cache memories may employ associative access.

Capacity and Performance

Capacity and performance are the two most important characteristics of memory. Three performance parameters are used:

Access time (latency): For random-access memory, the time it takes to perform a read or write operation. For non-random-access memory, the time it takes to position the read-write mechanism at the desired location.

Memory cycle time: Access time plus any additional time required before a second access can commence. Additional time may be required for transients to die out on signal lines or to regenerate data if they are read destructively. Cycle time is concerned with the system bus, not the processor.

Transfer rate: The rate at which data can be transferred into or out of a memory unit. For random-access memory it is equal to 1/(cycle time).

Memory

The most common physical forms are semiconductor memory, magnetic surface memory, optical, and magneto-optical.

Several physical characteristics of data storage are important:

Volatile memory: Information decays naturally or is lost when electrical power is switched off.
Nonvolatile memory: Once recorded, information remains without deterioration until deliberately changed; no electrical power is needed to retain information. Magnetic-surface memories are nonvolatile; semiconductor memory may be either volatile or nonvolatile.
Nonerasable memory: Cannot be altered, except by destroying the storage unit. Semiconductor memory of this type is known as read-only memory (ROM).

For random-access memory, organization is a key design issue. Organization refers to the physical arrangement of bits to form words.

Memory Hierarchy

Design constraints on a computer's memory can be summed up by three questions: how much, how fast, how expensive? There is a trade-off among capacity, access time, and cost:

- Faster access time means greater cost per bit
- Greater capacity means smaller cost per bit
- Greater capacity means slower access time

The way out of the memory dilemma is not to rely on a single memory component or technology, but to employ a memory hierarchy.

Figure 4.1 The Memory Hierarchy: inboard memory (registers, cache, main memory); outboard storage (magnetic disk, CD-ROM, CD-RW, DVD-RW, DVD-RAM, Blu-Ray); off-line storage (magnetic tape).

Figure 4.2 Performance of a Simple Two-Level Memory: average access time (ranging from T1 down toward T1, and up to T1 + T2 at low hit ratios) plotted against the fraction of accesses involving only level 1 (the hit ratio), from 0 to 1.
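The curve in Figure 4.2 follows directly from the two-level average-access-time formula, T_avg = H*T1 + (1 - H)*(T1 + T2), where H is the level-1 hit ratio. A minimal sketch that tabulates it; the timing values are illustrative assumptions, not figures from the text:

```c
#include <stdio.h>

/* Average access time for a two-level memory:
 * T_avg = H*T1 + (1 - H)*(T1 + T2),
 * where H is the level-1 hit ratio, T1 the level-1 access
 * time, and T2 the level-2 access time. */
double avg_access_time(double hit_ratio, double t1, double t2)
{
    return hit_ratio * t1 + (1.0 - hit_ratio) * (t1 + t2);
}

int main(void)
{
    double t1 = 1.0, t2 = 100.0;   /* assumed times, in ns */
    for (double h = 0.0; h <= 1.0; h += 0.25)
        printf("H = %.2f  T_avg = %6.2f ns\n", h, avg_access_time(h, t1, t2));
    return 0;
}
```

As the hit ratio approaches 1, the average access time approaches T1, which is exactly the shape of the curve in the figure.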

Memory

The use of three levels exploits the fact that semiconductor memory comes in a variety of types, which differ in speed and cost. Data are stored more permanently on external mass storage devices. External, nonvolatile memory is also referred to as secondary memory or auxiliary memory.

Disk cache: A portion of main memory can be used as a buffer to temporarily hold data that is to be read out to disk. A few large transfers of data can be used instead of many small transfers, and data can be retrieved rapidly from the software cache rather than slowly from the disk.

Figure 4.3 Cache and Main Memory: (a) a single cache, with word transfers between the CPU and the fast cache, and block transfers between the cache and slow main memory; (b) a three-level cache organization (L1 fastest, L2 fast, L3 less fast, main memory slow).

Figure 4.4 Cache/Main-Memory Structure: (a) a cache of C lines (numbered 0 through C-1), each holding a tag and a block of K words; (b) a main memory of 2^n addressable words viewed as M blocks of K words each.

Figure 4.5 Cache Read Operation:
1. START: receive address RA from the CPU.
2. Is the block containing RA in the cache? If yes, fetch the RA word, deliver it to the CPU, and the operation is done.
3. If no, access main memory for the block containing RA, allocate a cache line for the main memory block, load the block into the cache line, and deliver the RA word to the CPU.
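The flow of Figure 4.5 can be sketched in code. This is a minimal illustration, not the text's implementation; it assumes, for concreteness, the direct-mapped organization introduced later in the chapter, and the structure and names are mine:

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define LINES      16384                 /* 16K lines, as in the running example */
#define BLOCK_SIZE 4                     /* 4-byte blocks */

struct line { bool valid; uint32_t tag; uint8_t data[BLOCK_SIZE]; };

static struct line cache[LINES];
static uint8_t     memory[1 << 24];      /* 16-MByte main memory */

/* Figure 4.5's read flow: on a hit, deliver the word directly;
 * on a miss, load the whole block containing RA into its cache
 * line, then deliver the word. */
uint8_t cache_read(uint32_t ra)
{
    uint32_t word = ra % BLOCK_SIZE;
    uint32_t lno  = (ra / BLOCK_SIZE) % LINES;
    uint32_t tag  = (ra / BLOCK_SIZE) / LINES;
    struct line *l = &cache[lno];

    if (!(l->valid && l->tag == tag)) {              /* block containing RA in cache? */
        memcpy(l->data, &memory[ra - word], BLOCK_SIZE); /* access main memory      */
        l->valid = true;                             /* allocate line, load block    */
        l->tag   = tag;
    }
    return l->data[word];                            /* deliver RA word to CPU */
}
```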

Figure 4.6 Typical Cache Organization: the cache connects to the processor over control, address, and data lines, and to the system bus through an address buffer and a data buffer.

Table 4.2 Elements of Cache Design

Cache Addresses: Logical; physical
Cache Size
Mapping Function: Direct; associative; set associative
Replacement Algorithm: Least recently used (LRU); first in first out (FIFO); least frequently used (LFU); random
Write Policy: Write through; write back
Line Size
Number of Caches: Single or two level; unified or split

Cache Addresses: Virtual Memory

Virtual memory is a facility that allows programs to address memory from a logical point of view, without regard to the amount of main memory physically available. When virtual memory is used, the address fields of machine instructions contain virtual addresses. For reads from and writes to main memory, a hardware memory management unit (MMU) translates each virtual address into a physical address in main memory.

Figure 4.7 Logical and Physical Caches: (a) in a logical cache, the processor's logical address goes to the cache directly, and the MMU produces the physical address only on the path to main memory; (b) in a physical cache, the MMU sits between the processor and the cache, so the cache is accessed with physical addresses.

Table 4.3 Cache Sizes of Some Processors

Processor            | Type                          | Year | L1 Cache (a)      | L2 Cache       | L3 Cache
IBM 360/85           | Mainframe                     | 1968 | 16 to 32 kB       | -              | -
PDP-11/70            | Minicomputer                  | 1975 | 1 kB              | -              | -
VAX 11/780           | Minicomputer                  | 1978 | 16 kB             | -              | -
IBM 3033             | Mainframe                     | 1978 | 64 kB             | -              | -
IBM 3090             | Mainframe                     | 1985 | 128 to 256 kB     | -              | -
Intel 80486          | PC                            | 1989 | 8 kB              | -              | -
Pentium              | PC                            | 1993 | 8 kB/8 kB         | 256 to 512 kB  | -
PowerPC 601          | PC                            | 1993 | 32 kB             | -              | -
PowerPC 620          | PC                            | 1996 | 32 kB/32 kB       | -              | -
PowerPC G4           | PC/server                     | 1999 | 32 kB/32 kB       | 256 kB to 1 MB | 2 MB
IBM S/390 G6         | Mainframe                     | 1999 | 256 kB            | 8 MB           | -
Pentium 4            | PC/server                     | 2000 | 8 kB/8 kB         | 256 kB         | -
IBM SP               | High-end server/supercomputer | 2000 | 64 kB/32 kB       | 8 MB           | -
CRAY MTA (b)         | Supercomputer                 | 2000 | 8 kB              | 2 MB           | -
Itanium              | PC/server                     | 2001 | 16 kB/16 kB       | 96 kB          | 4 MB
Itanium 2            | PC/server                     | 2002 | 32 kB             | 256 kB         | 6 MB
IBM POWER5           | High-end server               | 2003 | 64 kB             | 1.9 MB         | 36 MB
CRAY XD-1            | Supercomputer                 | 2004 | 64 kB/64 kB       | 1 MB           | -
IBM POWER6           | PC/server                     | 2007 | 64 kB/64 kB       | 4 MB           | 32 MB
IBM z10              | Mainframe                     | 2008 | 64 kB/128 kB      | 3 MB           | 24-48 MB
Intel Core i7 EE 990 | Workstation/server            | 2011 | 6 x 32 kB/32 kB   | 1.5 MB         | 12 MB
IBM zEnterprise 196  | Mainframe/server              | 2011 | 24 x 64 kB/128 kB | 24 x 1.5 MB    | 24 MB L3, 192 MB L4

(a) Two values separated by a slash refer to instruction and data caches.
(b) Both caches are instruction only; no data caches.
(Table can be found on page 134 in the textbook.)

Mapping Function

Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. Three techniques can be used:

Direct: The simplest technique. Maps each block of main memory into only one possible cache line.

Associative: Permits each main memory block to be loaded into any line of the cache. The cache control logic interprets a memory address simply as a Tag and a Word field. To determine whether a block is in the cache, the cache control logic must simultaneously examine every line's Tag for a match.

Set associative: A compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages.

Figure 4.8 Mapping from Main Memory to Cache: Direct and Associative: (a) in direct mapping, the first m blocks of main memory (equal to the size of the cache) map one-to-one onto the m cache lines L0 through Lm-1; (b) in associative mapping, any one block of main memory can map into any cache line (b = length of block in bits, t = length of tag in bits).

Mapping Function

The following example is used for all three mapping techniques:
- Cache of 64 kBytes
- Cache block of 4 bytes, i.e., the cache has 16K (2^14) lines of 4 bytes
- 16 MBytes of main memory
- 24-bit address (2^24 = 16M)

Direct Mapping Address Structure

Tag (s-r): 8 bits | Line or slot (r): 14 bits | Word (w): 2 bits

- 24-bit address
- 2-bit word identifier (4-byte block)
- 22-bit block identifier
- 8-bit tag (= 22 - 14)
- 14-bit slot or line

No two blocks that map into the same line have the same Tag field. Check the contents of the cache by finding the line and checking the Tag.
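A quick sketch of how these fields would be carved out of a 24-bit address. The field widths come from the example above; the function names and test address are mine (the address is one that appears in Figure 4.10):

```c
#include <stdint.h>
#include <stdio.h>

/* Field widths from the running example: 8-bit tag,
 * 14-bit line number, 2-bit word offset in a 24-bit address. */
#define WORD_BITS 2
#define LINE_BITS 14

static uint32_t word_of(uint32_t addr) { return addr & ((1u << WORD_BITS) - 1); }
static uint32_t line_of(uint32_t addr) { return (addr >> WORD_BITS) & ((1u << LINE_BITS) - 1); }
static uint32_t tag_of(uint32_t addr)  { return addr >> (WORD_BITS + LINE_BITS); }

int main(void)
{
    uint32_t addr = 0x16339C;  /* 000101100011001110011100 in binary */
    printf("tag=%02X line=%04X word=%X\n", tag_of(addr), line_of(addr), word_of(addr));
    /* prints tag=16 line=0CE7 word=0, matching Figure 4.10 */
    return 0;
}
```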

Hit and Miss

Figure 4.9 Direct-Mapping Cache Organization: the (s+w)-bit memory address is split into Tag (s-r), Line (r), and Word (w) fields; the line field selects a cache line, and the stored tag is compared with the address tag. A match is a hit, and the selected word is delivered; a mismatch is a miss, and block Bj (words W4j through W(4j+3)) is fetched from main memory.

Direct Mapping Example


Figure 4.10 Direct Mapping Example: a 16K-line cache (8-bit tags, 32-bit lines) holding blocks from a 16-MByte main memory. For instance, line 0CE7 holds the word FEDCBA98 with tag 16, loaded from binary address 000101100011001110011100. Main memory address = Tag (8 bits) | Line (14 bits) | Word (2 bits). Note: memory address values are in binary representation; other values are in hexadecimal.

Direct Mapping Summary

- Address length = (s + w) bits
- Number of addressable units = 2^(s+w) words or bytes
- Block size = line size = 2^w words or bytes
- Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
- Number of lines in cache = m = 2^r
- Size of tag = (s - r) bits
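These relationships are easy to check mechanically. A small sketch under the running example's parameters (s = 22, w = 2, r = 14):

```c
#include <stdio.h>

int main(void)
{
    int s = 22, w = 2, r = 14;  /* running example: 16-MByte memory, 4-byte blocks, 16K lines */

    printf("address length    = %d bits\n", s + w);   /* 24 */
    printf("addressable units = 2^%d bytes\n", s + w);/* 16M */
    printf("block size        = %d bytes\n", 1 << w); /* 4  */
    printf("blocks in memory  = 2^%d\n", s);          /* 4M */
    printf("lines in cache    = %d\n", 1 << r);       /* 16384 */
    printf("tag size          = %d bits\n", s - r);   /* 8  */
    return 0;
}
```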

Direct Mapping Pros and Cons

- Simple
- Inexpensive
- Fixed location for a given block: if a program repeatedly accesses two blocks that map to the same line, cache misses are very high

Victim Cache

- Originally proposed as an approach to reduce the conflict misses of direct-mapped caches without affecting their fast access time
- A fully associative cache
- Typical size is 4 to 16 cache lines
- Resides between the direct-mapped L1 cache and the next level of memory

Figure 4.11 Fully Associative Cache Organization: the (s+w)-bit memory address is split into Tag (s) and Word (w) fields only; the address tag is compared simultaneously against the tags of all cache lines. A match on line Lj is a hit; no match is a miss, and block Bj (words W4j through W(4j+3)) is fetched from main memory.

Figure 4.12 Associative Mapping Example: a 16K-line cache with 22-bit tags and 32-bit lines, backed by a 16-MByte main memory. For instance, the word FEDCBA98 from binary address 000101100011001110011100 is stored with tag 058CE7 and may occupy any line. Main memory address = Tag (22 bits) | Word (2 bits). Note: memory address values are in binary representation; other values are in hexadecimal.

Associative Mapping Summary

- Address length = (s + w) bits
- Number of addressable units = 2^(s+w) words or bytes
- Block size = line size = 2^w words or bytes
- Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
- Number of lines in cache = undetermined (not implied by the address format)
- Size of tag = s bits
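In hardware, the tag comparison of Figure 4.11 happens in parallel across every line; a software sketch can only approximate that with a loop. A minimal illustration, with structure and names of my own choosing:

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_LINES 16384          /* 16K lines, as in the running example */

struct line {
    bool     valid;
    uint32_t tag;                /* 22-bit tag */
    uint8_t  data[4];            /* 4-byte block */
};

static struct line cache[NUM_LINES];

/* Fully associative lookup: the tag is the entire block address,
 * and every line must be checked (hardware does this in parallel). */
bool assoc_lookup(uint32_t addr, uint8_t *byte)
{
    uint32_t tag  = addr >> 2;   /* drop the 2-bit word field */
    uint32_t word = addr & 0x3;

    for (int i = 0; i < NUM_LINES; i++) {
        if (cache[i].valid && cache[i].tag == tag) {
            *byte = cache[i].data[word];
            return true;         /* hit */
        }
    }
    return false;                /* miss */
}
```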

Set Associative Mapping

- A compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
- The cache consists of a number of sets, each containing a number of lines
- A given block maps to any line in a given set
- e.g., with 2 lines per set (2-way set associative mapping), a given block can be in one of 2 lines in only one set

Figure 4.13 Mapping from Main Memory to Cache: k-Way Set Associative: (a) viewed as v associative-mapped caches (sets 0 through v-1, each of k lines), where the first v blocks of main memory (equal to the number of sets) map one block per set; (b) viewed as k direct-mapped caches (ways 1 through k, each of v lines).

Figure 4.14 k-Way Set Associative Cache Organization: the (s+w)-bit memory address is split into Tag (s-d), Set (d), and Word (w) fields; the set field selects one set, and the address tag is compared against the k tags in that set. A match is a hit; no match is a miss, and block Bj is fetched from main memory.

Set Associative Mapping Summary

- Address length = (s + w) bits
- Number of addressable units = 2^(s+w) words or bytes
- Block size = line size = 2^w words or bytes
- Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
- Number of lines in set = k
- Number of sets = v = 2^d
- Number of lines in cache = m = kv = k * 2^d
- Size of cache = k * 2^(d+w) words or bytes
- Size of tag = (s - d) bits
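As a quick check of these formulas against the two-way example of Figure 4.15 (s = 22, w = 2, d = 13, k = 2), here is a sketch of the field extraction; the function names are mine:

```c
#include <stdint.h>
#include <stdio.h>

/* Two-way set associative version of the running example:
 * 24-bit address = 9-bit tag | 13-bit set | 2-bit word. */
#define WORD_BITS 2
#define SET_BITS  13

static uint32_t word_of(uint32_t a) { return a & ((1u << WORD_BITS) - 1); }
static uint32_t set_of(uint32_t a)  { return (a >> WORD_BITS) & ((1u << SET_BITS) - 1); }
static uint32_t tag_of(uint32_t a)  { return a >> (WORD_BITS + SET_BITS); }

int main(void)
{
    uint32_t addr = 0x16339C;  /* same address as in the direct-mapping example */
    printf("tag=%03X set=%04X word=%X\n", tag_of(addr), set_of(addr), word_of(addr));
    /* prints tag=02C set=0CE7 word=0, matching Figure 4.15 */
    return 0;
}
```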

Figure 4.15 Two-Way Set Associative Mapping Example: a 16K-line cache organized as 8K sets of two lines, each line holding a 9-bit tag and 32 bits of data, backed by a 16-MByte main memory. For instance, set 0000 holds both 13579246 (tag 000) and 77777777 (tag 02C), while set 0CE7 holds FEDCBA98 (tag 02C). Main memory address = Tag (9 bits) | Set (13 bits) | Word (2 bits). Note: memory address values are in binary representation; other values are in hexadecimal.

Figure 4.16 Varying Associativity over Cache Size: hit ratio (0.0 to 1.0) plotted against cache size (1k to 1M bytes) for direct, 2-way, 4-way, 8-way, and 16-way organizations.

Replacement Algorithms

- Once the cache has been filled, when a new block is brought into the cache, one of the existing blocks must be replaced
- For direct mapping there is only one possible line for any particular block, so no choice is possible
- For the associative and set-associative techniques a replacement algorithm is needed
- To achieve high speed, the algorithm must be implemented in hardware

The most common replacement algorithms are:

Least recently used (LRU): The most effective. Replace the block in the set that has been in the cache longest with no reference to it. Because of its simplicity of implementation, LRU is the most popular replacement algorithm.

First-in-first-out (FIFO): Replace the block in the set that has been in the cache longest. Easily implemented as a round-robin or circular buffer technique.

Least frequently used (LFU): Replace the block in the set that has experienced the fewest references. Could be implemented by associating a counter with each line.
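For a two-way set associative cache, LRU is especially cheap to implement: a single USE bit per set suffices. A minimal sketch of that scheme; the structure and names are mine, not the text's:

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_SETS 8192            /* 8K sets of 2 lines, as in the running example */

struct way { bool valid; uint32_t tag; };

static struct way cache[NUM_SETS][2];
static uint8_t    lru[NUM_SETS]; /* index of the least recently used way (0 or 1) */

/* After a reference to `way`, the *other* way becomes least recently used. */
static void touch(uint32_t set, int way) { lru[set] = (uint8_t)(1 - way); }

/* On a miss, victimize the LRU way and mark it most recently used. */
static void replace(uint32_t set, uint32_t tag)
{
    int victim = lru[set];
    cache[set][victim].valid = true;
    cache[set][victim].tag   = tag;
    touch(set, victim);
}

bool access_cache(uint32_t set, uint32_t tag)
{
    for (int w = 0; w < 2; w++) {
        if (cache[set][w].valid && cache[set][w].tag == tag) {
            touch(set, w);       /* hit: update LRU state */
            return true;
        }
    }
    replace(set, tag);           /* miss: bring the block in over the LRU way */
    return false;
}
```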

Write Policy

When a block that is resident in the cache is to be replaced, there are two cases to consider:

- If the old block in the cache has not been altered, it may be overwritten with a new block without first writing out the old block
- If at least one write operation has been performed on a word in that line of the cache, main memory must be updated by writing the line of cache out to the block of memory before bringing in the new block

There are two problems to contend with:

- More than one device may have access to main memory
- A more complex problem occurs when multiple processors are attached to the same bus and each processor has its own local cache: if a word is altered in one cache, it could conceivably invalidate a word in other caches

Write Through and Write Back

Write through:
- The simplest technique
- All write operations are made to main memory as well as to the cache
- The main disadvantage of this technique is that it generates substantial memory traffic and may create a bottleneck

Write back:
- Minimizes memory writes
- Updates are made only in the cache
- Portions of main memory are invalid, and hence accesses by I/O modules can be allowed only through the cache
- This makes for complex circuitry and a potential bottleneck
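A sketch of how the two policies differ on a write hit. The dirty-bit bookkeeping and the names are mine, assuming a 4-byte line and that the line has already been located:

```c
#include <stdint.h>
#include <stdbool.h>

struct line {
    bool     valid, dirty;     /* the dirty bit is only needed for write back */
    uint32_t tag;
    uint8_t  data[4];
};

static uint8_t main_memory[1 << 24];   /* 16-MByte main memory, as in the examples */

/* Write through: every write updates both the cache line and main memory. */
void write_through(struct line *l, uint32_t addr, uint8_t byte)
{
    l->data[addr & 0x3] = byte;
    main_memory[addr] = byte;          /* memory traffic on every single write */
}

/* Write back: update only the cache and mark the line dirty; main
 * memory is brought up to date later, when the line is replaced. */
void write_back(struct line *l, uint32_t addr, uint8_t byte)
{
    l->data[addr & 0x3] = byte;
    l->dirty = true;                   /* line must be written out on eviction */
}
```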

Line Size

When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases, more useful data are brought into the cache, but two specific effects come into play:

- Larger blocks reduce the number of blocks that fit into a cache
- As a block becomes larger, each additional word is farther from the requested word

As the block size increases, the hit ratio will at first increase because of the principle of locality. The hit ratio will begin to decrease as the block becomes bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced.

Multilevel Caches

As logic density has increased, it has become possible to have a cache on the same chip as the processor. The on-chip cache reduces the processor's external bus activity, speeds up execution time, and increases overall system performance:

- When the requested instruction or data is found in the on-chip cache, the bus access is eliminated
- On-chip cache accesses will complete appreciably faster than would even zero-wait-state bus cycles
- During this period the bus is free to support other transfers

Two-level cache:
- Internal cache designated as level 1 (L1)
- External cache designated as level 2 (L2)

The potential savings due to the use of an L2 cache depend on the hit rates in both the L1 and L2 caches. The use of multilevel caches complicates all of the design issues related to caches, including size, replacement algorithm, and write policy.

Figure 4.17 Total Hit Ratio (L1 and L2) for 8-Kbyte and 16-Kbyte L1: total hit ratio (roughly 0.78 to 0.98) plotted against L2 cache size (1k to 2M bytes) for L1 caches of 8k and 16k.

Unified Versus Split Caches

It has become common to split the cache: one cache dedicated to instructions and one dedicated to data, both existing at the same level, typically as two L1 caches.

Advantages of a unified cache:
- Higher hit rate: it balances the load between instruction and data fetches automatically
- Only one cache needs to be designed and implemented

Advantages of a split cache:
- Eliminates cache contention between the instruction fetch/decode unit and the execution unit
- Important in pipelining

The trend is toward split caches at the L1 level and unified caches for higher levels.

Table 4.4 Intel Cache Evolution

Problem | Solution | Processor on which feature first appears
External memory slower than the system bus. | Add external cache using faster memory technology. | 386
Increased processor speed results in external bus becoming a bottleneck for cache access. | Move external cache on-chip, operating at the same speed as the processor. | 486
Internal cache is rather small, due to limited space on chip. | Add external L2 cache using faster technology than main memory. | 486
Contention occurs when both the instruction prefetcher and the execution unit simultaneously require access to the cache. In that case, the prefetcher is stalled while the execution unit's data access takes place. | Create separate data and instruction caches. | Pentium
Increased processor speed results in external bus becoming a bottleneck for L2 cache access. | Create separate back-side bus that runs at higher speed than the main (front-side) external bus. The BSB is dedicated to the L2 cache. | Pentium Pro
(same problem) | Move L2 cache onto the processor chip. | Pentium II
Some applications deal with massive databases and must have rapid access to large amounts of data. The on-chip caches are too small. | Add external L3 cache. | Pentium III
(same problem) | Move L3 cache on-chip. | Pentium 4

(Table can be found on page 150 in the textbook.)

Figure 4.18 Pentium 4 Block Diagram: the instruction fetch/decode unit feeds an L1 instruction cache (12K micro-ops); out-of-order execution logic, the integer and FP register files, load and store address units, simple and complex integer ALUs, and the FP/MMX and FP move units operate above a 16-KB L1 data cache, which connects over a 256-bit path to a 512-KB L2 cache, then to an on-chip 1-MB L3 cache, and over 64 bits to the system bus.

Table 4.5 Pentium 4 Cache Operating Modes

Control Bits (CD NW) | Cache Fills | Write Throughs | Invalidates
0  0                 | Enabled     | Enabled        | Enabled
1  0                 | Disabled    | Enabled        | Enabled
1  1                 | Disabled    | Disabled       | Disabled

Note: CD = 0, NW = 1 is an invalid combination.

Summary: Chapter 4 Cache Memory

- Computer memory system overview
  - Characteristics of memory systems
  - Memory hierarchy
- Cache memory principles
- Elements of cache design
  - Cache addresses
  - Cache size
  - Mapping function
  - Replacement algorithms
  - Write policy
  - Line size
  - Number of caches
- Pentium 4 cache organization

Worked example: A four-way set associative cache has lines of 8 bytes and a total size of 8 Kbytes. The 64-Mbyte main memory is byte addressable. Show the format of main memory addresses.

- Main memory size = 64 Mbytes = 2^26 bytes, so addresses are 26 bits
- Cache size = 8 Kbytes = 2^13 bytes
- Each line has 8 bytes, so the word field is 3 bits
- Four-way set associative: each set has 4 lines
- Number of lines in cache = 8 Kbytes / 8 bytes = 1K lines
- Number of sets = 1K / 4 = 256 = 2^8, so the set field is 8 bits
- Tag = 26 - 8 - 3 = 15 bits

TAG: 15 bits | SET: 8 bits | WORD: 3 bits
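The same derivation, mechanized: a sketch that computes the field widths directly from the cache parameters (all sizes here are exact powers of two, so an integer log2 suffices):

```c
#include <stdio.h>

/* Integer log2 for exact powers of two. */
static int ilog2(unsigned long n)
{
    int b = 0;
    while (n >>= 1) b++;
    return b;
}

int main(void)
{
    unsigned long mem_size = 64UL * 1024 * 1024;       /* 64 Mbytes */
    unsigned long cache_sz = 8UL * 1024;               /* 8 Kbytes  */
    unsigned long line_sz  = 8;                        /* bytes per line */
    unsigned long ways     = 4;                        /* 4-way set associative */

    int addr_bits = ilog2(mem_size);                   /* 26 */
    int word_bits = ilog2(line_sz);                    /* 3  */
    int sets      = (int)(cache_sz / line_sz / ways);  /* 256 */
    int set_bits  = ilog2((unsigned long)sets);        /* 8  */
    int tag_bits  = addr_bits - set_bits - word_bits;  /* 15 */

    printf("TAG %d | SET %d | WORD %d\n", tag_bits, set_bits, word_bits);
    return 0;
}
```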