Cache Coherence in Bus-Based Shared Memory Multiprocessors

Outline:
Shared Memory Multiprocessors Variations
Cache Coherence in Shared Memory Multiprocessors
A Coherent Memory System: Intuition
Formal Definition of Coherence
Cache Coherence Approaches
Bus-Snooping Cache Coherence Protocols
  Write-Invalidate Bus-Snooping Protocol for Write-Through Caches
  Write-Invalidate Bus-Snooping Protocol for Write-Back Caches
    MSI Write-Back Invalidate Protocol
    MESI Write-Back Invalidate Protocol
  Write-Update Bus-Snooping Protocol for Write-Back Caches
    Dragon Write-Back Update Protocol
(PCA Chapter 5, lec # 9, Spring 2011, 5-3-2011)

Shared Memory Multiprocessors: The Cache Data Inconsistency / Coherence Problem

These systems provide direct hardware support of the shared address space (SAS) parallel programming model: address translation and protection are done in hardware (hardware SAS). Any processor can directly reference any memory location, and communication occurs implicitly as a result of loads and stores. Normal uniprocessor mechanisms (loads and stores) are used to access data, plus synchronization. The key is the extension of the memory hierarchy to support multiple processors. In this extended hierarchy, memory may be physically distributed among processors, and caches may hold multiple inconsistent copies of the same data, leading to the data inconsistency or cache coherence problem, which must be addressed by the hardware architecture.

Shared Memory Multiprocessors: Support of Programming Models

(Figure: layered view, top to bottom: programming models (message passing, shared address space, multiprogramming); compilation or library; communication abstraction (the user/system boundary); operating systems support; communication hardware (the hardware/software boundary); physical communication medium.)

Hardware SAS provides address translation and protection in hardware. Message passing can be implemented on top of it using shared memory buffers, and can offer very high performance since no OS involvement is necessary. The focus here is on supporting a consistent, or coherent, shared address space.

Shared Memory Multiprocessors Variations

Uniform Memory Access (UMA) multiprocessors: all processors have equal access to all memory addresses. These can be further divided into three types: bus-based shared memory multiprocessors (Symmetric Memory Multiprocessors, SMPs); shared cache multiprocessors (e.g. CMPs, multi-core processors); and dancehall multiprocessors.

Non-Uniform Memory Access (NUMA) or distributed memory multiprocessors: shared memory is physically distributed locally among the processors (nodes), and access to remote memory has higher latency. This is the most popular design for building scalable systems (MPPs). Cache coherence is achieved by directory-based methods.

Shared Memory Multiprocessors Variations (figure): (a) shared cache: per-processor first-level caches feed a shared (interleaved) second-level cache and interleaved main memory; UMA, e.g. CMPs. (b) Bus-based shared memory: per-processor caches on a shared bus or point-to-point interconnect, with main memory and I/O devices; UMA, i.e. SMPs (or SMP nodes). (c) Dancehall: per-processor caches connected through an interconnection network to interleaved memory modules; UMA. (d) Distributed memory: each node has its own memory, connected by a scalable point-to-point or MIN interconnection network; NUMA.

Uniform Memory Access (UMA) Multiprocessors

Bus-based multiprocessors (SMPs): a number of processors (commonly 2-4) in a single node share physical memory via a system bus or point-to-point interconnects (e.g. AMD64 via HyperTransport). Access to all of main memory is symmetric from any processor; such systems are commonly called Symmetric Memory Multiprocessors (SMPs). They are building blocks for larger parallel systems (MPPs, clusters) and also attractive for high-throughput servers. Bus-snooping mechanisms are used to address the cache coherence problem.

Shared cache multiprocessor systems: low-latency sharing and prefetching across processors, and sharing of working sets. No cache coherence problem (and hence no false sharing either), but high bandwidth needs and negative interference (e.g. conflicts); hit and miss latency are increased by the intervening switch and the cache size. Used in the mid 80s to connect a few processors on a board (Encore, Sequent); used currently in chip multiprocessors (CMPs) with 2-4 processors on a single chip, e.g. IBM Power 4 and 5: two processor cores on a chip with a shared L2.

Dancehall: no local memory is associated with a node. Not a popular design: all memory is uniformly costly to access over the network for all processors.

Uniform Memory Access Example: Intel Pentium Pro Quad (circa 1997)

(Figure: each P-Pro module contains a CPU, interrupt controller, bus interface, and 256-KB L2 cache; the modules share the P-Pro front side bus (64-bit data, 36-bit address, 66 MHz), along with a memory controller and MIU driving 1-, 2-, or 4-way interleaved DRAM, and PCI bridges to PCI I/O cards on PCI buses.)

A bus-based Symmetric Memory Processor (SMP): a single Front Side Bus (FSB) is shared among the processors, which severely limits scalability to only 2-4 processors. All coherence and multiprocessing glue is in the processor module; the design is highly integrated and targeted at high volume.

Non-Uniform Memory Access (NUMA) Example: AMD 8-way Opteron Server Node (circa 2003)

Dedicated point-to-point interconnects (coherent HyperTransport links) connect the processors, alleviating the traditional limitations of FSB-based SMP systems while still providing the needed cache coherence support. Each processor has two integrated DDR memory channel controllers, so memory bandwidth scales up with the number of processors. This is a NUMA architecture, since a processor can access its own memory at lower latency than remote memory directly connected to the other processors in the system. Total of 16 processor cores when dual-core Opteron processors are used.

Chip Multiprocessor (Shared-Cache) Example: IBM Power 4

Two processor cores on a chip share the level 2 cache.

Complexities of MIMD Shared Memory Access

The relative order (interleaving) of instructions in different streams is not fixed. With no synchronization among instruction streams, a large number of instruction interleavings is possible. If instructions are reordered within a stream, an even larger number of interleavings becomes possible. If memory accesses are not atomic, with multiple copies of the same data coexisting (as in cache-based systems), the effect of an access may not be visible to all processors in the same order: different processors can observe different interleavings during the same execution, and the total number of possible observed execution orders becomes even larger.

Cache Coherence in Shared Memory Multiprocessors

Caches play a key role in all shared memory multiprocessor variations: they reduce average memory access time (AMAT), reduce the bandwidth demands placed on the shared interconnect, and, through replication, reduce artifactual communication. But private processor caches create the cache coherence (inconsistency) problem: copies of a variable can be present in multiple caches, and a write by one processor may not become visible to the others, which keep accessing the stale (old) value in their private caches. The problem can also be caused by process migration and by I/O activity. Software and/or hardware actions are needed to ensure (1) write visibility to all processors, (2) in the correct order, thus maintaining cache coherence: processors must see the most up-to-date value.

Cache Coherence Problem Example (write-back caches assumed)

1. P1 reads u=5 from memory.
2. P3 reads u=5 from memory.
3. P3 writes u=7 to its local cache.
4. P1 reads the old u=5 from its local cache.
5. P2 reads the old u=5 from memory.

Processors see different values for u after event 3: the write by P3 is not visible to P1 and P2. With write-back caches, a value updated in a cache may not have been written back to memory, so even processes accessing main memory may see a very stale value. Unacceptable: this leads to incorrect program execution. (PCA page 274)
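The five events above can be sketched in code. This is a minimal illustration (the `Cache` class and its methods are my own, hypothetical names): write-back caches with no coherence protocol, so P3's write to u stays in its private cache and both P1 and main memory keep the stale value.

```python
memory = {"u": 5}

class Cache:
    """A private write-back cache with no coherence protocol (illustrative)."""
    def __init__(self):
        self.data = {}

    def read(self, addr):
        if addr not in self.data:
            self.data[addr] = memory[addr]   # miss: fetch from main memory
        return self.data[addr]

    def write(self, addr, value):
        self.data[addr] = value              # write-back: memory is NOT updated

p1, p2, p3 = Cache(), Cache(), Cache()
p1.read("u")          # 1. P1 reads u = 5 from memory
p3.read("u")          # 2. P3 reads u = 5 from memory
p3.write("u", 7)      # 3. P3 writes u = 7, only in P3's private cache
print(p1.read("u"))   # 4. P1 still sees the stale value 5
print(memory["u"])    # 5. P2 reading from memory would also see the stale 5
```

The print statements show 5 twice: P3's write never propagated, which is exactly the incorrect behavior a coherence protocol must prevent.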

A Coherent Memory System: Intuition

Reading a memory location should return the latest value written (by any process). This is easy to achieve in uniprocessors, except for DMA-based I/O, where coherence between DMA I/O devices and processors is needed; since such I/O is infrequent, software solutions work: uncacheable memory regions, uncacheable operations, flushing pages, or passing I/O data through the caches. The same should hold when processes run on different processors: results should be the same as if the processes were interleaved (or running) on a uniprocessor. The coherence problem is much more critical in multiprocessors: it is pervasive and performance-critical, and must be treated as a basic hardware design issue. (PCA page 275)

Basic Definitions

Extend the definitions used in uniprocessors to multiprocessors. Memory operation: a single read (load), write (store), or read-modify-write access to a memory location, assumed to execute atomically: (1) visible with respect to (w.r.t.) each other and (2) in the same order. Issue: a memory operation issues when it leaves the processor's internal environment and is presented to the memory system (cache, buffer, ...). Perform: the operation appears to have taken place, as far as the processor can tell from the other memory operations it issues. A write performs w.r.t. the processor when a subsequent read by the processor returns the value of that write or a later write (no RAW, WAW). A read performs w.r.t. the processor when subsequent writes issued by the processor cannot affect the value returned by the read (no WAR). In multiprocessors, the definitions stay the same, but "the processor" is replaced by "a processor". Also, complete: perform with respect to all processors. We still need to make sense of the order of operations from different processes. (PCA page 276)

Shared Memory Access Consistency

A load by processor Pi is performed with respect to processor Pk at a point in time when the issuing of a subsequent store to the same location by Pk cannot affect the value returned by the load (no WAW, WAR). A store by Pi is considered performed with respect to Pk at a point in time when a subsequent load from the same address by Pk returns the value written by this store (no RAW). A load is globally performed (i.e. complete) if it is performed with respect to all processors and if the store that is the source of the returned value has been performed with respect to all processors.

Formal Definition of Coherence

The results of a program are the values returned by its read (load) operations. A memory system is coherent if the results of any execution of a program are such that, for each location, it is possible to construct a hypothetical serial order of all operations to that location that is consistent with the results of the execution and in which: (1) operations issued by any particular process occur in the order issued by that process, and (2) the value returned by a read is the value written by the latest write to that location in the serial order. Two necessary conditions: write propagation (a written value must become visible to other processors) and write serialization (writes to a location are seen in the same order by all processors: if one processor sees w1 after w2, another processor should not see w2 before w1). There is no need for analogous read serialization, since reads are not visible to other processors. (PCA pages 276-277)
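Write serialization can be made concrete with a small check. This is an illustrative helper of my own (not from the text): each processor may observe only some of the writes to a location, but the writes it does observe must appear in the order of the single hypothetical serial order.

```python
def is_subsequence(observed, serial_order):
    """True if `observed` appears, in order, within `serial_order`."""
    it = iter(serial_order)
    return all(w in it for w in observed)   # `in` consumes the iterator in order

def write_serialized(serial_order, per_processor_logs):
    """Check each processor's observation log against one serial order."""
    return all(is_subsequence(log, serial_order) for log in per_processor_logs)

serial = ["w1", "w2"]   # hypothetical serial order of two writes to one location
print(write_serialized(serial, [["w1", "w2"], ["w2"]]))        # True: consistent
print(write_serialized(serial, [["w1", "w2"], ["w2", "w1"]]))  # False: one
# processor sees w2 before w1 while another sees w1 then w2 -> not serialized
```

The second call is exactly the forbidden case from the definition: no single serial order can explain both observation logs.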

Write Atomicity

The position in the total order at which a write appears to perform should be the same for all processes: nothing a process does after it has seen the new value produced by a write W should be visible to other processes until they too have seen W. In effect, this extends write serialization to writes from multiple processes. Example:

P1: A=1;
P2: while (A==0); B=1;
P3: while (B==0); print A;

There is a problem if P2 leaves its loop and writes B, and P3 then sees the new B but the old A (from its cache, i.e. the cache coherence problem). (PCA page 289)

Cache Coherence Approaches

Bus-snooping protocols: used in bus-based systems, where all processors observe memory transactions and take the proper action to invalidate or update local cache contents if needed. Not scalable.
Directory schemes: used in scalable cache-coherent distributed-memory multiprocessor systems, where cache directories keep a record of where copies of cache blocks reside. Scalable.
Shared caches: no private caches; this limits system scalability (limited to chip multiprocessors, CMPs).
Non-cacheable data: do not cache shared writable data (locks, process queues, data structures protected by critical sections); only instructions or private data are cacheable, and data is tagged by the compiler.
Cache flushing: flush the cache whenever a synchronization primitive is executed; slow unless special hardware is used.

Cache Coherence Using a Bus

Bus-based coherence (i.e. in SMPs) is built on top of two fundamentals of uniprocessor systems: (1) bus transactions and (2) the state transition diagram of cache blocks. Uniprocessor bus transaction: three phases (arbitration, command/address, data transfer); all devices observe addresses, and one is responsible for the transaction. Uniprocessor cache block states: effectively, every block is a finite state machine. A write-through, write-no-allocate cache has two states: valid and invalid. Write-back caches have one more state, modified ("dirty"), giving three states: Valid (V), Invalid (I), Modified (M). Multiprocessors extend both of these fundamentals somewhat to implement cache coherence using a bus. (PCA page 274)

Basic Idea: Bus-Snooping Cache Coherence Protocols

Transactions on the bus are visible to all processors. Processors, or bus-watching (bus snoop) mechanisms, can snoop (monitor) the bus and take action on relevant events (e.g. change state) to ensure data consistency among private caches and shared memory. Two basic protocol types: (1) write-invalidate: invalidate all remote copies of a cache block when a local copy is updated; (2) write-update: when a local cache block is updated, broadcast the new data block to all caches containing a copy of the block, updating them.

Write-Invalidate and Write-Update Coherence Protocols for Write-Through Caches

See handout: Figure 7.14 in Advanced Computer Architecture: Parallelism, Scalability, Programmability, Kai Hwang.

Implementing Bus-Snooping Protocols

The cache controller now receives inputs from both sides: (1) requests from the local processor and (2) bus requests/responses from the bus-snooping mechanism. In either case, it takes zero or more actions: possibly changing the block state, responding with data, or generating new bus transactions (e.g. to invalidate or update other shared copies). The protocol is a distributed algorithm: cooperating state machines, each with a set of states, a state transition diagram, and actions. The granularity of coherence is typically a cache block, like that of allocation in the cache and transfer to/from the cache. As a result, false sharing of a cache block (e.g. words X and Y in the same block, with X used only by P0 and Y only by P1) may generate unnecessary coherence protocol actions over the bus. (PCA page 280)
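False sharing can be demonstrated with a toy invalidation-based cache model (all names here are my own, illustrative choices): X and Y live in the same block, so every write by P0 to X invalidates P1's copy of the whole block, even though P1 only ever touches Y, and vice versa.

```python
BLOCK_SIZE = 2                    # two words per block (illustrative)

def block_of(addr):
    return addr // BLOCK_SIZE     # coherence granularity is the block

class Cache:
    def __init__(self):
        self.valid_blocks = set()
        self.misses = 0

    def access(self, addr, caches, write=False):
        b = block_of(addr)
        if b not in self.valid_blocks:
            self.misses += 1                    # coherence miss
            self.valid_blocks.add(b)
        if write:
            for c in caches:
                if c is not self:
                    c.valid_blocks.discard(b)   # invalidate remote copies

X, Y = 0, 1                       # two words falsely shared in block 0
p0, p1 = Cache(), Cache()
caches = [p0, p1]
for _ in range(3):
    p0.access(X, caches, write=True)   # invalidates p1's copy of the block
    p1.access(Y, caches, write=True)   # misses again, invalidates p0's copy
print(p0.misses, p1.misses)   # -> 3 3: every access misses despite no true sharing
```

The two processors never share a word, yet each of the six accesses is a miss: pure coherence traffic caused by block granularity.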

Coherence with Write-Through Caches

(Figure: processors P1..Pn, each with a cache and a bus snooper, on a shared bus with memory and I/O devices. A write by one processor goes through to memory; the snoopers invalidate or update any shared copies of the block in the other caches.)

The key extensions to the uniprocessor case are snooping and invalidating/updating caches, giving invalidation-based versus update-based protocols. Write propagation holds even in the invalidation case: later reads will see the new value, because the invalidation causes a miss on the later access and memory has already been updated via the write-through.

Write-Invalidate Bus-Snooping Protocol for Write-Through Caches

The cache block copy in local processor i can be in one of two states (i = local processor, j = other, remote processor):

Valid (V): all processors can read safely (R(i), R(j)); the local processor i can also write, W(i). This state is entered after a successful read R(i) or write W(i) (write-allocate assumed).
Invalid (I): the block is not in the cache, or has been invalidated or replaced (Z(i) or Z(j)). When a remote processor writes W(j) to its cache copy, all other cache copies become invalidated (V -> I).

Bus write cycles are more expensive than bus read cycles, because of the invalidation requests sent to remote caches.

Write-Invalidate Bus-Snooping Protocol for Write-Through Caches: State Transition Diagram (for a cache block in local processor i; write-allocate assumed)

Notation: W(i)/R(i) = write/read of the block by processor i; W(j)/R(j) = write/read of the block copy in cache j by processor j != i; Z(i)/Z(j) = replacement of the block in cache i / of the copy in cache j.

V -> V on: R(i), W(i) (local read or write), R(j), Z(j).
V -> I on: W(j) (write by the other processor j: invalidate), Z(i) (local replacement).
I -> V on: R(i), W(i).
I -> I on: R(j), Z(j), Z(i), W(j).

Write-Invalidate Bus-Snooping Protocol for Write-Through Caches: Alternate State Transition Diagram

Processor-side requests: read (PrRd), write (PrWr). Bus-side (snooper/cache controller) actions: bus read (BusRd), bus write (BusWr). A/B means: if A is observed, B is generated. States: V = Valid, I = Invalid.

V -> V on PrRd/- (a read by this processor, i.e. R(i)) and PrWr/BusWr (a write by this processor, i.e. W(i), written through to memory).
I -> V on PrRd/BusRd (read miss) and PrWr/BusWr.
V -> I on BusWr/- (the snooper senses a write by another processor to the same block, i.e. W(j): invalidate).

There are two states per block in each cache, as in a uniprocessor, and the state of a given block can be seen as a p-vector (one bit per processor, for all p processors). Hardware state bits are associated only with blocks that are in the cache; all other blocks can be seen as being in the invalid (not-present) state in that cache. A write invalidates all other caches (with no local change of state), so there can be multiple simultaneous readers of a block, but a write invalidates them. (PCA page 281)

Problems with Write-Through

High bandwidth requirements: every write from every processor goes to the shared bus and to memory. Consider a 200 MHz, 1 CPI processor in which 15% of the instructions are 8-byte stores: each processor generates 30M stores, or 240 MB of data, per second, so a 1 GB/s bus can support only about 4 processors without saturating. Write-through is therefore especially unpopular for SMPs. Write-back caches absorb most writes as cache hits: write hits don't go on the bus. But then how do we ensure write propagation (visibility to all processors) and serialization (in the correct order, i.e. write atomicity)? This requires more sophisticated coherence protocols. (PCA page 283)
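The bandwidth arithmetic above can be checked directly (all figures are the slide's assumed parameters: 200 MHz clock, 1 CPI, 15% of instructions are 8-byte stores, 1 GB/s bus):

```python
clock_hz = 200e6         # 200 MHz processor
cpi = 1.0                # 1 cycle per instruction
store_fraction = 0.15    # 15% of instructions are stores
store_bytes = 8          # 8-byte stores
bus_bw = 1e9             # 1 GB/s shared bus

instructions_per_sec = clock_hz / cpi
stores_per_sec = instructions_per_sec * store_fraction   # stores per processor
bytes_per_sec = stores_per_sec * store_bytes             # write traffic per processor

print(stores_per_sec / 1e6)          # -> 30.0   (30M stores/s)
print(bytes_per_sec / 1e6)           # -> 240.0  (240 MB/s per processor)
print(int(bus_bw // bytes_per_sec))  # -> 4      (processors before bus saturation)
```

With write-through, this per-processor write stream is unavoidable bus traffic, which is why the slide concludes that roughly four such processors saturate a 1 GB/s bus.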

Basic Write-Invalidate Bus-Snooping Protocol for Write-Back Caches

This corresponds to an ownership protocol (i.e. it tracks which processor, or memory, owns each block). The valid state of the write-through protocol is divided into two states, giving three states total (for a cache block in local processor i; j = other, remote processor):

RW (read-write), or Modified (M): the only cache copy existing in the system, owned by the local processor i. Reads R(i) and writes W(i) can be safely performed in this state.
RO (read-only), or Shared (S): multiple cache block copies exist in the system, owned by memory. Reads R(i), R(j) can safely be performed in this state.
INV (invalid): entered when the block is not in the cache, when a remote processor writes W(j) to its cache copy, or when the local processor replaces its own copy, Z(i).

A cache block is uniquely owned after a local write W(i). Before a block is modified, ownership for exclusive access is obtained by a read-exclusive bus transaction broadcast to all caches and memory. If a modified remote block copy exists, memory is updated (a forced write-back), the previous owner's copy is invalidated, and ownership is transferred to the requesting cache.

Write-Invalidate Bus-Snooping Protocol for Write-Back Caches: State Transition Diagram (for a cache block in local processor i)

Notation as before: W(i)/R(i) = write/read by processor i; W(j)/R(j) = write/read of the copy in cache j by processor j != i; Z(i)/Z(j) = replacement in cache i / in cache j. RW = read-write, RO = read-only, INV = invalidated or not in cache.

RW (processor i owns the block) -> RW on: R(i), W(i), Z(j).
RW -> RO on: R(j) (the remote read forces a write-back; the block becomes owned by memory).
RW -> INV on: W(j), Z(i).
RO (owned by memory) -> RO on: R(i), R(j), Z(j).
RO -> RW on: W(i) (the local write obtains ownership).
RO -> INV on: W(j) (a write by the other processor invalidates), Z(i).
INV -> RW on: W(i).
INV -> RO on: R(i).
INV -> INV on: R(j), Z(j), W(j), Z(i).

Basic MSI Write-Back Invalidate Protocol

Three states: Invalid (I); Shared (S): shared unmodified copies exist; Dirty or Modified (M): the only valid copy, all other copies must be invalidated. MSI is similar to the previous protocol, just with a different representation (i.e. it still corresponds to an ownership protocol). Processor events: PrRd (read), PrWr (write). Bus transactions: BusRd: asks for a copy with no intent to modify; BusRdX: asks for a copy with intent to modify; BusWB: updates memory (a write-back to memory). Actions: update state, perform a bus transaction, flush the value onto the bus (forced write-back). (PCA page 293)

Basic MSI Write-Back Invalidate Protocol: State Transition Diagram

M = Dirty or Modified: main memory is not up to date; the block is owned by the local processor. S = Shared: main memory is up to date and owns the block. I = Invalid. Processor-side requests: PrRd, PrWr. Bus-side (snooper/cache controller) actions: BusRd, BusRdX (bus read exclusive), BusWB (write-back, "Flush"). A/B means: if A is observed, B is generated.

M -> M on PrRd/-, PrWr/- (processor-initiated, no bus traffic).
M -> S on BusRd/Flush (a remote read forces a write-back).
M -> I on BusRdX/Flush (invalidate, with forced write-back).
S -> S on PrRd/-, BusRd/-.
S -> M on PrWr/BusRdX.
S -> I on BusRdX/-.
I -> S on PrRd/BusRd.
I -> M on PrWr/BusRdX.

A replacement changes the state of two blocks: the outgoing one and the incoming one. This is still an ownership protocol. (PCA page 295)
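The MSI transitions above can be exercised with a small simulation. This is my own minimal sketch (names and structure are illustrative, not the slides' notation): a PrWr issues BusRdX unless the cache is already in M, a snooped BusRd downgrades M to S with a flush, and a snooped BusRdX invalidates.

```python
class MSICache:
    """One block's MSI state machine in one cache (illustrative sketch)."""
    def __init__(self):
        self.state = "I"

    def pr_read(self, caches):
        if self.state == "I":                # I -> S on PrRd/BusRd
            for c in caches:                 # all snoopers observe the BusRd
                if c is not self and c.state == "M":
                    c.state = "S"            # M -> S on BusRd/Flush (write-back)
            self.state = "S"
        # M and S: read hit, no bus transaction

    def pr_write(self, caches):
        if self.state != "M":                # S/I -> M on PrWr/BusRdX
            for c in caches:                 # all snoopers observe the BusRdX
                if c is not self:
                    c.state = "I"            # invalidate (M would also flush)
            self.state = "M"

p1, p2 = MSICache(), MSICache()
caches = [p1, p2]
p1.pr_write(caches)    # P1: I -> M (BusRdX)
p2.pr_read(caches)     # BusRd: P1 flushes and goes M -> S; P2: I -> S
p2.pr_write(caches)    # BusRdX: P1's copy invalidated; P2: S -> M
print(p1.state, p2.state)   # -> I M
```

Note the cost this protocol pays even without sharing: P2's read-then-write takes two bus transactions (BusRd, then BusRdX), which is exactly the problem MESI addresses next.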

Solution: MESI (4-State) Invalidation Protocol

Problem with the MSI protocol: reading and then modifying data costs two bus transactions, even when there is no sharing (e.g. even in a sequential program): BusRd (I -> S) followed by BusRdX (S -> M). The solution is to add an exclusive state (E): the processor can write locally without a bus transaction, while the block is not yet modified; main memory is up to date, so the cache is not necessarily the owner. Four states (MESI): Invalid (I); Exclusive, or exclusive-clean (E): only this cache has a copy, and it is not modified (main memory has the same copy); Shared (S): two or more caches may have copies; Modified (M): dirty. The transition I -> E occurs on a PrRd when no one else has a copy (i.e. shared signal S = 0). This requires a new shared signal S on the bus: a wired-OR line asserted in response to a BusRd (S = 0: not shared, no other cache has a copy; S = 1: shared).

MESI State Transition Diagram

Four states: M = Modified (dirty), E = Exclusive, S = Shared, I = Invalid. BusRd(S) means the shared line was asserted on the BusRd transaction (S = 1: shared); S = 0 means not shared.

I -> E on PrRd/BusRd with S = 0 (not shared).
I -> S on PrRd/BusRd with S = 1 (shared).
I -> M on PrWr/BusRdX.
E -> E on PrRd/-.
E -> M on PrWr/- (no bus transaction needed).
E -> S on BusRd/Flush.
E -> I on BusRdX/Flush (invalidate).
S -> S on PrRd/-, BusRd/Flush.
S -> M on PrWr/BusRdX.
S -> I on BusRdX/Flush.
M -> M on PrRd/-, PrWr/-.
M -> S on BusRd/Flush (forced write-back).
M -> I on BusRdX/Flush (invalidate, forced write-back).

Flush: if cache-to-cache sharing is used, only one cache flushes the data.
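What MESI adds over MSI can be shown by counting bus transactions. This hedged sketch (illustrative names, my own structure) samples the wired-OR shared signal on BusRd so a lone reader enters E and later upgrades E -> M silently, with no second bus transaction.

```python
class MESICache:
    """One block's MESI state machine, counting bus transactions (sketch)."""
    def __init__(self):
        self.state = "I"
        self.bus_transactions = 0

    def pr_read(self, caches):
        if self.state == "I":
            self.bus_transactions += 1       # BusRd
            # Wired-OR shared signal: asserted if any other cache has a copy
            shared = any(c.state != "I" for c in caches if c is not self)
            for c in caches:
                if c is not self and c.state in ("M", "E"):
                    c.state = "S"            # M/E -> S on snooped BusRd (Flush)
            self.state = "S" if shared else "E"   # I -> S or I -> E

    def pr_write(self, caches):
        if self.state in ("M", "E"):
            self.state = "M"                 # silent E -> M: no bus transaction
        else:
            self.bus_transactions += 1       # BusRdX
            for c in caches:
                if c is not self:
                    c.state = "I"            # invalidate other copies
            self.state = "M"

p1, p2 = MESICache(), MESICache()
caches = [p1, p2]
p1.pr_read(caches)     # no other copy: shared signal S = 0, so I -> E
p1.pr_write(caches)    # E -> M silently (MSI would have needed a BusRdX here)
print(p1.state, p1.bus_transactions)   # -> M 1
```

The unshared read-modify sequence costs one bus transaction instead of MSI's two, which is precisely the motivation given on the previous slide.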

Invalidate Versus Update

The basic question is one of program behavior: is a block written by one processor read by others before it is rewritten (i.e. written back)? For invalidation: if yes, the readers will take a miss; if no, multiple writes occur without additional traffic, and invalidation clears out copies that won't be used again. For update: if yes, the readers will not miss if they previously had a copy, and a single bus transaction updates all copies; if no, there are multiple useless updates, even to dead copies. The choice requires looking at program behavior and hardware complexity. In general, invalidation protocols are much more popular; some systems provide both, or even hybrid protocols.

Update-Based Bus-Snooping Protocols

A write operation updates the values in the other caches, using a new update bus transaction. Advantages: other processors don't miss on their next access (reduced latency), whereas in invalidation protocols they would miss and cause more transactions; a single bus transaction can update several caches, which can save bandwidth; and only the word written is transferred, not the whole block. Disadvantages: multiple writes by the same processor cause multiple update transactions, whereas in invalidation the first write obtains exclusive ownership and subsequent writes stay local. The detailed tradeoffs are more complex, depending on program behavior and hardware complexity.

Dragon Write-Back Update Protocol

Four states:
Exclusive-clean, or exclusive (E): "I and memory have this block."
Shared clean (Sc): "I, others, and maybe memory have this block, but I'm not the owner."
Shared modified (Sm): "I and others have this block, memory does not (it was modified in the owner's cache), and I'm the owner." Sm and Sc can coexist in different caches, with only one cache in Sm.
Modified, or dirty (M): "I have this block and no one else does; memory is stale."

There is no explicit invalid state (a fifth, invalid state is implied): if a block is in the cache, it cannot be invalid; if it is not present, it can be viewed as being in a not-present or invalid state. New processor events: PrRdMiss and PrWrMiss, introduced to specify the actions when the block is not present in the cache. New bus transaction: BusUpd, which broadcasts the single word written on the bus and updates the other relevant caches. Dragon also requires the shared signal S on the bus (similar to MESI). (PCA page 301)

Dragon State Transition Diagram

E = Exclusive, Sc = Shared clean, Sm = Shared modified, M = Modified. S is the shared signal (S = 0: not shared, S = 1: shared). BusUpd broadcasts the word written on the bus to update the other caches.

Not present -> E on PrRdMiss/BusRd with S = 0 (not shared).
Not present -> Sc on PrRdMiss/BusRd with S = 1 (shared).
Not present -> M on PrWrMiss/BusRd with S = 0.
Not present -> Sm on PrWrMiss/(BusRd; BusUpd) with S = 1 (update the other copies).
E -> E on PrRd/-; E -> Sc on BusRd/-; E -> M on PrWr/-.
Sc -> Sc on PrRd/-, BusUpd/Update; Sc -> Sm on PrWr/BusUpd with S = 1; Sc -> M on PrWr/BusUpd with S = 0.
Sm -> Sm on PrRd/-, BusRd/Flush (this processor is the owner and supplies the data), PrWr/BusUpd with S = 1 (update others); Sm -> Sc on BusUpd/Update; Sm -> M on PrWr/BusUpd with S = 0.
M -> M on PrRd/-, PrWr/-; M -> Sm on BusRd/Flush (supply data).
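The Dragon protocol's key behavior can be sketched as follows. This is a hedged, simplified model (names, structure, and the single-word memory are my own assumptions, not the slides' notation): on a write to a shared block, BusUpd updates the sharers in place rather than invalidating them, and the writer becomes the owner (Sm).

```python
class DragonCache:
    """Simplified one-block Dragon update protocol (illustrative sketch)."""
    def __init__(self):
        self.state = None        # None = not present (the implied invalid state)
        self.value = None

    def pr_read_miss(self, caches, memory):
        others = [c for c in caches if c is not self and c.state]
        owner = next((c for c in others if c.state in ("Sm", "M")), None)
        self.value = owner.value if owner else memory["v"]  # owner flushes data
        if owner and owner.state == "M":
            owner.state = "Sm"   # M -> Sm on snooped BusRd (supply data)
        for c in others:
            if c.state == "E":
                c.state = "Sc"   # E -> Sc on snooped BusRd
        self.state = "Sc" if others else "E"   # shared signal S decides

    def pr_write(self, caches, value):
        self.value = value
        sharers = [c for c in caches if c is not self and c.state]
        if sharers:              # S = 1: BusUpd updates sharers, no invalidation
            for c in sharers:
                c.value = value
                c.state = "Sc"   # any previous owner (Sm) loses ownership
            self.state = "Sm"    # the writer is now the owner
        else:
            self.state = "M"     # S = 0: sole copy, memory becomes stale

memory = {"v": 5}
c0, c1 = DragonCache(), DragonCache()
caches = [c0, c1]
c0.pr_read_miss(caches, memory)   # c0: E (not shared)
c1.pr_read_miss(caches, memory)   # c0: E -> Sc, c1: Sc
c0.pr_write(caches, 7)            # BusUpd: c1 updated in place; c0 -> Sm
print(c0.state, c1.state, c1.value)   # -> Sm Sc 7
```

Contrast this with the invalidation protocols above: after c0's write, c1 still holds a valid, up-to-date copy and its next read hits, at the cost of a BusUpd on every write to a shared block.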