Lecture 3: Snooping Protocols. Topics: snooping-based cache coherence implementations


Design Issues, Optimizations

- When does memory get updated?
  - on a demotion from modified to shared?
  - on a move from modified in one cache to modified in another?
- Who responds with data: memory, or a cache that has the block in exclusive state? Does it help if sharers respond?
- We can assume that bus, memory, and cache state transitions are atomic; if not, we will need more states
- A transition from shared to modified only requires an upgrade request and no transfer of data
- Is the protocol simpler for a write-through cache?

4-State Protocol

- Multiprocessors execute many single-threaded programs
- A read followed by a write will generate bus transactions to acquire the block in exclusive state, even though there are no sharers
- Note that we can optimize protocols by adding more states, at the cost of increased design/verification complexity

MESI Protocol

- The new state is exclusive-clean: the cache can service read requests, and no other cache has the same block
- When the processor attempts a write, the block is upgraded to exclusive-modified without generating a bus transaction
- When a processor makes a read request, it must detect whether it has the only cached copy; the interconnect must include an additional signal that is asserted by each cache that has a valid copy of the block
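The MESI transitions described above can be summarized as a small state machine. The following is a minimal sketch, not any particular machine's implementation; the event names (`PrRd`, `PrWr`, `BusRd`, etc.) and the `shared_wire` flag are illustrative labels for the processor-side and bus-side events and the extra wired-OR "shared" signal the slide mentions.

```python
# Minimal MESI state machine for a single cache block (a sketch).
# States: I (invalid), S (shared), E (exclusive-clean), M (modified).

def mesi_next(state, event, shared_wire=False):
    """Return (next_state, bus_action) for one processor- or bus-side event.

    shared_wire: asserted by other caches during a read miss if any of
    them holds a valid copy -- the extra signal MESI needs so a reader
    can tell it has the only cached copy.
    """
    if event == "PrRd":
        if state == "I":
            # Read miss: go to S if someone else has a copy, else E.
            return ("S" if shared_wire else "E", "BusRd")
        return (state, None)                  # hit in S/E/M, no bus traffic
    if event == "PrWr":
        if state == "I":
            return ("M", "BusRdX")            # read-exclusive: fetch + invalidate
        if state == "S":
            return ("M", "BusUpgr")           # upgrade request, no data transfer
        return ("M", None)                    # E -> M is silent; M stays M
    if event == "BusRd":                      # another cache reads the block
        if state == "M":
            return ("S", "Flush")             # supply the dirty data
        if state in ("E", "S"):
            return ("S", None)
        return (state, None)
    if event in ("BusRdX", "BusUpgr"):        # another cache wants it exclusively
        if state == "M":
            return ("I", "Flush")
        return ("I", None)
    return (state, None)
```

Note how the E state pays off: the E to M transition on a write generates no bus transaction, unlike S to M, which needs the upgrade request.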

Design Issues

- When caches evict blocks, they do not inform other caches; it is possible to have a block in shared state even though it is an exclusive-clean copy
- Cache-to-cache sharing: SRAM vs. DRAM latencies, contention in remote caches, protocol complexities (memory has to wait; which cache responds?); can be especially useful in distributed-memory systems
- The protocol can be improved by adding a fifth state (owner -- MOESI); the owner services reads (instead of memory)

Update Protocol (Dragon)

- 4-state write-back update protocol, first used in the Dragon multiprocessor (1984)
- Write-back update is not the same as write-through: on a write, only caches are updated, not memory
- Goal: writes may usually not be on the critical path, but subsequent reads may be

4 States

- No invalid state
- Modified and Exclusive-clean as before: used when there is a sole cached copy
- Shared-clean: potentially multiple caches have this block, and main memory may or may not be up-to-date
- Shared-modified: potentially multiple caches have this block, main memory is not up-to-date, and this cache must update memory; only one cache can hold a given block in Sm state
- In reality, one shared state would have sufficed; the extra state exists to reduce traffic
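The write path is where the Dragon states earn their keep. Here is a minimal sketch of a processor write hit under the four states above; the `shared_wire` argument models the wired-OR "shared" signal sampled during the update transaction, and the names (`dragon_write`, `BusUpd`) are illustrative rather than taken from the Dragon hardware.

```python
# Sketch of the write path in the 4-state Dragon update protocol.
# States: E (exclusive-clean), Sc (shared-clean),
#         Sm (shared-modified), M (modified).

def dragon_write(state, shared_wire):
    """Return (next_state, bus_action) for a processor write hit.

    A write to a shared block broadcasts a BusUpd so the other copies
    are updated in place (no invalidation). The shared wire, sampled
    during that transaction, tells us whether any sharers remain: if
    not, the block quietly becomes M and later writes stay off the bus.
    """
    if state in ("E", "M"):
        return ("M", None)              # sole copy: silent upgrade, no traffic
    # state is Sc or Sm: push the new value to the other caches
    if shared_wire:
        return ("Sm", "BusUpd")         # still shared; this cache now owns it
    return ("M", "BusUpd")              # the other copies have been evicted
```

This shows why the slide calls Sm an optimization: memory is never updated on these writes, so exactly one cache (the one in Sm) must remember to write the block back later.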

Design Issues

- If the update is also sent to main memory, the Sm state can be eliminated
- If all caches are informed when a block is evicted, the block can be moved from shared to M or E; this can help save future bus transactions
- Having an extra wire to determine exclusivity seems like a worthy trade-off in update systems

State Transitions

State transitions per 1000 data memory references for Ocean (NP = Not Present):

  From\To      NP        I        E        S        M
  NP            0        0     1.25     0.96     1.68
  I          0.64        0        0     1.87    0.002
  E          0.20        0     14.0     0.02     1.00
  S          0.42      2.5        0    134.7     2.24
  M          2.63    0.002        0      2.3    843.6

Bus actions for each state transition:

  From\To      NP       I             E        S         M
  NP           --      --         BusRd    BusRd    BusRdX
  I            --      --         BusRd    BusRd    BusRdX
  E            --      --            --       --        --
  S            --      --  Not possible       --   BusUpgr
  M         BusWB   BusWB  Not possible    BusWB        --

Basic Implementation

- Assume a single level of cache and atomic bus transactions
- It is simpler to implement a processor-side cache controller that monitors requests from the processor, and a bus-side cache controller that services the bus
- Both controllers are constantly trying to read tags; tags can be duplicated (moderate area overhead); unlike data, tags are rarely updated; tag updates stall the other controller

Reporting Snoop Results

- Uniprocessor system: the initiator places an address on the bus, all devices monitor the address, one device acks by raising a wired-OR signal, and data is transferred
- In a multiprocessor, memory has to wait for the snoop result before it chooses to respond; this needs 3 wired-OR signals: (i) a cache has a copy, (ii) a cache has a modified copy, (iii) the snoop has not completed
- Ensuring timely snoops: the time to respond could be fixed or variable (with the third wired-OR signal), or the memory could track whether a cache has the block in M state
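The decision memory makes from the three wired-OR lines can be sketched in a few lines. This is an illustrative model, not a real bus interface: each cache contributes a `(has_copy, has_modified, not_done)` triple, and the bus effectively ORs them together.

```python
# Sketch: memory's use of the three wired-OR snoop lines.
# Each cache drives (has_copy, has_modified, not_done); a wired-OR
# bus line is asserted if any cache asserts it, which any() models.

def combine_snoops(snoops):
    """OR together the per-cache snoop signals."""
    shared   = any(s[0] for s in snoops)   # some cache has a copy
    modified = any(s[1] for s in snoops)   # some cache has a modified copy
    pending  = any(s[2] for s in snoops)   # some snoop has not completed
    return shared, modified, pending

def memory_should_respond(snoops):
    """None while the snoop is incomplete; otherwise True iff memory
    (and not a cache holding the block in M) should supply the data."""
    shared, modified, pending = combine_snoops(snoops)
    if pending:
        return None          # third wire asserted: memory must keep waiting
    return not modified      # a modified copy overrides memory's response
```

The third line is what makes variable-time snoops workable: memory simply holds off while any controller keeps it asserted.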

Non-Atomic State Transitions

- Note that a cache controller's actions are not all atomic: tag look-up, bus arbitration, bus transaction, data/tag update
- Consider this: block A is in shared state in P1 and P2; both issue a write; both bus controllers are ready to issue an upgrade request and try to acquire the bus; is there a problem?
- The controller can keep track of additional intermediate states so it can react to bus traffic (e.g., S->M, I->M, I->S,E)
- Alternatively, eliminate the upgrade request; use the shared wire to suppress memory's response to an exclusive read

Livelock

- Livelock can happen if the processor-cache handshake is not designed correctly
- Before the processor can attempt the write, it must acquire the block in exclusive state
- If all processors are writing to the same block, one of them acquires the block first; if another exclusive request is seen on the bus, the cache controller must wait for the processor to complete the write before releasing the block -- else, the processor's write will fail again because the block would be in invalid state
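The handshake that breaks the livelock can be sketched as follows. This is a simplified illustrative model (class and method names are made up for the example): the controller refuses to surrender a freshly acquired exclusive block until the processor's pending write has actually retired.

```python
# Sketch of the processor-cache handshake that avoids livelock:
# once a cache wins the block in M for a pending write, it defers
# conflicting bus requests until that write completes.

class CacheController:
    def __init__(self):
        self.state = "I"
        self.pending_write = False

    def acquire_for_write(self):
        """Won arbitration: block obtained in exclusive (M) state."""
        self.state = "M"
        self.pending_write = True

    def complete_write(self):
        """The processor's store has retired into the cache."""
        self.pending_write = False

    def snoop_exclusive_request(self):
        """Another cache requests the block exclusively."""
        if self.pending_write:
            return "defer"     # hold the block: releasing it now would
                               # invalidate it before our write succeeds
        self.state = "I"
        return "release"
```

Without the `pending_write` check, every controller would give the block up immediately, each processor's write would keep failing on an invalid block, and no one would make progress.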

Split Transaction Bus

- What would it take to implement the protocol correctly while assuming a split-transaction bus?
- Split-transaction bus: a cache puts out a request, releases the bus (so others can use it), and receives its response much later
- Assumptions: only one request per block can be outstanding; separate lines for address (request) and data (response)

Split Transaction Bus

[Figure: three processors, each with its own cache, connected by separate request (address) lines and response (data) lines]

Design Issues

- When does the snoop complete? What if the snoop takes a long time?
- What if the buffer in a processor/memory is full? When does the buffer release an entry? Are the buffers identical?
- How does each processor ensure that a block does not have multiple outstanding requests?
- What determines the write order: requests or responses?
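Two of the issues above -- one outstanding request per block, and merging a read miss with a matching outstanding read -- are commonly handled with a request table that every controller snoops. The sketch below is illustrative (the class, method names, and policies are assumptions, not a specific machine's design):

```python
# Sketch of a per-controller request table for a split-transaction bus:
# every outstanding request is recorded; a new request for a block that
# already has one outstanding is merged (read on read) or deferred.

class RequestTable:
    def __init__(self, size=8):
        self.size = size
        self.entries = {}            # block address -> outstanding request type

    def try_issue(self, addr, req):
        if addr in self.entries:
            if req == "BusRd" and self.entries[addr] == "BusRd":
                return "merge"       # piggyback on the outstanding read,
                                     # saving a redundant bus transaction
            return "defer"           # only one request per block outstanding
        if len(self.entries) >= self.size:
            return "defer"           # table full: apply back-pressure
        self.entries[addr] = req
        return "issue"

    def complete(self, addr):
        """Response arrived on the data lines: free the entry."""
        self.entries.pop(addr, None)
```

Keeping the table finite and refusing new requests when it is full is one simple answer to the buffering questions: the table's occupancy is the flow-control mechanism.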

Design Issues II

- What happens if a processor is arbitrating for the bus and witnesses another bus transaction for the same address?
- If a processor issues a read miss and there is already a matching read in the request table, can we reduce bus traffic?
