Cache Coherence in Scalable Machines

COE 502 Parallel Processing Architectures
Prof. Muhamed Mudawar, Computer Engineering Department, King Fahd University of Petroleum and Minerals

Generic Scalable Multiprocessor
[Figure: n processor-cache-memory nodes, each with a Communication Assist (CA), connected by a scalable interconnection network carrying many parallel transactions]
Scalable distributed memory machine: P-C-M (processor-cache-memory) nodes connected by a scalable network.
Scalable memory bandwidth at reasonable latency.
Communication Assist (CA): interprets network transactions, forms an interface, and provides a shared address space in hardware.

What Must a Coherent System Do?
Provide a set of states, a transition diagram, and actions.
Determine when to invoke the coherence protocol. This is done the same way on all systems: the state of the line is maintained in the cache, and the protocol is invoked if an access fault occurs on the line (read miss, write miss, or write to a shared block).
Manage the coherence protocol:
1. Find information about the state of the block in other caches, to decide whether communication with other cached copies is needed.
2. Locate the other cached copies.
3. Communicate with those copies (invalidate / update).
These steps are handled differently by different approaches.

Bus-Based Cache Coherence
Functions 1, 2, and 3 are accomplished through broadcast and snooping on the bus: the processor initiating a bus transaction sends out a search, and the others respond and take the necessary action.
This could be done in a scalable network too: broadcast to all processors and let them respond. It is conceptually simple, but broadcast doesn't scale. On a bus, bus bandwidth doesn't scale; on a scalable network, every access fault leads to at least p network transactions.
Scalable cache coherence therefore needs different mechanisms to manage the protocol.

Directory-Based Cache Coherence
[Figure: nodes 1..n, each with a directory, memory, cache, and CA, connected by a scalable interconnection network with many parallel transactions]
Scalable cache coherence is based on directories: distributed directories for distributed memories.
Each cache-line-sized block of memory has a directory entry that keeps track of its state and the nodes that are currently sharing the block.

Simple Directory Scheme
A simple way to organize a directory is to associate every memory block with a corresponding directory entry that keeps track of the copies of cached blocks and their states.
On a miss: locate the home node and the corresponding directory entry, look it up to find the block's state and the nodes that have copies, and communicate with those nodes if necessary.
On a read miss: the directory indicates from which node the data may be obtained.
On a write miss: the directory identifies the shared copies to be invalidated or updated.
There are many alternatives for organizing the directory.

Directory Information
[Figure: per-block directory entry at each node holding a dirty bit and presence bits for all nodes]
A simple organization of a directory entry is a bit vector of p presence bits, one for each of the p nodes, indicating which nodes are sharing that block, plus one or more state bits per block reflecting the memory view.
One dirty bit indicates whether the block is modified; if the dirty bit is 1, the block is in modified state in exactly one node.

Definitions
Home Node: node in whose main memory the block is allocated.
Dirty Node: node that has a copy of the block in its cache in modified state.
Owner Node: node that currently holds the valid copy of the block; can be either the home node or the dirty node.
Exclusive Node: node that has the block in its cache in an exclusive state, either exclusive-clean or exclusive-modified.
Local or Requesting Node: node that issued the request for the block.
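
A minimal C sketch of the full-bit-vector directory entry described above, assuming 64 nodes so that the presence bits fit in one 64-bit word; the type and helper names (dir_entry_t, dir_add_sharer) are illustrative, not from the slides.

#include <stdbool.h>
#include <stdint.h>

#define NUM_NODES 64   /* assumed machine size; one presence bit per node */

/* Full-bit-vector directory entry: one per memory block at the home node. */
typedef struct {
    uint64_t presence;  /* bit i set => node i holds a cached copy        */
    bool     dirty;     /* set => exactly one node holds a modified copy  */
} dir_entry_t;

/* Mark node i as a sharer of the block. */
static inline void dir_add_sharer(dir_entry_t *e, int i) {
    e->presence |= (uint64_t)1 << i;
}

/* True if node i currently holds a copy. */
static inline bool dir_is_sharer(const dir_entry_t *e, int i) {
    return (e->presence >> i) & 1;
}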

Basic Operation on a Read Miss
On a read miss by processor i, the requestor sends a read request message to the home node, whose assist looks up the directory entry.
If the dirty bit is OFF: the home node reads the block from memory, sends a data reply message to the requestor, and turns Presence[i] ON.
If the dirty bit is ON: the home node responds with the identity of the dirty node; the requestor sends a read request message to the dirty node; the dirty node sends a data reply message to the requestor and also sends a revision message to the home node; the dirty node's cache block state is changed to shared; the home node updates its main memory and directory entry, turns the dirty bit OFF, and turns Presence[i] ON.

Read Miss to a Block in Dirty State
The numbers 1, 2, and so on show the serialization of transactions; letters on the same number indicate overlapped transactions.
[Figure: (1) read request from requestor to home node; (2) response with owner identity; (3) read request from requestor to the dirty node; (4a) data reply from the dirty node to the requestor, in parallel with (4b) a revision message from the dirty node to the home node]
Total of 5 transactions; messages 4a and 4b are performed in parallel.
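
A possible home-side handler for the read-miss cases above, building on the dir_entry_t sketch; the send and lookup hooks are assumed placeholders, not a real assist interface.

/* Placeholder hooks (assumed, not a real assist API). */
extern void     send_data_reply(int node, uint64_t data);
extern void     send_owner_identity(int node, int owner);
extern uint64_t read_memory(uint64_t block_addr);
extern int      dir_owner(const dir_entry_t *e);   /* node whose presence bit is set */

/* Home-node handling of a read miss from `requestor`, following the two
   cases on the slide (dirty bit OFF vs. ON). */
void home_handle_read_miss(dir_entry_t *e, int requestor, uint64_t block_addr) {
    if (!e->dirty) {
        /* Memory copy is valid: reply with data and record the new sharer. */
        send_data_reply(requestor, read_memory(block_addr));
        dir_add_sharer(e, requestor);
    } else {
        /* Memory is stale: point the requestor at the dirty node. The dirty
           node replies with data, downgrades to shared, and sends a revision
           message; on receiving it, the home updates memory, clears the dirty
           bit, and sets Presence[requestor]. */
        send_owner_identity(requestor, dir_owner(e));
    }
}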

Basic Operation on a Write Miss
On a write miss by processor i, the requestor sends a read-exclusive message to the home node, whose assist looks up the directory entry.
If the dirty bit is OFF: the home node sends a data reply message to processor i containing the data and the presence bits identifying all sharers; the requestor then sends invalidation messages to all sharers; the sharer nodes invalidate their cached copies and send acknowledgement messages to the requestor.
If the dirty bit is ON: the home node sends a response message identifying the dirty node; the requestor sends a read-exclusive message to the dirty node; the dirty node sends a data reply message to processor i and changes its cache state to Invalid.
In both cases, the home node clears the presence bits but turns ON Presence[i] and the dirty bit.

Write Miss to a Block with Sharers
The protocol is orchestrated by the assists, also called coherence or directory controllers.
[Figure: (1) read-exclusive request from requestor to home node; (2) data reply with the sharers' identities; (3a, 3b) invalidation requests from the requestor to each sharer; (4a, 4b) invalidation acknowledgements from the sharers back to the requestor]
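
Continuing the same sketch, a possible home-side handler for the write-miss (read-exclusive) cases described above; again the helpers are illustrative placeholders.

/* Placeholder hook for the write path (assumed, not a real API). */
extern void send_exclusive_reply(int node, uint64_t block_addr, uint64_t sharers);

/* Home-node handling of a read-exclusive (write miss) from `requestor`. */
void home_handle_write_miss(dir_entry_t *e, int requestor, uint64_t block_addr) {
    if (!e->dirty) {
        /* Reply with data plus the current sharer set; the requestor then
           invalidates each sharer and collects their acknowledgements. */
        send_exclusive_reply(requestor, block_addr, e->presence);
    } else {
        /* Point the requestor at the dirty node, which supplies the data
           and invalidates its own copy. */
        send_owner_identity(requestor, dir_owner(e));
    }
    /* In both cases the requestor ends up as the sole, modified owner. */
    e->presence = (uint64_t)1 << requestor;
    e->dirty = true;
}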

Data Sharing Patterns
Sharing patterns provide insight into directory requirements. If most misses involved O(P) transactions, then broadcast might be a good solution; however, there are generally few sharers at a write.
Two aspects of data sharing are important to understand:
Frequency of shared writes (invalidating writes), that is, write misses or writes to a block in the shared state; called the invalidation frequency in invalidation-based protocols.
Distribution of sharers, called the invalidation size distribution.
The invalidation size distribution also provides insight into how to organize and store directory information.

Cache Invalidation Patterns
Simulation running on 64 processors with the MSI protocol; the figures show the invalidation size distribution as a percentage of shared writes.
[Figure: LU invalidation patterns, dominated by the first two bars (91.22% and 8.75% of shared writes), and Ocean invalidation patterns, dominated by the first two bars (80.98% and 15.06%), each with a negligible tail out to 63 invalidations]

Cache Invalidation Patterns (cont'd)
Infinite per-processor caches are used to capture the inherent sharing patterns.
[Figure: Barnes-Hut invalidation patterns (largest bars 48.35%, 22.87%, and 10.56% at small invalidation counts, with a long small tail) and Radiosity invalidation patterns (largest bars 58.35%, 12.04%, and 6.68%, with a small tail out to 63 invalidations)]

Framework for Sharing Patterns
Shared-data access patterns can be categorized:
Code and read-only data structures are never written, so they are not an issue for directories.
Producer-consumer: one processor produces data and others consume it; the invalidation size is determined by the number of consumers.
Migratory data: data migrates from one processor to another, for example when computing a global sum, where the sum migrates; the invalidation size is small (typically 1) even as P scales.
Irregular read-write: for example a distributed task queue, where processes probe the head pointer; invalidations usually remain small, though frequent.

Sharing Patterns Summary
Generally there are few sharers at a write, and the number scales slowly with P.
A write may send 0 invalidations in the MSI protocol, since the block is loaded in shared state; this would not happen in the MESI protocol.
Infinite per-processor caches are used to capture inherent sharing patterns. Finite caches would send replacement hints on block replacement, which turn off presence bits and reduce the number of invalidations; however, traffic is not reduced.
There are non-zero frequencies of very large invalidation sizes, due to all processors spinning on a synchronization variable.

Alternatives for Directories
Directory schemes can be centralized or distributed.
Finding the source of directory information: flat or hierarchical schemes.
Locating copies:
Memory-based: directory information is co-located with the memory module in the home node for each memory block. Examples: Stanford DASH/FLASH, MIT Alewife, SGI Origin, HAL.
Cache-based: the caches holding a copy of the memory block form a doubly linked list, and the directory holds a pointer to the head of the list. Examples: IEEE SCI, Sequent NUMA-Q.

Flat Memory-Based Schemes
Information about cached (shared) copies is stored with the block at the home node.
Performance scaling: traffic on a shared write is proportional to the number of sharers; latency on a shared write is low because invalidations can be issued in parallel.
Simplest representation: one presence bit per node. Storage overhead is proportional to P*M, for P nodes and M memory blocks, ignoring state bits.
Directory storage overhead scales poorly with P. Given a 64-byte cache block size:
64 nodes: 64 presence bits / 64 bytes = 12.5% overhead
256 nodes: 256 presence bits / 64 bytes = 50% overhead
1024 nodes: 1024 presence bits / 64 bytes = 200% overhead

Reducing Directory Storage
Optimizations for full-bit-vector schemes:
Increase the cache block size, which reduces the storage overhead proportionally.
Use more than one processor (an SMP) per node, so the presence bit is per node rather than per processor.
The overhead still scales as P*M, but this is reasonable and simple enough for all but very large machines.
Example: 256 processors, 4-processor nodes, 128-byte lines. Overhead = (256/4) / (128*8) = 64 / 1024 = 6.25% (attractive); the arithmetic is worked in the short example below.
To go further we need to reduce the width (addressing the P term) and the height (addressing the M term) of the directory.
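
A small self-contained C example reproducing the overhead arithmetic from the two slides above; the function and variable names are illustrative only.

#include <stdio.h>

/* Full-bit-vector directory overhead: one presence bit per node per block,
   expressed as a fraction of the data bits in each block. */
static double dir_overhead(int processors, int procs_per_node, int block_bytes) {
    int nodes = processors / procs_per_node;     /* one presence bit per node */
    return (double)nodes / (block_bytes * 8.0);  /* presence bits / data bits */
}

int main(void) {
    printf("64 nodes,   64B blocks:  %.1f%%\n", 100 * dir_overhead(64,   1,  64));  /* 12.5%  */
    printf("256 nodes,  64B blocks:  %.1f%%\n", 100 * dir_overhead(256,  1,  64));  /* 50.0%  */
    printf("1024 nodes, 64B blocks:  %.1f%%\n", 100 * dir_overhead(1024, 1,  64));  /* 200.0% */
    printf("256 procs, 4 per node, 128B blocks: %.2f%%\n",
           100 * dir_overhead(256, 4, 128));                                        /* 6.25%  */
    return 0;
}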

Directory Storage Reductions
Width observation: most blocks are cached (shared) by only a few nodes, so we don't need a bit per node. Sharing patterns indicate that a few pointers should suffice: an entry can contain a few (5 or so) pointers to sharing nodes. With P = 1024, pointers are 10 bits, so 5 pointers need only 50 bits rather than 1024 bits. An overflow strategy is needed when there are more sharers.
Height observation: the number of memory blocks >> the number of cache blocks, so most directory entries are useless at any given time. The directory can be organized as a cache rather than having one entry per memory block.

Limited Pointers
Dir_i: only i pointers are used per entry. This works in most cases, when the number of sharers <= i; an overflow mechanism is needed when the number of sharers > i, and an overflow bit indicates that the number of sharers exceeds i (a limited-pointer entry is sketched in code below).
Overflow methods include: Broadcast, No-broadcast, Coarse Vector, Software Overflow, and Dynamic Pointers.
[Figure: a directory entry with an overflow bit of 0 and 2 pointers, pointing to nodes 9 and 7 of a 16-node machine]
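
A minimal C sketch of a limited-pointer (Dir_i) entry with i = 2, matching the slide's figure; field names are illustrative.

#include <stdbool.h>
#include <stdint.h>

#define DIR_PTRS 2          /* i = 2 pointers per entry, as in the figure */

/* Limited-pointer (Dir_i) entry: instead of one presence bit per node,
   keep up to DIR_PTRS node IDs plus an overflow bit. */
typedef struct {
    uint16_t ptr[DIR_PTRS]; /* node IDs of up to DIR_PTRS sharers          */
    uint8_t  count;         /* how many pointers are currently valid       */
    bool     overflow;      /* set when sharers exceeded DIR_PTRS          */
    bool     dirty;
} dir_limited_t;

/* Record a new sharer; returns false when the entry overflows and one of
   the overflow methods (broadcast, coarse vector, software, ...) must
   take over. */
static bool dir_limited_add(dir_limited_t *e, uint16_t node) {
    if (e->count < DIR_PTRS) {
        e->ptr[e->count++] = node;
        return true;
    }
    e->overflow = true;
    return false;
}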

Overflow Methods
Broadcast (Dir_i B): the broadcast bit is turned on upon overflow, and invalidations are broadcast to all nodes when the block is written, regardless of whether or not they were sharing the block. Network bandwidth may be wasted for overflow cases, and latency increases if the processor waits for acknowledgements.
No-Broadcast (Dir_i NB): avoids broadcast by never allowing the number of sharers to exceed i. When the number of sharers is equal to i, a new sharer replaces (invalidates) one of the old ones and frees up its pointer in the directory entry. Drawback: it does not deal well with widely shared data.

Coarse Vector Overflow Method
Coarse vector (Dir_i CV_r) uses i pointers in its initial representation, but on overflow changes the representation to a coarse vector in which each bit indicates a unique group of r nodes. On a write, all r nodes that a set bit corresponds to are invalidated (see the sketch below).
[Figure: before overflow, a directory entry with 2 pointers to nodes 9 and 7; after overflow, the same bits reinterpreted as an 8-bit coarse vector over a 16-node machine, one bit per group of 2 nodes]
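
A short C sketch of the coarse-vector conversion for the 16-node, 2-nodes-per-group case in the figure; names and sizes are illustrative assumptions.

#include <stdint.h>

#define CV_NODES   16    /* machine size from the figure          */
#define CV_GROUP    2    /* r: nodes covered by each coarse bit   */

/* On overflow of a limited-pointer entry, reinterpret the sharer set as a
   coarse vector: bit g covers nodes g*CV_GROUP .. g*CV_GROUP + CV_GROUP-1. */
static uint8_t coarse_vector_from_pointers(const uint16_t *ptr, int count) {
    uint8_t cv = 0;
    for (int k = 0; k < count; k++)
        cv |= (uint8_t)(1u << (ptr[k] / CV_GROUP));   /* mark the group */
    return cv;
}

/* On a write, every node in every marked group receives an invalidation,
   even nodes that never actually cached the block. */
static void invalidate_coarse(uint8_t cv, void (*send_inval)(int node)) {
    for (int g = 0; g < CV_NODES / CV_GROUP; g++)
        if (cv & (1u << g))
            for (int n = g * CV_GROUP; n < (g + 1) * CV_GROUP; n++)
                send_inval(n);
}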

Robustness of Coarse Vector
[Figure: normalized invalidations (y-axis 0 to 800) for Dir_4 B, Dir_4 NB, and Dir_4 CV_4 on LocusRoute, Cholesky, and Barnes-Hut]
Setup: 64 processors (one per node), 4 pointers per entry, a 16-bit coarse vector with 4 processors per group, normalized to the full bit vector (100 invalidations).
Conclusion: the coarse vector scheme is quite robust.

Software Overflow Schemes
Software (Dir_i SW): on overflow, trap to system software; the overflow pointers are saved in local memory, which frees the directory entry to handle i new sharers in hardware. Used by MIT Alewife (5 pointers, plus one bit for the local node). The drawback is the large overhead of interrupt processing: 84 cycles for 5 invalidations, but 707 cycles for 6.
Dynamic Pointers (Dir_i DP) is a variation of Dir_i SW: each directory entry contains an extra pointer into local memory, which uses a free list in a special area of memory; the free list is manipulated by hardware, not software. Example: Stanford FLASH multiprocessor.

Reducing Directory Height
Sparse directory: reducing the M term in P*M.
Observation: cached entries << main memory, so most directory entries are unused most of the time. Example: with a 2MB cache and 512MB of local memory per node, 510/512 or 99.6% of directory entries are unused.
Organize the directory as a cache to save space: dynamically allocate directory entries, as cache lines. This also allows the use of faster SRAMs instead of slower DRAMs, reducing the access time to directory information on the critical path.
When an entry is replaced, invalidations are sent to all sharers. Since the sparse directory handles references from potentially all processors, it is essential that it be large enough and have enough associativity.

Alternatives for Directories
(The taxonomy slide is repeated here: centralized vs. distributed schemes, flat vs. hierarchical for finding the source of directory information, and memory-based vs. cache-based for locating copies, with the same examples as before.)
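
A rough C sketch of a sparse directory organized as a set-associative cache, as described above; the sizes, the simplistic victim choice, and send_invalidations() are all assumptions for illustration.

#include <stdbool.h>
#include <stdint.h>

#define DIR_SETS 4096
#define DIR_WAYS    4   /* associativity; illustrative numbers only */

/* One sparse-directory entry, allocated on demand like a cache line. */
typedef struct {
    bool     valid;
    uint64_t tag;        /* identifies which memory block this entry tracks */
    uint64_t presence;   /* full bit vector kept only for tracked blocks    */
    bool     dirty;
} sparse_dir_entry_t;

static sparse_dir_entry_t sparse_dir[DIR_SETS][DIR_WAYS];

/* Find or allocate the entry for a block; on a conflict, the victim's
   sharers must all be invalidated before the entry is reused. */
sparse_dir_entry_t *sparse_dir_lookup(uint64_t block,
                                      void (*send_invalidations)(uint64_t presence)) {
    uint64_t set = block % DIR_SETS, tag = block / DIR_SETS;
    for (int w = 0; w < DIR_WAYS; w++)
        if (sparse_dir[set][w].valid && sparse_dir[set][w].tag == tag)
            return &sparse_dir[set][w];
    /* Miss: evict way 0 (a real design would pick an LRU victim). */
    sparse_dir_entry_t *victim = &sparse_dir[set][0];
    if (victim->valid && victim->presence)
        send_invalidations(victim->presence);
    victim->valid = true;  victim->tag = tag;
    victim->presence = 0;  victim->dirty = false;
    return victim;
}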

Flat Cache-Based Schemes
How they work: the home node only holds a pointer to the rest of the directory information, a distributed doubly linked list that weaves through the caches. Each cache line has two associated pointers, pointing to the next and previous nodes with a copy. On a read, the requestor adds itself to the head of the list; on a write, a chain of invalidations is propagated down the list (see the sketch below).
Scalable Coherent Interface (SCI), an IEEE standard, uses this doubly linked list.
[Figure: home node directory pointing to the head of a doubly linked list through the caches of nodes A, B, and C]

Scaling Properties (Cache-Based)
Traffic on a write: proportional to the number of sharers.
Latency on a write: also proportional to the number of sharers, since the identity of the next sharer is not known until the current one is reached, and assist processing is required at each node along the way. Even reads involve more than one assist: the home and the first sharer on the list.
Storage overhead: quite good scaling along both axes; only one head pointer per directory entry, and the rest is proportional to cache size. But the hardware implementation is complex.
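
A C sketch of the distributed sharing list in the spirit of SCI; the field and function names are illustrative, not the actual SCI encoding, and the pointer updates that are really protocol messages are shown as direct writes for clarity.

#include <stdint.h>

/* Per-cache-line sharing-list pointers kept alongside each cached copy. */
typedef struct {
    int next_node;        /* node ID of the next sharer, or -1 at the tail    */
    int prev_node;        /* node ID of the previous sharer, or -1 at head    */
} cache_line_dir_t;

typedef struct {
    int head_node;        /* home directory: only a pointer to the list head  */
} home_dir_entry_t;

/* On a read, the requestor inserts itself at the head of the list. */
void sci_add_reader(home_dir_entry_t *home, cache_line_dir_t lines[], int requestor) {
    int old_head = home->head_node;
    lines[requestor].next_node = old_head;
    lines[requestor].prev_node = -1;
    if (old_head >= 0)
        lines[old_head].prev_node = requestor;
    home->head_node = requestor;
}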

Alternatives for Directories
(The taxonomy slide is repeated again before the discussion of hierarchical schemes.)

Finding Directory Information
Flat schemes: the directory is distributed with the memory, and the information is at the home node; the location is based on the address, so the transaction is sent to the home.
Hierarchical schemes: the directory is organized as a hierarchical data structure in which the leaves are processing nodes and the internal nodes hold only directory information; a directory entry says whether its subtree caches a block. To find directory information, a search message is sent up to the parent and routes itself through directory lookups, using point-to-point messages between children and parents.

Locating Shared Copies of a Block
Flat schemes:
Memory-based schemes: information about all copies is stored at the home directory. Examples: Stanford DASH/FLASH, MIT Alewife, SGI Origin.
Cache-based schemes: information about copies is distributed among the copies themselves; inside the caches, each copy points to the next to form a linked list. Example: Scalable Coherent Interface (SCI, an IEEE standard).
Hierarchical schemes: copies are located through the directory hierarchy; each directory entry has presence bits for its parent and children subtrees.

Hierarchical Directories
The directory is a hierarchical data structure: leaves are processing nodes, and internal nodes are just directories. It is a logical hierarchy, not necessarily a physical one, and it can be embedded in a general network topology.
[Figure: a tree with processing nodes at the leaves, level-1 directories above them, and a level-2 directory at the root. A level-1 directory tracks which of its children processing nodes have a copy of a memory block, and also which local memory blocks are cached outside its subtree; inclusion is maintained between the level-1 directory and the processor caches. A level-2 directory does the same with respect to its children level-1 directories, and inclusion is maintained between the level-2 and level-1 directories.]

Two-Level Hierarchy
Individual nodes are multiprocessors, for example a mesh of SMPs. Coherence across nodes is directory-based, and the directory keeps track of nodes, not individual processors; coherence within a node is snooping or directory. The two levels are orthogonal, but they need a good interface of functionality.
Examples: Convex Exemplar (directory-directory); Sequent, Data General, HAL (directory-snoopy).

Examples of Two-Level Hierarchies
[Figure: four organizations: (a) snooping-snooping, bus-based nodes with snooping adapters connected by a bus B2; (b) snooping-directory, bus-based nodes with assists and directories connected by a network; (c) directory-directory, network-based nodes with directory adapters connected by a second-level network; (d) directory-snooping, network-based nodes with directory/snoopy adapters connected by a bus or ring]

Hierarchical Approaches: Snooping
Extend the snooping approach to a hierarchy of broadcast media: a tree of buses or rings. Processors are in the bus- or ring-based multiprocessors at the leaves; parents and children are connected by two-way snoopy interfaces that snoop both buses and propagate relevant transactions. Main memory may be centralized at the root or distributed among the leaves.
Misses are handled similarly to a bus, but without a full broadcast: the faulting processor sends out a search bus transaction on its own bus, which propagates up and down the hierarchy based on snoop results.
Problems: high latency (multiple levels, with a snoop/lookup at every level) and a bandwidth bottleneck at the root. Not popular today.

Advantages of Multiprocessor Nodes
Potential for cost and performance advantages: amortization of node fixed costs over multiple processors, use of commodity SMPs and CMPs, fewer nodes for the directory to keep track of, and much communication may be contained within a node. Processors within a node prefetch data for each other and can share caches (overlapping of working sets).
The benefits depend on the sharing pattern (and the mapping): good for widely read-shared data and for nearest-neighbor communication if properly mapped, but not so good for all-to-all communication.

Protocol Design Optimizations
Reduce bandwidth demands by reducing the number of protocol transactions per operation.
Reduce latency by reducing the network transactions in the critical path, and by overlapping network activities or making them faster.
Reduce endpoint assist occupancy per transaction, especially when the assists are programmable.
Traffic, latency, and occupancy should not scale up too quickly with the number of nodes.

Reducing Read Miss Latency
Notation: L is the local or requesting node, H the home node, D the dirty node.
Strict request-response: (1) L sends a request to H; (2) H responds with the owner's identity; (3) L sends an intervention to D; (4a) D sends a revision message to H and (4b) a data reply to L. 4 transactions in the critical path, 5 in all.
Intervention forwarding: H forwards the intervention directly to the owner's cache and keeps track of the requestor: (1) request from L to H; (2) intervention from H to D; (3) revision from D to H; (4) data reply from H to L. 4 transactions in all.
Reply forwarding: the owner replies directly to the requestor: (1) request from L to H; (2) intervention from H to D; (3a) revision from D to H and (3b) data reply from D to L. 3 transactions in the critical path.

Reducing Invalidation Latency
In a cache-based protocol, invalidations are sent from the home to all sharer nodes S_i (s = number of sharers).
Strict request-response: the home sends each invalidation and receives each acknowledgement in turn: (1) inval H to S1, (2) ack, (3) inval H to S2, (4) ack, (5) inval H to S3, (6) ack. 2s transactions in total and 2s in the critical path.
Invalidation forwarding: each sharer forwards the invalidation to the next sharer while sending its acknowledgement to the home: (1) inval H to S1; (2a) inval S1 to S2 and (2b) ack S1 to H; (3a) inval S2 to S3 and (3b) ack S2 to H; (4) ack S3 to H. s acknowledgements to the home, s+1 transactions in the critical path.
Single acknowledgement: the invalidation is forwarded along the chain and only the last sharer acknowledges: (1) inval H to S1, (2) inval S1 to S2, (3) inval S2 to S3, (4) ack S3 to H. s+1 transactions in total.

Correctness
Ensure the basics of coherence at the state-transition level: the relevant lines are invalidated, updated, and retrieved, and the correct state transitions and actions happen.
Ensure serialization and ordering constraints: serialization for coherence (on a single location) and atomicity for consistency (across multiple locations).
Avoid deadlocks, livelocks, and starvation.
Problems: there are multiple copies and multiple paths through the network; the large latency makes optimizations attractive, but optimizations complicate correctness.

Serialization for Coherence
Serialization means that writes to a given location are seen in the same order by all processors.
In a bus-based system there are multiple copies, but write serialization is imposed by the bus.
In a scalable multiprocessor with cache coherence, the home node can impose serialization on a given location, since all relevant operations go to the home node first. But the home node cannot satisfy all requests: the valid copy may be in a dirty node, and requests may reach the home node in one order but the dirty node in a different order; then writes to the same location are not seen in the same order by all.

Ensuring Serialization
Use additional busy or pending directory states to indicate that a previous operation is in progress; further operations on the same location must be delayed.
Serialization can be ensured using one of the following:
Buffer at the home node until the previous (in-progress) request has completed.
Buffer at the requestor nodes, by constructing a distributed list of pending requests.
NACK and retry: a negative acknowledgement is sent to the requestor, and the request is retried later by the requestor's assist.
Forward to the dirty node: the dirty node determines the serialization while the block is dirty.

Write Atomicity Problem
[Figure: three processors P1, P2, P3 with caches and memories on an interconnection network. P1 executes A=1; P2 executes while (A==0); B=1; P3 executes while (B==0); print A. P2's cache already sees A: 0->1 and sets B: 0->1, but the propagation of A=1 toward P3 is delayed in the network.]
If the write A=1 is not atomic with respect to all processors, P3 can observe the write of B (which P2 performed only after seeing A=1) while still holding a stale A=0, and print 0; a worked code version of this example is sketched below.

Ensuring Write Atomicity
With multiple copies and a distributed interconnect, in an invalidation-based scheme a block can be shared by multiple nodes. The owner provides the appearance of write atomicity by not allowing read access to the new value until all shared copies are invalidated and the invalidations are acknowledged.
This is much harder in update schemes, since the data is sent directly to the sharers and is accessible immediately.
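
A minimal, runnable threads version of the three-processor example above (compile with -pthread). It uses C11 sequentially consistent atomics, which is exactly the guarantee the coherence protocol's write atomicity is meant to provide, so this version always prints A = 1; the thread and variable names are illustrative.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int A = 0, B = 0;

static void *p1(void *arg) { atomic_store(&A, 1); return arg; }

static void *p2(void *arg) {
    while (atomic_load(&A) == 0) ;   /* wait until A's new value is visible */
    atomic_store(&B, 1);
    return arg;
}

static void *p3(void *arg) {
    while (atomic_load(&B) == 0) ;   /* wait for B, written after P2 saw A  */
    printf("A = %d\n", atomic_load(&A));   /* must print 1 if writes are atomic */
    return arg;
}

int main(void) {
    pthread_t t1, t2, t3;
    pthread_create(&t3, NULL, p3, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL); pthread_join(t2, NULL); pthread_join(t3, NULL);
    return 0;
}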

Deadlock, Livelock, and Starvation
Potential source of deadlock: a node may receive too many messages; if there is not sufficient buffer space, flow control causes deadlock. Possible solutions for buffer deadlock: provide enough buffer space or spill into main memory, use NACKs (negative acknowledgments), or provide separate request and response networks with separate buffers.
Potential source of livelock: NACKs, which can cause livelock and traffic problems.
Potential source of starvation: unfairness in handling requests; a solution is to buffer all requests in FIFO order.

Summary of Directory Organizations
Flat schemes:
Issue (a), finding the source of directory data: go to the home, based on the address.
Issue (b), finding out where the copies are: memory-based schemes keep all the information in the directory at the home; cache-based schemes give the home a pointer to a distributed linked list.
Issue (c), communicating with those copies: memory-based schemes use point-to-point messages (coarser on overflow), which can be multicast or overlapped; cache-based schemes use a point-to-point linked-list traversal to find them.
Hierarchical schemes: all three issues are handled by sending messages up and down the tree; there is no single explicit list of sharers, and the only direct communication is between parents and children.

SGI Origin Case Study
[Figure: two nodes, each with two processors having L2 caches (1-4 MB) on a SysAD bus, a Hub connected to the directory and main memory (1-4 GB), an Xbow for I/O, and a router into the interconnection network]
Each node consists of two MIPS R10k processors with first- and second-level caches, a fraction of the total main memory and directory, and a Hub: the communication assist or coherence controller.

SGI Origin System Overview
The directory is in DRAM and is accessed in parallel with memory.
The Hub is the communication assist: it implements the network interface and the coherence protocol, sees all second-level cache misses, and connects to the router chip, to the Xbow for I/O, and to memory.
The SysAD (System Address/Data) bus is shared by the two processors, but there is no snooping coherence between them; coherence is maintained through the directory.

Origin Network
[Figure: Origin network configurations at 4, 8, 16, 32, and 64 nodes; the larger configurations use meta-routers]
Scales up to 512 nodes or 1024 processors.

Origin Network (cont'd)
The interconnection network has a hypercube topology with up to 32 nodes (64 processors), and a fat-cube topology beyond 64 processors, up to 512 nodes or 1024 processors.
Each router has six pairs of 1.56 GB/s links: two to nodes, four to other routers. Latency is 41 ns pin to pin across a router. Flexible cables can be up to 3 ft long.
There are four virtual channels per physical link: request, reply, and two others for priority or I/O.

Origin Cache and Directory States
Cache states: MESI.
Seven directory states:
Unowned: no cache has a copy, and the memory copy is valid.
Exclusive: one cache has the block in M or E state; memory may or may not be up to date.
Shared: multiple (possibly zero) caches have a shared copy; the shared copies are clean and memory has a valid copy.
Three Busy states, one for each type of pending request (read, read-exclusive or upgrade, and uncached read): the home has received a previous request for the block, couldn't satisfy it itself, sent it to another node, and is waiting; it cannot take another request for the block yet.
Poisoned state, used for efficient page migration.

Origin Directory Structure
Flat and memory-based: the directory information is at the home. There are three directory formats:
1) If exclusive, the entry is a pointer to a specific processor (not a node).
2) If shared, the entry is a bit vector in which each bit points to a node, not a processor; an invalidation sent to a Hub is broadcast to both of its processors. There is a 16-bit format (32 processors) kept in main memory DRAM, and a 64-bit format (128 processors) that uses extended directory memory and 128-byte blocks.
3) For larger machines, a coarse bit vector: each bit corresponds to n/64 nodes (64-bit format), and invalidations are sent to all Hubs and processors in a group.
The machine chooses the format dynamically, for example when an application is confined to 64 nodes or less.
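
The cache and directory state sets described above, written out as C enums. This mirrors the slide's description only; it is not the actual bit-level encoding used by the Origin Hub.

typedef enum { CACHE_M, CACHE_E, CACHE_S, CACHE_I } origin_cache_state_t;

typedef enum {
    DIR_UNOWNED,        /* no cached copy; memory valid                   */
    DIR_EXCLUSIVE,      /* one cache holds the block in M or E            */
    DIR_SHARED,         /* zero or more clean shared copies; memory valid */
    DIR_BUSY_READ,      /* pending read forwarded to another node         */
    DIR_BUSY_RDEX,      /* pending read-exclusive or upgrade              */
    DIR_BUSY_UNCACHED,  /* pending uncached read                          */
    DIR_POISONED        /* used for efficient page migration              */
} origin_dir_state_t;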

Handling a Read Miss
The Hub examines the address: if remote, it sends the request to the home node; if local, it looks up the local directory entry and memory. The memory block is fetched in parallel with the directory entry (called speculative data access), and the directory lookup returns one cycle earlier. If the directory is unowned or shared, it's a win: the data has already been obtained by the Hub.
The directory may indicate one of many states. If the directory is in the Unowned state: it goes to the Exclusive state (pointer format) and replies with the data block to the requestor (an exclusive data reply); there are no network transactions if the home is local.
[Diagram: requestor cache state I -> E; (1) read request to home; (2) exclusive data reply; directory state U -> E]

Read Block in Shared / Busy State
If the directory is in the Shared state: the directory sets the presence bit of the requesting node and replies with the data block to the requestor (a shared data reply). This is strict request-reply, and the block is cached in the shared state.
[Diagram: requestor cache state I -> S; (1) read request; (2) shared data reply; directory stays S]
If the directory is in a Busy state: the directory is not ready to handle the request, so it sends a NACK to the requestor, asking it to try again, so as not to buffer the request at the home. A NACK is a response that does not carry data.
[Diagram: requestor cache state stays I; (1) read request; (2) NACK; directory stays Busy]
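
A rough C sketch of how the home Hub might dispatch an incoming read request by directory state, building on origin_dir_state_t above and including the exclusive case covered on the next slide; the reply, NACK, and forwarding helpers are placeholders, not the Origin interface.

/* Placeholder protocol actions (assumed, not the real Hub interface). */
void send_exclusive_data_reply(int node);
void send_shared_data_reply(int node);
void send_speculative_reply(int node);
void send_nack(int node);
void set_presence_bit(int node);
void forward_intervention_to_owner(int requestor);

void home_handle_read(origin_dir_state_t *state, int requestor) {
    switch (*state) {
    case DIR_UNOWNED:
        *state = DIR_EXCLUSIVE;                 /* pointer format: sole owner */
        send_exclusive_data_reply(requestor);
        break;
    case DIR_SHARED:
        set_presence_bit(requestor);
        send_shared_data_reply(requestor);      /* strict request-reply       */
        break;
    case DIR_BUSY_READ: case DIR_BUSY_RDEX: case DIR_BUSY_UNCACHED:
        send_nack(requestor);                   /* retry later; nothing is buffered */
        break;
    case DIR_EXCLUSIVE:
        *state = DIR_BUSY_READ;                 /* reply forwarding: speculative
                                                   reply plus intervention to owner */
        send_speculative_reply(requestor);
        forward_intervention_to_owner(requestor);
        break;
    default:
        break;
    }
}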

Read Block in Exclusive State
If the directory is in the Exclusive state (the interesting case): if the home is not the owner, the data must be obtained from the owner, both for the home and for the requestor. Origin uses reply forwarding for the lowest latency and traffic: the directory state is set to busy-shared, the home optimistically sends a speculative reply to the requestor, and at the same time the request is forwarded as an intervention to the exclusive node (the owner). The owner then sends a revision message to the home and a reply to the requestor.
[Diagram: (1) read request from L to H; (2a) speculative reply to L and (2b) intervention to the owner; (3a) shared reply from the owner to L and (3b) revision to H; requestor goes I -> S, owner goes E/M -> S, directory goes E -> Busy -> S]

Read Block in Exclusive State (2)
At the home node: set the directory to a busy state so that subsequent requests are NACKed, set the requestor's presence bit, assume the block is clean, and send a speculative reply.
At the owner node: if the block is dirty, send a data reply to the requestor and a sharing writeback to the home, both messages carrying the modified block data. If the block is clean, the messages are similar but don't carry data; the revision message to the home is then called a downgrade.
The home changes state from busy-shared to shared when it receives the revision message.

Speculative Replies
Why speculative replies? The requestor needs to wait for the reply from the owner anyway, so there are no latency savings, and the data could simply always be obtained from the owner. There are two reasons, both tied to the exclusive-clean state:
1. The L2 cache controller does not reply with data when the block is in the exclusive-clean state in its cache, so the data must come from the home (the speculative reply).
2. It enables a protocol optimization: there is no need to send notification messages back to the home when exclusive-clean blocks are replaced in caches; the home supplies the data speculatively and then checks.

Handling a Write Miss
The request to the home can be read-exclusive or upgrade: read-exclusive when the block is not present in the cache (requesting both data and ownership), upgrade when writing a block already in the shared state (requesting ownership only).
Copies in other caches must be invalidated, except when the directory state is Unowned, and the invalidations must be explicitly acknowledged. The home node updates the directory and sends the invalidations, including the requestor's identity in them so that acknowledgements are sent back to the requestor directly.

Write to Block in Unowned State
If read-exclusive: the directory state is changed to Exclusive and the directory entry is set to point to the requestor; the home node sends a data reply to the requestor, and the requestor's cache changes state to Modified.
[Diagram: requestor I -> M; (1) RdEx request; (2) exclusive data reply; directory U -> E]
If upgrade: the expected directory state is Shared, but finding it Exclusive means the block has been invalidated in the requestor's cache and the directory has been notified; this can happen in the Origin protocol when someone else has upgraded the same block. The home sends a NACK reply: the request should be retried as a read-exclusive.
[Diagram: the requestor goes S -> I after an invalidation caused by another node's upgrade (0); (1) upgrade request; (2) NACK; directory S -> E due to the other upgrade]

Write to Block in Shared State
At the home: set the directory state to exclusive and set the directory pointer to the requestor processor; this ensures that subsequent requests are forwarded to the requestor. The home sends invalidation requests to all sharers, which acknowledge the requestor (not the home).
[Diagram: (1) RdEx/upgrade request from requestor to home; (2a) exclusive reply or upgrade ack to the requestor, (2b) and (2c) invalidation requests to sharers S1 and S2; (3a) and (3b) invalidation acks from the sharers to the requestor; requestor I/S -> M, sharers S -> I, directory S -> E]

Write to Block in Shared State (2)
If read-exclusive, the home sends an exclusive reply with invalidations pending to the requestor; the reply contains the data and the number of sharers to be invalidated. If upgrade, the home sends an upgrade ack with invalidations pending (no data).
At the requestor: wait for all acks to come back before closing the operation. A subsequent request for the block at the home is forwarded as an intervention to the requestor; for proper serialization, the requestor does not handle the intervention until all invalidation acks have been received for its outstanding request.

Write to Block in Exclusive State
If upgrade: the expected directory state is Shared, but another write has beaten this one, so the request is no longer valid; the home responds with a NACK, and the request should be retried as a read-exclusive.
[Diagram: the requestor goes S -> I after an invalidation caused by another node's read-exclusive (0); (1) upgrade request; (2) NACK; directory S -> E]
If read-exclusive: set the directory state to busy-exclusive and send a speculative reply; send an invalidation to the owner with the identity of the requestor.
[Diagram: (1) RdEx request from requestor to home; (2a) speculative reply to the requestor, (2b) invalidation to the old owner; (3a) exclusive reply from the old owner to the requestor, (3b) revision to the home; requestor I -> M, old owner E/M -> I, directory E -> Busy -> E]

Write to Block in Exclusive State (2)
At the owner: send an ownership-transfer revision message to the home (no data is sent to the home). If the block is dirty in the cache, send an exclusive reply with data to the requestor, which overrides the speculative reply sent by the home. If the block is clean in the exclusive state, send an acknowledgment to the requestor; no data is needed because the requestor got it from the speculative reply.
A write to a block in a Busy state is NACKed and must be retried, to avoid buffering the request at the home node.

Handling Writeback Requests
The directory state cannot be shared or unowned, because the requestor is the owner and has the block as modified; if a read request had come in to set the state to shared, that request would have been forwarded to the owner and the state would have become Busy.
If the directory state is Exclusive: the directory state is set to unowned and an acknowledgment is returned.
[Diagram: owner M -> I; (1) writeback to home; (2) acknowledgment; directory E -> U]

Handling Writeback Requests (2)
If the directory state is Busy, there is an interesting race condition: the state is busy because of a request from another node Y, and an intervention has been forwarded to node X, which is doing the writeback; the intervention and the writeback have crossed each other. Y's operation has already had its effect on the directory, so the writeback cannot be dropped or NACKed (it carries the only valid copy), and it can't even be retried after Y's reference completes.
[Diagram: (1) request from Y to home; (2a) writeback from X to home, crossing (2b) the intervention forwarded to X and (2c) the speculative reply to Y; (3a) data reply from home to Y and (3b) writeback acknowledgment to X; Y goes I -> S or M, X goes M -> I, directory goes E -> Busy -> S or E]

Solution to Writeback Race
Combine the two operations: the request coming from node Y and the writeback coming from X. When the writeback reaches the directory, the state changes to Shared if it was busy-shared (Y requested a read copy), or Exclusive if it was busy-exclusive (Y requested an exclusive copy). The home forwards the writeback data to the requestor Y and sends a writeback acknowledgment to X. When X receives the intervention, it ignores it, since it has an outstanding writeback for the cache block. Y's operation completes when it gets the reply; X's writeback completes when it gets the writeback ack.

Replacement of a Shared Block
A replacement hint could be sent to the directory to clear the presence bit of the sharing node. This reduces overflow occurrences in a limited-pointer representation and can eliminate an invalidation the next time the block is written. But it does not reduce traffic: the replacement hint itself must be sent, so the traffic is merely incurred at a different time, and if the block is not written again the replacement hint is a waste.
The Origin protocol does not use replacement hints, so a directory entry can be in the shared state while no copy actually exists.

Total Transaction Types
Coherent memory transactions: 9 request types, 6 invalidation/intervention types, 39 reply/response types.
Non-coherent memory transactions (uncached memory, I/O, and synchronization operations): 19 request transaction types, 14 reply types, and no invalidation/intervention types.

Serialization for Coherence (recall)
Writes to a given location must be seen in the same order by all processors. The home node can impose serialization on a given location since all relevant operations go there first, but the home cannot satisfy all requests: the valid copy may not be in main memory but in a dirty node, and requests may reach the home in one order yet reach the dirty node in a different order, so writes to the same location would not be seen in the same order by all.

Possible Solutions for Serialization
Buffer (FIFO) requests at the home (MIT Alewife): all requests go to the home first, which is good for serialization, and are buffered until the previous request has completed. But buffering becomes acute at the home when the buffer fills; Alewife lets the buffer overflow into main memory.
Buffer at the requestors (SCI protocol): a distributed linked list of pending requests, used in the cache-based approach.
Use busy states and NACK-and-retry (Origin2000): a negative acknowledgement is sent to the requestor; the serialization order is defined over accepted (not NACKed) requests.
Forward to the owner (Stanford DASH): serialization is determined by the home when the block is clean, and by the owner when it is exclusive.

Need Serialization at Home
This example shows the need for serialization at the home.
[Diagram: home directory goes E -> Busy -> E; P1 goes M -> I; P2 goes I -> M; transactions 1, 2a, 2b, 3a, 3b, 4 as listed below]
Initial condition: block A is in modified state in P1's cache.
1. P2 sends a read-exclusive request for A to the home node.
2a. The home forwards the request to P1 (the dirty node). 2b. The home sends a speculative reply to P2.
3a. P1 sends a data reply to P2 (replacing 2b). 3b. P1 sends an ownership-transfer revision message to the home.
4. P2, having received its reply, considers the write complete. It proceeds, but incurs a replacement of the just-dirtied block, causing it to be written back in transaction 4.
The writeback is received at the home before the ownership-transfer message (3b), and the block is written into memory. Then, when the revision message arrives at the home, the directory is made to point to P2 as having the dirty copy. This corrupts coherence.
Origin prevents this using busy states: the directory detects a writeback from the same node as the outstanding request, and the writeback is NACKed and must be retried.

Need Serialization at Requestor
Having the home node determine the order is not enough; the following example shows the need for serialization at the requestor.
[Diagram: home directory goes U -> E -> Busy -> E; P1 goes I -> E; P2 goes I -> M; transactions 1, 2, 3, 4a, 4b, 5a, 5b as listed below]
1. P1 sends a read request to the home node for block A.
2. P2 sends a read-exclusive request to the home for the write of A; the home won't process it until it is done with the read.
3. In response to (1), the home sends a reply to P1 (and sets the directory pointer). The home now thinks the read is complete. Unfortunately, the reply does not reach P1 right away.
4a. In response to (2), the home sends a speculative reply to P2. 4b. In response to (2), the home sends an invalidation to P1; it reaches P1 before (3), since there is no point-to-point order in the network.
5. P1 receives and applies the invalidation, and sends acknowledgments 5a to the home and 5b to the requestor. Finally, transaction 3 (the exclusive data reply) reaches P1. Now P1 has the block in exclusive state while P2 has it modified.
Origin's solution: serialization is applied at all nodes; a node does not begin a new transaction until the previous transaction on the same block is complete.

Starvation
NACKs can cause starvation. Possible solutions:
Do nothing: starvation shouldn't happen often (DASH).
Random delay between retries.
Priorities: increase the priority of NACKed requests (Origin).

Application Speedups
[Figure: speedup versus number of processors (up to 32) on the Origin for Barnes-Hut (512K and 16K bodies), Ocean (n=1026 and n=514), Radix (4M keys), LU (n=4096 and n=2048), Raytrace (ball4 and car), and Radiosity (room and largeroom); y-axis up to 30]

Summary
In a directory protocol, there is substantial implementation complexity below the state diagram: directory versus cache states, transient states, race conditions, conditional actions, and speculation.
Origin philosophy: memory-less operation, in which a node reacts to incoming events using only local state, and an operation does not hold shared resources while requesting others.