
CS252 Graduate Computer Architecture
Lecture 20, April 9th, 2012: Distributed Shared Memory
Prof. John D. Kubiatowicz
http://www.cs.berkeley.edu/~kubitron/cs252

Recall: Sequential Consistency Example

  Processor 1       Processor 2       One Consistent Serial Order
  LD1 A -> 5        LD5 B -> 2        LD1 A -> 5
  LD2 B -> 7        LD6 A -> 6        LD2 B -> 7
  ST1 A,6           ST4 B,21          LD5 B -> 2
  LD3 A -> 6        LD7 A -> 6        ST1 A,6
  LD4 B -> 21       LD8 B -> 4        LD6 A -> 6
  ST2 B,13                            ST4 B,21
  ST3 B,4                             LD3 A -> 6
                                      LD4 B -> 21
                                      LD7 A -> 6
                                      ST2 B,13
                                      ST3 B,4
                                      LD8 B -> 4

Recall: Ordering: Scheurich and Dubois
- Sufficient conditions:
  - every process issues memory operations in program order
  - after a write operation is issued, the issuing process waits for the write to complete before issuing its next memory operation
  - after a read is issued, the issuing process waits for the read to complete, and for the write whose value is being returned to complete (globally), before issuing its next operation
[Figure: a read on P0, a read on P1, and a write on P2 on a timeline, illustrating the exclusion zone and the instantaneous completion point of the write.]

Recall: MSI Invalidate Protocol: Write-Back Cache
- Three states: M (Modified), S (Shared), I (Invalid)
- A read obtains the block in Shared, even if it is the only cached copy
- Obtain exclusive ownership before writing: BusRdX causes others to invalidate (demote); if the block is M in another cache, that cache will flush; BusRdX is issued even on a hit in S (promote to M, i.e. an upgrade)
- What about replacement? S -> I and M -> I as before
[State diagram over M, S, I with edges: PrRd/-- and PrWr/-- hits in M; PrRd/-- and BusRd/-- in S; PrWr/BusRdX on I -> M and S -> M; PrRd/BusRd on I -> S; BusRd/Flush on M -> S; BusRdX/Flush on M -> I; BusRdX/-- on S -> I.]
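The MSI transitions above fit in a few lines of code. Below is a minimal C sketch of the cache-side controller for a single block; the names (msi_step, PR_RD, BUS_RDX, and so on) are invented for illustration and simply encode the same transition table as the state diagram, not the lecture's implementation.

```c
#include <stdio.h>

typedef enum { INVALID, SHARED, MODIFIED } msi_state_t;
typedef enum { PR_RD, PR_WR, BUS_RD, BUS_RDX } msi_event_t;

/* Bus actions the controller may issue or perform in response. */
typedef enum { NONE, ISSUE_BUSRD, ISSUE_BUSRDX, FLUSH } msi_action_t;

/* One step of the cache-side MSI state machine for one block. */
static msi_action_t msi_step(msi_state_t *s, msi_event_t e) {
    switch (*s) {
    case INVALID:
        if (e == PR_RD)   { *s = SHARED;   return ISSUE_BUSRD;  }
        if (e == PR_WR)   { *s = MODIFIED; return ISSUE_BUSRDX; }
        return NONE;                       /* bus traffic ignored in I     */
    case SHARED:
        if (e == PR_WR)   { *s = MODIFIED; return ISSUE_BUSRDX; } /* upgrade */
        if (e == BUS_RDX) { *s = INVALID;  return NONE;          } /* demote  */
        return NONE;                       /* PrRd hits, BusRd ignored     */
    case MODIFIED:
        if (e == BUS_RD)  { *s = SHARED;   return FLUSH; }  /* supply data  */
        if (e == BUS_RDX) { *s = INVALID;  return FLUSH; }
        return NONE;                       /* PrRd/PrWr hit                */
    }
    return NONE;
}

int main(void) {
    msi_state_t s = INVALID;
    msi_step(&s, PR_RD);    /* I -> S via BusRd                */
    msi_step(&s, PR_WR);    /* S -> M via BusRdX (upgrade)     */
    msi_step(&s, BUS_RD);   /* M -> S, flush dirty data        */
    printf("final state = %d\n", s);   /* prints 1 (SHARED)    */
    return 0;
}
```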

Sufficient for Sequential Consistency?
- Sufficient conditions: operations are issued in program order; after a write issues, the issuing process waits for the write to complete before issuing its next memory operation; after a read issues, the issuing process waits for the read to complete, and for the write whose value is being returned to complete (globally), before issuing its next operation
- Writer condition: a write is complete when it occurs in the cache. The cache already had to invalidate the other copies; once the writeable copy (state M) is in the cache, no one else can read it again without a bus transaction. Thus, when the write happens in the cache, it is complete. Must wait for write permission to return to the cache!
- Reader condition: if a read returns the value of a write, that write has already become visible to all others. Either it is being read by the processor that wrote it and no other processor has a copy (so any read by another processor will get the new value via a flush), or it has already been flushed back to memory and all processors will see the value.

Basic Operation of Directory
[Figure: processors with caches and memories connected by an interconnection network; each memory block has a directory entry consisting of presence bits and a dirty bit.]
- k processors. With each cache block in memory: k presence bits, 1 dirty bit. With each cache block in a cache: 1 valid bit and 1 dirty (owner) bit.
- Read from main memory by processor i (sketched in C below):
  - If dirty-bit OFF then { read from main memory; turn p[i] ON; }
  - If dirty-bit ON then { recall the line from the dirty proc (its cache state goes to Shared); update memory; turn dirty-bit OFF; turn p[i] ON; supply the recalled data to i; }
- Write to main memory by processor i:
  - If dirty-bit OFF then { send invalidations to all caches that have the block; turn dirty-bit ON; supply data to i; turn p[i] ON; ... }
  - If dirty-bit ON then { recall the line from the dirty proc (invalidate); update memory; keep dirty-bit ON; supply the recalled data to i; }

Example: How to Invalidate Read Copies (from read-shared to read-write)
[Figure: Proc 1 sends WREQ to Home; Home sends INV to Nodes X, Y, and Z; each returns an ACK; Home then sends WACK back to Proc 1.]
- No need to broadcast to hunt down copies: some entity in the system knows where all copies reside. It doesn't need to be specific; it could reflect a group of processors, each of which might (or might not) contain a copy.
- Disadvantages: serious restrictions on when you can issue requests; the processor is held up on a store until every processor cache is invalidated! Is this sequentially consistent??

Correctness
- Ensure the basics of coherence at the state-transition level: relevant lines are updated/invalidated/fetched; correct state transitions and actions happen
- Ensure ordering and serialization constraints are met: for coherence (single location); for consistency (multiple locations): assume sequential consistency
- Avoid deadlock, livelock, starvation
- Problems: multiple copies AND multiple paths through the network (distributed pathways), unlike a bus or a non-cache-coherent machine (each had only one); large latency makes optimizations attractive (they increase concurrency but complicate correctness)
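The directory read/write handling in the "Basic Operation of Directory" slide maps directly onto a per-block entry of k presence bits plus a dirty bit. Here is a minimal C sketch under that assumption; the helper names (recall_to_shared, invalidate, and so on) are placeholders for the actual network transactions, not any real machine's API.

```c
#include <stdbool.h>
#include <stdio.h>

#define K 8                      /* number of processors (illustrative) */

typedef struct {
    bool presence[K];            /* p[i]: processor i has a copy        */
    bool dirty;                  /* one writer holds the block in M     */
} dir_entry_t;

/* Stand-ins for the real recall/invalidation network transactions. */
static void recall_to_shared(int owner)  { printf("recall (to S) from %d\n", owner); }
static void recall_invalidate(int owner) { printf("recall+inval from %d\n", owner);  }
static void invalidate(int sharer)       { printf("invalidate %d\n", sharer);        }
static int  owner_of(const dir_entry_t *d) {
    for (int i = 0; i < K; i++) if (d->presence[i]) return i;
    return -1;
}

/* Read from main memory by processor i (per the slide). */
void dir_read(dir_entry_t *d, int i) {
    if (d->dirty) {                       /* recall line from dirty proc, */
        recall_to_shared(owner_of(d));    /* update memory,               */
        d->dirty = false;                 /* turn dirty-bit OFF           */
    }
    d->presence[i] = true;                /* turn p[i] ON, supply data    */
}

/* Write to main memory by processor i (per the slide). */
void dir_write(dir_entry_t *d, int i) {
    if (d->dirty) {
        recall_invalidate(owner_of(d));   /* recall + invalidate owner    */
    } else {
        for (int j = 0; j < K; j++)       /* invalidate all read copies   */
            if (d->presence[j] && j != i) invalidate(j);
    }
    for (int j = 0; j < K; j++) d->presence[j] = false;
    d->presence[i] = true;                /* i is now the only, dirty copy */
    d->dirty = true;
}

int main(void) {
    dir_entry_t d = {{false}, false};
    dir_read(&d, 0); dir_read(&d, 1);     /* two read-shared copies       */
    dir_write(&d, 2);                     /* sends invalidations to 0, 1  */
    return 0;
}
```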

Coherence: Serialization to a Location
- Need an entity that sees operations from many processors
- Bus: multiple copies, but serialization is imposed by bus order
- Timestamp snooping: serialization imposed by virtual time
- Scalable MP without coherence: the main memory module determined the order
- Scalable MP with cache coherence: home memory is a good candidate (all relevant ops go home first), but with multiple copies: the valid copy of the data may not be in main memory; reaching main memory in one order does not mean the ops will reach the valid copy in that order; serialized in one place doesn't mean serialized with respect to all copies

Serialization: Filter Through Home Node?
- Need a serializing agent; home memory is a good candidate, since all misses go there first
- Having a single entity determine the order is not enough: it may not know when all transactions for that operation are done everywhere
[Figure: P1, P2, and Home exchanging messages 1-6 as described below.]
  1. P1 issues a read request to the home node for A.
  2. P2 issues a read-exclusive request to the home, corresponding to a write of A. But the home won't process it until it is done with the read.
  3. The home receives request 1 and in response sends a reply to P1 (and sets the directory presence bit). The home now thinks the read is complete. Unfortunately, the reply does not get to P1 right away.
  4. In response to request 2, the home sends an invalidate to P1; it reaches P1 before transaction 3 (there is no point-to-point order among requests and replies).
  5. P1 receives and applies the invalidate, and sends an ack to the home.
  6. The home sends the data reply to P2 corresponding to request 2. Finally, transaction 3 (the read reply) reaches P1.
- Problem: the home deals with the write access before the previous access is fully done; P1 should not allow a new access to the line until the old one is done

Basic Serialization Solution
- Use additional "busy" or "pending" directory and cache states (see the sketch below)
- Cache-side: a transaction buffer, similar to the memory load/store buffer in a uniprocessor. It handles misordering in the network: e.g., sent a request but got an invalidate first: wait to apply the invalidate until the response from memory arrives (either data or NAK). Make sure the memory state machine has an accurate understanding of the state of the caches and the network.
- Memory-side: indicate that an operation is in progress; further operations on the location must be delayed or queued: buffer at the home, buffer at the requestor, NAK and retry, or forward to the dirty node

Sequential Consistency?
- How to get an exclusion zone for a directory protocol? Clearly need to make sure that invalidations really invalidate copies: keep state to handle reordering at the client (the previous slide's problem); while acknowledgements are outstanding, the home cannot handle read requests: NAK read requests, or queue read requests
- Example for an invalidation-based scheme: the block owner (home node) provides the appearance of atomicity by waiting for all invalidations to be acked before allowing access to the new value. As a result, the write commit point becomes the point at which WData leaves the home node (after the last ack is received).
- Much harder in update schemes!
[Figure: the requestor's REQ arrives at HOME; a concurrent Req is NAKed; HOME sends Inv to each Reader and collects Acks; after the last Ack, WData is returned to the requestor.]
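A minimal sketch of the memory-side fix described above, under illustrative names: the home entry gets a busy/pending state plus an ack counter, reads arriving while acknowledgements are outstanding are NAKed (they could equally be queued), and WData leaves the home only after the last ack arrives.

```c
#include <stdio.h>

typedef enum { DIR_IDLE, DIR_BUSY_WRITE } dir_meta_t;

typedef struct {
    dir_meta_t meta;     /* busy/pending state for this block         */
    int ack_ctr;         /* invalidation acks still outstanding       */
    int pending_writer;  /* requestor waiting for WData               */
} home_entry_t;

/* Placeholder replies; a real controller would send network packets. */
static void send_nak(int node)   { printf("NAK   -> node %d\n", node); }
static void send_wdata(int node) { printf("WData -> node %d\n", node); }

/* A write request makes the entry busy until every ack has returned. */
void home_write_request(home_entry_t *e, int requestor, int num_sharers) {
    e->meta = DIR_BUSY_WRITE;
    e->ack_ctr = num_sharers;        /* invalidations sent to sharers */
    e->pending_writer = requestor;
}

/* Reads that arrive while acks are outstanding are NAKed (or queued). */
void home_read_request(home_entry_t *e, int requestor) {
    if (e->meta == DIR_BUSY_WRITE) { send_nak(requestor); return; }
    /* ... normal read handling ... */
}

/* The write commits when the last ack arrives: WData leaves home here. */
void home_inval_ack(home_entry_t *e) {
    if (--e->ack_ctr == 0) {
        send_wdata(e->pending_writer);
        e->meta = DIR_IDLE;
    }
}

int main(void) {
    home_entry_t e = { DIR_IDLE, 0, -1 };
    home_write_request(&e, /*requestor=*/2, /*num_sharers=*/2);
    home_read_request(&e, 3);   /* NAKed: acks still outstanding */
    home_inval_ack(&e);
    home_inval_ack(&e);         /* last ack: WData sent to node 2 */
    return 0;
}
```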

Livelock: Cache Side
- Window of vulnerability: the time between the return of a value to the cache and its consumption by the cache. Can lead to cache-side thrashing during context switching. Various solutions are possible, including stalling the processor until the result can be consumed by the processor.
- Discussion of "Closing the Window of Vulnerability in Multiphase Memory Transactions" by Kubiatowicz, Chaiken, and Agarwal
- Solution: provide transaction buffers to track the state of outstanding requests; stall context switching and invalidations selectively when thrashing is detected (presumably rare)

Livelock: Memory Side
- What happens if a popular item is written frequently? It is possible that some disadvantaged node never makes progress! This is a memory-side thrashing problem.
- Examples: a high-conflict lock bit; a scheduling queue
- Solutions?
  - Ignore: a good solution for low-conflict locks, a bad solution for high-conflict locks
  - Software queueing: reduce conflict by building a queue of locks. Example: the MCS lock (Mellor-Crummey-Scott); see the sketch below.
  - Hardware queueing at the directory: possible scalability problems. Example: the QOLB protocol ("Queue On Lock Bit"), a natural fit to the SCI protocol.
  - Escalating priorities of requests (SGI Origin): a pending queue of length 1; keep the item of highest priority in that queue; new requests start at priority 0; when a NAK happens, increase the priority.

Performance
- Latency: protocol optimizations to reduce network transactions in the critical path; overlap activities or make them faster
- Throughput: reduce the number of protocol operations per invocation; reduce the residency time in the directory controller; faster hardware, etc.
- Care about how these scale with the number of nodes

Protocol Enhancements for Latency
- Forwarding messages in memory-based protocols:
[Figure: three message flows between local node L, home H, and the dirty node: (a) strict request-reply: 1: req, 2: reply, 3: intervention, 4a: revise, 4b: response; (b) intervention forwarding: 1: req, 2: intervention, 3: response, 4: reply; (c) reply forwarding: 1: req, 2: intervention, 3a: revise, 3b: response.]
- An intervention is like a request, but it is issued in reaction to a request and is sent to a cache, rather than to memory.
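As referenced under "software queueing" above, here is a minimal C11 sketch of an MCS lock: each waiter spins on a flag in its own queue node, so releasing the lock touches only the next waiter's cache line instead of thrashing one hot location. This is a textbook rendering rather than the lecture's code, and memory-order tuning is omitted for brevity.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* One queue node per waiting thread. */
typedef struct mcs_node {
    _Atomic(struct mcs_node *) next;
    atomic_bool locked;
} mcs_node_t;

typedef _Atomic(mcs_node_t *) mcs_lock_t;    /* tail pointer, NULL = free */

void mcs_acquire(mcs_lock_t *lock, mcs_node_t *me) {
    atomic_store(&me->next, NULL);
    atomic_store(&me->locked, true);
    mcs_node_t *prev = atomic_exchange(lock, me);  /* append at the tail  */
    if (prev != NULL) {
        atomic_store(&prev->next, me);
        while (atomic_load(&me->locked))           /* spin on our own flag */
            ;
    }
}

void mcs_release(mcs_lock_t *lock, mcs_node_t *me) {
    mcs_node_t *succ = atomic_load(&me->next);
    if (succ == NULL) {
        mcs_node_t *expected = me;                 /* no known successor:  */
        if (atomic_compare_exchange_strong(lock, &expected, NULL))
            return;                                /* queue is now empty   */
        while ((succ = atomic_load(&me->next)) == NULL)
            ;                                      /* someone mid-enqueue  */
    }
    atomic_store(&succ->locked, false);            /* hand the lock over   */
}

int main(void) {
    mcs_lock_t lock = NULL;
    mcs_node_t me;
    mcs_acquire(&lock, &me);
    /* ... critical section ... */
    mcs_release(&lock, &me);
    return 0;
}
```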

Other Latency Optimizations
- Throw hardware at the critical path: SRAM for the directory (sparse or cache); a bit per block in SRAM to tell whether the protocol should be invoked
- Overlap activities in the critical path: multiple invalidations at a time in memory-based schemes; overlap invalidations and acks in cache-based schemes; overlap lookups of the directory and memory, or lookup with transaction (speculative protocol operations)
- Relaxing consistency, e.g. Stanford DASH: on a write request while reads are outstanding, the home node gives the data to the requestor and directs the invalidation acks to be returned to the requestor. Allow the write to continue immediately upon receipt of the data. This does not provide sequential consistency (it provides release consistency), but it still provides write atomicity: the cache can refuse to satisfy an intervention (delay or NAK) until all acks are received. Thus, writes have a well-defined ordering (coherence), but execution is not necessarily sequentially consistent (instructions after the write are not delayed until write completion).

Increasing Throughput
- Reduce the number of transactions per operation: invals, acks, and replacement hints all incur bandwidth and assist occupancy
- Reduce occupancy (the overhead of protocol processing): transactions are small and frequent, so occupancy is very important; pipeline the assist (protocol processing)
- Many ways to reduce latency also increase throughput, e.g. forwarding to the dirty node, throwing hardware at the critical path, ...

Deadlock, Livelock, Starvation
- Request-response protocol
- Similar issues to those discussed earlier: a node may receive too many messages; flow control can cause deadlock; separate request and reply networks with a request-reply protocol; or NAKs, but with potential livelock and traffic problems
- New problem: protocols often are not strict request-reply, e.g. read-exclusive generates invalidation requests (which generate ack replies); there are other cases to reduce latency and allow concurrency

Deadlock Issues with Protocols
[Figure: the strict request-reply, intervention-forwarding, and reply-forwarding flows again, each annotated with its dual graph of message dependencies: 1->2 and 3->4a/4b for strict request-reply; 1->2->3->4 for intervention forwarding; 1->2->3a/3b for reply forwarding.]
- Consider the dual graph of message dependencies: nodes are networks, arcs are protocol actions; the number of networks needed equals the length of the longest dependency chain
- Strict request-reply: 2 networks are sufficient to avoid deadlock (must always make sure the response at the end can be absorbed!)
- Intervention forwarding: need 4 networks to avoid deadlock
- Reply forwarding: need 3 networks to avoid deadlock
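The counting rule on the last slide (number of networks = length of the longest message-dependency chain) can be illustrated with a toy computation; the chain lengths below are simply read off the three flows, and the names are invented for the example.

```c
#include <stdio.h>

/* Each protocol flow is modeled by the lengths of its message-dependency
   chains (paths in the dual graph): a message at depth d must travel on
   network d, so the networks needed equal the longest chain length.    */
static int networks_needed(const int *chain_lengths, int n_chains) {
    int max = 0;
    for (int i = 0; i < n_chains; i++)
        if (chain_lengths[i] > max) max = chain_lengths[i];
    return max;
}

int main(void) {
    /* Strict request-reply: req->reply, then intervention->revise/response. */
    int strict[]       = { 2, 2 };
    /* Intervention forwarding: req -> intervention -> response -> reply.    */
    int intervention[] = { 4 };
    /* Reply forwarding: req -> intervention -> revise/response.             */
    int reply_fwd[]    = { 3 };

    printf("strict request-reply   : %d networks\n", networks_needed(strict, 2));
    printf("intervention forwarding: %d networks\n", networks_needed(intervention, 1));
    printf("reply forwarding       : %d networks\n", networks_needed(reply_fwd, 1));
    return 0;
}
```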

Mechanisms for Reducing Depth
- Example of two-network protocols: only request-response ("2-level" responses); a toy C sketch of the two variants appears at the end of this section
[Figure: (1) reply forwarding as before (1: req, 2: intervention, 3a: revise, 3b: response): need 3 networks to avoid deadlock. (2) Optional NAK when blocked (the home NAKs the requestor instead of forwarding when it cannot handle the request): need 2 networks to avoid deadlock. (3) Transform to request/response (the home replies "2': send intervention to owner" and the requestor issues the intervention itself): need 2 networks to avoid deadlock.]
[Figure, strict request-reply versions: (a) Read miss to a block in dirty state: 1. read request to directory; 2. reply with owner identity; 3. read request to owner; 4a. data reply to requestor; 4b. revision message to directory. (b) Write miss to a block with two sharers: 1. RdEx request to directory; 2. reply with sharers' identity; 3a/3b. invalidation requests to the sharers; 4a/4b. invalidation acks.]

Complexity?
- Cache coherence protocols are complex
- Choice of approach: conceptual and protocol design versus implementation
- Tradeoffs within an approach: performance enhancements often add complexity and complicate correctness (more concurrency, potential race conditions, not strict request-reply)
- Many subtle corner cases
- BUT, increasing understanding/adoption makes the job much easier; automatic verification is important but hard

Popular Middle Ground
- Two-level hierarchy: individual nodes are multiprocessors, connected non-hierarchically, e.g. a mesh of SMPs
- Coherence across nodes is directory-based: the directory keeps track of nodes, not individual processors
- Coherence within a node is snooping or directory: orthogonal, but needs a good interface of functionality
- Examples: Convex Exemplar: directory-directory; Sequent, Data General, HAL: directory-snoopy
- SMP on a chip?
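A toy C sketch of the two "two-network" variants above, showing only the home's decision for a request that hits a dirty block; the enum values and function names are invented for illustration, not taken from any real protocol specification.

```c
#include <stdbool.h>
#include <stdio.h>

typedef enum { REPLY_DATA, REPLY_NAK, REPLY_OWNER_ID } home_reply_t;

/* Variant 1: optional NAK when blocked -- instead of forwarding an
   intervention (a third-level message), the home NAKs and the
   requestor retries later, so every exchange stays request-reply.   */
home_reply_t home_nak_variant(bool block_dirty) {
    return block_dirty ? REPLY_NAK : REPLY_DATA;
}

/* Variant 2: transform to request/response -- the home replies with
   the owner's identity ("send intervention to owner"); the requestor
   then issues the intervention itself as a brand-new request.        */
home_reply_t home_forwardless_variant(bool block_dirty) {
    return block_dirty ? REPLY_OWNER_ID : REPLY_DATA;
}

int main(void) {
    printf("dirty block: variant 1 replies %d, variant 2 replies %d\n",
           home_nak_variant(true), home_forwardless_variant(true));
    return 0;
}
```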

Example Two-Level Hierarchies
[Figure: four organizations: (a) snooping-snooping: buses B1 with snooping adapters joined by bus B2/Network2; (b) snooping-directory: buses B1 with assists/directory adapters on Network1; (c) directory-directory: directory-based nodes with directory adapters on Network1; (d) directory-snooping: directory-based nodes with Dir/Snoopy adapters on a bus (or ring).]

Advantages of Multiprocessor Nodes
- Potential for cost and performance advantages: amortization of node fixed costs over multiple processors (applies even if the processors are simply packaged together but not coherent); can use commodity SMPs; fewer nodes for the directory to keep track of; much communication may be contained within a node (cheaper); nodes prefetch data for each other (fewer remote misses); combining of requests (like hierarchical, only two-level); can even share caches (overlapping of working sets)
- Benefits depend on the sharing pattern (and the mapping): good for widely read-shared data, e.g. tree data in Barnes-Hut; good for nearest-neighbor, if properly mapped; not so good for all-to-all communication

Disadvantages of Coherent MP Nodes
- Bandwidth is shared among nodes (the all-to-all example applies whether coherent or not)
- The bus increases latency to local memory
- With coherence, you typically wait for local snoop results before sending remote requests
- The snoopy bus at the remote node increases delays there too, increasing latency and reducing bandwidth
- May hurt performance if sharing patterns don't comply

Scaling Issues
- Memory and directory bandwidth: a centralized directory is a bandwidth bottleneck, just like centralized memory; how to maintain directory information in a distributed way?
- Performance characteristics: traffic = the number of network transactions each time the protocol is invoked; latency = the number of network transactions in the critical path
- Directory storage requirements: the number of presence bits grows with the number of processors
- Deadlock issues: may need as many networks as the longest chain of request/response pairs
- How the directory is organized affects all of these, performance at a target scale, and coherence management issues

Insight into Directory Requirements
- If most misses involve O(P) transactions, might as well broadcast!
- Study inherent program characteristics: frequency of write misses? how many sharers on a write miss? how do these scale?
- Also provides insight into how to organize and store directory information
[Figure: cache invalidation patterns for LU and Ocean: histograms of the number of invalidations per write (buckets from 0 up to 63); in LU, 91.22% of writes invalidate exactly one copy.]

Cache Invalidation Patterns (cont.)
[Figure: cache invalidation patterns for Barnes-Hut and Radiosity: the number of invalidations per write is usually small (0-3), with a long but infrequent tail out to 63.]

Sharing Patterns Summary
- Generally, there are few sharers at a write, and the number scales slowly with P
- Code and read-only objects (e.g., scene data in Raytrace): no problems, since they are rarely written
- Migratory objects (e.g., cost array cells in LocusRoute): even as the number of PEs scales, only 1-2 invalidations
- Mostly-read objects (e.g., the root of the tree in Barnes): invalidations are large but infrequent, so little impact on performance
- Frequently read/written objects (e.g., task queues): invalidations usually remain small, though frequent
- Synchronization objects: low-contention locks result in small invalidations; high-contention locks need special support (SW trees, queueing locks)
- Implies directories are very useful in containing traffic; if organized properly, traffic and latency shouldn't scale too badly
- Suggests techniques to reduce storage overhead

Organizing Directories
[Taxonomy: Directory Schemes split into Centralized and Distributed; Distributed splits into Flat and Hierarchical; Flat splits into Memory-based and Cache-based. Two questions: how to find the source of directory information, and how to locate the copies.]

How to Find Directory Information
- Centralized memory and directory: easy, go to it, but not scalable
- Distributed memory and directory:
  - flat schemes: the directory is distributed with the memory, at the home; the location is based on the address (hashing), so the network transaction is sent directly to the home
  - hierarchical schemes: ??

How Hierarchical Directories Work
[Figure: a tree whose leaves are processing nodes, with level-1 directories above groups of nodes and a level-2 directory above those.]
- A level-1 directory tracks which of its children processing nodes have a copy of the memory block; it also tracks which local memory blocks are cached outside this subtree. Inclusion is maintained between processor caches and the directory.
- A level-2 directory tracks which of its children level-1 directories have a copy of the memory block; it also tracks which local memory blocks are cached outside this subtree. Inclusion is maintained between the level-1 directories and the level-2 directory.

Find Directory Info (cont.)
- Distributed memory and directory:
  - flat schemes: hash (a minimal sketch of address-to-home hashing follows below)
  - hierarchical schemes: a node's directory entry for a block says whether each subtree caches the block; to find directory info, send a search message up to the parent; it routes itself through directory lookups; like hierarchical snooping, but with point-to-point messages between children and parents
- The directory is a hierarchical data structure: the leaves are processing nodes and the internal nodes are just directories; it is a logical hierarchy, not necessarily a physical one (it can be embedded in a general network)
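For the flat schemes' "location based on address (hashing)", a minimal sketch of the usual address-to-home mapping; the node count, block size, and modulo hash are illustrative assumptions, not any specific machine's scheme.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_NODES   64          /* illustrative machine size  */
#define BLOCK_SHIFT 6           /* 64-byte coherence blocks   */

/* Flat scheme: the home node is a fixed function of the block address,
   so a miss can be sent straight to its home with no lookup step.     */
static unsigned home_node(uint64_t paddr) {
    return (unsigned)((paddr >> BLOCK_SHIFT) % NUM_NODES);
}

int main(void) {
    printf("home of 0x100 is node %u\n", home_node(0x100));   /* node 4 */
    return 0;
}
```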

How Is the Location of Copies Stored?
- Hierarchical schemes: through the hierarchy; each directory has presence bits for its child subtrees and a dirty bit
- Flat schemes: vary a lot, with different storage overheads and performance characteristics
  - Memory-based schemes: info about all copies is stored at the home with the memory block (Dash, Alewife, SGI Origin, Flash)
  - Cache-based schemes: info about copies is distributed among the copies themselves, each copy pointing to the next (Scalable Coherent Interface, SCI: an IEEE standard)

Flat, Memory-Based Schemes
- Info about copies is co-located with the block at the home: just like the centralized scheme, except distributed
- Performance scaling: traffic on a write is proportional to the number of sharers; latency on a write: invalidations can be issued to the sharers in parallel
- Storage overhead: the simplest representation is a full bit vector (a "full-mapped directory"), i.e. one presence bit per node
  - Storage overhead doesn't scale well with P; a 64-byte line implies: 64 nodes: 12.7% overhead; 256 nodes: 50% overhead; 1024 nodes: 200% overhead (see the sketch below)
  - For M memory blocks in memory, the storage overhead is proportional to P*M: assuming each node has memory M_local = M/P, that is P^2 * M_local. This is why people talk about full-mapped directories as scaling with the square of the number of processors.

Reducing Storage Overhead
- Optimizations for full bit vector schemes: increase the cache block size (reduces the storage overhead proportionally); use multiprocessor nodes (one bit per MP node, not per processor); still scales as P*M, but reasonable for all but very large machines (256 processors, 4 per cluster, a 128B line: 6.25% overhead)
- Reducing "width": addressing the P term
- Reducing "height": addressing the M term

Storage Reductions
- Width observation: most blocks are cached by only a few nodes. Don't keep a bit per node; instead the entry contains a few pointers to sharing nodes (called Limited Directory Protocols). P = 1024 => 10-bit pointers; can use 100 pointers and still save space. Sharing patterns indicate that a few pointers should suffice (five or so), but an overflow strategy is needed when there are more sharers.
- Height observation: the number of memory blocks >> the number of cache blocks, so most directory entries are useless at any given time. Could allocate directories from a pool of directory entries (if a memory line doesn't have a directory, no one has a copy; what to do on overflow? invalidate the directory with invalidations). Or organize the directory as a cache, rather than having one entry per memory block.
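The overhead percentages quoted above follow from a one-line formula: P presence bits per block of block_size * 8 bits of memory. A small C check of those figures (the 12.5% it prints for 64 nodes is slightly below the slide's 12.7%, which presumably includes a little extra state per entry):

```c
#include <stdio.h>

/* Full-map directory overhead: one presence bit per node for every
   block of main memory, i.e. roughly P bits per block_bytes*8 bits. */
static double fullmap_overhead(int nodes, int block_bytes) {
    return (double)nodes / (block_bytes * 8.0);
}

int main(void) {
    int sizes[] = { 64, 256, 1024 };
    for (int i = 0; i < 3; i++)
        printf("%4d nodes, 64B line: %6.1f%% overhead\n",
               sizes[i], 100.0 * fullmap_overhead(sizes[i], 64));
    /* prints 12.5%, 50.0%, 200.0%, matching the slide's figures */
    printf("256 procs, 4 per cluster, 128B line: %.2f%% overhead\n",
           100.0 * fullmap_overhead(256 / 4, 128));   /* 6.25% */
    return 0;
}
```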

Case Study: Alewife Architecture
- Cost-effective mesh network: Pro: scales in terms of hardware; Pro: exploits locality
- Directory distributed along with main memory: bandwidth scales with the number of processors
- Con: non-uniform latencies of communication: have to manage the mapping of processes/threads onto processors; Alewife employs techniques for latency minimization and latency tolerance so the programmer does not have to manage this
- Context switch in 11 cycles between processes on a remote memory request, which has to incur the communication network latency
- The cache controller holds the tags and implements the coherence protocol

LimitLESS Protocol (Alewife)
- "Limited directory that is Locally Extended through Software Support"
- Handle the common case (small worker set) in hardware and the exceptional case (overflow) in software
- Processor with rapid trap handling (executes trap code within a few cycles of initiation)
- State is shared: the processor needs complete access to coherence-related controller state in the hardware directories; the directory controller can invoke processor trap handlers
- The machine needs an interface to the network that allows the processor to launch and to intercept coherence protocol packets

The Protocol: Transition to Software
- Alewife: a p=5-entry limited directory with software extension (LimitLESS)
- Read-only directory transaction: for an incoming read request with n <= p sharers, the hardware memory controller responds; if n > p, send the request to the processor for handling
- The trap routine can either discard the packet or store it to memory; the store-back capability permits message passing and block transfers
- Potential deadlock scenario with the processor stalled and waiting for a remote cache fill; solution: a synchronous trap (stored in local memory) to empty the input queue
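A toy sketch of the LimitLESS idea described above: the common case (at most p sharers) is handled by hardware pointers, and overflow traps to software, which keeps a full bit vector in ordinary memory. The structure and function names are invented, and the software side is reduced to a stub.

```c
#include <stdbool.h>
#include <stdio.h>

#define HW_PTRS 5          /* hardware pointers per entry (Alewife-style) */

typedef struct {
    int  ptr[HW_PTRS];     /* sharer IDs tracked in hardware              */
    int  n;                /* how many hardware pointers are in use       */
    bool overflowed;       /* software (full bit-vector) extension active */
} limited_dir_t;

/* Invoked after a trap: the full-map bit vector lives in normal memory;
   here it is just a stub to keep the sketch short.                      */
static void software_record_sharer(int block, int sharer) {
    printf("trap: record sharer %d of block %d in software\n", sharer, block);
}

/* Read-only transaction: the common case (n <= p sharers) is handled
   entirely in hardware; overflow hands the request to the processor.    */
void limitless_read(limited_dir_t *d, int block, int sharer) {
    if (!d->overflowed && d->n < HW_PTRS) {
        d->ptr[d->n++] = sharer;           /* hardware responds directly  */
    } else {
        d->overflowed = true;              /* meta-state: handled in SW   */
        software_record_sharer(block, sharer);
    }
}

int main(void) {
    limited_dir_t d = { {0}, 0, false };
    for (int s = 0; s < 7; s++)            /* sharers 5 and 6 overflow    */
        limitless_read(&d, /*block=*/42, s);
    return 0;
}
```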

Transition to Software (cont.)
- Overflow trap scenario:
  - First instance: a full-map bit vector is allocated in local memory, the hardware pointers are transferred into it, and the vector is entered into a hash table
  - Otherwise: transfer the hardware pointers into the existing bit vector
  - The meta-state is set to Trap-On-Write; while the hardware pointers are being emptied, the meta-state is Trans-In-Progress
- Incoming write request scenario:
  - Empty the hardware pointers to memory; set AckCtr to the number of bits that are set in the bit vector
  - Send invalidations to all caches except possibly the requesting one; free the vector in memory
  - Upon the last invalidate acknowledgement (AckCtr == 0), send Write-Permission and set the memory state to Read-Write

Flat, Cache-Based Schemes
- How they work: the home only holds a pointer to the rest of the directory info; a distributed linked list of copies weaves through the caches (the cache tag has a pointer that points to the next cache with a copy); on a read, add yourself to the head of the list (communication needed); on a write, propagate a chain of invalidations down the list (see the sketch below)
- Scalable Coherent Interface (SCI): IEEE standard; doubly linked list
[Figure: main memory (the home) points to the head of a doubly linked list of cached copies at Node 0, Node 1, and Node 2.]

Scaling Properties (Cache-Based)
- Traffic on a write: proportional to the number of sharers
- Latency on a write: also proportional to the number of sharers! The identity of the next sharer is not known until the current one is reached; there is also assist processing at each node along the way (even reads involve more than one other assist: the home and the first sharer on the list)
- Storage overhead: quite good scaling along both axes; only one head pointer per memory block, and the rest is all proportional to cache size
- Very complex!!!

Summary of Directory Organizations
- Flat schemes:
  - Issue (a): finding the source of directory data: go to the home, based on the address
  - Issue (b): finding out where the copies are: memory-based: all the info is in the directory at the home; cache-based: the home has a pointer to the first element of a distributed linked list
  - Issue (c): communicating with those copies: memory-based: point-to-point messages (perhaps coarser on overflow), which can be multicast or overlapped; cache-based: part of the point-to-point linked list traversal to find them; serialized
- Hierarchical schemes: all three issues are handled by sending messages up and down the tree; there is no single explicit list of sharers; the only direct communication is between parents and children
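A minimal sketch of the cache-based (SCI-style) organization: the home stores only a head pointer, each copy carries forward/backward links, readers insert themselves at the head, and a write walks the chain to invalidate, which is why write latency grows with the number of sharers. Ordinary heap pointers stand in for SCI's node IDs, and all names are illustrative.

```c
#include <stdio.h>
#include <stdlib.h>

/* Each cached copy carries forward/backward pointers (node IDs in SCI). */
typedef struct copy {
    int node_id;
    struct copy *next;     /* toward older sharers  */
    struct copy *prev;     /* toward the list head  */
} copy_t;

typedef struct {
    copy_t *head;          /* the home only stores the head pointer */
} home_t;

/* On a read miss, the new sharer adds itself at the head of the list
   (this requires communication with the home and with the old head).  */
copy_t *sci_read_miss(home_t *home, int node_id) {
    copy_t *c = malloc(sizeof *c);
    c->node_id = node_id;
    c->prev = NULL;
    c->next = home->head;
    if (home->head) home->head->prev = c;
    home->head = c;
    return c;
}

/* On a write, invalidations propagate down the chain one hop at a
   time -- hence write latency proportional to the number of sharers.  */
void sci_write_invalidate(home_t *home, int writer_id) {
    copy_t *c = home->head;
    while (c) {
        copy_t *next = c->next;
        if (c->node_id != writer_id) printf("invalidate node %d\n", c->node_id);
        free(c);
        c = next;
    }
    home->head = NULL;     /* the writer now holds the only (dirty) copy */
}

int main(void) {
    home_t home = { NULL };
    sci_read_miss(&home, 0);
    sci_read_miss(&home, 1);
    sci_read_miss(&home, 2);
    sci_write_invalidate(&home, 2);   /* invalidates nodes 1 and 0 */
    return 0;
}
```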

Summary of Directory Approaches
- Directories offer scalable coherence on general networks: no need for broadcast media
- Many possibilities for organizing the directory and managing protocols
- Hierarchical directories are not used much: high latency, many network transactions, and a bandwidth bottleneck at the root
- Both memory-based and cache-based flat schemes are alive: for memory-based, a full bit vector suffices for moderate scale (measured in nodes visible to the directory protocol, not processors); we will examine case studies of each

Summary
- Memory coherence: writes to a given location are eventually propagated; writes to a given location are seen in the same order by everyone
- Memory consistency: constraints on the ordering between processors and locations; sequential consistency: for every parallel execution, there exists a serial interleaving
- Distributed directory structure: flat: each address has a home node; hierarchical: the directory is spread along a tree
- Mechanisms for locating copies of data: memory-based schemes: info about copies is stored all at the home with the memory block; cache-based schemes: info about copies is distributed among the copies themselves

Summary (Continued)
- Issues for distributed directories: correctness, performance, complexity and dealing with errors
- Serialization solutions: use additional busy or pending directory and cache states; cache-side: a transaction buffer; memory-side: indicate that an operation is in progress, so further operations on the location must be delayed or queued
- Performance enhancements: reduce the number of hops; reduce the occupancy of transactions in the memory controller
- Deadlock issues with protocols: many protocols are not simply request-response; consider the dual graph of message dependencies (nodes: networks, arcs: protocol actions); consider the maximum depth of the graph to discover the number of networks needed