Scalable Cache Coherent Systems


Scalable Cache Coherent Systems

Scalable distributed shared memory (NUMA, hardware-supported SAS) machines. Assumptions:
- Processor-cache-memory nodes connected by a scalable network.
- Distributed shared physical address space.
- A communication assist (CA) must interpret network transactions, forming the shared address space.

For such a system with a distributed shared physical address space:
- A cache miss must be satisfied transparently from local or remote memory, depending on the address.
- By its normal operation, a cache replicates data locally, resulting in a potential cache coherence problem between local and remote copies of data. Thus a coherency solution must be in place for correct operation.
- Standard bus-snooping protocols studied earlier do not apply, for lack of a bus or broadcast medium to snoop on.
- For this type of system to be scalable, in addition to network latency and bandwidth scalability, the cache coherence protocol or solution used must also scale.
(Chapter 8)

Functionality Expected In a Cache Coherent System

1. Provide a set of states, a state transition diagram, and actions representing the cache coherence protocol used.
2. Manage the coherence protocol:
   (0) Determine when to invoke the coherence protocol.
   (a) Find the source of information about the state of the cache line in other caches (i.e. whether we need to communicate with other cached copies).
   (b) Find out the location or locations of other (shared) copies, if any.
   (c) Communicate with those copies (invalidate/update).

Action (0) is done the same way on all cache coherent systems:
- The state of the local cache line is maintained in the cache.
- The protocol is invoked if an access fault occurs on the cache block or line.
Different approaches are distinguished by how they handle (a) to (c).
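To make step (0) concrete, here is a minimal sketch (not from the slides) of how the protocol-invocation decision could look in code; the MSI-style state names and the helper function are assumptions made only for illustration.

```c
/* Minimal sketch of step (0): state is kept with the local cache line,
 * and the coherence protocol is invoked on an access fault.
 * State names and helper are illustrative assumptions, not the course's protocol. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { INVALID, SHARED, MODIFIED } line_state_t;   /* MSI-style states */

typedef struct {
    unsigned long tag;
    line_state_t  state;     /* step (0): maintained in the cache itself */
} cache_line_t;

/* Returns true if the access faults and the protocol must be invoked,
 * i.e. steps (a)-(c) are needed to find and contact other copies. */
bool needs_coherence_action(const cache_line_t *line, bool is_write)
{
    if (line->state == INVALID)            return true;   /* miss: invoke protocol        */
    if (is_write && line->state == SHARED) return true;   /* upgrade: must invalidate others */
    return false;                                         /* hit in a suitable state      */
}

int main(void)
{
    cache_line_t line = { .tag = 0x40, .state = SHARED };
    printf("write to SHARED line needs protocol: %d\n",
           needs_coherence_action(&line, true));
    return 0;
}
```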

Bus-Based Coherence

All of (a), (b), (c) are done through broadcast on the bus:
- The faulting processor (i.e. the processor that intends to modify a cache block) sends out a search over the bus.
- Others respond to the search probe and take the necessary action (e.g. invalidate their local copies).

This approach could be used in a scalable network too: broadcast to all processors, and let them respond over the network. Conceptually simple, but broadcast doesn't scale with p (the number of processors): on a scalable network (e.g. MINs), every fault may lead to at least p network transactions.

Recall:
(a) Find the source of information about the state of the cache line in other caches (whether we need to communicate with other cached copies).
(b) Find out the location or locations of other (shared) copies, if any.
(c) Communicate with those copies (invalidate/update).

Scalable Cache Coherence

A scalable cache coherence approach may have cache line states and state transition diagrams similar to those of bus-based coherence protocols. However, additional mechanisms other than broadcasting must be devised to manage the coherence protocol (i.e. to meet coherence functionality requirements (a)-(c)).

Three possible approaches:
- Approach #1: Hierarchical snooping.
- Approach #2: Directory-based cache coherence.
- Approach #3: A combination of the above two approaches.

Approach #1: Hierarchical Snooping

Extend the snooping approach with a hierarchy of broadcast media:
- Tree of buses or rings (e.g. KSR-1).
- Processors are in the bus- or ring-based multiprocessors at the leaves.
- Parents and children are connected by two-way snooping interfaces that snoop both buses and propagate relevant transactions.
- Main memory may be centralized at the root or distributed among the leaves.

Issues (a)-(c) are handled similarly to a bus, but without full broadcast:
- The faulting processor sends out a search bus transaction on its bus.
- It propagates up and down the hierarchy based on snoop results.

Problems:
- High latency: multiple levels, and a snoop/lookup at every level.
- Bandwidth bottleneck at the root.
This approach does not scale well and has, for the most part, been abandoned.

Hierarchical Snoopy Cache Coherence

Simplest way: a hierarchy of buses (or rings) with snoop-based coherence at each level. Considering buses, there are two possibilities:
(a) All main memory at the global (B2) bus: centralized main memory (UMA). Does not scale well.
(b) Main memory distributed among the clusters of SMP nodes: distributed main memory (NUMA).
[Figure: two bus hierarchies built from processor/L2 nodes on local B1 buses joined by a global B2 bus; (a) with centralized main memory (Mp) at B2, (b) with memory distributed per cluster.]

Bus Hierarchies with Centralized Memory

[Figure: processor nodes on local B1 buses, each B1 coupled through an L2 (or L3) cache to the global B2 bus, which connects to the centralized main memory (Mp).]
- B1 follows the standard snoopy protocol.
- Need a monitor per B1 bus: it decides what transactions to pass back and forth between buses, acting as a filter to reduce bandwidth needs.
- Use an L2 (or L3) cache:
  - Much larger than the processor caches (set associative).
  - Must maintain inclusion.
  - Has a dirty-but-stale bit per line.
  - The L2 (or L3) cache can be DRAM-based, since fewer references get to it.

Bus Hierarchies with Centralized Memory: Advantages and Disadvantages

Advantages:
- Simple extension of the bus-based scheme.
- Misses to main memory require a single traversal to the root of the hierarchy.
- Placement of shared data is not an issue (one centralized memory).

Disadvantages:
- Misses to local data also traverse the hierarchy: higher traffic and latency.
- Memory at the global bus must be highly interleaved for bandwidth.

Bus Hierarchies with Distributed Memory

[Figure: clusters of processor nodes, each with its own memory and L2 (or L3) cache on a local B1 bus, joined by a global B2 bus, system bus, or coherent point-to-point link (e.g. coherent HyperTransport or QPI).]
- Main memory is distributed among the clusters of SMP nodes.
- A cluster is a full-fledged bus-based machine, memory and all.
- Automatic scaling of memory (each cluster brings some with it).
- Good placement can reduce global bus traffic and latency.
- But latency to far-away memory is larger, as expected in NUMA systems.

Scalable Approach #2: Directories

A directory is composed of a number of directory entries. Every memory block has an associated directory entry that keeps track of the nodes or processors that have cached copies of the memory block, and their states.

On a miss: (0) invoke the coherence protocol, (a) find the directory entry, (b) look it up, and (c) communicate only with the nodes that have copies, if necessary.

A possible directory entry (memory-based, full-map or full bit vector type), one entry per memory block:
- Presence bits P0, P1, ..., PN-1: one presence bit per processor; Pi = 1 if processor i has a copy.
- Dirty bit: if Dirty = 1 then only one Pi = 1, and processor i is the owner of the block.

In scalable networks, communication with the directory and with the nodes that have copies is through network transactions. A number of alternatives exist for organizing directory information (covered next).
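As a rough illustration of the full-map entry sketched above, the following C fragment models one directory entry as a presence bit vector plus a dirty bit. The 64-processor limit, type choices, and helper names are assumptions for the example, not any particular machine's format.

```c
/* Sketch of a full-map (full bit vector) directory entry:
 * one presence bit per processor plus a dirty bit. Sizes are illustrative. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define NUM_PROCS 64                        /* P processors, one bit each in a 64-bit word */

typedef struct {
    uint64_t presence;                      /* bit i set => processor i has a copy          */
    bool     dirty;                         /* set => exactly one presence bit set (owner)  */
} dir_entry_t;

static inline void set_presence(dir_entry_t *e, int proc)   { e->presence |=  (1ULL << proc); }
static inline void clear_presence(dir_entry_t *e, int proc) { e->presence &= ~(1ULL << proc); }
static inline bool has_copy(const dir_entry_t *e, int proc) { return (e->presence >> proc) & 1ULL; }

int main(void)
{
    dir_entry_t e = { .presence = 0, .dirty = false };
    set_presence(&e, 3);                    /* processor 3 caches the block */
    set_presence(&e, 17);                   /* processor 17 caches the block */
    printf("P17 has copy: %d\n", has_copy(&e, 17));
    clear_presence(&e, 3);                  /* processor 3 evicts its copy  */
    return 0;
}
```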

Organizing Directories

Directory schemes address (a) how to find the source of directory information and (b) how to locate copies.
- Centralized: both memory and directory are centralized (used in UMA SAS). Does not scale well.
- Distributed (used in scalable NUMA SAS): both memory and directories are distributed.
  - Flat: the directory entry is co-located with the memory block itself at its home node.
    - Memory-based: Full-Map (Full Bit Vector) or Limited Directory (e.g. SGI Origin, Stanford DASH).
    - Cache-based (chained directories): singly linked chain or doubly linked chain (e.g. IEEE Scalable Coherent Interface (SCI), Sequent NUMA-Q).
  - Hierarchical.

Basic Operation of a Centralized Directory (Full Map)

[Figure: P processors with caches connected through an interconnection network to the centralized memory and directory; the directory entry per memory block holds the presence bits (one per processor, 0..N-1) and a dirty bit.]

Both memory and directory are centralized. Assuming P processors, write-back caches, and write-invalidate:
- With each cache block in memory: P presence bits p[i], 1 dirty bit (dirty bit on implies only one p[i] on).
- With each cache block in a cache: 1 valid bit and 1 dirty (owner) bit.

Read from main memory (read miss) by processor i:
- If dirty-bit OFF then { read from main memory; turn p[i] ON; }
- If dirty-bit ON then { recall line from dirty processor j (its cache state goes to shared: j is no longer the block owner, forced write back); update memory; turn dirty-bit OFF; turn p[i] ON; supply recalled data to i; }

Write miss to main memory by processor i:
- If dirty-bit OFF then { supply data to i; send invalidations to all caches that have the block; turn dirty-bit ON; turn p[i] ON; ... }
- If dirty-bit ON then { recall line from dirty processor j (with p[j] on); update memory; block state on processor j becomes invalid; turn p[i] ON; supply recalled data to i; }

This centralized organization does not scale well.
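The read-miss and write-miss actions above can also be written out as code. The sketch below follows the slide's pseudocode under its stated assumptions (write-back caches, write-invalidate, centralized full-map directory); the printf calls stand in for the actual data and invalidation messages, and all names are illustrative.

```c
/* Sketch of the centralized full-map directory actions quoted above
 * (write-back, write-invalidate). Message sends are stubbed out with printf. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define P 8                                               /* number of processors (example) */
typedef struct { uint32_t presence; bool dirty; } dir_entry_t;

static int owner_of(const dir_entry_t *e)                 /* dirty => exactly one bit set */
{
    for (int j = 0; j < P; j++)
        if (e->presence & (1u << j)) return j;
    return -1;
}

void read_miss(dir_entry_t *e, int i)
{
    if (!e->dirty) {
        printf("supply block from memory to P%d\n", i);
    } else {
        int j = owner_of(e);
        printf("recall block from owner P%d (downgrade to shared), update memory\n", j);
        e->dirty = false;                                 /* owner gives up ownership */
        printf("supply recalled block to P%d\n", i);
    }
    e->presence |= (1u << i);                             /* turn p[i] ON */
}

void write_miss(dir_entry_t *e, int i)
{
    if (!e->dirty) {
        for (int j = 0; j < P; j++)                       /* invalidate all sharers */
            if (j != i && (e->presence & (1u << j)))
                printf("invalidate copy at P%d\n", j);
    } else {
        int j = owner_of(e);
        printf("recall block from owner P%d, update memory, invalidate it\n", j);
    }
    e->presence = (1u << i);                              /* i becomes sole holder */
    e->dirty = true;                                      /* and the new owner     */
    printf("supply block to P%d\n", i);
}

int main(void)
{
    dir_entry_t e = { .presence = (1u << 2) | (1u << 4), .dirty = false };
    read_miss(&e, 0);        /* clean block: served from memory            */
    write_miss(&e, 5);       /* invalidates P0, P2, P4; P5 becomes owner   */
    return 0;
}
```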

Distributed, Flat, Memory-based Schemes

All information about copies of a memory block is co-located with the block itself at its home node (the directory node of the block). Works just like the centralized scheme, except distributed.

Scaling of performance characteristics:
- Traffic on a write: proportional to the number of sharers.
- Latency on a write: can issue invalidations to sharers in parallel.

Scaling of storage overhead:
- Simplest representation: Full-Map (full bit vector), i.e. one presence bit per node: P presence bits + 1 dirty bit per block directory entry.
- Storage overhead doesn't scale well with P; a 64-byte cache line implies:
  - 64 nodes: 65/(64 x 8) = 12.7% overhead.
  - 256 nodes: 50% overhead.
  - 1024 nodes: 200% overhead.
- For M memory blocks in memory, storage overhead is proportional to P*M.
(P = N = number of processors; M = total number of memory blocks.)

Examples: SGI Origin, Stanford DASH.
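The overhead percentages above follow from dividing the (P + 1) directory bits per entry by the 512 data bits of a 64-byte block; here is a quick check (hypothetical snippet, not from the lecture):

```c
/* Worked check of the full-map storage-overhead figures above:
 * overhead = (P presence bits + 1 dirty bit) / (8 * block_bytes data bits). */
#include <stdio.h>

int main(void)
{
    const int block_bytes = 64;
    const int procs[] = { 64, 256, 1024 };
    for (int k = 0; k < 3; k++) {
        int P = procs[k];
        double overhead = (double)(P + 1) / (block_bytes * 8);
        printf("P = %4d: overhead = %.1f%%\n", P, overhead * 100.0);
    }
    return 0;    /* prints ~12.7%, 50.2%, 200.2%, matching the slide's rounded figures */
}
```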

Basic Operation of a Distributed, Flat, Memory-based Directory

Assuming write-back, write-invalidate. Recall: (a) find the source of info (the home node directory); (b) get the directory info (entry) from the home node (e.g. owner, sharers); (c) protocol actions: communicate with other nodes as needed.

(a) Read miss to a block in dirty state (one owner):
1. Read request from the requesting node to the directory (home node of the block).
2. Reply with the owner's identity.
3. Read request to the owner.
4a. Data reply from the owner to the requestor.
4b. Revision message from the owner to the directory (update the directory entry).

(b) Write miss to a block with two sharers:
1. Read-exclusive (RdEx) request from the requesting node to the directory (home node of the block).
2. Reply with the sharers' identities.
3a/3b. Invalidation requests to the sharers.
4a/4b. Invalidation acknowledgements from the sharers.
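For readers who prefer a linear trace over the original diagram, the sketch below simply prints the two message sequences described above. The message names and node numbers are placeholders for illustration, not a protocol specification.

```c
/* Illustrative traces of the two protocol flows above, for a flat,
 * memory-based directory (write-back, write-invalidate). */
#include <stdio.h>

void read_miss_to_dirty_block(int requestor, int home, int owner)
{
    printf("1.  P%d -> home P%d : read request\n", requestor, home);
    printf("2.  home P%d -> P%d : reply with owner identity (P%d)\n", home, requestor, owner);
    printf("3.  P%d -> owner P%d : read request\n", requestor, owner);
    printf("4a. owner P%d -> P%d : data reply\n", owner, requestor);
    printf("4b. owner P%d -> home P%d : revision message (update directory)\n", owner, home);
}

void write_miss_two_sharers(int requestor, int home, int s1, int s2)
{
    printf("1.  P%d -> home P%d : read-exclusive request\n", requestor, home);
    printf("2.  home P%d -> P%d : reply with sharer identities (P%d, P%d)\n", home, requestor, s1, s2);
    printf("3a. P%d -> P%d : invalidation request\n", requestor, s1);
    printf("3b. P%d -> P%d : invalidation request\n", requestor, s2);
    printf("4a. P%d -> P%d : invalidation ack\n", s1, requestor);
    printf("4b. P%d -> P%d : invalidation ack\n", s2, requestor);
}

int main(void)
{
    read_miss_to_dirty_block(0, 1, 2);
    write_miss_two_sharers(0, 1, 2, 3);
    return 0;
}
```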

Reducing Storage Overhead of Distributed Memory-based Directories

Optimizations for full bit vector (Full Map) schemes:
- Increase the cache block size (reduces storage overhead proportionally).
- Use multiprocessor (SMP) nodes (one presence bit per multiprocessor node, not per processor). Still scales as P*M, but not a problem for all but very large machines: 256 processors, 4 per node, 128-byte block: 6.25% overhead.

Limited Directories: addressing entry width
- Observation: most blocks are cached by only a few nodes.
- Don't keep a bit per node; instead the directory entry contains a few pointers to sharing nodes (each pointer has log2 P bits, e.g. P = 1024 => 10-bit pointers).
- Sharing patterns indicate a few pointers should suffice (five or so).
- Need an overflow strategy when there are more sharers.
- Storage requirements: O(M log2 P).

Reducing "height": addressing the M term
- Observation: the number of memory blocks >> the number of cache blocks, so most directory entries are useless at any given time.
- Organize the directory as a cache, rather than having one entry per memory block.
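A limited-pointer entry of the kind described above might look like the following sketch; the pointer count of five, the field widths, and the overflow flag are assumptions chosen to mirror the text, and a real design would also define the overflow policy (broadcast, coarse vector, etc.).

```c
/* Sketch of a limited-pointer directory entry: a few node pointers of
 * ceil(log2 P) bits instead of P presence bits, plus an overflow flag. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_PTRS 5                      /* "five or so" pointers per entry */

typedef struct {
    uint16_t sharer[MAX_PTRS];          /* node IDs; 10 bits suffice for P = 1024 */
    uint8_t  count;                     /* pointers currently in use              */
    bool     overflow;                  /* more sharers than pointers: a coarser  */
                                        /* overflow policy must take over         */
} limited_dir_entry_t;

void add_sharer(limited_dir_entry_t *e, uint16_t node)
{
    if (e->count < MAX_PTRS) {
        e->sharer[e->count++] = node;
    } else {
        e->overflow = true;             /* overflow strategy needed from here on */
    }
}

int main(void)
{
    limited_dir_entry_t e = { .count = 0, .overflow = false };
    for (uint16_t n = 0; n < 7; n++) add_sharer(&e, n);   /* 7 sharers, 5 pointers */
    printf("pointers used: %d, overflow: %d\n", e.count, e.overflow);
    return 0;
}
```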

Distributed, Flat, Cache-based Schemes

How they work:
- The memory block at the home node only holds a pointer to the rest of the directory info (the start of a chain or linked list of sharers).
- A distributed linked list of copies weaves through the caches: each cache tag has a pointer to the next cache with a copy (sharer).
- On a read, add yourself to the head of the list.
- On a write, propagate a chain of invalidations down the list.

[Figure: the memory block at the home node points to a doubly-linked list/chain of sharers in the caches of Node 0, Node 1, and Node 2; a singly-linked chain is also possible but slower.]

Utilized in the Scalable Coherent Interface (SCI) IEEE standard, which uses a doubly-linked list; used in Dolphin interconnects.
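The list manipulation described above (add at the head on a read, walk and invalidate on a write) can be sketched as follows. This is an in-memory toy model, not the SCI protocol itself: node IDs and array indexing stand in for the cache tags and network transactions of a real implementation.

```c
/* Sketch of a cache-based (SCI-style) sharing list: the home memory block
 * holds only a head pointer; each cached copy carries next/prev pointers. */
#include <stdio.h>

#define NIL       (-1)
#define MAX_NODES 8

typedef struct { int next, prev; int has_copy; } cache_tag_t;

static cache_tag_t tag[MAX_NODES];          /* one tag per node, for one block  */
static int head = NIL;                      /* pointer kept at the home node    */

void read_copy(int node)                    /* on read: add self at list head   */
{
    tag[node].has_copy = 1;
    tag[node].next = head;
    tag[node].prev = NIL;
    if (head != NIL) tag[head].prev = node;
    head = node;
}

void write_invalidate(int writer)           /* on write: walk list, invalidate  */
{
    for (int n = head; n != NIL; n = tag[n].next)
        if (n != writer) { tag[n].has_copy = 0; printf("invalidate node %d\n", n); }
    head = writer;                          /* writer becomes the sole copy     */
    tag[writer].next = tag[writer].prev = NIL;
    tag[writer].has_copy = 1;
}

int main(void)
{
    read_copy(2); read_copy(5); read_copy(1);   /* sharing list: 1 -> 5 -> 2 */
    write_invalidate(5);                        /* invalidates nodes 1 and 2 */
    return 0;
}
```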

Scaling Properties of Cache-based Schemes

- Traffic on a write: proportional to the number of sharers.
- Latency on a write: proportional to the number of sharers.
  - We don't know the identity of the next sharer until we reach the current one; there is also assist processing at each node along the way.
  - Even reads involve more than one other communication assist: the home and the first sharer on the list.
- Storage overhead: quite good scaling along both axes.
  - Only one head pointer per memory block; the rest of the storage overhead is proportional to cache size (not to the total number of memory blocks, M).
- Other properties: Good: mature, IEEE standard (SCI), fairness. Bad: complex.

Operation: Distributed Hierarchical Directories

The directory is a hierarchical data structure:
- Leaves are processing nodes; internal nodes are just directories.
- This is a logical hierarchy, not necessarily a physical one (it can be embedded in a general network).
- Potential bandwidth bottleneck at the root.

[Figure: processing nodes at the leaves under level-1 directories, which sit under a level-2 directory. A level-1 directory tracks which of its children processing nodes have a copy of the memory block, and also which local memory blocks are cached outside its subtree; inclusion is maintained between processor caches and the directory. A level-2 directory tracks which of its children level-1 directories have a copy of the memory block, and also which local memory blocks are cached outside its subtree; inclusion is maintained between the level-1 directories and the level-2 directory.]

(a) How to Find Directory Information

- Centralized memory and directory: easy, go to it. But not scalable.
- Distributed memory and directory:
  - Flat schemes: the directory is distributed with the memory, at the cache block's home node. The location is based on the address: a network transaction is sent directly to the home.
  - Hierarchical schemes: the directory is organized as a hierarchical data structure. Leaves are processing nodes; internal nodes have only directory state. A node's directory entry for a block says whether each subtree caches the block. To find directory info, send a search message up to the parent; it routes itself through directory lookups. Similar to hierarchical snooping, but point-to-point messages are sent between children and parents.

Recall: (a) find the source of info (e.g. home node directory); (b) get directory information (e.g. owner, sharers); (c) protocol actions: communicate with other nodes as needed.

(b) How Is the Location of Copies Stored?

- Hierarchical schemes: through the hierarchy. Each directory has presence bits for its children (subtrees) and a dirty bit.
- Flat schemes: varies a lot (memory-based vs. cache-based), with different storage overheads and performance characteristics.
  - Memory-based schemes: info about copies is stored at the home node with the memory block. Examples: DASH, Alewife, SGI Origin, FLASH.
  - Cache-based schemes: info about copies is distributed among the copies themselves, via a linked list in which each copy points to the next. Example: Scalable Coherent Interface (SCI, an IEEE standard).

Summary of Directory Organizations

Flat schemes:
- Issue (a), finding the source of directory data: go to the home node, based on the address.
- Issue (b), finding out where the copies are:
  - Memory-based: all info is in the directory at the home node.
  - Cache-based: the home has a pointer to the first element of a distributed linked list.
- Issue (c), communicating with those copies:
  - Memory-based: point-to-point messages; can be multicast or overlapped.
  - Cache-based: part of the point-to-point linked list traversal used to find them; serialized.

Hierarchical schemes:
- All three issues are handled by sending messages up and down the tree.
- No single explicit list of sharers.
- The only direct communication is between parents and children.

Summary of Directory Approaches

- Directories offer scalable coherence on general networks; there is no need for broadcast media.
- Many possibilities exist for organizing directories and managing protocols.
- Hierarchical directories are not used much: high latency, many network transactions, and a bandwidth bottleneck at the root.
- Both memory-based and cache-based distributed flat schemes are alive:
  - For memory-based, a full bit vector suffices for moderate scale (measured in nodes visible to the directory protocol, not processors).

Approach #3: A Popular Middle Ground (Two-level Hierarchy)

- Individual nodes are multiprocessors, connected non-hierarchically (e.g. a mesh of SMPs).
- Coherence across nodes is directory-based: the directory keeps track of nodes, not individual processors.
- Coherence within nodes is snooping or directory (e.g. snooping + directory).
- The two levels are orthogonal, but need a good interface of functionality.
- Examples:
  - Convex Exemplar: directory-directory.
  - Sequent, Data General, HAL: directory-snoopy.
(SMP = symmetric multiprocessor node)

Example Two-level Hierarchies

[Figure: four organizations. (a) Snooping-snooping (i.e. 2-level hierarchical snooping): bus-based nodes with main memory and snooping adapters on local B1 buses, joined by a global B2 bus. (b) Snooping-directory: bus-based nodes with main memory, directory, and assist, joined by a network. (c) Directory-directory: network-based nodes (Network1) with directory adapters, joined by Network2. (d) Directory-snooping: network-based nodes (Network1) with directory/snoopy adapters, joined by a bus (or ring).]

Advantages of Multiprocessor Nodes

Potential for cost and performance advantages:
- Amortization of node fixed costs over multiple processors (applies even if processors are simply packaged together but not coherent).
- Can use commodity SMPs.
- Fewer nodes for the directory to keep track of.
- With good mapping/data allocation, much communication may be contained within a node (cheaper than going over the network).
- Nodes can prefetch data for each other (fewer remote misses).
- Combining of requests (like hierarchical, only two-level).
- Can even share caches (overlapping of working sets).

Benefits depend on the sharing pattern (and mapping):
- Good for widely read-shared data, e.g. tree data in Barnes-Hut.
- Good for nearest-neighbor communication, if properly mapped.
- Not so good for all-to-all communication.
(SMP = symmetric multiprocessor node)

Disadvantages of Coherent MP Nodes

- Memory bandwidth is shared among the processors in a node. Fix: each processor has its own memory (NUMA).
- The bus increases latency to local memory. Fix: use point-to-point interconnects (e.g. HyperTransport), or a crossbar as in the SGI Origin 2000.
- With local node coherence in place, a CPU typically must wait for local snoop results before sending remote requests.
- Bus snooping at remote nodes also increases delays there, increasing latency and reducing bandwidth.
- Overall, this may hurt performance if sharing patterns don't comply with the system architecture.
(MP = multiprocessor)

Non-Uniform Memory ccess (NUM) Example: MD 8-way Opteron Server Node Dedicated point-to-point interconnects (oherent HyperTransport links) used to connect processors alleviating the traditional limitations of FSB-based SM systems (yet still providing the cache coherency support needed) Each processor has two integrated DDR memory channel controllers: memory bandwidth scales up with number of processors. NUM architecture since a processor can access its own memory at a lower latency than access to remote memory directly connected to other processors in the system. Total 16 processor cores when dual core Opteron processors used Repeated here from lecture 1 ME655 - Shaaban #27 lec # 11 Fall 2015 12-1-2015