Multiprocessor Interconnection Networks


Multiprocessor Interconnection Networks
Todd C. Mowry, CS 740, November 19, 1998
Topics: Network design space, Contention, Active messages

Networks
Design options: Topology, Routing, Direct vs. Indirect, Physical implementation
Evaluation criteria: Latency, Bisection bandwidth, Contention and hot-spot behavior, Partitionability, Cost and scalability, Fault tolerance

Buses
[Figure: processors (P) sharing a single bus]
Simple and cost-effective for small-scale multiprocessors.
Not scalable (limited bandwidth; electrical complications).

Crossbars
[Figure: 4x4 crossbar connecting processors (P) to memories (M)]
Each port has a link to every other port.
+ Low latency and high throughput.
- Cost grows as O(N^2), so not very scalable.
- Difficult to arbitrate and to get all data lines into and out of a centralized crossbar.
Used in small-scale MPs (e.g., C.mmp) and as a building block for other networks (e.g., Omega).

Rings
[Figure: processors (P) connected in a ring]
Cheap: cost is O(N).
Point-to-point wires and pipelining can be used to make them very fast.
+ High overall bandwidth.
- High latency: O(N).
Examples: KSR machine, Hector.

Trees
[Figures: H-tree layout; fat tree]
Cheap: cost is O(N); latency is O(log N).
Easy to lay out as planar graphs (e.g., H-trees).
For random permutations, the root can become a bottleneck.
To avoid the root becoming a bottleneck, fat trees (used in the CM-5) make channels wider as you move toward the root.

Hypercubes
[Figure: 0-D through 4-D hypercubes]
Also called binary n-cubes; # of nodes = N = 2^n.
Latency is O(log N); out-degree of each PE is O(log N).
Minimizes hops and has good bisection BW, but tough to lay out in 3-space.
Popular in early message-passing computers (e.g., Intel iPSC, nCUBE).
Used as a direct network ==> emphasizes locality.
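
A minimal sketch (mine, not from the slides) of how routing follows from the node labeling: with dimension-order ("e-cube") routing on a binary n-cube, each hop corrects one differing address bit, so the hop count is the Hamming distance between source and destination, at most log2(N). Function names and example values are illustrative.

    /* E-cube routing on a binary n-cube: flip one differing address bit per hop. */
    #include <stdio.h>

    unsigned next_hop(unsigned cur, unsigned dst, unsigned n_dims)
    {
        unsigned diff = cur ^ dst;
        for (unsigned d = 0; d < n_dims; d++)
            if (diff & (1u << d))
                return cur ^ (1u << d);   /* correct the lowest differing dimension */
        return cur;                       /* already at the destination */
    }

    int main(void)
    {
        unsigned n = 4, cur = 0x3, dst = 0xC;   /* 4-D cube: 0011 -> 1100 */
        printf("%X", cur);
        while (cur != dst) {
            cur = next_hop(cur, dst, n);
            printf(" -> %X", cur);
        }
        printf("\n");                           /* 4 hops = the Hamming distance */
        return 0;
    }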

Multistage Logarithmic Networks
Cost is O(N log N); latency is O(log N); throughput is O(N).
Generally indirect networks.
Many variations exist (Omega, Butterfly, Benes, ...).
Used in many machines: BBN Butterfly, IBM RP3, ...

Omega Network
[Figure: 8x8 Omega network with inputs and outputs labeled 000-111]
All stages are the same, so a recirculating network can be used.
Single path from source to destination.
Can add extra stages and pathways to minimize collisions and increase fault tolerance.
Can support combining.
Used in IBM RP3.

Butterfly Network
[Figure: 8x8 butterfly network with inputs and outputs labeled 000-111; the first stage splits on the MSB, the last on the LSB]
Equivalent to the Omega network.
Easy to see the routing of messages.
Also very similar to hypercubes (direct vs. indirect, though).
Clearly see that the bisection of the network is N/2 channels.
Can use higher-degree switches to reduce depth.
Used in BBN machines.
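
To make the "easy to see routing" point concrete, here is a hedged sketch of destination-tag routing through a log2(N)-stage network of 2x2 switches: each stage picks its output port from one bit of the destination address (MSB first here, matching the figure's split order). Which bit each stage consumes depends on the actual wiring, so treat this as illustrative only.

    /* Destination-tag routing: stage s reads one destination bit and picks port 0 or 1. */
    #include <stdio.h>

    void route(unsigned src, unsigned dst, unsigned n_stages)
    {
        printf("route %u -> %u:", src, dst);
        for (int s = (int)n_stages - 1; s >= 0; s--) {
            unsigned bit = (dst >> s) & 1u;          /* destination tag bit */
            printf(" stage %d -> port %u;", (int)n_stages - 1 - s, bit);
        }
        printf("\n");
    }

    int main(void)
    {
        route(3, 6, 3);   /* 8x8 network: 011 -> 110 uses ports 1, 1, 0 */
        return 0;
    }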

k-ary n-cubes
[Figure: 4-ary 3-cube]
Generalization of hypercubes (k nodes in a string along each dimension).
Total # of nodes = N = k^n.
k > 2 reduces the # of channels at the bisection, thus allowing for wider channels but more hops.
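
A small illustrative calculation (not from the slides) of the basic figures for the 4-ary 3-cube in the figure, assuming a torus with bidirectional channels and wraparound links; the 2*k^(n-1) bisection count applies to that case with k > 2.

    /* Size, degree, diameter, and approximate bisection of a k-ary n-cube torus. */
    #include <stdio.h>

    int main(void)
    {
        unsigned k = 4, n = 3;              /* the 4-ary 3-cube from the slide */
        unsigned long N = 1, bisection = 2;
        for (unsigned i = 0; i < n; i++) N *= k;
        for (unsigned i = 0; i + 1 < n; i++) bisection *= k;

        printf("nodes      = %lu\n", N);              /* 64 */
        printf("degree     = %u\n", 2 * n);           /* 6  */
        printf("diameter   = %u\n", n * (k / 2));     /* 6 hops */
        printf("bisection ~= %lu channels\n", bisection);  /* ~32 */
        return 0;
    }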

Routing Strategies and Latency
Store-and-forward routing: Tsf = Tc * D * (L / W)
  (L = msg length, D = # of hops, W = channel width, Tc = per-hop delay)
Wormhole routing: Twh = Tc * (D + L / W)
  The # of hops is an additive rather than a multiplicative factor.
Virtual cut-through routing: older than, and similar to, wormhole; when blockage occurs, however, the message is removed from the network and buffered.
Deadlocks are avoided through the use of virtual channels and by using a routing strategy that does not allow channel-dependency cycles.
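
Plugging made-up numbers into the two formulas above shows why the additive hop term matters; this is only a back-of-the-envelope sketch, and all parameter values are invented.

    /* Compare store-and-forward and wormhole latency for one example message. */
    #include <stdio.h>

    int main(void)
    {
        double Tc = 1.0;     /* per-hop delay (cycles) */
        double D  = 10.0;    /* hops */
        double L  = 256.0;   /* message length (bits) */
        double W  = 16.0;    /* channel width (bits) */

        double t_sf = Tc * D * (L / W);    /* store-and-forward: whole message per hop */
        double t_wh = Tc * (D + L / W);    /* wormhole: head pipelines, D is additive  */

        printf("store-and-forward: %.0f cycles\n", t_sf);  /* 160 */
        printf("wormhole:          %.0f cycles\n", t_wh);  /* 26  */
        return 0;
    }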

Advantages of Low-Dimensional Nets
What can be built in VLSI is often wire-limited.
LDNs are easier to lay out:
  more uniform wiring density (easier to embed in 2-D or 3-D space)
  mostly local connections (e.g., grids)
Compared with HDNs (e.g., hypercubes), LDNs have:
  shorter wires (reduces hop latency)
  fewer channels across the bisection (so, for a constant bisection width in wires, channels can be wider)
  ==> increased channel width is the major reason why LDNs win!
Factors that limit end-to-end latency:
  LDNs: number of hops
  HDNs: length of the message going across very narrow channels
LDNs have better hot-spot throughput (more pins per node than HDNs).
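
A worked example of the constant-bisection-width argument (mine, not from the slides): give a 1024-node hypercube and a 1024-node 32-ary 2-cube (a 2-D torus) the same total number of bisection wires and compare how wide each channel can be. The wire budget is made up.

    /* Equal bisection wires: fewer channels across the cut means wider channels. */
    #include <stdio.h>

    int main(void)
    {
        int N = 1024;
        int wires = 16384;                 /* total wires across the bisection (made up) */

        int hcube_channels = N / 2;        /* hypercube: N/2 channels cross the bisection */
        int torus_channels = 2 * 32;       /* 32-ary 2-cube torus: 2*k channels           */

        printf("hypercube: %4d channels x %3d bits each\n",
               hcube_channels, wires / hcube_channels);   /* 512 x 32  */
        printf("2-D torus: %4d channels x %3d bits each\n",
               torus_channels, wires / torus_channels);   /* 64 x 256  */
        return 0;
    }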

Performance Under Contention

Types of Hot Spots
Module hot spots: lots of PEs accessing the same PE's memory at the same time.
  Possible solutions: suitable distribution or replication of data; high-BW memory system design.
Location hot spots: lots of PEs accessing the same memory location at the same time.
  Possible solutions: caches for read-only data, updates for R-W data; software or hardware combining.

NYU Ultracomputer / IBM RP3
Focus on scalable bandwidth and synchronization in the presence of hot spots.
Machine model: paracomputer (or the WRAM model of Borodin):
  autonomous PEs sharing a central memory
  simultaneous reads and writes to the same location can all be handled in a single cycle
  semantics given by the serialization principle: "... as if all operations occurred in some (unspecified) serial order"
Obviously this is a very desirable model; the question is how well it can be realized in practice.
To achieve scalable synchronization, read and write operations were further extended with atomic read-modify-write (fetch-&-op) primitives.

The Fetch-&-Add Primitive
F&A(V, e) returns the old value of V and atomically sets V = V + e.
If V = k, and X = F&A(V, a) and Y = F&A(V, b) are done at the same time:
  one possible result: X = k, Y = k+a, and V = k+a+b
  another possible result: Y = k, X = k+b, and V = k+a+b
Example use: implementation of task queues (a conceptually infinite array Q with full bits, insert pointer qi, and delete pointer qd):
  Insert: myi = F&A(qi, 1); Q[myi] = data; full[myi] = 1;
  Delete: myi = F&A(qd, 1); while (!full[myi]) ; data = Q[myi]; full[myi] = 0;
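
The queue idiom above maps directly onto an ordinary atomic fetch-and-add; the following is a minimal sketch of mine using C11 atomics, with illustrative names (Q, full, qi, qd) and none of the wraparound or overflow handling a real bounded queue would need.

    #include <stdatomic.h>

    #define QSIZE 1024
    static int Q[QSIZE];
    static atomic_int full[QSIZE];
    static atomic_int qi, qd;               /* insert and delete pointers */

    void insert(int data)
    {
        int myi = atomic_fetch_add(&qi, 1); /* claim a unique slot */
        Q[myi] = data;
        atomic_store(&full[myi], 1);        /* publish it */
    }

    int delete_item(void)
    {
        int myi = atomic_fetch_add(&qd, 1); /* claim the next slot to drain */
        while (!atomic_load(&full[myi]))    /* spin until the producer publishes */
            ;
        int data = Q[myi];
        atomic_store(&full[myi], 0);
        return data;
    }

    int main(void)
    {
        insert(42);
        return delete_item() == 42 ? 0 : 1;
    }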

The IBM RP3 (1985)
Design plan:
  512 RISC processors (IBM 801s)
  Distributed main memory with software cache coherence
  Two networks: a low-latency Banyan and a combining Omega
  ==> Goal was to build the NYU Ultracomputer model
[Figure: each node has a processor, memory-map unit, cache, and network interface to interleaved main memory, with a moveable boundary between local (L) and global (G) storage; nodes connect through the networks]
Interesting aspects:
  Data distribution scheme to address locality and module hot spots
  Combining network design to address synchronization bottlenecks

Combining Network
Omega topology; 64-port network resulting from 6 levels of 2x2 switches.
Request and response networks are integrated together.
Key observation: to any destination module, the paths from all sources form a tree.
[Figure: 8x8 Omega network with inputs and outputs labeled 000-111]

Combining Network Details
Requests must come together locationally (to the same location), spatially (in the queue of the same switch), and temporally (within a small time window) for combining to happen.
[Figure: switch with combining queues on the forward (request) path, a wait buffer, and a lookup on the reverse (reply) path]
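
To illustrate what combining does once requests meet, here is a hedged sketch (mine, not RP3 hardware) of the classic rule for two fetch-and-add requests to the same location: the switch forwards one combined request F&A(V, a+b), records a in its wait buffer, and when the reply v (the old value of V) returns it answers the first requester with v and the second with v + a. All names are illustrative.

    #include <stdio.h>

    struct fa_req { int src; int inc; };

    /* Forward path: combine r1 and r2 into one request; *saved records r1.inc. */
    struct fa_req combine(struct fa_req r1, struct fa_req r2, int *saved)
    {
        *saved = r1.inc;
        struct fa_req out = { -1, r1.inc + r2.inc };
        return out;
    }

    /* Reverse path: split the single reply v back into two replies. */
    void split_reply(int v, int saved, int *reply1, int *reply2)
    {
        *reply1 = v;            /* first requester sees the original old value   */
        *reply2 = v + saved;    /* second sees it as if serialized after the 1st */
    }

    int main(void)
    {
        struct fa_req a = {0, 3}, b = {1, 5};
        int saved, r1, r2;
        struct fa_req c = combine(a, b, &saved);   /* one F&A(V, 8) goes upstream */
        split_reply(100, saved, &r1, &r2);         /* memory replied old V = 100  */
        printf("combined inc=%d, replies: %d and %d\n", c.inc, r1, r2); /* 8, 100, 103 */
        return 0;
    }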

Contention for the Network
Location hot spot: a higher rate of accesses to a single location imposed on uniform background traffic.
May arise due to synchronization accesses or other heavily shared data.
Not only are accesses to the hot-spot location delayed; they found that all other accesses were delayed too (the tree saturation effect).

Saturation Model
Parameters: p = # of PEs; r = # of refs / PE / cycle; h = fraction of refs from each PE that go to the hot spot.
Total traffic to the hot-spot memory module = rhp + r(1-h), where "rhp" is the hot-spot refs and "r(1-h)" is the module's share of the uniform traffic.
Latencies for all refs rise suddenly when rhp + r(1-h) = 1, assuming the memory handles one request per cycle.
Tree saturation effect: buffers at all switches on the paths to the hot-spot module fill up, and even non-hot-spot requests have to pass through them.
[Figure: network with filled-up buffers at the switches between the PEs and the hot-spot module]
They found that combining helped in handling such location hot spots.
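
A quick illustrative calculation with this model (not from the slides): solving rhp + r(1-h) = 1 for h gives the hot-spot fraction at which the hot module saturates. The reference rate below is made up; p = 512 matches the RP3 design point mentioned earlier.

    /* Find the hot-spot fraction h at which r*h*p + r*(1-h) = 1. */
    #include <stdio.h>

    int main(void)
    {
        double p = 512.0;    /* PEs (the RP3 design point) */
        double r = 0.1;      /* references per PE per cycle (made up) */

        double h_sat = (1.0 / r - 1.0) / (p - 1.0);
        printf("hot module saturates once h exceeds %.4f (%.2f%% of refs)\n",
               h_sat, 100.0 * h_sat);   /* about 1.76%% for these numbers */
        return 0;
    }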

Bandwidth Issues: Summary
Network bandwidth
Memory bandwidth: local bandwidth, global bandwidth
Hot-spot issues: module hot spots, location hot spots

Active Messages (slide content courtesy of David Culler)

Problems with Blocking Send/Receive
[Timeline figure: Node 1 and Node 2 each compute, then a blocking send on Node 1 and a blocking receive on Node 2 exchange a message with 3-way latency before both resume computing]
Remember: back-to-back DMA hardware...

Problems with Non-blocking Send/Receive
[Timeline figure: Node 1 issues a stream of sends while computing; Node 2 keeps computing and issues the matching receives later, requiring expensive buffering in between]
With receive hints: the "waiting-matching store" problem.

Problems with Shared Memory
Local storage hierarchy: several levels are accessed before communication starts (DASH: 30 cycles).
DRAM resources are reserved by outstanding requests; difficulty in suspending threads.
Inappropriate semantics in some cases: can only read/write cache lines; signals turn into consistency issues.
[Figure: a CPU load misses in the L1-$, goes through the L2-$ and NI for the send/receive, and the fill returns with the data]
Example: broadcast tree
    while (!me->flag) ;
    left->data  = me->data;  left->flag  = 1;
    right->data = me->data;  right->flag = 1;

Active Messages
Idea: associate a small amount of remote computation with each message.
[Figure: on each node, an arriving message carries data plus a handler IP; the handler runs beside the primary computation]
The head of the message is the address of its handler.
The handler executes immediately upon arrival:
  extracts the message from the network and integrates it with the computation, possibly replies
  the handler does not "compute"
No buffering beyond transport:
  data is stored in pre-allocated storage
  quick service and reply, e.g., remote fetch
Note: user-level handler.

Active Message Example: Fetch&Add
    static volatile int value;
    static volatile int flag;

    int FetchNAdd(int proc, int *addr, int inc) {
        flag = 0;
        AM(proc, FetchNAdd_h, addr, inc, MYPROC);   /* send the request message */
        while (!flag) ;                             /* spin until the reply handler runs */
        return value;
    }

    /* reply handler: runs on the requesting processor */
    void FetchNAdd_rh(int data) {
        value = data;
        flag++;
    }

    /* request handler: runs on processor 'proc' */
    void FetchNAdd_h(int *addr, int inc, int retproc) {
        int v = inc + *addr;
        *addr = v;
        AM_reply(retproc, FetchNAdd_rh, v);
    }

Send/Receive Using Active Messages
[Figure: Node 1 sends a Request To Send that a handler matches against Node 2's match table; a Ready To Recv reply comes back; then the DATA message is delivered by a data handler into the posted receive]
==> use handlers to implement the protocol
Reduces send+recv overhead from 95 µsec to 3 µsec on the CM-5.