CS252 Graduate Computer Architecture, Lecture 14: Multiprocessor Networks. March 7th, 2012. Review: VLIW: Very Large Instruction Word

CS252 Graduate Computer Architecture, Lecture 14: Multiprocessor Networks. March 7th, 2012. John Kubiatowicz, Electrical Engineering and Computer Sciences, University of California, Berkeley. http://www.eecs.berkeley.edu/~kubitron/cs252

Review: VLIW: Very Large Instruction Word. Each instruction has explicit coding for multiple operations. In IA-64, the grouping is called a packet; in Transmeta, the grouping is called a molecule (with atoms as ops). Tradeoff: instruction space for simple decoding. The long instruction word has room for many operations, and by definition all the operations the compiler puts in the long instruction word are independent, so they execute in parallel. E.g., 2 integer operations, 2 FP ops, 2 memory refs, 1 branch; at 16 to 24 bits per field that is 7*16 = 112 bits to 7*24 = 168 bits wide. Needs a compiling technique that schedules across several branches.

Problems with 1st Generation VLIW. Increase in code size: generating enough operations in a straight-line code fragment requires ambitiously unrolling loops, and whenever VLIW instructions are not full, the unused functional units translate to wasted bits in the instruction encoding. Operated in lock-step, with no hazard detection hardware: a stall in any functional unit pipeline caused the entire processor to stall, since all functional units must be kept synchronized; the compiler might predict functional unit latencies, but caches are hard to predict. Binary code compatibility: with pure VLIW, different numbers of functional units and different unit latencies require different versions of the code.

Intel/HP IA-64 "Explicitly Parallel Instruction Computer (EPIC)". IA-64: instruction set architecture. 128 64-bit integer regs + 128 82-bit floating point regs; not separate register files per functional unit as in old VLIW. Hardware checks dependencies (interlocks, so binary compatibility over time). 3 instructions in 128-bit bundles; a field determines whether the instructions are dependent or independent. Smaller code size than old VLIW, larger than x86/RISC; groups can be linked to show independence of more than 3 instructions. Predicated execution (select 1 out of 64 1-bit flags): 40% fewer mispredictions? Speculation support: deferred exception handling with poison bits; speculative movement of loads above stores plus a check to see if incorrect. Itanium was the first implementation (2001): highly parallel and deeply pipelined hardware; 6-wide, 10-stage pipeline at 800 MHz on a 0.18 µm process. Itanium 2 is the name of the 2nd implementation (2005): 6-wide, 8-stage pipeline at 1666 MHz on a 0.13 µm process. Caches: 32 KB I, 32 KB D, 128 KB L2I, 128 KB L2D, 9216 KB L3.

Itanium EPIC Design Maximizes SW-HW Synergy (Copyright: Intel at Hotchips). Architecture features programmed by the compiler: branch hints, explicit parallelism, register stack & rotation, predication, data & control speculation, memory hints. Micro-architecture features in hardware: fetch (instruction cache & branch predictors), issue (fast, simple 6-issue), register handling (128 GR & 128 FR, register remap & stack engine), control (speculation deferral management, bypasses & dependencies), parallel resources (4 integer + 4 MMX units, 2 FMACs (4 for SSE), 2 LD/ST units, 32-entry ALAT), memory subsystem (three levels of cache: L1, L2, L3).

10 Stage In-Order Core Pipeline (Copyright: Intel at Hotchips). Front end: pre-fetch/fetch of up to 6 instructions/cycle, hierarchy of branch predictors, decoupling buffer. Instruction delivery: dispersal of up to 6 instructions on 9 ports, register remapping, register stack engine. Operand delivery: register read + bypasses, register scoreboard, predicated dependencies. Execution: 4 single-cycle ALUs, 2 ld/str, advanced load control, predicate delivery & branch, NaT/exception/retirement. Pipeline stages: IPG (instruction pointer generation), FET (fetch), ROT (rotate), EXP (expand), REN (rename), WLD (word-line decode), REG (register read), EXE (execute), DET (exception detect), WRB (write-back).

What is Parallel Architecture? A parallel computer is a collection of processing elements that cooperate to solve large problems. Most important new element: it is all about communication! What does the programmer (or OS, or compiler writer) think about? Models of computation: PRAM? BSP? Sequential consistency? Resource allocation: how powerful are the elements? how much memory? What mechanisms must be in hardware vs. software? What does a single processor look like? A high-performance general-purpose processor, or a SIMD/vector processor? Data access, communication and synchronization: how do the elements cooperate and communicate? how are data transmitted between processors? what are the abstractions and primitives for cooperation?

Parallel Programming Models. A programming model is made up of the languages and libraries that create an abstract view of the machine. Shared memory: different processors share a global view of memory; may be cache coherent or not; communication occurs implicitly via loads and stores. Message passing: no global view of memory (at least not in hardware); communication occurs explicitly via messages. Data: what data is private vs. shared? how is logically shared data accessed or communicated? Synchronization: what operations can be used to coordinate parallelism? what are the atomic (indivisible) operations? Cost: how do we account for the cost of each of the above?

Flynn's Classification (1966). Broad classification of parallel computing systems. SISD: Single Instruction, Single Data; a conventional uniprocessor. SIMD: Single Instruction, Multiple Data; one instruction stream, multiple data paths; distributed-memory SIMD (MPP, DAP, CM-1&2, Maspar) and shared-memory SIMD (STARAN, vector computers). MIMD: Multiple Instruction, Multiple Data; message passing machines (Transputers, nCUBE, CM-5), non-cache-coherent shared memory machines (BBN Butterfly, T3D), cache-coherent shared memory machines (Sequent, Sun Starfire, SGI Origin). MISD: Multiple Instruction, Single Data; not a practical configuration.

Examples of MIMD Machines. Symmetric multiprocessor: multiple processors in a box with shared-memory communication; current multicore chips are like this; every processor runs a copy of the OS. Non-uniform shared memory with separate I/O through a host: multiple processors, each with local memory, connected by a general scalable network; an extremely light OS on each node provides simple services (scheduling/synchronization); a network-accessible host handles I/O. Cluster: many independent machines connected with a general network; communication through messages. (Figure: a bus-based SMP with processors sharing memory, a grid of processor/memory (P/M) nodes on a network with a host, and a cluster.)

Paper Discussion: Future of Wires. "Future of Wires," Ron Ho, Kenneth Mai, Mark Horowitz. Fanout-of-4 (FO4) metric: the FO4 delay metric is roughly constant across technologies; treats 8 FO4 as the absolute minimum cycle time (really says 16 is more reasonable). Wire delay: unbuffered delay scales with (length)^2; buffered delay (with repeaters) scales closer to linearly with length. Sources of wire noise: capacitive coupling with other wires (close wires); inductive coupling with other wires (can be far wires).

Future of Wires, continued. Cannot reach across the chip in one clock cycle! This problem increases as technology scales, so long wires become multi-cycle. Not really a wire problem, more of a CAD problem?? How to manage the increased complexity is the issue. Seems to favor ManyCore chip design??
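To make the wire-delay scaling concrete, here is a minimal sketch; the delay constants are invented for illustration and are not from the Ho/Mai/Horowitz paper:

```python
# Illustrative only: the coefficients below are made up, not measured values.
def unbuffered_delay(length_mm, rc_per_mm2=1.0):
    """Unbuffered RC wire delay grows quadratically with length."""
    return rc_per_mm2 * length_mm ** 2

def buffered_delay(length_mm, delay_per_mm=1.5):
    """With optimally spaced repeaters, delay grows roughly linearly with length."""
    return delay_per_mm * length_mm

for length in (1, 2, 4, 8, 16):
    print(length, unbuffered_delay(length), buffered_delay(length))
# Doubling an unbuffered wire quadruples its delay, while a repeated
# (buffered) wire of twice the length only doubles its delay.
```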

What characterizes a network? Topology (what): the physical interconnection structure of the network graph; direct: a node connected to every switch; indirect: nodes connected to a specific subset of switches. Routing algorithm (which): restricts the set of paths that messages may follow; many algorithms with different properties (deadlock avoidance?). Switching strategy (how): how the data in a message traverses a route; circuit switching vs. packet switching. Flow control mechanism (when): when a message, or portions of it, traverses a route; what happens when traffic is encountered?

Formalism. A network is a graph V = {switches and nodes} connected by communication channels C, a subset of V x V. A channel has width w and signaling rate f; channel bandwidth b = wf. A phit (physical unit) is the data transferred per cycle; a flit is the basic unit of flow control. The number of input (output) channels is the switch degree. The sequence of switches and links followed by a message is a route. Think streets and intersections.

Links and Channels. The transmitter converts a stream of digital symbols into a signal that is driven down the link; the receiver converts it back; transmitter and receiver share a physical protocol. Transmitter + link + receiver form a channel for digital information flow between switches. The link-level protocol segments the stream of symbols into larger units: packets or messages (framing). The node-level protocol embeds commands for the destination communication assist within the packet. (Figure: a symbol stream such as ...ABC123... entering the transmitter and ...QR67... leaving the receiver.)

Clock Synchronization? The receiver must be synchronized to the transmitter to know when to latch data. Fully synchronous: same clock and phase (isochronous). Same clock, different phase: mesochronous; high-speed serial links work this way; use of encoding (8B/10B) to ensure a sufficient high-frequency component for clock recovery. Fully asynchronous: no clock, request/ack signals; with different clocks, need some sort of clock recovery. (Timing diagram: transmitter asserts data, with Req/Ack handshake over times t0 through t5.)
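As a quick numeric illustration of the formalism (the width and signaling rate below are invented, not from the slides):

```python
def channel_bandwidth(width_bits, signaling_rate_hz):
    """Channel bandwidth b = w * f, in bits per second."""
    return width_bits * signaling_rate_hz

# Hypothetical link: 16 bits wide, signaling at 500 MHz.
w, f = 16, 500e6
b = channel_bandwidth(w, f)   # 8e9 bits/s = 1 GB/s
phit = w                      # bits transferred per cycle
print(f"bandwidth = {b / 8e9:.1f} GB/s, phit = {phit} bits")
```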

Administrative. Exam: two weeks from today (3/21). Location: 405 Soda Hall. Time: 5:00-8:00. This info is on the Lecture page (has been). One 8½ by 11 sheet of notes allowed (both sides). Meet at LaVal's afterwards for pizza and beverages. Bring a dumb calculator (no network connection). Assume that major papers we have discussed may show up on the exam.

Topological Properties. Routing distance: the number of links on a route. Diameter: the maximum routing distance. Average distance. A network is partitioned by a set of links if their removal disconnects the graph.

Interconnection Topologies. Class of networks scaling with N. Logical properties: distance, degree. Physical properties: length, width. Fully connected network: diameter = 1, degree = N; cost? A bus is O(N), but its BW is O(1) (actually worse); a crossbar is O(N^2) for BW O(N). VLSI technology determines switch degree.

Example: Linear Arrays and Rings. Linear array; torus; torus arranged to use short wires. Linear array: diameter? average distance? bisection bandwidth? The route A -> B is given by the relative address R = B - A. Torus? Examples: FDDI, SCI, Fibre Channel Arbitrated Loop, KSR1.
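A minimal sketch (my own illustration, not from the slides) of the properties asked about above for an N-node linear array versus a ring (1D torus), plus the relative-address route R = B - A:

```python
def array_props(n):
    """Diameter, average distance, and bisection width of an N-node linear array."""
    diameter = n - 1
    avg = (n * n - 1) / (3 * n)   # approximately N/3 for large N
    bisection = 1
    return diameter, avg, bisection

def ring_props(n):
    """Same properties for an N-node ring (1D torus)."""
    diameter = n // 2
    avg = n / 4                   # approximately, for large even N
    bisection = 2
    return diameter, avg, bisection

def ring_route(a, b, n):
    """Relative address R = B - A, reduced mod N and signed to take the shorter way."""
    r = (b - a) % n
    return r if r <= n // 2 else r - n

print(array_props(1024))          # (1023, 341.33..., 1)
print(ring_props(1024))           # (512, 256.0, 2)
print(ring_route(2, 1020, 1024))  # -6: six hops in the negative direction
```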

Example: Multidimensional Meshes and Tori. 2D grid, 2D torus, 3D cube. n-dimensional array: N = k_(n-1) x ... x k_0 nodes, described by an n-vector of coordinates (i_(n-1), ..., i_0). n-dimensional k-ary mesh: N = k^n, so k = N^(1/n), described by an n-vector of radix-k coordinates. n-dimensional k-ary torus (or k-ary n-cube)?

On Chip: Embeddings in Two Dimensions. (Example: a 6 x 3 x 2 array embedded in the plane.) Embed multiple logical dimensions in one physical dimension using long wires. When embedding a higher dimension in a lower one, either some wires are longer than others, or all wires are long.

Trees. Diameter and average distance are logarithmic. k-ary tree, height n = log_k N; an address is specified by an n-vector of radix-k coordinates describing the path down from the root. Fixed degree. Route up to the common ancestor and down: R = B xor A; let i be the position of the most significant 1 in R; route up i+1 levels, then down in the direction given by the low i+1 bits of B. H-tree layout takes O(N) space with O(sqrt(N)) long wires. Bisection BW?

Fat-Trees. Fatter links (really more of them) as you go up, so bisection BW scales with N.
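To make the tree-routing rule concrete, here is a small sketch, my own illustration assuming a complete binary tree with leaves addressed 0..N-1:

```python
def binary_tree_route(a, b):
    """Route from leaf a to leaf b in a complete binary tree.

    R = B xor A; if i is the position of the most significant 1 bit of R,
    go up i+1 levels to the common ancestor, then down i+1 levels,
    choosing left/right at each step from the corresponding bits of B.
    """
    r = a ^ b
    if r == 0:
        return []
    i = r.bit_length() - 1            # position of the most significant 1 in R
    hops = ["up"] * (i + 1)
    for level in range(i, -1, -1):    # descend using bits of B, high to low
        hops.append("down-right" if (b >> level) & 1 else "down-left")
    return hops

print(binary_tree_route(0b0101, 0b0110))
# ['up', 'up', 'down-right', 'down-left']: the common ancestor is 2 levels up
```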

Butterflies. k-ary n-cubes vs. k-ary n-flies: degree n vs. degree k; N switches vs. N log N switches; diminishing BW per node vs. constant; requires locality vs. little benefit to locality. (Figure: a 16-node butterfly, with levels labeled 0 to 4, and its 2x2 building block.) A tree with lots of roots! N log N switches (actually N/2 x log N). Exactly one route from any source to any destination: R = A xor B; at level i use the straight edge if r_i = 0, otherwise the cross edge. Bisection: N/2 vs. N^((n-1)/n) (for the n-cube). Can you route all permutations?

Benes Network and Fat-Tree. 16-node Benes network (unidirectional); 16-node 2-ary fat-tree (bidirectional). Back-to-back butterflies can route all permutations. What if you just pick a random midpoint?

Hypercubes. Also called binary n-cubes. Number of nodes N = 2^n. O(log N) hops; good bisection BW. Complexity: out degree is n = log N; correct dimensions in order; with random communication, 2 ports per processor. (Figure: 0-D, 1-D, 2-D, 3-D, 4-D, and 5-D cubes.)
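A small sketch of the destination-tag rule above for a 2-ary butterfly, which is the same dimension-correcting rule as e-cube routing in a hypercube (my own illustration):

```python
def butterfly_route(src, dst, n):
    """Destination-tag routing in a 2-ary n-fly / e-cube routing in an n-cube.

    R = src xor dst; at level i take the straight edge if bit i of R is 0,
    otherwise take the cross edge (i.e., correct that dimension).
    """
    r = src ^ dst
    choices = []
    for i in range(n):                 # correct dimensions in a fixed order
        choices.append("cross" if (r >> i) & 1 else "straight")
    return choices

# In a 16-node network (n = 4), route from node 3 to node 9.
print(butterfly_route(3, 9, 4))   # ['straight', 'cross', 'straight', 'cross']
```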

Some Properties. Routing: relative distance R = (b_(n-1) - a_(n-1), ..., b_0 - a_0); traverse r_i = b_i - a_i hops in each dimension; dimension-order routing? adaptive routing? Average distance? Wire length? n x 2k/3 for a mesh, nk/2 for a cube. Degree? Bisection bandwidth? Partitioning? k^(n-1) bidirectional links. Physical layout? 2D in O(N) space with short wires; higher dimension?

The Routing Problem: Local Decisions. (Figure: a switch with input receiver and input buffer, crossbar, output buffer and transmitter, under routing and scheduling control.) Routing at each hop: pick the next output port!

How do you build a crossbar? (Figure: a 4x4 crossbar connecting inputs I0-I3 to outputs O0-O3, shown both as a crosspoint grid and as per-output multiplexers driven phase by phase from a RAM (addr, Din, Dout).)

Input Buffered Switch. (Figure: inputs I0-I3 with routing logic R0-R3 feeding a crossbar to outputs O0-O3 under a scheduler.) Independent routing logic per input (an FSM). Scheduler logic arbitrates each output: priority, FIFO, random. Head-of-line blocking problem: the message at the head of a queue blocks the messages behind it.

Buffered Switch. (Figure: inputs with routing logic R0-R3 feeding a shared buffer pool under central control.) How would you build a shared pool?

Properties of Routing Algorithms. Routing algorithm: R: N x N -> C, which at each switch maps the destination node n_d to the next channel on the route. Which of the possible paths are used as routes? How is the next hop determined? Arithmetic, source-based port select, table driven, or general computation. Deterministic: the route is determined by (source, dest), not by intermediate state (i.e., traffic). Adaptive: the route is influenced by traffic along the way. Minimal: only selects shortest paths. Deadlock free: no traffic pattern can lead to a situation where packets are deadlocked and never move forward.

Example: Simple Routing Mechanism. Need to select an output port for each input packet in a few cycles. Simple arithmetic in regular topologies, e.g. (dx, dy) routing in a grid: west (-x) if dx < 0; east (+x) if dx > 0; south (-y) if dx = 0, dy < 0; north (+y) if dx = 0, dy > 0; processor if dx = 0, dy = 0. Reduce the relative address of each dimension in order: dimension-order routing in k-ary n-cubes, e-cube routing in n-cubes. (See the sketch below.)

Communication Performance. A typical packet includes data plus encapsulation bytes. (Figure: packet format with routing and control header, data payload, error code, and trailer; a sequence of symbols transmitted over a channel.) Unfragmented packet size S = S_data + S_encapsulation. Routing time: Time(S) from source to destination = overhead + routing delay + channel occupancy + contention delay. Channel occupancy = S/b = (S_data + S_encapsulation)/b. Routing delay, in cycles: the time to get the head of the packet to the next hop. Contention?
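A minimal sketch of the dimension-order (x, then y) decision above, my own illustration for a 2D grid:

```python
def xy_route_step(cur, dst):
    """One hop of dimension-order (x then y) routing in a 2D grid.

    Reduce the x offset first; only when dx == 0 start reducing y;
    when both offsets are zero, deliver to the local processor.
    """
    (cx, cy), (tx, ty) = cur, dst
    dx, dy = tx - cx, ty - cy
    if dx < 0:
        return "west"
    if dx > 0:
        return "east"
    if dy < 0:
        return "south"
    if dy > 0:
        return "north"
    return "processor"

# Walk the whole route from (1, 3) to (4, 1).
pos = (1, 3)
moves = {"east": (1, 0), "west": (-1, 0), "north": (0, 1), "south": (0, -1)}
path = []
while (step := xy_route_step(pos, (4, 1))) != "processor":
    path.append(step)
    pos = (pos[0] + moves[step][0], pos[1] + moves[step][1])
print(path)   # ['east', 'east', 'east', 'south', 'south']
```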

Store & Forward vs. Cut-Through Routing. (Figure: timelines of a packet crossing h = 3 hops under store-and-forward and under cut-through routing.) Time: h(S/b + routing delay) vs. S/b + h x routing delay; or, in cycles: h(S/w + routing delay) vs. S/w + h x routing delay. What if the message is fragmented? Wormhole vs. virtual cut-through.

Contention. Two packets trying to use the same link at the same time: limited buffering; drop? Most parallel machine networks block in place: link-level flow control; tree saturation. Closed system: offered load depends on delivered load. Source squelching.

Bandwidth. What affects local bandwidth? Packet density: b x S_data/S. Routing delay: b x S_data/(S + w x routing delay). Contention: at the endpoints and within the network. Aggregate bandwidth: bisection bandwidth (the sum of the bandwidth of the smallest set of links that partition the network); total bandwidth of all the channels: Cb. Suppose N hosts issue a packet every M cycles with average distance h: each message occupies h channels for l = S/w cycles each; C/N channels are available per node; link utilization for store-and-forward is (hl/M channel cycles per node)/(C/N) = Nhl/MC < 1! Link utilization for wormhole routing?

Saturation. (Figure: two plots; latency vs. delivered bandwidth rises sharply at saturation, and delivered bandwidth vs. offered bandwidth flattens at saturation.)
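A quick sketch of the two latency formulas above (my own illustration; the packet size, channel width, hop count, and per-hop delay are made-up numbers):

```python
def store_and_forward_cycles(S, w, h, delta):
    """Each of h hops waits for the whole packet: h * (S/w + delta)."""
    return h * (S / w + delta)

def cut_through_cycles(S, w, h, delta):
    """Only the header pays the per-hop delay: S/w + h * delta."""
    return S / w + h * delta

# Hypothetical numbers: 160-byte packet, 2-byte-wide channel, 5 hops, 4-cycle routing delay.
S, w, h, delta = 160, 2, 5, 4
print(store_and_forward_cycles(S, w, h, delta))   # 420 cycles
print(cut_through_cycles(S, w, h, delta))         # 100 cycles
```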

How Many Dimensions? n = 2 or n = 3: short wires, easy to build; many hops, low bisection bandwidth; requires traffic locality. n >= 4: harder to build, more wires, longer average length; fewer hops, better bisection bandwidth; can handle non-local traffic. k-ary n-cubes provide a consistent framework for comparison: N = k^n; scale the dimension (n) or the nodes per dimension (k); assume cut-through.

Traditional Scaling: Latency Scaling with N. (Figure: average latency T(S=40) and T(S=140) vs. machine size N, with curves for n = 2, 3, 4 and k = 2, approaching the S/w asymptote.) Assumes equal channel width, independent of node count or dimension; dominated by average distance.

Average Distance. (Figure: average distance vs. dimension for N = 256, 1024, 16384, 1048576.) ave dist = n(k-1)/2. But equal channel width is not equal cost! Higher dimension means more channels.

Dally Paper: In the 3D World. For N nodes, bisection area is O(N^(2/3)), so for large N, bisection bandwidth is limited to O(N^(2/3)). Bill Dally, IEEE TPDS [Dal90a]. For fixed bisection bandwidth, low-dimensional k-ary n-cubes are better (otherwise higher is better); i.e., a few short fat wires are better than many long thin wires. What about many long fat wires?
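A small sketch (my own) evaluating the slide's formula ave dist = n(k-1)/2 for a fixed machine size N = k^n, which is what the average-distance plot shows:

```python
def avg_distance(n, k):
    """Average routing distance used on the slide: n * (k - 1) / 2."""
    return n * (k - 1) / 2

N = 1024
for n in (2, 5, 10):                  # dimensions for which k = N**(1/n) is an integer
    k = round(N ** (1 / n))
    assert k ** n == N
    print(f"n={n:2d}  k={k:3d}  ave dist={avg_distance(n, k):6.1f}")
# n= 2  k= 32  ave dist=  31.0
# n= 5  k=  4  ave dist=   7.5
# n=10  k=  2  ave dist=   5.0
```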

Dally Paper (con't). Equal bisection: W = 1 for the hypercube means W = k/2 for the k-ary n-cube. Three wire models: constant delay, independent of length; logarithmic delay with length (exponential driver tree); linear delay (speed of light / optimal repeaters). (Figure: logarithmic delay vs. linear delay.)

Equal cost in k-ary n-cubes. Equal number of nodes? Equal number of pins/wires? Equal bisection bandwidth? Equal area? Equal wire length? What do we know? Switch degree: n. Diameter = n(k-1). Total links = Nn. Pins per node = 2wn. Bisection = k^(n-1) = N/k links in each direction; 2Nw/k wires cross the middle.

Latency for Equal Width Channels. (Figure: average latency (S = 40, routing delay = 2) vs. dimension for 256, 1024, 16384, and 1048576 nodes.)

Latency with Equal Pin Count. (Figure: average latency T(S = 40 B) and T(S = 140 B) vs. dimension n for 256, 1024, 16 K, and 1 M nodes.) Total links(n) = Nn. The baseline n = 2 has w = 32 (128 wires per node); fixing 2nw pins gives w(n) = 64/n. Distance goes up with n, but channel time goes down.
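A rough sketch (my own, under Dally-style assumptions: cut-through, no contention, ave dist = n(k-1)/2) of how the equal-pin-count constraint w(n) = pin budget / 2n trades hop count against serialization time:

```python
def latency_equal_pins(N, n, S_bits, delta=2, pin_budget=128):
    """Cut-through latency T = ave_dist * delta + S / w under a fixed pin budget.

    With 2*n*w pins per node held constant, channel width w = pin_budget / (2*n).
    Assumes k = N**(1/n) is (close to) an integer.
    """
    k = round(N ** (1 / n))
    w = pin_budget / (2 * n)
    ave_dist = n * (k - 1) / 2
    return ave_dist * delta + S_bits / w

N, S_bits = 1024, 40 * 8              # 1024 nodes, 40-byte packets
for n in (2, 5, 10):
    print(f"n={n:2d}  T={latency_equal_pins(N, n, S_bits):6.1f} cycles")
# Low n: few wide channels (short serialization, many hops);
# high n: many narrow channels (few hops, long serialization).
```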

Latency with Equal Bisection Width. (Figure: average latency T(S=40) and T(S = 140 B) vs. dimension n for 256, 1024, 16 K, and 1 M nodes.) An N-node hypercube has N bisection links; a 2D torus has 2N^(1/2). Fixed bisection gives w(n) = N^(1/n) / 2 = k/2. At 1 M nodes, n = 2 has w = 512!

Larger Routing Delay (with equal pin count). (Figure: the same latency-vs-dimension curves with a larger routing delay.) Dally's conclusions are strongly influenced by the assumption of small routing delay; here, the routing delay is 20.

Saturation. (Figure: latency vs. average channel utilization for S/w = 40, 16, 8, and 4.) Fatter links shorten queuing delays.

Discussion of paper: Virtual Channel Flow Control. Basic idea: use virtual channels to reduce contention. Provided a model of k-ary n-flies; also provided simulation. Tradeoff: it is better to split buffers into virtual channels. Example (constant total storage for a 2-ary 8-fly).

When are virtual channels allocated? (Figure: input virtual channels feeding a crossbar; a hardware-efficient design for the crossbar.) Two separate processes: virtual channel allocation and switch/connection allocation. Virtual channel allocation: choose a route and a free output virtual channel; really means the source of the link tracks the channels at the destination. Switch allocation: for an incoming virtual channel, negotiate the switch on the outgoing pin.

Reducing routing delay: Express Cubes. Problem: low-dimensional networks have high k. Consequence: a packet may have to travel many hops in a single dimension, so routing latency can dominate long-distance traffic patterns. Solution: provide one or more express links, like express trains, express elevators, etc.: delay linear with distance, with a lower constant; closer to the speed of light in the medium; lower power, since there is no router cost. "Express Cubes: Improving performance of k-ary n-cube interconnection networks," Bill Dally, 1991. Another idea: route with pass transistors through links.

Summary. Network topologies:

Topology        Degree      Diameter          Ave Dist       Bisection    D (D ave) @ P=1024
1D Array        2           N-1               N/3            1            huge
1D Ring         2           N/2               N/4            2
2D Mesh         4           2(N^(1/2) - 1)    2/3 N^(1/2)    N^(1/2)      63 (21)
2D Torus        4           N^(1/2)           1/2 N^(1/2)    2 N^(1/2)    32 (16)
k-ary n-cube    2n          nk/2              nk/4           nk/4         15 (7.5) @ n=3
Hypercube       n = log N   n                 n/2            N/2          10 (5)

Fair metrics of comparison: equal cost (area, bisection bandwidth, etc.). Routing algorithms: restrict the set of routes within the topology; a simple mechanism selects the turn at each hop (arithmetic, selection, lookup). Virtual channels: add complexity to the router; can be used for performance; can be used for deadlock avoidance.
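As a check on the last column of the table, a tiny sketch (my own, not from the slides) evaluating some of the diameter and average-distance formulas at P = 1024:

```python
import math

P = 1024
k3 = P ** (1 / 3)                          # nodes per dimension for n = 3
rows = {
    "2D Torus":          (math.sqrt(P), math.sqrt(P) / 2),
    "k-ary n-cube, n=3": (3 * k3 / 2,   3 * k3 / 4),
    "Hypercube":         (math.log2(P), math.log2(P) / 2),
}
for name, (diameter, ave) in rows.items():
    print(f"{name:18s} D = {diameter:5.1f}   D_ave = {ave:4.1f}")
# 2D Torus           D =  32.0   D_ave = 16.0
# k-ary n-cube, n=3  D =  15.1   D_ave =  7.6
# Hypercube          D =  10.0   D_ave =  5.0
```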