High-speed network switch RHiNET-2/SW and its implementation with optical interconnections


S. Nishimura (1), T. Kudoh (2), H. Nishi (2), J. Yamamoto (2), K. Harasawa (3), N. Matsudaira (3), S. Akutsu (3), K. Tasyo (4), and H. Amano (5)

(1) RWCP Optical Interconnection Hitachi Laboratory (c/o Central Research Laboratory, Hitachi, Ltd.), Higashi-Koigakubo, Kokubunji, Tokyo, Japan
(2) Real World Computing Partnership Tsukuba Research Center
(3) Hitachi Communication Systems, Inc.
(4) Synergetech, Inc.
(5) Dept. of Information and Computer Science, Keio University

Abstract--RHiNET-2/SW is a network switch that enables high-performance parallel computing in a distributed environment. We have produced a prototype network-switch board (RHiNET-2/SW) for the RHiNET-2 parallel-computing system. Eight pairs of 800-Mbit/s × 12-channel optical interconnection modules and a one-chip CMOS ASIC switch LSI (in a 784-pin BGA package) are mounted on a single board. The switch allows high-speed 8-Gbit/s/port parallel optical data transmission over distances of up to 100 m, for an aggregate throughput of 64 Gbit/s/board. By using a large amount of embedded memory in the switching LSI, RHiNET-2/SW achieves low-latency, topology-free network performance. We evaluated the reliability of each optical port by measuring the bit-error rate (BER): no errors were detected during extended packet-data transmission at a data rate of 880 Mbit/s × 10 bits, i.e. the BER was below the resolution of the test. These results show that RHiNET-2/SW provides high-throughput, long-reach, and highly reliable data transmission.

I. INTRODUCTION

Network-based parallel processing using commodity components, such as personal computers, has received attention as an important parallel-computing environment [1-3]. In most of today's offices and laboratories there are tens of personal computers and workstations, which are not always in use.
If the processing power of such idle computers could be combined, the resulting processing power might be comparable to that of a supercomputer. Most high-performance cluster systems built from personal computers or workstations use a system area network (or server area network: SAN), such as Myrinet [4], as their interconnect. A SAN provides low-latency, high-bandwidth communication without discarding any packet, as well as the high bisection bandwidth required for high-performance parallel computing. However, because SANs are designed to connect dedicated computers in a small space, both the link length and the topology are restricted. On the other hand, high-speed LANs with link bandwidths of more than 1 Gbit/s are becoming available [5-6]. LANs offer relatively flexible topology choices and longer links. Nevertheless, the communication latency of most of today's commodity LANs tends to be larger than that of SANs because of their store-and-forward routing strategy. Moreover, today's LANs support the IP protocol, whose many layers introduce overhead. The LASN (Local Area System Network) is a new class of network that combines the advantages of SANs and LANs. As shown in Fig. 1, an LASN assumes an environment in which personal computers and workstations are distributed within one or more floors of a building (i.e., a LAN environment).
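The latency gap between store-and-forward LANs and cut-through SANs mentioned above can be illustrated with the standard first-order model below. The formulas are the usual textbook approximations, and the numeric parameters are assumptions chosen purely for illustration, not RHiNET measurements.

```python
# First-order latency model contrasting store-and-forward routing (typical
# of LAN switches) with wormhole/cut-through routing (typical of SANs).

def store_and_forward_latency(packet_bits, link_gbps, hops, router_delay_ns):
    """Each switch must receive the whole packet before forwarding it."""
    serialization_ns = packet_bits / link_gbps  # bits / (Gbit/s) = ns
    return hops * (serialization_ns + router_delay_ns)

def wormhole_latency(packet_bits, header_bits, link_gbps, hops, router_delay_ns):
    """Switches forward the header immediately; the body pipelines behind it."""
    return (hops * (header_bits / link_gbps + router_delay_ns)
            + packet_bits / link_gbps)

# Assumed numbers: 2-kbyte packet, 64-bit header, 8-Gbit/s links, 4 hops,
# 100 ns of delay per switch.
pkt, hdr = 2048 * 8, 64
sf = store_and_forward_latency(pkt, 8, 4, 100)
wh = wormhole_latency(pkt, hdr, 8, 4, 100)
print(f"store-and-forward: {sf:.0f} ns, wormhole: {wh:.0f} ns")
```

With these assumed parameters the store-and-forward path pays the full serialization delay at every hop, while the wormhole path pays it only once, which is the effect the paragraph above describes.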

RHiNET (RWCP High-performance network) is the first network designed with the concept of an LASN in mind.

Fig. 1: Schematic structure of RHiNET

II. CONCEPT OF RHiNET

We have developed a first prototype, RHiNET-1, using 1.33-Gbit/s optical interconnections [7], and a second prototype, RHiNET-2, using 8.8-Gbit/s optical interconnections. In RHiNET-2, PCs are interconnected via high-throughput 8×8 network switches (the RHiNET-2/SW) [8]. RHiNET-2/SW has eight optical input and eight optical output ports (Fig. 2). Each port has an 8-Gbit/s transmission capacity, and the aggregate throughput is 64 Gbit/s. A 12-bit synchronized parallel optical signal is converted to a 12-bit electrical signal in the optical receiver, switched by the SW-LSI, and re-converted to a 12-bit parallel optical signal in the optical transmitter. The transmission length is limited to 100 m by the skew of the fiber ribbon (when copper cables are used for electrical interconnection, the transmission length is limited to about 10 m by electrical circuit drivability). To meet the requirements of RHiNET, RHiNET-2/SW provides large internal memory blocks and supports topology-free, reliable, low-latency, high-bandwidth communication. To achieve such high-speed optical transmission in RHiNET, an 8.8-Gbit/s (800-Mbit/s × 11-bit) synchronized parallel optical interconnection is used for each data link. Synchronized parallel optical interconnection allows high-speed, long-reach, low-latency node-to-node interconnection [9-13].

III. OPTICAL INTERCONNECTION MODULE

We use synchronized parallel 12-channel optical transmitter and receiver modules in RHiNET-2/SW [9, 10] (Fig. 3). The optical transmitter modules consist of a 1.3-µm edge-emitting laser-diode (LD) array and a single-mode-fiber (SMF) array.
The channel configuration is made up of 11 low-voltage-differential-signaling (LVDS)/positive-level emitter-coupled-logic (P-ECL) non-return-to-zero (NRZ) data signals and one LVDS/P-ECL clock signal. The P-ECL output signals from the receiver module are converted to LVDS signals with a level converter. The input clock signal is used to latch the 11 data signals in the transmitter and receiver modules in order to eliminate skew caused by the logic LSIs. The transmission length is up to 100 m, and the total throughput is up to 8.8 Gbit/s/module (800-Mbit/s × 11-bit data plus a 1-bit clock). To achieve a high-density implementation with high-speed signaling devices, we had to overcome many interrelated problems, such as crosstalk, skew, and propagation loss.

Fig. 2: Schematic structure of RHiNET-2/SW. Optical input (800 Mbit/s × 10 bits/port, with clock and framing) enters the 12-channel optical receiver (RX), passes as differential LVDS electrical signals on the printed circuit board through the 8×8 SW-LSI (512 kbytes of internal memory), and leaves through the 12-channel optical transmitter (TX) as optical output (800 Mbit/s × 10 bits/port, with clock and framing).
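The skew budget of this synchronized parallel link can be worked out as a back-of-the-envelope check. The 20%-of-clock-cycle criterion and the residual-skew and ribbon-skew figures are taken from the measurements reported later in the paper; treating ribbon skew as linear in fiber length is our own simplifying assumption for illustration.

```python
# Back-of-the-envelope skew budget for the 800-Mbit/s synchronized link.
bit_rate = 800e6                       # bit/s per channel
bit_period_ps = 1e12 / bit_rate        # 1250 ps
skew_budget_ps = 0.20 * bit_period_ps  # 250 ps allowed (20% of a cycle)

ribbon_skew_ps_per_m = 50 / 50         # 50 ps measured over a 50-m ribbon
residual_tx_skew_ps = 19.4             # after gate-latching in the TX module

for length_m in (50, 100):
    total = residual_tx_skew_ps + ribbon_skew_ps_per_m * length_m
    ok = "within" if total < skew_budget_ps else "exceeds"
    print(f"{length_m:>3} m: {total:.1f} ps ({ok} the {skew_budget_ps:.0f}-ps budget)")
```

Even at the full 100-m link length, the estimated worst-case skew stays well inside the 250-ps budget, which is consistent with the error-free transmission reported in the evaluation section.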

Fig. 3: 12-channel parallel optical interconnection modules [9, 10]

IV. SWITCHING LSI

A. Overview

We developed a 64-Gbit/s/chip high-throughput CMOS switching LSI for the RHiNET-2/SW (Figs. 2, 4, and 5). This switching LSI has eight input and eight output ports. Both input and output ports consist of 10-bit data signals, a clock signal, and a framing signal at 800 Mbit/s. The core switch logic operates at a clock rate of 100 MHz; therefore, 1:8 demultiplexers are provided at the input ports and 8:1 multiplexers at the output ports. The 10-bit incoming data are transformed into 80-bit data by the demultiplexer. ECC decoders and encoders are provided at the input and output ports, respectively. The ECC decoder decodes the 80-bit data into 66-bit data, which is handled by the core logic. Input signals synchronized with the transmission clock are retimed to be in phase with the base clock (200 MHz) in the elastic buffer. Because source-synchronous clocking is used, an elastic buffer is provided at each input port to compensate for a difference of up to 100 ppm between the transmission clock and the base clock.

All electrical I/O interfaces are 2.5-V LVDS-CMOS devices. To achieve high-speed I/O, the rise and fall times (< 0.3 ns) and the signal skew (< 0.3 ns) must be very small. We used 0.18-µm technology to fabricate the LSI. The LSI package is a 784-pin ball grid array (BGA); the pin pitch of the package is 1.27 mm, and the package size is mm. There are 384 high-speed signal pins (12 bits/port × 8 ports × 2 pins [differential], for input and output; data rate: 800 Mbit/s/pin). We customized the assignment of the LSI pins to achieve high-speed, low-crosstalk data I/O with a compact, high-density circuit board.

Fig. 4: Block diagram of the switch core in the SW-LSI for RHiNET-2/SW

Fig. 5: Floor plan of the SW-LSI

B. Switching functions

RHiNET-2/SW has the following features (Figs. 4 and 5):

1) Asynchronous wormhole routing. Store-and-forward routing, which is commonly used in conventional LAN switches and routers, yields a large latency. Wormhole routing achieves low-latency switching because a switch can transmit the first part of a packet, when possible, even while still receiving the latter part of the same packet [2]. However, the performance of pure wormhole routing is severely degraded when a message is multicast in a loaded network. To cope with this problem, asynchronous wormhole

routing (which provides a certain amount of packet buffering) is adopted.

2) No packet discarding. The switch never discards packets, even when the network is severely congested.

3) In-order delivery. The network ensures in-order delivery of packets.

4) Free topology design while avoiding deadlock. The switch avoids deadlock by providing a number of virtual channels (VCs) at each input port. By using a different VC as a packet travels through the switches, no cyclic dependency is generated. RHiNET-2/SW has 16 VCs at each input port, so the diameter of the network can be up to 16. Since each switch has eight ports, this number of VCs allows a virtually free network topology.

5) Support for links up to 100 m. RHiNET-2/SW supports optical links up to 100 m long. Since an optical signal propagates in the fiber at about 5 ns/m, the round-trip delay of a 100-m optical link is about 1 µs. The handshake logic of the switch also adds some delay, so a handshake can take up to 1.5 µs. At a data rate of 8 Gbit/s, 1.5 µs corresponds to the time needed to transfer 1.5 kbytes. Therefore, to receive data without discarding anything, the receiver side must send a handshake message to stop transmission while it still has enough usable memory space (at least 1.5 kbytes plus the maximum packet size) in the input buffer. This flow-control mechanism is called the slack buffer [7]. In RHiNET-2/SW, each of the 16 VCs of an input port provides a 4-kbyte slack buffer, so multiple slack buffers are provided for each input port.

6) Multiple-bit-rate support. Each port can be set to a bit rate of 8, 2, or 1 Gbit/s; the slower bit rates support slower network interfaces. The maintenance processor sets the bit rate.

C. Packets

Figure 6 shows the packet format of RHiNET-2/SW as handled in the core logic (64-bit data). RHiNET-2/SW supports variable-size packets. A data packet contains a maximum data size of 2 kbytes.
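The slack-buffer arithmetic above can be sketched as follows. The numbers come from the text; the constant and function names are ours, and this toy model is only an illustration of the threshold logic, not the switch's actual implementation.

```python
# Sketch of the slack-buffer "stop" threshold for one virtual channel.
LINK_M = 100                  # maximum link length
PROP_NS_PER_M = 5             # optical propagation delay in fiber
HANDSHAKE_NS = 500            # extra delay attributed to handshake logic
LINK_GBPS = 8                 # port data rate
MAX_PACKET_BYTES = 2048       # maximum data-packet payload
VC_BUFFER_BYTES = 4096        # slack buffer per virtual channel

round_trip_ns = 2 * LINK_M * PROP_NS_PER_M + HANDSHAKE_NS  # ~1.5 us total
in_flight_bytes = round_trip_ns * LINK_GBPS / 8            # still arriving

# The receiver must raise "stop" while it can still absorb everything in
# flight plus one maximum-size packet.
stop_threshold_free_bytes = in_flight_bytes + MAX_PACKET_BYTES

def must_send_stop(occupied_bytes):
    """True when the free space in the VC buffer drops below the threshold."""
    return VC_BUFFER_BYTES - occupied_bytes < stop_threshold_free_bytes

print(f"in flight: {in_flight_bytes:.0f} B; "
      f"stop when free space < {stop_threshold_free_bytes:.0f} B")
```

With these numbers, roughly 1.5 kbytes can be in flight when the stop message is sent, so the 4-kbyte per-VC buffer leaves room for the in-flight data plus one maximum-size packet, which is exactly the condition stated in feature 5.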
The hop counter is incremented each time a packet goes through a switch and is used to detect an irregularly routed packet caused by a wrong routing-table entry or a damaged header. A handshake packet includes programmable almost-full flags for all VCs. A ping/pong packet reports its sender's logical ID, physical ID, and port ID. Command packets are used to exchange information between the maintenance processors of adjacent nodes; they are forwarded to the maintenance processor immediately upon reception, and the payload of a command packet is the message to the maintenance processor.

Fig. 6: RHiNET-2/SW packet format

D. Routing

Routing is done according to routing information statically stored in the routing table of each switch. Each switch has a routing table with 65,536 entries. An entry is a 9-bit bitmap of the outputs (8 bits correspond to the output ports, and one bit corresponds to the maintenance processor); setting multiple bits of an entry provides multicasting. The routing ID in the header of a data packet is used as the entry ID into the routing table. The maintenance processor sets the entries of the routing table. For example, if destination routing is used and there is no multicasting, each entry of the routing table can correspond to a node; thus, a maximum of 65,536 nodes can be supported.

E. Maintenance processor and hot-plug support

An on-chip maintenance module and an off-chip maintenance processor are provided to configure the routing tables and to support dynamic link detection. While a link has not yet been established, RHiNET-2/SW continuously transmits ping packets. When a switch receives a ping packet, it replies with a pong

packet, and the link between the two switches is then established. The ping and pong packets include the sender's physical ID (Fig. 6). By receiving a ping or a pong packet, the maintenance processor obtains the physical ID of the switch at the other end of the link. The maintenance processors of the two switches then exchange the information necessary to set the routing table. RHiNET-2/SW transmits handshake packets at regular intervals while a link is established; it detects link disconnection when it receives no handshake packet for a certain period of time. In that case, RHiNET-2/SW starts transmitting ping packets again.

V. HIGH-DENSITY IMPLEMENTATION OF HIGH-SPEED SIGNALS

In RHiNET-2/SW, to realize high-speed, high-density integration of the optical interconnection modules and the SW-LSI, we employ a MULTIWIRE™* interconnect board (MWB™) as the printed circuit board. The MWB can achieve high wiring density and superior electrical characteristics (low loss, high-accuracy 50-Ω impedance, and low reflection). The MWB uses copper wires (0.1 mm in diameter) coated with polyimide insulation, which can therefore be cross-wired; this also accounts for the high wiring density (a 0.3-mm wire pitch can be achieved). Since wires of constant diameter are incorporated in the MWB, a controlled characteristic impedance (Z0: 50 Ω) is easily realized. Furthermore, the very thin wires, with adequate spacing, minimize crosstalk and bending loss.

We measured the physical characteristics of the MWB (propagation loss and crosstalk). For 150-mm-long straight wires, the 3-dB-down bandwidth was greater than 2.4 GHz, and the crosstalk on the receiver side was less than 1.2% at 900 MHz with a wire pitch of 0.5 mm. We then optimized the layout of the circuit board based on these experimental results to realize low-crosstalk, high-speed, high-density electrical I/O [8].

VI. RHiNET-2/SW

We have produced a prototype of the RHiNET-2/SW eight-by-eight network switch (Fig. 7). In the center of the board, the SW-LSI is mounted in an LSI socket. This socket was designed specifically for the high-speed LSI (bandwidth: DC to 6 GHz; path inductance: < 1 nH; signal-to-signal capacitance: < 1 pF). Each port has 800-Mbit/s × 12-bit optical I/O channels and uses one pair of the 12-channel parallel optical interconnection modules. The board size is mm. The eight pairs of optical transmitter and receiver modules are mounted near the SW-LSI. A daughter board carries an H8 microprocessor sub-board that controls the maintenance signals of the SW-LSI, and a crystal oscillator is mounted to generate the 200-MHz internal clock signal. The structure and layout of the circuit board are optimized for high-speed, high-density implementation [8].

Figure 8 shows a photograph of the RHiNET-2/SW cabinet. Four sockets of four-by-twelve-channel fiber adapters are provided; the motherboard is mounted in the upper layer of the cabinet, and the power-supply unit and maintenance-processor card are packaged in the lower layer.

(a) SW-LSI mounted side (b) Optical modules mounted side
Fig. 7: Layout of the motherboard of the RHiNET-2/SW

*: MULTIWIRE is a trademark of ADVANCED INTERCONNECTION TECHNOLOGY, INC.
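As a rough sanity check on the board figures above, the measured MWB bandwidth can be compared against what the SW-LSI's sub-0.3-ns edges require. The 0.35/Tr relation used here is the standard first-order rule of thumb, applied by us for illustration; it is not something the authors state.

```python
# Rule-of-thumb check: a channel needs roughly 0.35/Tr of bandwidth to
# preserve a rise time Tr (first-order approximation).
RISE_TIME_NS = 0.3        # required edge rate of the SW-LSI I/O
MWB_BANDWIDTH_GHZ = 2.4   # measured 3-dB bandwidth of 150-mm MWB wires

required_bw_ghz = 0.35 / RISE_TIME_NS      # ~1.17 GHz to preserve the edge
margin = MWB_BANDWIDTH_GHZ / required_bw_ghz
print(f"required ~{required_bw_ghz:.2f} GHz, measured > {MWB_BANDWIDTH_GHZ} GHz "
      f"({margin:.1f}x margin)")
```

Under this approximation the measured board bandwidth exceeds the requirement by roughly a factor of two, consistent with the clean eye-patterns reported in the evaluation section.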

Fig. 8: Photograph of the cabinet

VII. EVALUATION TEST RESULTS

We measured the signal eye-pattern with an oscilloscope and the bit-error rate (BER) with an error-rate detector (the measurement setup is shown in Fig. 9). The 800-Mbit/s × 12-bit electrical data signals were generated by the data generator (DG) as a clock signal (CLKI), a framing data signal (AI), and 10-bit packet data signals (DI[9..0]). These 12-bit electrical signals were converted to 12-bit optical signals by the optical transmitter module and transmitted through the 12-channel fiber ribbon to an RX-port of RHiNET-2/SW. Inside RHiNET-2/SW, the 12-bit optical input signals were converted to electrical signals in the RX-port, propagated through the SW-LSI, reconverted to optical signals, and transmitted from the corresponding TX-port. The output signals were received by the optical receiver module, reconverted into electrical signals, and sent to the error-rate detector (ERD). The fiber runs were 50 m long.

Figure 10 shows the eye-pattern of the measured electrical output signal and the waveform of the clock signal. A clear eye-pattern was obtained: the signal rise time (Tr) and fall time (Tf) of the electrical output signal were both less than 400 ps, and the jitter was less than 100 ps.

Fig. 10: Measured eye-pattern of the electrically re-converted 0th data bit [D0] and waveform of the clock signal [CLK] (200 mV/div; 250 ps/div; data rate: 800 Mbit/s).

We evaluated the reliability of each optical port by measuring the BER, using a pseudo-random word sequence (PRWS) as the data pattern. We observed no errors during extended packet-data transmission at a data rate of 880 Mbit/s × 10 bits, i.e. the BER was below the resolution of the test. These test results show that the reliability of the I/O ports of RHiNET-2/SW is sufficient for RHiNET-2 and that our high-speed, high-density circuit-board layout enables us to construct a high-performance network switch.
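The exact number of bits transmitted in the error-free run was lost in this copy of the paper, but the way a BER bound follows from such a run can be shown generally: with zero errors observed in N bits, the BER is below about -ln(1-CL)/N at confidence level CL (the "rule of three" for CL = 95%). The traffic duration in the example is an assumption, purely illustrative.

```python
import math

def ber_upper_bound(bits_sent, confidence=0.95):
    """Upper confidence bound on BER after observing zero errors in bits_sent."""
    return -math.log(1.0 - confidence) / bits_sent

# Example: one hour of error-free 10-bit-parallel traffic at 880 Mbit/s per
# channel (assumed duration, not the paper's actual test length).
bits = 880e6 * 10 * 3600
print(f"{bits:.2e} bits error-free -> BER < {ber_upper_bound(bits):.1e}")
```

The bound tightens linearly with test length, which is why long error-free runs are needed to claim very low BER figures.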
Fig. 9: Experimental setup for measuring the signal eye-patterns and BER of RHiNET-2/SW.

To achieve highly reliable (error-free) parallel interconnection, suppressing skew is the most important task, more important even than improving sensitivity and bandwidth. Our system requirement was that the skew be suppressed to within 20% of the clock cycle. In the case of 800-Mbit/s transmission, the

skew should be suppressed to less than 250 ps. To suppress the skew, we used high-speed LVDS electrical circuits and precisely controlled the lengths of the wires. To suppress the skew between data signals, the 800-Mbit/s × 11-bit synchronized parallel data signals were retimed with an 800-MHz clock signal by gate-latching in the TX and RX modules and at the TX- and RX-ports of the RHiNET-2/SW. The fiber length was 50 m. We measured the skew of the 10-bit data signal at two points of the RHiNET-2/SW using the setup shown in Fig. 9: the electrical output pins of the SW-LSI and the optical output port of the optical transmitter module (Fig. 11). At the input port of the SW-LSI the skew was eliminated by the gate-latching, but the output signal of the SW-LSI had a 141-ps skew caused by non-uniformity of the LSI output ports. We eliminated this skew by gate-latching in the optical TX module, and the skew of the optical output signal from the output port was 19.4 ps. The maximum skew of our 50-m 12-channel fiber ribbon was 50 ps; thus, after 50 m of fiber transmission, the worst-case skew is 69.4 ps. The data-signal skew is therefore sufficiently suppressed by the gate-latching to support high-speed, highly reliable synchronized parallel data transmission.

VIII. RELATED WORK

Myrinet [4] is one of the most popular SANs and is widely used for cluster computing. Myrinet switches never discard packets and provide reasonably high link bandwidth (1.28 Gbit/s) with very low latency. However, Myrinet switches support only a few virtual channels, so the network topology is restricted in order to avoid deadlock through carefully selected routing paths. GSN [14] is a high-bandwidth, low-latency interconnect standard that provides 6.4 Gbit/s of error-free, flow-controlled link bandwidth.
Although GSN provides four virtual channels per link, this channel-number limitation makes deadlock-free routing in a free topology difficult, so clusters using GSN employ a fat-tree topology. Compaq uses the SC interconnect [15] for its inter-server connections. The SC interconnect consists of a high-bandwidth crossbar switch and a PCI adapter for each node. Its detailed architecture is not disclosed, but it also uses a fat-tree topology to maintain a high degree of bisection bandwidth without deadlock.

Fig. 11: Skew of the 10-bit data relative to the edge of the 0th data bit (in I/O port 4).

IX. SUMMARY

We have developed the RHiNET-2/SW network switch for high-performance computing using personal computers distributed in an office or floor environment. Optical interconnection allows high-speed, highly reliable data transmission over long distances. To achieve high-speed, low-latency node-to-node interconnection, we implemented eight pairs of 8.8-Gbit/s optical interconnection modules and a 64-Gbit/s SW-LSI on a compact circuit board. We have produced an optical interconnection module for RHiNET-2/SW capable of speeds of up to 8.8 Gbit/s and a one-chip CMOS ASIC switch (784-pin BGA). RHiNET-2/SW has eight input and eight output optical data ports; the bandwidth of each port is 8 Gbit/s, and the aggregate throughput of the switch is 64 Gbit/s. We developed a high-speed, high-density implementation technology to overcome electrical problems such as signal propagation loss and crosstalk. All of the electrical interfaces are composed of high-speed CMOS-LVDS logic, and the structure and layout of the circuit board are optimized for high-speed, high-density implementation. Our prototype system achieved 880-Mbit/s × 10-bit parallel data transmission. We

observed no errors during extended packet-data transmission at a data rate of 880 Mbit/s × 10 bits over a 50-m fiber, i.e. the BER was below the resolution of the test. We have thus successfully produced a compact, high-throughput optical I/O network switch using a one-chip SW-LSI and eight pairs of optical interconnection modules. This switch enables high-performance parallel computing in a distributed computing environment.

ACKNOWLEDGEMENT

We are grateful for the assistance and advice of Takahiko Takahashi and Kazuyoshi Satoh of the Device Development Center, Hitachi, Ltd.; Atsushi Takai and Atsushi Miura of the Telecommunication and Information Infrastructure Systems Group, Hitachi, Ltd.; T. Keicho of Hitachi ULSI Systems Co., Ltd.; Y. Keikoin and K. Ohsugi of Hitachi Information Technology Co., Ltd.; and M. Tanaka of Hitachi Communication Systems, Inc.

REFERENCES

[1] T. Kudoh, J. Yamamoto, F. Sudoh, H. Amano, Y. Ishikawa, and M. Sato, "Memory based light weight communication architecture for local area distributed computing", Innovative Architecture for Future Generation High-Performance Processors and Systems, IEEE Computer Society Press.
[2] L. M. Ni, "Should Scalable Parallel Computers Support Efficient Hardware Multicast?", Proc. 1995 Int'l Conference on Parallel Processing, Workshop on Challenges for Parallel Processing, pp. 2-7, August.
[3] T. Horie, H. Ishihara, T. Shimizu, and M. Ikesaka, "AP1000 Architecture and Performance of LU Decomposition", Proc. 1991 Int'l Conference on Parallel Processing, August.
[4]
[5] HIPPI-6400 working drafts, T11.1 maintenance drafts of ANSI NCITS.
[6] IEEE 802.3 Higher Speed Study Group, /public/index.html
[7] H. Nishi, K. Tasho, T. Kudoh, and H. Amano, "RHiNET-1/SW: One-chip switch ASIC for a local area system network", Proc. COOL Chips III, Apr., to appear.
[8] S. Nishimura, T. Kudoh, H. Nishi, K. Harasawa, N. Matsudaira, S. Akutsu, K. Tasyo, and H. Amano, "A network switch using optical interconnection for high performance parallel computing using PCs", pp. 5-12, Anchorage, U.S.A., Oct.
[9] A. Takai, T. Kato, S. Yamashita, S. Hanatani, Y. Motegi, K. Ito, H. Abe, and H. Kodera, "200-Mb/s/ch 100-m Optical Subsystem Interconnections Using 8-Channel 1.3-µm Laser Diode Arrays and Single-Mode Fiber Arrays", J. of Lightwave Technology 12.
[10] 4.htm
[11] J. W. Goodman, F. I. Leonberger, S.-Y. Kung, and R. A. Athale, "Optical interconnections for VLSI systems", Proceedings of the IEEE 72, July.
[12] D. A. B. Miller and H. M. Ozaktas, "Limit to the Bit-Rate Capacity of Electrical Interconnects from the Aspect Ratio of the System Architecture", Journal of Parallel and Distributed Computing 41.
[13] S. Nishimura, H. Inoue, H. Matsuoka, and T. Yokota, "Optical interconnection subsystem used in the RWC-1 massively parallel computer", IEEE Journal of Selected Topics in Quantum Electronics 5.
[14]
[15] ounce_p3.html


More information

Hardware Technology of the SX-9 (2) - Internode Switch -

Hardware Technology of the SX-9 (2) - Internode Switch - Hardware Technology of the SX-9 (2) - Internode Switch - ANDO Noriyuki, KASUGA Yasuhiro, SUZUKI Masaki, YAMAMOTO Takahito Abstract The internode connection system of the SX-9 is a dedicated high-speed

More information

Lecture 24: Interconnection Networks. Topics: topologies, routing, deadlocks, flow control

Lecture 24: Interconnection Networks. Topics: topologies, routing, deadlocks, flow control Lecture 24: Interconnection Networks Topics: topologies, routing, deadlocks, flow control 1 Topology Examples Grid Torus Hypercube Criteria Bus Ring 2Dtorus 6-cube Fully connected Performance Bisection

More information

XS1 Link Performance and Design Guidelines

XS1 Link Performance and Design Guidelines XS1 Link Performance and Design Guidelines IN THIS DOCUMENT Inter-Symbol Delay Data Rates Link Resources Booting over XMOS links XS1 System Topologies Layout Guidelines Deployment Scenarios EMI This document

More information

Packaging Technology of the SX-9

Packaging Technology of the SX-9 UMEZAWA Kazuhiko, HAMAGUCHI Hiroyuki, TAKEDA Tsutomu HOSAKA Tadao, NATORI Masaki, NAGATA Tetsuya Abstract This paper is intended to outline the packaging technology used with the SX-9. With the aim of

More information

Development of Optical Wiring Technology for Optical Interconnects

Development of Optical Wiring Technology for Optical Interconnects Development of Optical Wiring Technology for Optical Interconnects Mitsuhiro Iwaya*, Katsuki Suematsu*, Harumi Inaba*, Ryuichi Sugizaki*, Kazuyuki Fuse*, Takuya Nishimoto* 2, Kenji Kamoto* 3 We had developed

More information

Optical Interconnection as an IP Macro of COMS LSIs (OIP)

Optical Interconnection as an IP Macro of COMS LSIs (OIP) Optical Interconnection as an IP Macro of COMS LSIs (OIP) Takashi Yoshikawa, Ichiro Hatakeyama, Kazunori Miyoshi, and Kazuhiko Kurata Optical Interconnection NEC Laboratory, RWCP Tomohiro Kudoh, and Hiroaki

More information

Hybrid Integration of a Semiconductor Optical Amplifier for High Throughput Optical Packet Switched Interconnection Networks

Hybrid Integration of a Semiconductor Optical Amplifier for High Throughput Optical Packet Switched Interconnection Networks Hybrid Integration of a Semiconductor Optical Amplifier for High Throughput Optical Packet Switched Interconnection Networks Odile Liboiron-Ladouceur* and Keren Bergman Columbia University, 500 West 120

More information

A Single Chip Shared Memory Switch with Twelve 10Gb Ethernet Ports

A Single Chip Shared Memory Switch with Twelve 10Gb Ethernet Ports A Single Chip Shared Memory Switch with Twelve 10Gb Ethernet Ports Takeshi Shimizu, Yukihiro Nakagawa, Sridhar Pathi, Yasushi Umezawa, Takashi Miyoshi, Yoichi Koyanagi, Takeshi Horie, Akira Hattori Hot

More information

INTERCONNECTION NETWORKS LECTURE 4

INTERCONNECTION NETWORKS LECTURE 4 INTERCONNECTION NETWORKS LECTURE 4 DR. SAMMAN H. AMEEN 1 Topology Specifies way switches are wired Affects routing, reliability, throughput, latency, building ease Routing How does a message get from source

More information

NoC Round Table / ESA Sep Asynchronous Three Dimensional Networks on. on Chip. Abbas Sheibanyrad

NoC Round Table / ESA Sep Asynchronous Three Dimensional Networks on. on Chip. Abbas Sheibanyrad NoC Round Table / ESA Sep. 2009 Asynchronous Three Dimensional Networks on on Chip Frédéric ric PétrotP Outline Three Dimensional Integration Clock Distribution and GALS Paradigm Contribution of the Third

More information

CMSC 611: Advanced. Interconnection Networks

CMSC 611: Advanced. Interconnection Networks CMSC 611: Advanced Computer Architecture Interconnection Networks Interconnection Networks Massively parallel processor networks (MPP) Thousands of nodes Short distance (

More information

Processor Architectures At A Glance: M.I.T. Raw vs. UC Davis AsAP

Processor Architectures At A Glance: M.I.T. Raw vs. UC Davis AsAP Processor Architectures At A Glance: M.I.T. Raw vs. UC Davis AsAP Presenter: Course: EEC 289Q: Reconfigurable Computing Course Instructor: Professor Soheil Ghiasi Outline Overview of M.I.T. Raw processor

More information

Fault Tolerant and Secure Architectures for On Chip Networks With Emerging Interconnect Technologies. Mohsin Y Ahmed Conlan Wesson

Fault Tolerant and Secure Architectures for On Chip Networks With Emerging Interconnect Technologies. Mohsin Y Ahmed Conlan Wesson Fault Tolerant and Secure Architectures for On Chip Networks With Emerging Interconnect Technologies Mohsin Y Ahmed Conlan Wesson Overview NoC: Future generation of many core processor on a single chip

More information

Future Gigascale MCSoCs Applications: Computation & Communication Orthogonalization

Future Gigascale MCSoCs Applications: Computation & Communication Orthogonalization Basic Network-on-Chip (BANC) interconnection for Future Gigascale MCSoCs Applications: Computation & Communication Orthogonalization Abderazek Ben Abdallah, Masahiro Sowa Graduate School of Information

More information

Implementing Bus LVDS Interface in Cyclone III, Stratix III, and Stratix IV Devices

Implementing Bus LVDS Interface in Cyclone III, Stratix III, and Stratix IV Devices Implementing Bus LVDS Interface in Cyclone III, Stratix III, and Stratix IV Devices November 2008, ver. 1.1 Introduction LVDS is becoming the most popular differential I/O standard for high-speed transmission

More information

CompuScope 3200 product introduction

CompuScope 3200 product introduction CompuScope 3200 product introduction CompuScope 3200 is a PCI bus based board-level product that allows the user to capture up to 32 bits of singleended CMOS/TTL or differential ECL/PECL digital data into

More information

Overlaid Mesh Topology Design and Deadlock Free Routing in Wireless Network-on-Chip. Danella Zhao and Ruizhe Wu Presented by Zhonghai Lu, KTH

Overlaid Mesh Topology Design and Deadlock Free Routing in Wireless Network-on-Chip. Danella Zhao and Ruizhe Wu Presented by Zhonghai Lu, KTH Overlaid Mesh Topology Design and Deadlock Free Routing in Wireless Network-on-Chip Danella Zhao and Ruizhe Wu Presented by Zhonghai Lu, KTH Outline Introduction Overview of WiNoC system architecture Overlaid

More information

FPGA based Design of Low Power Reconfigurable Router for Network on Chip (NoC)

FPGA based Design of Low Power Reconfigurable Router for Network on Chip (NoC) FPGA based Design of Low Power Reconfigurable Router for Network on Chip (NoC) D.Udhayasheela, pg student [Communication system],dept.ofece,,as-salam engineering and technology, N.MageshwariAssistant Professor

More information

Design of a System-on-Chip Switched Network and its Design Support Λ

Design of a System-on-Chip Switched Network and its Design Support Λ Design of a System-on-Chip Switched Network and its Design Support Λ Daniel Wiklund y, Dake Liu Dept. of Electrical Engineering Linköping University S-581 83 Linköping, Sweden Abstract As the degree of

More information

Optical networking technology

Optical networking technology 1 Optical networking technology Technological advances in semiconductor products have essentially been the primary driver for the growth of networking that led to improvements and simplification in the

More information

Building petabit/s data center network with submicroseconds latency by using fast optical switches Miao, W.; Yan, F.; Dorren, H.J.S.; Calabretta, N.

Building petabit/s data center network with submicroseconds latency by using fast optical switches Miao, W.; Yan, F.; Dorren, H.J.S.; Calabretta, N. Building petabit/s data center network with submicroseconds latency by using fast optical switches Miao, W.; Yan, F.; Dorren, H.J.S.; Calabretta, N. Published in: Proceedings of 20th Annual Symposium of

More information

Lecture: Interconnection Networks. Topics: TM wrap-up, routing, deadlock, flow control, virtual channels

Lecture: Interconnection Networks. Topics: TM wrap-up, routing, deadlock, flow control, virtual channels Lecture: Interconnection Networks Topics: TM wrap-up, routing, deadlock, flow control, virtual channels 1 TM wrap-up Eager versioning: create a log of old values Handling problematic situations with a

More information

More on IO: The Universal Serial Bus (USB)

More on IO: The Universal Serial Bus (USB) ecture 37 Computer Science 61C Spring 2017 April 21st, 2017 More on IO: The Universal Serial Bus (USB) 1 Administrivia Project 5 is: USB Programming (read from a mouse) Optional (helps you to catch up

More information

December 2002, ver. 1.1 Application Note For more information on the CDR mode of the HSDI block, refer to AN 130: CDR in Mercury Devices.

December 2002, ver. 1.1 Application Note For more information on the CDR mode of the HSDI block, refer to AN 130: CDR in Mercury Devices. Using HSDI in Source- Synchronous Mode in Mercury Devices December 2002, ver. 1.1 Application Note 159 Introduction High-speed serial data transmission has gained increasing popularity in the data communications

More information

Report on the successful demonstration of innovative basic technologies for future optical access network Elastic Lambda Aggregation Network

Report on the successful demonstration of innovative basic technologies for future optical access network Elastic Lambda Aggregation Network April 25, 2017 Nippon Telegraph and Telephone Corporation Hitachi, Ltd. Oki Electric Industry Co., Ltd. Keio University KDDI Research, Inc. Furukawa Electric Co., Ltd. Report on the successful demonstration

More information

System-on-a-Programmable-Chip (SOPC) Development Board

System-on-a-Programmable-Chip (SOPC) Development Board System-on-a-Programmable-Chip (SOPC) Development Board Solution Brief 47 March 2000, ver. 1 Target Applications: Embedded microprocessor-based solutions Family: APEX TM 20K Ordering Code: SOPC-BOARD/A4E

More information

Application of High Speed Serial Data Transmission System in Remote Sensing Camera

Application of High Speed Serial Data Transmission System in Remote Sensing Camera MATEC Web of Conferences 114, 0200 (2017) DOI: 10.101/ matecconf/20171140200 Application of High Speed Serial Data Transmission System in Remote Sensing Camera Zhang Ye 1,a, He Qiangmin 1 and Pan Weijun

More information

Basic Low Level Concepts

Basic Low Level Concepts Course Outline Basic Low Level Concepts Case Studies Operation through multiple switches: Topologies & Routing v Direct, indirect, regular, irregular Formal models and analysis for deadlock and livelock

More information

Simulation of an all Optical Time Division Multiplexing Router Employing TOADs.

Simulation of an all Optical Time Division Multiplexing Router Employing TOADs. Simulation of an all Optical Time Division Multiplexing Router Employing TOADs. Razali Ngah a, Zabih Ghassemlooy a, Graham Swift a, Tahir Ahmad b and Peter Ball c a Optical Communications Research Group,

More information

Signal Integrity Comparisons Between Stratix II and Virtex-4 FPGAs

Signal Integrity Comparisons Between Stratix II and Virtex-4 FPGAs White Paper Introduction Signal Integrity Comparisons Between Stratix II and Virtex-4 FPGAs Signal integrity has become a critical issue in the design of high-speed systems. Poor signal integrity can mean

More information

Computer buses and interfaces

Computer buses and interfaces FYS3240-4240 Data acquisition & control Computer buses and interfaces Spring 2018 Lecture #7 Reading: RWI Ch7 and page 559 Bekkeng 14.02.2018 Abbreviations B = byte b = bit M = mega G = giga = 10 9 k =

More information

Brief Background in Fiber Optics

Brief Background in Fiber Optics The Future of Photonics in Upcoming Processors ECE 4750 Fall 08 Brief Background in Fiber Optics Light can travel down an optical fiber if it is completely confined Determined by Snells Law Various modes

More information

Lecture 16: On-Chip Networks. Topics: Cache networks, NoC basics

Lecture 16: On-Chip Networks. Topics: Cache networks, NoC basics Lecture 16: On-Chip Networks Topics: Cache networks, NoC basics 1 Traditional Networks Huh et al. ICS 05, Beckmann MICRO 04 Example designs for contiguous L2 cache regions 2 Explorations for Optimality

More information

Prototyping NGC. First Light. PICNIC Array Image of ESO Messenger Front Page

Prototyping NGC. First Light. PICNIC Array Image of ESO Messenger Front Page Prototyping NGC First Light PICNIC Array Image of ESO Messenger Front Page Introduction and Key Points Constructed is a modular system with : A Back-End as 64 Bit PCI Master/Slave Interface A basic Front-end

More information

Large scale optical circuit switches for future data center applications

Large scale optical circuit switches for future data center applications Large scale optical circuit switches for future data center applications ONDM2017 workshop Yojiro Moriand Ken-ichi Sato Outline 1. Introduction -Optical circuit switch for datacenter- 2. Sub-switch configuration

More information

Understanding 3M Ultra Hard Metric (UHM) Connectors

Understanding 3M Ultra Hard Metric (UHM) Connectors 3M Electronic Solutions Division 3MUHMWEBID_100809 Understanding 3M Ultra Hard Metric (UHM) Connectors Enabling performance of next generation 2 mm Hard Metric systems 3M Electronic Solutions Division

More information

Memory Systems IRAM. Principle of IRAM

Memory Systems IRAM. Principle of IRAM Memory Systems 165 other devices of the module will be in the Standby state (which is the primary state of all RDRAM devices) or another state with low-power consumption. The RDRAM devices provide several

More information

EE382C Lecture 1. Bill Dally 3/29/11. EE 382C - S11 - Lecture 1 1

EE382C Lecture 1. Bill Dally 3/29/11. EE 382C - S11 - Lecture 1 1 EE382C Lecture 1 Bill Dally 3/29/11 EE 382C - S11 - Lecture 1 1 Logistics Handouts Course policy sheet Course schedule Assignments Homework Research Paper Project Midterm EE 382C - S11 - Lecture 1 2 What

More information

Low Latency Communication on DIMMnet-1 Network Interface Plugged into a DIMM Slot

Low Latency Communication on DIMMnet-1 Network Interface Plugged into a DIMM Slot Low Latency Communication on DIMMnet-1 Network Interface Plugged into a DIMM Slot Noboru Tanabe Toshiba Corporation noboru.tanabe@toshiba.co.jp Hideki Imashiro Hitachi Information Technology Co., Ltd.

More information

Optical SerDes Test Interface for High-Speed and Parallel Testing

Optical SerDes Test Interface for High-Speed and Parallel Testing June 7-10, 2009 San Diego, CA SerDes Test Interface for High-Speed and Parallel Testing Sanghoon Lee, Ph. D Sejang Oh, Kyeongseon Shin, Wuisoo Lee Memory Division, SAMSUNG ELECTRONICS Why Interface? High

More information

Ethernet Technologies

Ethernet Technologies Ethernet Technologies CCNA 1 v3 Module 7 NESCOT CATC 1 10 Mbps Ethernet Legacy Ethernet means: 10BASE5 10BASE2 10BASE-T Common features are: frame format timing parameters transmission process basic design

More information

APEX Devices APEX 20KC. High-Density Embedded Programmable Logic Devices for System-Level Integration. Featuring. All-Layer Copper.

APEX Devices APEX 20KC. High-Density Embedded Programmable Logic Devices for System-Level Integration. Featuring. All-Layer Copper. APEX Devices High-Density Embedded Programmable Logic Devices for System-Level Integration APEX 0KC Featuring All-Layer Copper Interconnect July 00 APEX programmable logic devices provide the flexibility

More information

A Memory-Based Programmable Logic Device Using Look-Up Table Cascade with Synchronous Static Random Access Memories

A Memory-Based Programmable Logic Device Using Look-Up Table Cascade with Synchronous Static Random Access Memories Japanese Journal of Applied Physics Vol., No. B, 200, pp. 329 3300 #200 The Japan Society of Applied Physics A Memory-Based Programmable Logic Device Using Look-Up Table Cascade with Synchronous Static

More information

PSEC-4: Review of Architecture, etc. Eric Oberla 27-oct-2012

PSEC-4: Review of Architecture, etc. Eric Oberla 27-oct-2012 PSEC-4: Review of Architecture, etc. Eric Oberla 27-oct-2012 PSEC-4 ASIC: design specs LAPPD Collaboration Designed to sample & digitize fast pulses (MCPs): Sampling rate capability > 10GSa/s Analog bandwidth

More information

Wormhole Routing Local Area Networks. Multicasting Protocols for High-Speed,

Wormhole Routing Local Area Networks. Multicasting Protocols for High-Speed, Multicasting Protocols for High-Speed, Wormhole Routing Local Area Networks Mario Gerla, Prasasth Palnati, Simon Walton, (University of California, Los Angeles) THE SUPERCOMPUTER SUPERNET University of

More information

6.9. Communicating to the Outside World: Cluster Networking

6.9. Communicating to the Outside World: Cluster Networking 6.9 Communicating to the Outside World: Cluster Networking This online section describes the networking hardware and software used to connect the nodes of cluster together. As there are whole books and

More information

Lecture 12: Interconnection Networks. Topics: dimension/arity, routing, deadlock, flow control

Lecture 12: Interconnection Networks. Topics: dimension/arity, routing, deadlock, flow control Lecture 12: Interconnection Networks Topics: dimension/arity, routing, deadlock, flow control 1 Interconnection Networks Recall: fully connected network, arrays/rings, meshes/tori, trees, butterflies,

More information

The interconnect becomes an increasingly critical system component > Fatter compute nodes > Increasing disparity between local and remote

The interconnect becomes an increasingly critical system component > Fatter compute nodes > Increasing disparity between local and remote Multiterabit Switch Fabrics Enabled by Proximity Communication Hans Eberle, Alex Chow, Bill Coates, Jack Cunningham, Robert Drost, Jo Ebergen, Scott Fairbanks, Jon Gainsley, Nils Gura, Ron Ho, David Hopkins,

More information

PETsys SiPM Readout System

PETsys SiPM Readout System SiPM Readout System FEB/A_v2 FEB/S FEB/I The SiPM Readout System is designed to read a large number of SiPM photo-sensor pixels in applications where a high data rate and excellent time resolution is required.

More information

Reducing SpaceWire Time-code Jitter

Reducing SpaceWire Time-code Jitter Reducing SpaceWire Time-code Jitter Barry M Cook 4Links Limited The Mansion, Bletchley Park, Milton Keynes, MK3 6ZP, UK Email: barry@4links.co.uk INTRODUCTION Standards ISO/IEC 14575[1] and IEEE 1355[2]

More information

Multipoint Streaming Technology for 4K Super-high-definition Motion Pictures

Multipoint Streaming Technology for 4K Super-high-definition Motion Pictures Multipoint Streaming Technology for 4K Super-high-definition Motion Pictures Hirokazu Takahashi, Daisuke Shirai, Takahiro Murooka, and Tatsuya Fujii Abstract This article introduces wide-area multipoint

More information

Ultra-Low Latency, Bit-Parallel Message Exchange in Optical Packet Switched Interconnection Networks

Ultra-Low Latency, Bit-Parallel Message Exchange in Optical Packet Switched Interconnection Networks Ultra-Low Latency, Bit-Parallel Message Exchange in Optical Packet Switched Interconnection Networks O. Liboiron-Ladouceur 1, C. Gray 2, D. Keezer 2 and K. Bergman 1 1 Department of Electrical Engineering,

More information

Chapter 1. Introduction

Chapter 1. Introduction Chapter 1 Introduction In a packet-switched network, packets are buffered when they cannot be processed or transmitted at the rate they arrive. There are three main reasons that a router, with generic

More information

Removing the Latency Overhead of the ITB Mechanism in COWs with Source Routing Λ

Removing the Latency Overhead of the ITB Mechanism in COWs with Source Routing Λ Removing the Latency Overhead of the ITB Mechanism in COWs with Source Routing Λ J. Flich, M. P. Malumbres, P. López and J. Duato Dpto. of Computer Engineering (DISCA) Universidad Politécnica de Valencia

More information

Interconnection Networks

Interconnection Networks Lecture 17: Interconnection Networks Parallel Computer Architecture and Programming A comment on web site comments It is okay to make a comment on a slide/topic that has already been commented on. In fact

More information

1-Fiber Detachable DVI module, DVFX-100

1-Fiber Detachable DVI module, DVFX-100 1-Fiber Detachable DVI module, DVFX-100 DATA SHEET Contents Description Features Applications Technical Specifications Functions Drawing Fiber Connection DVI Pin Description Revision History OPTICIS HQ

More information

Lecture 2 Parallel Programming Platforms

Lecture 2 Parallel Programming Platforms Lecture 2 Parallel Programming Platforms Flynn s Taxonomy In 1966, Michael Flynn classified systems according to numbers of instruction streams and the number of data stream. Data stream Single Multiple

More information

Developing flexible WDM networks using wavelength tuneable components

Developing flexible WDM networks using wavelength tuneable components Developing flexible WDM networks using wavelength tuneable components A. Dantcha 1, L.P. Barry 1, J. Murphy 1, T. Mullane 2 and D. McDonald 2 (1) Research Institute for Network and Communications Engineering,

More information

Board Design Guidelines for PCI Express Architecture

Board Design Guidelines for PCI Express Architecture Board Design Guidelines for PCI Express Architecture Cliff Lee Staff Engineer Intel Corporation Member, PCI Express Electrical and Card WGs The facts, techniques and applications presented by the following

More information

Network Design Considerations for Grid Computing

Network Design Considerations for Grid Computing Network Design Considerations for Grid Computing Engineering Systems How Bandwidth, Latency, and Packet Size Impact Grid Job Performance by Erik Burrows, Engineering Systems Analyst, Principal, Broadcom

More information

Network management and QoS provisioning - revise. When someone have to share the same resources is possible to consider two particular problems:

Network management and QoS provisioning - revise. When someone have to share the same resources is possible to consider two particular problems: Revise notes Multiplexing & Multiple Access When someone have to share the same resources is possible to consider two particular problems:. multiplexing;. multiple access. The first one is a centralized

More information

Implementation of Software-based EPON-OLT and Performance Evaluation

Implementation of Software-based EPON-OLT and Performance Evaluation This article has been accepted and published on J-STAGE in advance of copyediting. Content is final as presented. IEICE Communications Express, Vol.1, 1 6 Implementation of Software-based EPON-OLT and

More information

Multi-Gigahertz Source Synchronous Testing of an Optical Packet Switching Network

Multi-Gigahertz Source Synchronous Testing of an Optical Packet Switching Network Multi-Gigahertz Source Synchronous Testing of an Optical Packet Switching Network C.E. Gray 1, O. Liboiron-Ladouceur 2, D.C. Keezer 1, K. Bergman 2 1 - Georgia Institute of Technology 2 - Columbia University

More information

CS61C : Machine Structures

CS61C : Machine Structures inst.eecs.berkeley.edu/~cs61c CS61C : Machine Structures Lecture 36 I/O : Networks 2008-04-25 TA Brian Zimmer CS61C L36 I/O : Networks (1) inst.eecs/~cs61c-th NASA To Develop Small Satellites NASA has

More information

Session 4a. Burn-in & Test Socket Workshop Burn-in Board Design

Session 4a. Burn-in & Test Socket Workshop Burn-in Board Design Session 4a Burn-in & Test Socket Workshop 2000 Burn-in Board Design BURN-IN & TEST SOCKET WORKSHOP COPYRIGHT NOTICE The papers in this publication comprise the proceedings of the 2000 BiTS Workshop. They

More information

1/5/2012. Overview of Interconnects. Presentation Outline. Myrinet and Quadrics. Interconnects. Switch-Based Interconnects

1/5/2012. Overview of Interconnects. Presentation Outline. Myrinet and Quadrics. Interconnects. Switch-Based Interconnects Overview of Interconnects Myrinet and Quadrics Leading Modern Interconnects Presentation Outline General Concepts of Interconnects Myrinet Latest Products Quadrics Latest Release Our Research Interconnects

More information

ii) Do the following conversions: output is. (a) (101.10) 10 = (?) 2 i) Define X-NOR gate. (b) (10101) 2 = (?) Gray (2) /030832/31034

ii) Do the following conversions: output is. (a) (101.10) 10 = (?) 2 i) Define X-NOR gate. (b) (10101) 2 = (?) Gray (2) /030832/31034 No. of Printed Pages : 4 Roll No.... rd 3 Sem. / ECE Subject : Digital Electronics - I SECTION-A Note: Very Short Answer type questions. Attempt any 15 parts. (15x2=30) Q.1 a) Define analog signal. b)

More information

Emerging DRAM Technologies

Emerging DRAM Technologies 1 Emerging DRAM Technologies Michael Thiems amt051@email.mot.com DigitalDNA Systems Architecture Laboratory Motorola Labs 2 Motivation DRAM and the memory subsystem significantly impacts the performance

More information

InfiniBand FDR 56-Gbps QSFP+ Active Optical Cable PN: WST-QS56-AOC-Cxx

InfiniBand FDR 56-Gbps QSFP+ Active Optical Cable PN: WST-QS56-AOC-Cxx Data Sheet PN: General Description WaveSplitter s Quad Small Form-Factor Pluggable Plus (QSFP+) active optical cables (AOC) are highperformance active optical cable with bi-directional signal transmission

More information

Routing Algorithms, Process Model for Quality of Services (QoS) and Architectures for Two-Dimensional 4 4 Mesh Topology Network-on-Chip

Routing Algorithms, Process Model for Quality of Services (QoS) and Architectures for Two-Dimensional 4 4 Mesh Topology Network-on-Chip Routing Algorithms, Process Model for Quality of Services (QoS) and Architectures for Two-Dimensional 4 4 Mesh Topology Network-on-Chip Nauman Jalil, Adnan Qureshi, Furqan Khan, and Sohaib Ayyaz Qazi Abstract

More information

LatticeSCM SPI4.2 Interoperability with PMC-Sierra PM3388

LatticeSCM SPI4.2 Interoperability with PMC-Sierra PM3388 August 2006 Technical Note TN1121 Introduction The System Packet Interface, Level 4, Phase 2 (SPI4.2) is a system level interface, published in 2001 by the Optical Internetworking Forum (OIF), for packet

More information

Packaging Technology for Image-Processing LSI

Packaging Technology for Image-Processing LSI Packaging Technology for Image-Processing LSI Yoshiyuki Yoneda Kouichi Nakamura The main function of a semiconductor package is to reliably transmit electric signals from minute electrode pads formed on

More information

VCSEL-based solderable optical modules

VCSEL-based solderable optical modules 4th Symposium on Optical Interconnect for Data Centres VCSEL-based solderable optical modules Hideyuki Nasu FITEL Products Division Furukawa Electric Co., Ltd. H. Nasu/ FITEL Products Division, Furukawa

More information

A Protocol for Realtime Switched Communication in FPGA Clusters

A Protocol for Realtime Switched Communication in FPGA Clusters A Protocol for Realtime Switched Communication in FPGA Clusters Richard D. Anderson Computer Science and Engineering, Box 9637 Mississippi State University Mississippi State, MS 39762 rda62@msstate.edu

More information

I N T E R C O N N E C T A P P L I C A T I O N N O T E. STRADA Whisper 4.5mm Connector Enhanced Backplane and Daughtercard Footprint Routing Guide

I N T E R C O N N E C T A P P L I C A T I O N N O T E. STRADA Whisper 4.5mm Connector Enhanced Backplane and Daughtercard Footprint Routing Guide I N T E R C O N N E C T A P P L I C A T I O N N O T E STRADA Whisper 4.5mm Connector Enhanced Backplane and Daughtercard Footprint Routing Guide Report # 32GC001 01/26/2015 Rev 3.0 STRADA Whisper Connector

More information