DOS - A Scalable Optical Switch for Datacenters


DOS - A Scalable Optical Switch for Datacenters
Xiaohui Ye xye@ucdavis.edu, Paul Mejia pmmejia@ucdavis.edu, Yawei Yin yyin@ucdavis.edu, Roberto Proietti rproietti@ucdavis.edu, S. J. B. Yoo sbyoo@ucdavis.edu, Venkatesh Akella akella@ucdavis.edu
Department of Electrical and Computer Engineering, University of California, Davis, Davis, California, USA

ABSTRACT
This paper discusses the architecture and performance studies of the Datacenter Optical Switch (DOS), designed for scalable and high-throughput interconnection within a data center. DOS exploits the wavelength routing characteristics of a switch fabric based on an Arrayed Waveguide Grating Router (AWGR) that allows contention resolution in the wavelength domain. Simulation results indicate that DOS exhibits lower latency and higher throughput even at high input loads compared with electronic switches or previously proposed optical switch architectures such as OSMOSIS [4, 5] and Data Vortex [6, 7]. These characteristics, together with a very high port count on a single switch fabric, make DOS attractive for data center applications, where the traffic patterns are known to be bursty with high temporary peaks [13]. DOS exploits the unique characteristics of the AWGR fabric to reduce the delay and complexity of arbitration. We present a detailed analysis of DOS using a cycle-accurate network simulator. The results show that the latency of DOS is almost independent of the number of input ports and does not saturate even at very high (approximately 90%) input load. Furthermore, we show that even with 2 to 4 wavelengths, the performance of DOS is significantly better than an electrical switch network based on a state-of-the-art flattened butterfly topology.

Categories and Subject Descriptors: C.2.6 [Internetworking]: Routers; C.2.1 [Network Architecture and Design]: Packet-switching networks
General Terms: Design, Performance, Management
Keywords: Data Center Networks, AWGR, Low Latency Optical Switches

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ANCS '10, October 25-26, 2010, La Jolla, CA, USA. Copyright (c) 2010 ACM.

1. INTRODUCTION
Low-latency, low-power, scalable, and high-throughput interconnection is essential for future data centers. Data center switches based on electronic multistage interconnection topologies (e.g., Fat-Tree, Clos, Torus, Flattened Butterfly [1, 2]) result in large latencies (due to the multi-hop nature of these networks) and very high power consumption in the buffers and the switch fabric. On the other hand, optical interconnects can benefit from the inherent parallelism and high capacity of Wavelength Division Multiplexing (WDM). Furthermore, multiple WDM channels on an output can be used as multiple concurrent channels to avoid head-of-line blocking [3], which results in lower latency and higher performance. Optical switching is being successfully deployed in traditional telecommunication networks. However, there are two key differences between telecom applications and data center applications. First, data center applications require at least a two-order-of-magnitude reduction in latency (hundreds of nanoseconds as opposed to tens or hundreds of microseconds).
Second, data center switches need to connect many more nodes -- hundreds or thousands in large data centers -- than typical telecom switches. Recently, there have been efforts to design optical switching architectures that consider the particular challenges faced by the networks supporting data centers and high-performance computing. The Optical Shared Memory Supercomputer Interconnect System (OSMOSIS) [4, 5] and Data Vortex [6-8] are two pioneering projects in this area. DOS (Datacenter Optical Switch) is based on an all-optical switching fabric called an Arrayed Waveguide Grating Router (AWGR), which has been shown in telecom applications to scale to petabit-per-second aggregate switching capacity [9-11]. The cyclic wavelength routing characteristic of the AWGR allows different inputs to reach the same output simultaneously by using different wavelengths. Non-blocking switching from any input port to any output port can be achieved by simply tuning the wavelength at each input. Basically, the AWGR is a fully connected topology. An AWGR-based switching fabric is power efficient, as the signal is delivered only to the desired output port via the appropriate wavelength, instead of using a broadcast-and-select mechanism with its excessive power-splitting losses. Architecturally, the AWGR is a passive and lossless optical interconnect element. The power consumed in the tunable wavelength converters (TWCs), the loopback shared buffer, and the control plane logic scales linearly with the number of ports, unlike other switches.

Finally, DOS uses label switching [9], with the optical label transmitted on a different wavelength. This allows the control plane to operate at a significantly lower clock frequency than the data rate, though it introduces some restrictions on the minimum packet size, which are quite reasonable for a data center application. Section 6 describes this in more detail.

The main contributions of this paper are as follows. The paper provides the details of the architecture of a switch designed for data center applications and evaluates its performance and scalability with cycle-accurate simulations. The paper compares the performance of DOS with other optical switches such as OSMOSIS and Data Vortex, and with an electrical switch based on the flattened butterfly topology [12], which is being considered by Cray and other commercial vendors. We propose a simple arbitration scheme that takes advantage of the unique cyclic properties of the AWGR switch fabric and further enhances the scalability of DOS. The main findings of this paper are as follows: a) the proposed optical switch is attractive for data center applications because of its low latency and scalable effective bandwidth even under input loads as high as 90%, b) DOS performs exceptionally well even with as few as 2 to 4 wavelengths per fiber, and c) DOS performs well on the bursty on-off traffic patterns recently described by Wisconsin and Microsoft [13].

The remainder of this paper is organized as follows. Section 2 reviews the related work. Section 3 presents the details of the DOS architecture. Section 4 provides the details of an arbiter that takes advantage of the AWGR to reduce complexity. Section 5 describes how DOS was modeled in GARNET and the simulation infrastructure. Section 6 presents the performance analysis of DOS and a comparison with related work.

2. RELATED WORK
Conventional data center networks are built in a hierarchical manner, with a large number of cheap, low-speed, small-radix switches at the bottom level to connect with end nodes, and a few expensive, powerful, large-radix switches residing at the top to aggregate and distribute the traffic [14]. Recently, network architects have adopted fat tree and Clos topologies to provide high aggregate bandwidth. For example, Al-Fares [1] uses commodity off-the-shelf Ethernet switches, while InfiniBand [15] utilizes special switches based on the InfiniBand protocol. Power consumption, latency, and throughput under high input loads are the key challenges with electrical switches. Farrington [16] suggests placing MEMS-based optical circuit switches in parallel with electrical switches in the core network to carry slowly changing inter-pod traffic, thus reducing power consumption and cost. Nevertheless, their design does not address the challenges of latency and throughput under high input loads. DOS employs optical switching with wavelength-domain parallelism and a passive AWGR-based switching fabric to overcome these challenges. A simple 5-port optical switch has also been adopted in an on-chip optical mesh network recently [17]; there, wavelength parallelism is used to realize high-bandwidth transmission, so that control bits and data bits can be transmitted from one core to another in one clock cycle, while in our work, wavelength parallelism is utilized to boost switching fabric performance. As introduced before, Data Vortex and OSMOSIS represent the state of the art in optical switching for high-performance computing applications.
OSMOSIS utilizes semiconductor optical amplifiers (SOAs) to realize a synchronous optical crossbar switching fabric using a broadcast-and-select data path combined with both space- and wavelength-division multiplexing [4, 5]. The OSMOSIS demonstrator switch has 8 broadcast units, each with 8 wavelengths on one fiber, to connect with a total of 64 ingress adapters. Each wavelength on each fiber is duplicated and fed to 128 select units, where SOAs are used to select the proper wavelength from the proper fiber according to the central scheduler's decision. Each of the 64 egress adapters connects with two select units, thus enabling the egress adapter to receive up to two concurrent cells. To reduce control latency, OSMOSIS allows packet transmission without a grant when the traffic load is light by using a speculative transmission (STX) scheme. The Data Vortex is a distributed interconnection network architecture [6, 7] based on deflection routing. Its structure can be visualized as cyclic subgroups that allow for deflections without loss of routing progress. By leveraging optical parallelism and avoiding optical buffers through deflection routing, the Data Vortex architecture can achieve high aggregate bandwidth, low per-hop latency due to the short packet slot time, and high potential switching capacity due to transparent wavelength switching.

While OSMOSIS and Data Vortex provide significant improvements in capacity and latency compared with electrical switches, they both possess drawbacks that cannot be avoided due to the architectures they adopt. The power requirements of the OSMOSIS switch can be very high because of its broadcast-and-select architecture -- signals are delivered to every select unit even though, ultimately, only one unit selects the signal. The 16 SOAs in each select unit also consume power. In addition, STX can only effectively eliminate control path latency when the traffic load is light. Data Vortex also has some limitations. The banyan-style hierarchical multiple-stage structure becomes extremely complex when scaled to larger network sizes. As the number of nodes increases, packet reordering becomes more frequent due to the non-deterministic nature of the paths traversed by packets propagating through the Data Vortex network, and the end-to-end latency of each packet becomes large and non-deterministic. It was observed that the system saturates before the offered load exceeds 50% [7]. Furthermore, with respect to physical-layer scalability [8], the optical gain saturation of the SOA limits the maximum number of payload channels, and the spectral profiles of the cascaded components limit the usable bandwidth. Section 6 describes the performance comparison of DOS with both OSMOSIS and Data Vortex. The simulation and analysis in this paper show that DOS achieves higher throughput and lower latency than both Data Vortex and OSMOSIS.

AWGR-based optical switches and optical routers with packet switching capability have been investigated for a number of years [9, 18, 19]. Previous works [3, 9, 20-24] mainly focus on applying AWGR techniques in access networks and telecommunication/IP networks. The AWGR serves as the switching fabric in many switch architecture designs.

Since there is no practical optical buffer available today, the store-and-forward scheme commonly used in electrical switches cannot be duplicated in the optical domain. A simple switch structure [20, 21] is adopted with no centralized control at the center switch; either a TDM-based MAC protocol [21] or input-side access control [20] is used to resolve contention. Many other designs require centralized control to negotiate resources among multiple requests [3, 9, 22, 23]. Fiber delay lines (FDLs) are widely used to handle bursty traffic [3, 9, 22] and to provide priority routing [24]. Researchers have investigated putting a set of delay lines with different lengths in the forwarding paths, the loopback paths, or both. The authors in [22] further proposed to add an AWGR inside the FDL loopback path, thus providing another layer of loopback and delay. Using FDLs helps reduce the dropping probability significantly, but packet loss is still possible and cannot be eliminated. In addition, an FDL cannot provide arbitrary delay: it is possible that the resource is available but the delayed packet cannot access it because it is still traveling through the FDL. Deflection routing [7, 25-27] is another way to handle contention in optical burst and packet switching networks. Contended packets are sent to an alternative next hop if the desired next hop is not available. Usually an FDL-based buffer is needed in optical burst switching in order to adjust the time difference between the label and the burst according to the new path [27]. Many works [7, 25, 26] have shown that deflection routing helps lower the end-to-end latency when the traffic load is light but degrades the entire system performance under heavy load. Therefore, deflection routing is not considered in the DOS design.

3. DOS - ARCHITECTURE OVERVIEW
Figure 1 presents a high-level overview of the DOS architecture. At the core of the switch are an optical switching fabric that includes tunable wavelength converters (TWCs), a uniform-loss and cyclic-frequency (ULCF) AWGR, and a loopback shared buffer system. In addition, the switching system includes a control block that processes the label of each packet and then arbitrates each packet by checking resource availability on the output port side. The Optical Channel Adapter (OCA) serves as the medium interface between DOS and each end node.

Figure 1. The system diagram of the proposed optical switch. OLG: Optical Label Generator; PE: Packet Encapsulation; LE: Label Extractor; FDL: Fiber Delay Line; PFC: Packet Format Converter; O/E: Optical-to-Electrical Converter; E/O: Electrical-to-Optical Converter; TX: Transmitter; RX: Receiver; L(i): Label from Node i.

3.1 The AWGR Switching Fabric
Early-stage optical switch architectures adopt optical-to-electrical-to-optical (O/E/O) conversions and an electrical switching fabric [28]. An all-optical switching fabric eliminates the overhead of the O/E/O conversions and keeps the payload in the optical domain for better optical transparency [9]. There are numerous all-optical switching fabric architectures [29], most of which can be divided into two categories: space switching and wavelength switching. Space switching can have a broadcast-and-select configuration [19, 28, 30] or a matrix of switching elements (1 x 2 switches [31], 2 x 2 switches [32], etc.). Wavelength switching utilizes optical wavelength converters together with an optical device that can support wavelength-to-space mapping [33-35].
The ULCF AWGR is a promising, compact candidate to achieve the wavelength-to-space mapping. The ULCF AWGR allows wavelength-routed interconnection in a scalable manner and provides path-independent loss and cyclic-routing characteristics. The AWGR is able to route optical signals from any input AWGR port to any output AWGR port. The routing path for a signal inside the AWGR is determined by the wavelength that carries the signal. Since each output port interconnects with all input ports on separate and distinct wavelengths, the AWGR can easily achieve non-blocking switching by tuning the output wavelength of the TWC at each input to an appropriate value, so that separate paths between inputs and their desired outputs are established. The AWGR can actually achieve concurrent contention-free optical switching, which is more than strictly non-blocking switching: not only can it connect any idle output to an idle input, but it also allows any output to receive multiple concurrent signals that reside on separate and distinct wavelengths.

By simply applying an optical DEMUX at the AWGR output, signals from different inputs can be separated and received independently. Ideally, an N x N AWGR is a switching fabric with a speedup of N, provided that a 1:N optical DEMUX and N receivers are available for each AWGR output. To reduce cost, the speedup is typically less than N. The scalability of a single-stage DOS depends on the scalability of the AWGR and the capability of the TWC. NTT researchers have demonstrated a 400-port AWGR that utilizes 400 channels with 25 GHz channel spacing, covering a wavelength range from 1530 nm to 1610 nm [36]. A 512 x 512 AWGR with the same channel spacing will be slightly larger than the 400-port AWGR and can be fabricated on an 8-inch Si wafer if it does not fit on a 6-inch Si wafer. Takada [37] shows an effective way to reduce the wafer size required in fabrication by folding the slab waveguides with respect to the surface of reflection. The optical path from any input to any output of a 512 x 512 AWGR will be less than 1 meter in length, so it takes less than 5 ns for signals to propagate through the device. To minimize crosstalk and achieve any-port-to-any-port non-blocking switching, a 512-port AWGR requires at least 512 channels, covering a wavelength range of roughly 100 nm if the channel spacing is 25 GHz. To be able to tune between 512 channels rapidly, the TWC needs to accommodate a fast and wide-range tunable laser. A monolithic laser with a 114 nm tuning range has been demonstrated in [38]. Matsuo [39] shows a tunable laser that can cover 34 channels with a switching latency of less than 8 ns. An ultrafast interleaved rear reflector tunable laser with a switching time of less than 2 ns has been reported in [40]. Instead of using a single wide-range fast tunable laser, an alternative is to place multiple fast tunable lasers in parallel, each covering a relatively smaller range. In addition, power attenuation in a short-distance data center network will be much smaller than in long-haul transmission, and an Erbium-Doped Fiber Amplifier (EDFA) is not a must. Also, many signal impairments considered in long-haul transmission, e.g., dispersion and non-linear effects, can be considered negligible in short-distance optical networks. Therefore, DOS can use a much wider range of wavelengths than a telecommunication DWDM (Dense WDM) system. Clearly, a 512-port AWGR-based optical switch is feasible based on these enabling technologies. Thus, we assume in this paper that a single-stage DOS can scale to 512 ports, and we also assume that the number of wavelengths used for DOS is equal to or larger than the fabric port count.

3.2 DOS Control Plane
For an N-by-N AWGR switching fabric, a concurrent contention-free switching system can be realized if each OCA RX has a 1:N optical DEMUX and N receivers. To ensure every incoming packet goes to the desired output port, we simply need to check the destination address and then set the proper wavelength at the TWC, so that after wavelength conversion the packet travels to the desired AWGR output port. This simple control function can be implemented in a fully distributed way.
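To make the wavelength-selection rule concrete, the short sketch below uses one common cyclic indexing convention for an N x N AWGR: the wavelength that carries a packet from input i to output j is assumed to have index (i + j) mod N. The exact mapping depends on the fabricated device, so the formula and the port count below are illustrative assumptions rather than the routing table of a specific AWGR.

```python
N = 8  # illustrative AWGR port count

def tx_wavelength(i: int, j: int, n: int = N) -> int:
    """Wavelength index the TWC at input i should tune to so that the
    packet exits at AWGR output j (assumed cyclic mapping: (i + j) mod N)."""
    return (i + j) % n

# Every input reaches a given output on a distinct wavelength, so an output
# equipped with a 1:N DEMUX and N receivers can accept N concurrent packets.
target_output = 5
waves = {i: tx_wavelength(i, target_output) for i in range(N)}
assert len(set(waves.values())) == N  # all distinct: no blocking at this output
print(waves)
```

Because the wavelengths converging on any single output are all distinct, the fully equipped case (1:N DEMUX, N receivers) needs only this per-input lookup and no arbitration, which is the fully distributed control mentioned above.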
But if each OCA RX only has k (k < N) receivers and a 1:k optical DEMUX, two or more packets on different wavelengths may reach the same DEMUX output simultaneously, in which case none of them can be correctly received; this is unavoidable whenever more than k wavelengths appear at the same AWGR output concurrently. To deal with such contention, we define a wavegroup as the set of wavelengths that come out of the same output port of the optical DEMUX. Arbitration is necessary to guarantee that at most k concurrent packets on k different wavelengths, at most one per wavegroup, arrive at the AWGR output port at any time.

Figure 2 shows the block diagram of the control plane. The optical labels are separated from the optical payloads by a label extractor (LE) and arrive at the DOS control plane. After the optical-to-electrical (O/E) conversion, the labels first enter the label processor. Once the preamble detector detects a valid label, the label processor starts to record the label contents (destination address and packet length) following the preamble. The label processor then maps the destination address to the desired output port and sends a request to the proper arbiter for contention resolution. Ideally, an asynchronous switching system would need no arbitration, since packets could be served on a first-come, first-served basis. For simplicity, the control plane is implemented with synchronous hardware, which means arbitration is done among the packets whose labels' first bits arrive at the control plane within the same clock cycle. However, DOS adopts just-in-time arbitration, since there is no buffer in the input data path. Section 4 describes the details of arbitration.

Figure 2. The block diagram of the DOS control plane. O/E: Optical-to-Electrical converter; Arbiter(j,h): the arbiter for the h-th wavegroup of output j; WV_MGR(j,h): the wavegroup manager for the h-th wavegroup of output j; r(x,ix): the request from Label Processor i to Arbiter(x,ix); g(y,jy): the grant from Arbiter(y,jy) to TWC Controller j.

To facilitate arbitration of variable packet lengths, grants are not necessarily available at each arbitration cycle, as a packet transmission can take a variable number of clock cycles. The wavegroup manager associated with each arbiter is responsible for deciding when the grant will be available after the last distribution, based on the packet length. The wavegroup manager conveys grant availability information to the arbiter, and the arbiter informs the wavegroup manager when the available grant is used. After the arbitration, the control plane generates control signals for the TWCs, setting their outputs to the proper wavelengths. For the winning inputs, the control plane assigns wavelengths that enable them to send packets to the desired AWGR output, while for inputs that do not get grants, the control plane assigns wavelengths that force them to send packets to the AWGR output connected with the loopback shared buffer.

While the control plane receives the label and makes the decision, the corresponding packet payload travels through a fixed-length fiber delay line (FDL) to compensate for the control plane latency. The packets arrive at the inputs of the TWCs after the TWCs have set their outputs to the proper wavelengths. The control plane latency is measured as the time from when the first bit of the optical label arrives at the O/E converter until the TWC finishes tuning its output wavelength. Since the arbiter can distribute the TWC control signals in the same cycle for requests that arrive in the same arbitration cycle, the control latency is identical for any packet arriving at any input. Therefore, the length of the FDL is fixed and identical for each input.

3.3 Shared SDRAM Buffer
When the number of output receivers is less than N, as mentioned before, there can be contention for the same output port. As a result, we need a mechanism to store packets that could not gain access to their desired output port, so that they can retry later. In data center applications, a packet drop is more critical (unlike in telecom applications), as the resulting timeout and retransmission could produce an unacceptable latency for a computing application. In the past, researchers advocated the use of delay line buffers of different lengths to provide the necessary buffering [3, 9, 24]. Once again, this strategy is not appropriate for data center applications because it introduces unnecessary delay, which increases the latency. Because latency reduction is the main goal of DOS, delivery of the delayed packets should proceed immediately after the corresponding wavegroup becomes idle. Figure 3 shows the proposed loopback buffer system with an optical DEMUX and MUX, O/E and electrical-to-optical (E/O) converters, and the shared synchronous dynamic random access memory (SDRAM) buffer.

Figure 3. The loopback shared SDRAM buffer. O/E: Optical-to-Electrical converter; E/O: Electrical-to-Optical converter.

The shared SDRAM buffer receives packets that failed to receive a grant in the arbitration cycle. These delayed packets travel on different wavelengths to the same AWGR output port, are separated by the optical DEMUX, converted from the optical to the electrical domain, and stored in the SDRAM. A 1:N optical DEMUX with N parallel receivers, instead of the 1:k optical DEMUX with k receivers used in the OCA RX, is used here, so that delayed packets traveling on different wavelengths from different inputs come out of different outputs of the optical DEMUX and can be correctly received by separate receivers. The shared SDRAM buffer has multiple outputs, each with an E/O converter that generates a particular wavelength, enabling routing of packets from the AWGR input connected with the shared SDRAM buffer to one particular AWGR output. The shared SDRAM buffer can therefore send delayed packets to multiple AWGR output ports simultaneously by using different wavelengths. The shared SDRAM buffer controller provides signals to the shared buffer controller indicating whether the queues are empty. The shared buffer controller makes requests to the control plane based on the queue status. The shared buffer controller may generate multiple requests if more than one queue is non-empty, and it is capable of accepting multiple grants and initiating multiple concurrent transmissions.
At most one request is made for any AWGR output in any arbitration cycle, even if more than one packet for the same AWGR output is stored inside the shared buffer. As packets going to the same AWGR output use the same wavelength, they need to be transmitted serially. The reception of a delayed packet is overlapped with the operation of the shared SDRAM controller, as in a cut-through switch.

3.4 In-band Flow Control
Flow control is necessary in case the SDRAM buffer becomes full and cannot accept a new packet. We propose to use an in-band ON-OFF flow control [41] that introduces little overhead. The in-band flow control channel can be established by utilizing some unused bits in the packet header. When the occupancy of the SDRAM buffer exceeds a certain threshold, the corresponding bits in the header of a delayed packet that is about to be sent out are set. Thus, when the end node receives and reads the header of this packet, it temporarily suspends its transmission and waits for another packet indicating that the buffer is ready to receive delayed packets again. It is possible that no delayed packet is stored at the shared buffer for some outputs at the moment the buffer utilization exceeds the threshold. In order to convey the information to all end nodes when necessary, the buffer controller generates multiple copies of a delayed packet currently in the buffer and makes sure at least one packet, either a duplicate or a delayed packet, carrying the flow control information is sent to every node. An end node receiving such a duplicated packet suspends or resumes its transmission according to the flow control information and discards the packet afterwards. The worst case occurs when N-k inputs send packets to the shared buffer because all inputs send packets to the same output continuously, but when the high threshold is reached, all transmitters of the shared loopback buffer are busy with their current transmissions; then the control message is inserted in the header of the packet that will be sent next. In this worst case, the flow control latency can be equal to the packet transmission latency. Therefore, the shared buffer needs additional space to store another 2*(N-k) packets after the high threshold is reached.

3.5 Optical Channel Adapter
The Optical Channel Adapter provides the interface between DOS and its end nodes. This allows end nodes to use any protocol they like, such as InfiniBand, 10G Ethernet, or PCI Express. The entire host packet is treated as data by the DOS switch. The OCA extracts the relevant fields of the packet and creates an optical label that is transmitted on a wavelength different from the rest of the packet payload. The wavelength carrying the optical label is later separated from the wavelength carrying the packet payload by a filter, and the label is delivered to the control plane after O/E conversion. The optical label contains a preamble, in addition to the destination and packet length fields, to enable clock recovery and data synchronization at the control plane.

In the opposite direction, the OCA interfaces between an AWGR output port and a node receiver. As mentioned before, the OCA has a 1:k optical DEMUX and k parallel receivers to accommodate k concurrent packets traveling on k different wavelengths, one in each of the k distinct wavegroups. There is an electrical buffer connected to each receiver to store packets that have been received and are waiting to be delivered to the end node.

4. ARBITRATION IN DOS
Compared with a traditional N-by-N electronic switch, arbitration in DOS is simplified due to the following key differences. First, no packet is buffered at the input and all labels are processed just in time: no input generates repeated requests, except requests from the shared buffer controller. Second, since virtual output queues (VOQs) are not used, every input needs to make only one request and accepts the grant when notified, so a simple 2-phase arbiter is sufficient in DOS, instead of the 3-phase arbiter in a conventional VOQ-based switch. This also means that the multiple iterations (on average O(log2 N)) required in a traditional electronic switch are not necessary; a single iteration is sufficient. Third, and most important, due to the wavelength parallelism offered by the AWGR fabric and the cyclic nature of the AWGR operation, the number of inputs contending for a given output in the worst case is reduced by a factor of k, where k is the number of wavelengths allowed per AWGR output (which is also the size of the optical DEMUX at the output receiver). As noted before, if k = N (recall that N is the number of ports on the switch), then no arbitration is necessary, as there is no contention; but typically k < N, so there is contention, and only the inputs in the corresponding contention group need to be examined to grant a request. It is not necessary to look at all the inputs. As a result, the complexity of the arbiter decreases by a factor of k.

Let us explain this in more detail. We define m = N/k, where m is the number of wavelengths in a wavegroup, according to the previous definition. In DOS, for any particular output, all inputs using wavelengths within a wavegroup form a contention group, because contention can only happen within a wavegroup. Inputs using wavelengths belonging to different wavegroups never compete for the receiving resource, since wavelengths of different wavegroups come out of different outputs of the optical DEMUX. Based on the AWGR cyclic wavelength routing characteristic, m cyclically successive inputs correspond to the m cyclically successive wavelengths in a wavegroup for any output. Therefore, a contention group is formed by m successive inputs for any AWGR output. In summary, by adopting the 1:k optical DEMUX, all N inputs are automatically divided into k contention groups, and each contention group contains m successive input ports. Although the 1:k optical DEMUX divides the entire wavelength domain into k wavegroups with m successive wavelengths per wavegroup, there are m different possible partitions of the wavelength domain, so that the contention group is composed of different wavelengths for each of the m partitions. For example, for a 16-by-16 AWGR with a 1:4 optical DEMUX, a contention group can consist of wavelengths 1, 2, 3, and 4, or of wavelengths 2, 3, 4, and 5.
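The partition into wavegroups and contention groups can be written down directly. The sketch below again assumes the illustrative (i + j) mod N wavelength mapping and identical 1:k DEMUXs at every output (wavegroup index = wavelength index divided by m); the 16-port example uses 0-based integer port indices rather than the letter labels of Figure 4, so it illustrates the grouping rule rather than reproducing the figure.

```python
N, k = 16, 4          # ports, and receivers (wavegroups) per output
m = N // k            # wavelengths per wavegroup = contention-group size

def wavegroup(i: int, j: int) -> int:
    """Wavegroup at output j used by a packet from input i
    (assumed mapping: wavelength (i + j) % N, DEMUX groups of m wavelengths)."""
    return ((i + j) % N) // m

def contention_groups(j: int) -> dict:
    """Inputs that share a DEMUX output (and hence may contend) at output j."""
    groups = {}
    for i in range(N):
        groups.setdefault(wavegroup(i, j), []).append(i)
    return groups

print(contention_groups(3))   # k groups of m cyclically successive inputs

def grant(requests: set, group_inputs: list, pointer: int):
    """Round-robin grant inside one contention group: pick the first
    requesting input at or after the current pointer position."""
    size = len(group_inputs)
    for step in range(size):
        candidate = group_inputs[(pointer + step) % size]
        if candidate in requests:
            return candidate
    return None
```

Because each arbiter only ever examines the m = N/k inputs of one contention group (and a single round-robin pointer), rather than all N inputs, this captures the factor-of-k reduction in arbiter complexity described above.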
From a practical implementation point of view (to reduce cost), it is desirable that each OCA use identical optical DEMUXs with the same partition of the wavelength domain, instead of port-position-dependent optical DEMUXs. Recall the cyclic characteristic of the AWGR: the signal from one input port must use different wavelengths to go to different output ports, while signals from different input ports go to the same output port by using different wavelengths. Thus, if we want to use identical optical DEMUXs at each OCA for each AWGR output port, we need to keep an identical partition of the wavelength domain for each output port. That means we must group different input ports together to form the contention groups for different output ports. Although static contention-group partitioning may degrade the throughput, it helps reduce system complexity. If dynamic partitioning were adopted, not only would the control plane arbitration logic become more complicated, but wavelength converters would also be necessary at each AWGR output in order to keep all optical DEMUXs simple and identical.

Figure 4 shows an example of a 16-way optical switch with a 1:4 optical DEMUX used at each AWGR output. In total there are 4 wavegroups, and the size of a contention group is 4. To accommodate the loopback input from the shared SDRAM, a 17-by-17 AWGR is necessary; three contention groups then have four inputs and the last one has five inputs. For different output ports, the partition of the wavelength domain is kept the same so that identical optical DEMUXs can be used on all AWGR outputs. For output D, inputs B, C, D, and E form a contention group; inputs F, G, H, and I form another group; inputs J, K, L, M, and N form the third group; and inputs O, P, Q, and A form the fourth group. For another output G, each contention group is composed of different inputs, as shown in Figure 4.

Figure 4. Wavelength routing table and wavegroup partition for a 16-way DOS system, with port Q connected to the shared SDRAM buffer.

Figure 5 shows an example illustrating how arbitration works in the DOS system. Suppose that in an arbitration cycle we have the following requests: B→L, D→L, G→L, H→A, K→A, M→J, and N→J; and assume the following wavelength paths are active: C→P, F→K, I→L, L→A, and E→J. We also assume the states of the round-robin schedulers of output A are B, E, I, and O, respectively; the states of the round-robin schedulers of output J are A, D, K, and M; and the states of the round-robin schedulers of output L are C, J, K, and P.

The arbitration results for output L are the following: a grant is assigned to D and the request from B is denied, since the wavegroup they are competing for is idle and input D has a higher priority than input B in the current arbitration cycle; the request from G is also denied, although no other inputs from the same contention group make requests, because the ongoing transmission I→L uses a wavelength within the same wavegroup. The arbitration results for output A are the following: H gets the grant, since no one from the same contention group competes with it and the wavegroup is idle, while K is rejected, since the wavegroup it is associated with is not idle. The arbitration results for output J are the following: M gets the grant but N is rejected, according to the current order of their priorities.

Figure 5. An arbitration example based on the wavegroup partition shown in Figure 4.

5. A CYCLE-ACCURATE SIMULATION MODULE FOR DOS
To evaluate the performance of DOS, we used GARNET [42], a cycle-accurate network simulator. GARNET models a classic 5-stage, state-of-the-art virtual channel router. The router can have any number of input and output ports and consists of input buffers, route computation logic, a virtual channel allocator, a switch allocator, and a crossbar switch. In GARNET, the network topology and interconnect bandwidth are also configurable. The topology is a point-to-point specification of links, so irregular topologies can be evaluated as well. A head flit arriving at an input port is decoded and buffered in the buffer write (BW) stage. At the same time, the output port for the packet is computed by sending a request to the route computation (RC) unit. If the head flit is successfully allocated a virtual channel in the VC allocation (VA) stage, it arbitrates for the switch input and output ports in the switch allocation (SA) stage. If a switch input-output path is acquired, the head flit proceeds to the switch traversal (ST) stage and the link traversal (LT) stage to travel to the next node. We modified GARNET to model DOS, including a 3-cycle arbiter, a 5-cycle TLD setting and AWGR transit, aggregation, and output buffers. The RC+BW, VA, and SA stages in the classic router model are replaced by the 3-cycle arbiter. The ST stage is replaced by the TLD setting and AWGR transit stages. Input data channels are modeled as 10 Gbps links and optical label channels are modeled as 2 Gbps links. When two packets arrive and contend for the same output wavelength, one packet acquires the output and the other is stored in the buffer. When an output wavelength becomes available, packets stored in the buffer are given priority to acquire the output. At the OCA, each incoming link has a 10 Gbps data rate and the outgoing link has a k*10 Gbps data rate. If k packets arrive at the OCA, all k packets are streamed out of the OCA one after the other.

6. PERFORMANCE EVALUATION
We use the random traffic generator that GARNET provides at the transmitter of each peripheral node. The destination address is generated based on a uniform random distribution, while the inter-arrival time between two packets follows a Poisson distribution. Since the optical label is transmitted at a relatively lower rate compared to the data rate of the optical payload, the duration of the optical label must be kept no longer than the duration of the optical payload. Table 1 shows the minimum packet size required for different data rates, assuming the optical label is 5 bytes long and is transmitted at a rate of 2 Gbps.
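The entries of Table 1 follow from the constraint just stated: the label, sent at its lower rate, must finish no later than the payload. A minimal sketch of that arithmetic, assuming the 5-byte label at 2 Gbps mentioned above (the printed table lists the authoritative values; the numbers below are simply what the constraint implies):

```python
LABEL_BYTES = 5        # optical label length
LABEL_RATE_GBPS = 2.0  # optical label channel rate

def min_packet_bytes(data_rate_gbps: float) -> float:
    """Smallest packet whose transmission time at the payload data rate is
    no shorter than the label transmission time (5 bytes at 2 Gbps = 20 ns)."""
    label_time_ns = LABEL_BYTES * 8 / LABEL_RATE_GBPS   # 20 ns
    return data_rate_gbps * label_time_ns / 8            # bits -> bytes

for rate in (10, 40, 160):
    print(f"{rate} Gbps payload -> minimum packet {min_packet_bytes(rate):.0f} bytes")
# 10 Gbps -> 25 bytes, 40 Gbps -> 100 bytes, 160 Gbps -> 400 bytes (derived values)
```

The same 20 ns label duration is also what makes label reception take 40 cycles at the 2 GHz control plane clock quoted below.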
Increasing the optical label rate or decreasing the optical label length can both decrease the minimum packet size required at higher data rates. The control plane clock speed is 2 GHz, and it requires 40 cycles to receive the entire 5-byte label. We assume the loopback shared buffer can store 6*N packets, where N is the size of the switch. The high threshold of the on-off flow control is set to 4*N and the low threshold is set to 3*N. We also assume that the end node uses InfiniBand to connect with DOS and that the InfiniBand label size is 66 bytes (the minimum possible label size) excluding SOF and EOF. We simulate the cases where the message size (the InfiniBand payload length) is 128 bytes, 256 bytes, 1024 bytes, and 4096 bytes, respectively. Due to space limitations, sections 6.1 and 6.3 do not show results for message sizes of 128 bytes and 1024 bytes, and sections 6.2 and 6.4 do not show results for the 32-way DOS system, but the trends are the same.

Table 1. The Minimum Packet Size for Different Data Rates (columns: Data Rate in Gbps; Minimum Packet Size in bytes).

6.1 End-to-end Latency of the DOS System
Figure 6 shows the average end-to-end latency versus the network load for DOS under different network sizes and different packet sizes. The end-to-end latency is measured as the time from when the first bit of the packet leaves the source until the last bit of the packet is received at the destination. We exclude propagation latency from the total end-to-end latency, because the propagation latency is the same for all simulation cases. The simulation results show that for the contention-free case, where each OCA has a 1:N optical DEMUX and N receivers, DOS achieves a constant end-to-end latency regardless of the size of the network and the network load, because no contention happens in this case and all packets experience the same latency at the switch. In the contention-free case, the end-to-end latency is only a function of the packet size. The results also show that the end-to-end latency is no longer constant and increases with the network load when there are k < N receivers.

But the results clearly show that a significant end-to-end latency improvement is achieved just by increasing k from 2 to 4, because the instantaneous rate at each AWGR output is doubled. Increasing the network size also increases the end-to-end latency, but the influence is insignificant compared to that of changing k. Again, the packet size has a significant influence on the end-to-end latency. This means the latency introduced at the control plane is only a small portion of the entire end-to-end latency.

Figure 6. The end-to-end latency versus load for the DOS system with message sizes of (a) 256 bytes and (b) 4096 bytes.

Figure 7. The breakdown of the end-to-end latency.

Figure 7 shows the breakdown of the average end-to-end latency. It confirms that packet transmission latency is the major component of the end-to-end latency, especially when the packet size is large. The switching latency is the average time a packet spends at the central switch, including the possible delay a packet may experience at the loopback shared buffer. The switching latency increases as the network load increases, the network size increases, and the packet length increases.

6.2 Effective Bandwidth of DOS
We use the effective bandwidth of the entire system as the measure of the throughput performance of DOS. The effective bandwidth measures the payload rate instead of the data rate by excluding all overhead introduced by the network. Figure 8 shows the effective bandwidth that DOS achieves under different network sizes and different packet sizes. As DOS does not saturate even when the load is 90%, the effective bandwidth increases linearly as the network load increases. The bandwidth at 100% load approaches the line rate as the packet size increases and the ratio of overhead decreases. The size of the network does not have any impact on the effective bandwidth.

Figure 8. Effective bandwidth versus load for (a) a 128-way DOS and (b) a 512-way DOS.

6.3 Contention Probability
Figure 9 shows the contention probability of DOS under different switch radixes and different packet sizes. The contention probability is measured as the percentage of fresh incoming packets traveling to the loopback shared buffer after arbitration. The simulation results confirm that the contention probability decreases as the number of concurrent transmissions allowed at each AWGR output increases. The results also indicate that in a small network the contention probability is a little smaller than in a larger network, since the size of the contention group is smaller when the switch has a small radix.
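For intuition about why the contention probability stays modest, the toy Monte Carlo below estimates the fraction of fresh packets deflected to the loopback buffer in a single arbitration cycle. It is deliberately much simpler than the cycle-accurate simulator used for Figure 9: it assumes every input independently holds a new packet with probability equal to the load, uniform destinations, the illustrative (i + j) mod N wavelength mapping, and no ongoing transmissions, so its numbers should be read as a rough sanity check rather than a reproduction of the reported results.

```python
import random

def contention_estimate(N=128, k=4, load=0.9, cycles=20_000, seed=1):
    """Toy single-cycle model: fraction of fresh packets that lose
    arbitration within their (output, wavegroup) contention group."""
    m = N // k
    rng = random.Random(seed)
    sent = lost = 0
    for _ in range(cycles):
        groups = {}                                   # (output, wavegroup) -> contenders
        for i in range(N):
            if rng.random() < load:
                j = rng.randrange(N)                  # uniform destination
                wavelength = (i + j) % N              # assumed cyclic mapping
                key = (j, wavelength // m)
                groups.setdefault(key, []).append(i)
        for contenders in groups.values():
            sent += len(contenders)
            lost += len(contenders) - 1               # one grant per wavegroup per cycle
    return lost / sent

print(contention_estimate(N=128, k=4, load=0.9))
print(contention_estimate(N=128, k=8, load=0.9))      # larger k -> fewer deflections
```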

Figure 9. Contention probability of DOS under different configurations with message sizes of (a) 256 bytes and (b) 4096 bytes.

6.4 Buffer Occupancy
Next we answer the question: how big should the SDRAM buffer be? The buffer occupancy can be measured as a threshold such that 99% of the time the number of packets stored in the buffer is smaller than this threshold. The number of packets is measured at the time a packet arrives at the input of the DOS system. The buffer occupancy can also be measured for individual outputs. The results shown in Figure 10 indicate that DOS actually uses very little buffer even when the network load is heavy, as most of the time the average number of packets stored for each destination is less than 1. While delayed packets accumulate for certain destinations, no contention occurs at other outputs, as one input may only create contention for one output at a time. This indicates that the total memory size required by DOS is much smaller than the anticipated maximum buffer occupancy of any output multiplied by the number of nodes DOS connects.

Figure 10. Buffer occupancy measurement for (a) a 128-way DOS and (b) a 512-way DOS.

6.5 Comparison with the Electrical Switch
Figures 11 and 12 show the end-to-end latency and the effective bandwidth of DOS compared with a flattened butterfly network, under 32-node and 128-node configurations for a message size of 128 bytes. The average end-to-end latency of the flattened butterfly network increases much faster than that of the DOS system under moderate network load. The flattened butterfly network saturates more easily as the network size increases, while the latency of DOS is almost independent of the size of the network. The bandwidth-per-channel comparison shows that DOS can support heavy network loads of up to 90% (and possibly above 90%). The reason is that the AWGR-based switching fabric has a speedup of k due to wavelength parallelism, providing a much higher instantaneous output rate at each fabric output than its electrical counterpart.

Figure 11. The end-to-end latency versus load comparison.

Figure 12. The bandwidth versus load comparison.

In addition to uniform random traffic, we also study the DOS system performance under hot spot traffic and compare it with the electrical flattened butterfly network. In the hot spot simulation, there is a particular "hot spot" destination that receives more traffic than any other node. The parameter α [43] defines the ratio of total packets destined for this hot spot destination; the rest of the packets choose their destinations based on a uniform random distribution.

Figures 13 and 14 show the end-to-end latency and effective bandwidth of DOS compared with the flattened butterfly network under a 128-node configuration for a message size of 128 bytes. The results show that DOS can support hot spot traffic even under high load when α is below 1%, and DOS can still perform well under light to moderate load when α is 2%. On the other hand, the end-to-end latency of the flattened butterfly network increases much faster than that of the DOS system even with a small α, and its effective bandwidth also saturates quickly as the load increases. The results are consistent with the comparison made under uniform random traffic shown in Figures 11 and 12.

Figure 13. The end-to-end latency versus load comparison.

Figure 14. The total bandwidth versus load comparison.

6.6 Comparison with the OSMOSIS Demonstrator and Data Vortex
In this section, we compare the performance of the proposed DOS with the OSMOSIS demonstrator and Data Vortex. To compare with the OSMOSIS demonstrator, we change the data rate to 40 Gbps while keeping the optical label rate at 2 Gbps. The control plane works at 250 MHz to reflect the typical speed of an FPGA; therefore, it takes 5 cycles to receive the 5-byte label. We simulate the case where DOS has a radix of 64 and the message size is 256 bytes (the packet size is 324 bytes), so that the setting is comparable with that of the OSMOSIS demonstrator. Figure 15 shows the latency of DOS under the above-mentioned configuration. At a load of 90%, the latency of DOS is still less than 190 ns. In comparison, the minimum achievable latency of OSMOSIS is above 700 nanoseconds when considering only the data path delay and STX arbitration [4]. Figure 16 shows the effective bandwidth comparison. DOS achieves a slightly higher effective bandwidth than OSMOSIS, because the InfiniBand label is smaller than the total overhead of OSMOSIS.

Figure 15. The latency of DOS under configurations comparable to OSMOSIS.

Figure 16. Effective bandwidth comparison, DOS versus the OSMOSIS demonstrator.

Data Vortex uses 16 wavelengths for the payload, each at 10 Gbps. The packet length is set to a fixed period of 19.3 ns; accordingly, the payload length is 386 bytes. To compare with Data Vortex, we set the data rate of DOS to 160 Gbps, while keeping the label rate at 2 Gbps and the control plane clock frequency at 2 GHz. As the payload is 386 bytes, the entire packet is 454 bytes including the InfiniBand label, SOF, and EOF. We also simulate another configuration by changing the label rate from 2 Gbps to 10 Gbps while keeping the other parameters unchanged. Figure 17 shows the end-to-end latency of DOS. For comparison, Data Vortex achieves an average latency of 110 ns [6]. Figure 18 shows the effective bandwidth comparison. Notice that Data Vortex saturates before the uniform network load reaches 0.5 [7], since deflection routing is used. DOS outperforms both OSMOSIS and Data Vortex in terms of latency and effective bandwidth. The results shown in Figure 17 also indicate that the label rate affects the latency performance of DOS: a higher label rate reduces the end-to-end latency.
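The Data-Vortex-comparable configuration above can be sanity-checked with back-of-the-envelope arithmetic. The sketch below reproduces the byte counts quoted in the text, assuming 1-byte SOF and EOF delimiters (an assumption consistent with the 256-byte message / 324-byte packet figure used for the OSMOSIS comparison).

```python
# Aggregate payload rate matched to Data Vortex: 16 wavelengths x 10 Gbps.
wavelengths, per_wavelength_gbps = 16, 10
aggregate_gbps = wavelengths * per_wavelength_gbps        # 160 Gbps

slot_ns = 19.3                                            # fixed packet period
payload_bytes = aggregate_gbps * slot_ns / 8              # ~386 bytes

infiniband_label = 66                                     # bytes, excluding SOF/EOF
sof_eof = 2                                               # assumed 1 byte each
packet_bytes = payload_bytes + infiniband_label + sof_eof # ~454 bytes

print(aggregate_gbps, round(payload_bytes), round(packet_bytes))
```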

Figure 17. The end-to-end latency of DOS under the configuration comparable to Data Vortex.

Figure 18. Payload throughput comparison, DOS versus Data Vortex.

6.7 ON-OFF Pattern Traffic
We also study the performance of DOS under an ON-OFF traffic pattern, which is reported to be typical in data centers [13]. Our traffic pattern consists of ON periods of 2-4 ms (and OFF periods of similar duration), and the simulation ran for a total time of 25 ms. During the ON period, the network load is 90%. However, at 90% network load the flattened butterfly network was found to saturate, so we reduced the network load to 30% for the flattened butterfly and kept the load at 90% for the DOS system. Figures 19 and 20 show the average network latency and sustained bandwidth per channel for 128-node DOS and flattened butterfly networks with message sizes (payload sizes) of 128, 256, 1024, and 4096 bytes. Figure 19 shows that even at 90% load, the average end-to-end latency per packet in DOS is kept low compared with the latency incurred in a lightly loaded flattened butterfly network. Figure 20 shows that DOS can sustain a high network load and still provide high bandwidth efficiency.

Figure 19. The end-to-end latency comparison.

Figure 20. The effective bandwidth comparison.

7. CONCLUSIONS
We propose DOS, a scalable optical switch for data center networks. By exploiting the wavelength parallelism of the AWGR switching fabric, DOS effectively reduces the contention probability at every output. We present a comprehensive performance evaluation of DOS and a comparison with electrical switches, the OSMOSIS demonstrator, and Data Vortex. The simulation results show that DOS provides low-latency and high-throughput switching and does not saturate even at very high (approximately 90%) input load. In addition, the latency of DOS is almost independent of the number of input ports. Furthermore, we show that even with a few (k = 2 to 4) wavelengths per output port, the performance of DOS is quite impressive. As part of future work, we would like to construct a comprehensive power model for DOS. In addition, we will investigate how to scale the system to support tens of thousands of nodes by constructing a multistage DOS. A prototype implementation of a 4x4 demonstration is also underway.

8. ACKNOWLEDGMENTS
The authors acknowledge the support of the Department of Defense through the project Ultra-Low Latency Low-Power All-Optical Interconnection Switch for Peta-Scale Computing under contract #H C.

REFERENCES
[1] Al-Fares, M., et al., A Scalable, Commodity Data Center Network Architecture, in ACM SIGCOMM '08, August 2008.
[2] Greenberg, A., et al., VL2: a scalable and flexible data center network, in ACM SIGCOMM '09, August 2009.
[3] Yang, H. and Yoo, S. J. B., Combined input and output all-optical variable buffered switch architecture for future optical routers, IEEE Photonics Technology Letters, vol. 17, June.
[4] Minkenberg, C., et al., Designing a Crossbar Scheduler for HPC Applications, IEEE Micro, vol. 26.
[5] Hemenway, R., et al., Optical-packet-switched interconnect for supercomputer applications, Journal of Optical Networking.
[6] Liboiron-Ladouceur, O., et al., The Data Vortex Optical Packet Switched Interconnection Network, Journal of Lightwave Technology, vol. 26, July.
[7] Bergman, K., et al., Design, Demonstration and Evaluation of an All Optical Processor Memory-Interconnection Network for Petaflop Supercomputing, in ACS Interconnects Workshop.
[8] Liboiron-Ladouceur, O., et al., Physical Layer Scalability of WDM Optical Packet Interconnection Networks, Journal of Lightwave Technology, vol. 24.
[9] Yoo, S. J. B., Optical packet and burst switching technologies for the future photonic Internet, Journal of Lightwave Technology, vol. 24.
[10] Yoo, S. J. B., et al., Rapidly switching all-optical packet routing system with optical-label swapping incorporating tunable wavelength conversion and a uniform-loss cyclic


More information

Internet Traffic Characteristics. How to take care of the Bursty IP traffic in Optical Networks

Internet Traffic Characteristics. How to take care of the Bursty IP traffic in Optical Networks Internet Traffic Characteristics Bursty Internet Traffic Statistical aggregation of the bursty data leads to the efficiency of the Internet. Large Variation in Source Bandwidth 10BaseT (10Mb/s), 100BaseT(100Mb/s),

More information

Bridging and Switching Basics

Bridging and Switching Basics CHAPTER 4 Bridging and Switching Basics This chapter introduces the technologies employed in devices loosely referred to as bridges and switches. Topics summarized here include general link-layer device

More information

Basic Low Level Concepts

Basic Low Level Concepts Course Outline Basic Low Level Concepts Case Studies Operation through multiple switches: Topologies & Routing v Direct, indirect, regular, irregular Formal models and analysis for deadlock and livelock

More information

An optically transparent ultra high speed LAN-ring employing OTDM

An optically transparent ultra high speed LAN-ring employing OTDM An optically transparent ultra high speed LAN-ring employing OTDM K. Bengi, G. Remsak, H.R. van As Vienna University of Technology, Institute of Communication Networks Gusshausstrasse 25/388, A-1040 Vienna,

More information

Configuration of Offset Time in Optical Burst Switching Networks for Delay Sensitive Traffic

Configuration of Offset Time in Optical Burst Switching Networks for Delay Sensitive Traffic Configuration of Offset Time in Optical Burst Switching Networks for Delay Sensitive Traffic Anupam Soni and Yatindra Nath Singh anusoni@iitk.ac.in,ynsingh@iitk.ac.in. Abstract In Optical Burst Switching

More information

Resource Sharing for QoS in Agile All Photonic Networks

Resource Sharing for QoS in Agile All Photonic Networks Resource Sharing for QoS in Agile All Photonic Networks Anton Vinokurov, Xiao Liu, Lorne G Mason Department of Electrical and Computer Engineering, McGill University, Montreal, Canada, H3A 2A7 E-mail:

More information

Hybrid On-chip Data Networks. Gilbert Hendry Keren Bergman. Lightwave Research Lab. Columbia University

Hybrid On-chip Data Networks. Gilbert Hendry Keren Bergman. Lightwave Research Lab. Columbia University Hybrid On-chip Data Networks Gilbert Hendry Keren Bergman Lightwave Research Lab Columbia University Chip-Scale Interconnection Networks Chip multi-processors create need for high performance interconnects

More information

Performance Evaluation of k-ary Data Vortex Networks with Bufferless and Buffered Routing Nodes

Performance Evaluation of k-ary Data Vortex Networks with Bufferless and Buffered Routing Nodes Performance Evaluation of k-ary Data Vortex Networks with Bufferless and Buffered Routing Nodes Qimin Yang Harvey Mudd College, Engineering Department, Claremont, CA 91711, USA qimin_yang@hmc.edu ABSTRACT

More information

Phastlane: A Rapid Transit Optical Routing Network

Phastlane: A Rapid Transit Optical Routing Network Phastlane: A Rapid Transit Optical Routing Network Mark Cianchetti, Joseph Kerekes, and David Albonesi Computer Systems Laboratory Cornell University The Interconnect Bottleneck Future processors: tens

More information

Communication Networks

Communication Networks Communication Networks Chapter 3 Multiplexing Frequency Division Multiplexing (FDM) Useful bandwidth of medium exceeds required bandwidth of channel Each signal is modulated to a different carrier frequency

More information

Delayed reservation decision in optical burst switching networks with optical buffers

Delayed reservation decision in optical burst switching networks with optical buffers Delayed reservation decision in optical burst switching networks with optical buffers G.M. Li *, Victor O.K. Li + *School of Information Engineering SHANDONG University at WEIHAI, China + Department of

More information

Lecture: Interconnection Networks

Lecture: Interconnection Networks Lecture: Interconnection Networks Topics: Router microarchitecture, topologies Final exam next Tuesday: same rules as the first midterm 1 Packets/Flits A message is broken into multiple packets (each packet

More information

Lecture 12: Interconnection Networks. Topics: communication latency, centralized and decentralized switches, routing, deadlocks (Appendix E)

Lecture 12: Interconnection Networks. Topics: communication latency, centralized and decentralized switches, routing, deadlocks (Appendix E) Lecture 12: Interconnection Networks Topics: communication latency, centralized and decentralized switches, routing, deadlocks (Appendix E) 1 Topologies Internet topologies are not very regular they grew

More information

CHAPTER 3 EFFECTIVE ADMISSION CONTROL MECHANISM IN WIRELESS MESH NETWORKS

CHAPTER 3 EFFECTIVE ADMISSION CONTROL MECHANISM IN WIRELESS MESH NETWORKS 28 CHAPTER 3 EFFECTIVE ADMISSION CONTROL MECHANISM IN WIRELESS MESH NETWORKS Introduction Measurement-based scheme, that constantly monitors the network, will incorporate the current network state in the

More information

Lecture (04 & 05) Packet switching & Frame Relay techniques Dr. Ahmed ElShafee

Lecture (04 & 05) Packet switching & Frame Relay techniques Dr. Ahmed ElShafee Agenda Lecture (04 & 05) Packet switching & Frame Relay techniques Dr. Ahmed ElShafee Packet switching technique Packet switching protocol layers (X.25) Frame Relay ١ Dr. Ahmed ElShafee, ACU Fall 2011,

More information

Lecture (04 & 05) Packet switching & Frame Relay techniques

Lecture (04 & 05) Packet switching & Frame Relay techniques Lecture (04 & 05) Packet switching & Frame Relay techniques Dr. Ahmed ElShafee ١ Dr. Ahmed ElShafee, ACU Fall 2011, Networks I Agenda Packet switching technique Packet switching protocol layers (X.25)

More information

Optical Interconnection Networks in Data Centers: Recent Trends and Future Challenges

Optical Interconnection Networks in Data Centers: Recent Trends and Future Challenges Optical Interconnection Networks in Data Centers: Recent Trends and Future Challenges Christoforos Kachris, Konstantinos Kanonakis, Ioannis Tomkos Athens Information Technology, Athens, Greece Email: kachris,

More information

Retransmission schemes for Optical Burst Switching over star networks

Retransmission schemes for Optical Burst Switching over star networks Retransmission schemes for Optical Burst Switching over star networks Anna Agustí-Torra, Gregor v. Bochmann*, Cristina Cervelló-Pastor Escola Politècnica Superior de Castelldefels, Universitat Politècnica

More information

Thomas Moscibroda Microsoft Research. Onur Mutlu CMU

Thomas Moscibroda Microsoft Research. Onur Mutlu CMU Thomas Moscibroda Microsoft Research Onur Mutlu CMU CPU+L1 CPU+L1 CPU+L1 CPU+L1 Multi-core Chip Cache -Bank Cache -Bank Cache -Bank Cache -Bank CPU+L1 CPU+L1 CPU+L1 CPU+L1 Accelerator, etc Cache -Bank

More information

Network-on-Chip Architecture

Network-on-Chip Architecture Multiple Processor Systems(CMPE-655) Network-on-Chip Architecture Performance aspect and Firefly network architecture By Siva Shankar Chandrasekaran and SreeGowri Shankar Agenda (Enhancing performance)

More information

The GLIMPS Terabit Packet Switching Engine

The GLIMPS Terabit Packet Switching Engine February 2002 The GLIMPS Terabit Packet Switching Engine I. Elhanany, O. Beeri Terabit Packet Switching Challenges The ever-growing demand for additional bandwidth reflects on the increasing capacity requirements

More information

A Dynamic NOC Arbitration Technique using Combination of VCT and XY Routing

A Dynamic NOC Arbitration Technique using Combination of VCT and XY Routing 727 A Dynamic NOC Arbitration Technique using Combination of VCT and XY Routing 1 Bharati B. Sayankar, 2 Pankaj Agrawal 1 Electronics Department, Rashtrasant Tukdoji Maharaj Nagpur University, G.H. Raisoni

More information

Unavoidable Constraints and Collision Avoidance Techniques in Performance Evaluation of Asynchronous Transmission WDMA Protocols

Unavoidable Constraints and Collision Avoidance Techniques in Performance Evaluation of Asynchronous Transmission WDMA Protocols 1th WEA International Conference on COMMUICATIO, Heraklion, reece, July 3-5, 8 Unavoidable Constraints and Collision Avoidance Techniques in Performance Evaluation of Asynchronous Transmission WDMA Protocols

More information

Novel Passive Optical Switching Using Shared Electrical Buffer and Wavelength Converter

Novel Passive Optical Switching Using Shared Electrical Buffer and Wavelength Converter Novel Passive Optical Switching Using Shared Electrical Buffer and Wavelength Converter Ji-Hwan Kim 1, JungYul Choi 2, Jinsung Im 1, Minho Kang 1, and J.-K. Kevin Rhee 1 * 1 Optical Internet Research Center,

More information

OFFH-CDM ALL-OPTICAL NETWORK

OFFH-CDM ALL-OPTICAL NETWORK Patent Title: OFFH-CDM ALL-OPTICAL NETWORK Inventor: FOULI K., MENIF M., LADDADA R., AND FATHALLAH H. Status: US PATENT PENDING, APRIL 2008 Reference Number: 000819-0100 1 US Patent Pending: 000819-0100

More information

BROADBAND AND HIGH SPEED NETWORKS

BROADBAND AND HIGH SPEED NETWORKS BROADBAND AND HIGH SPEED NETWORKS ATM SWITCHING ATM is a connection-oriented transport concept An end-to-end connection (virtual channel) established prior to transfer of cells Signaling used for connection

More information

Enabling High Performance Data Centre Solutions and Cloud Services Through Novel Optical DC Architectures. Dimitra Simeonidou

Enabling High Performance Data Centre Solutions and Cloud Services Through Novel Optical DC Architectures. Dimitra Simeonidou Enabling High Performance Data Centre Solutions and Cloud Services Through Novel Optical DC Architectures Dimitra Simeonidou Challenges and Drivers for DC Evolution Data centres are growing in size and

More information

Routing Algorithm. How do I know where a packet should go? Topology does NOT determine routing (e.g., many paths through torus)

Routing Algorithm. How do I know where a packet should go? Topology does NOT determine routing (e.g., many paths through torus) Routing Algorithm How do I know where a packet should go? Topology does NOT determine routing (e.g., many paths through torus) Many routing algorithms exist 1) Arithmetic 2) Source-based 3) Table lookup

More information

Lecture 3: Topology - II

Lecture 3: Topology - II ECE 8823 A / CS 8803 - ICN Interconnection Networks Spring 2017 http://tusharkrishna.ece.gatech.edu/teaching/icn_s17/ Lecture 3: Topology - II Tushar Krishna Assistant Professor School of Electrical and

More information

NoC Test-Chip Project: Working Document

NoC Test-Chip Project: Working Document NoC Test-Chip Project: Working Document Michele Petracca, Omar Ahmad, Young Jin Yoon, Frank Zovko, Luca Carloni and Kenneth Shepard I. INTRODUCTION This document describes the low-power high-performance

More information

Lambda Networks DWDM. Vara Varavithya Department of Electrical Engineering King Mongkut s Institute of Technology North Bangkok

Lambda Networks DWDM. Vara Varavithya Department of Electrical Engineering King Mongkut s Institute of Technology North Bangkok Lambda Networks DWDM Vara Varavithya Department of Electrical Engineering King Mongkut s Institute of Technology North Bangkok vara@kmitnb.ac.th Treads in Communication Information: High Speed, Anywhere,

More information

Design of Optical Burst Switches based on Dual Shuffle-exchange Network and Deflection Routing

Design of Optical Burst Switches based on Dual Shuffle-exchange Network and Deflection Routing Design of Optical Burst Switches based on Dual Shuffle-exchange Network and Deflection Routing Man-Ting Choy Department of Information Engineering, The Chinese University of Hong Kong mtchoy1@ie.cuhk.edu.hk

More information

Chapter 4 NETWORK HARDWARE

Chapter 4 NETWORK HARDWARE Chapter 4 NETWORK HARDWARE 1 Network Devices As Organizations grow, so do their networks Growth in number of users Geographical Growth Network Devices : Are products used to expand or connect networks.

More information

Switch Datapath in the Stanford Phictious Optical Router (SPOR)

Switch Datapath in the Stanford Phictious Optical Router (SPOR) Switch Datapath in the Stanford Phictious Optical Router (SPOR) H. Volkan Demir, Micah Yairi, Vijit Sabnis Arpan Shah, Azita Emami, Hossein Kakavand, Kyoungsik Yu, Paulina Kuo, Uma Srinivasan Optics and

More information

Packetisation in Optical Packet Switch Fabrics using adaptive timeout values

Packetisation in Optical Packet Switch Fabrics using adaptive timeout values Packetisation in Optical Packet Switch Fabrics using adaptive timeout values Brian B. Mortensen COM DTU Technical University of Denmark DK-28 Kgs. Lyngby Email: bbm@com.dtu.dk Abstract Hybrid electro-optical

More information

Module 17: "Interconnection Networks" Lecture 37: "Introduction to Routers" Interconnection Networks. Fundamentals. Latency and bandwidth

Module 17: Interconnection Networks Lecture 37: Introduction to Routers Interconnection Networks. Fundamentals. Latency and bandwidth Interconnection Networks Fundamentals Latency and bandwidth Router architecture Coherence protocol and routing [From Chapter 10 of Culler, Singh, Gupta] file:///e /parallel_com_arch/lecture37/37_1.htm[6/13/2012

More information

Efficient Queuing Architecture for a Buffered Crossbar Switch

Efficient Queuing Architecture for a Buffered Crossbar Switch Proceedings of the 11th WSEAS International Conference on COMMUNICATIONS, Agios Nikolaos, Crete Island, Greece, July 26-28, 2007 95 Efficient Queuing Architecture for a Buffered Crossbar Switch MICHAEL

More information

Internetworking Part 1

Internetworking Part 1 CMPE 344 Computer Networks Spring 2012 Internetworking Part 1 Reading: Peterson and Davie, 3.1 22/03/2012 1 Not all networks are directly connected Limit to how many hosts can be attached Point-to-point:

More information

Routers Technologies & Evolution for High-Speed Networks

Routers Technologies & Evolution for High-Speed Networks Routers Technologies & Evolution for High-Speed Networks C. Pham Université de Pau et des Pays de l Adour http://www.univ-pau.fr/~cpham Congduc.Pham@univ-pau.fr Router Evolution slides from Nick McKeown,

More information

1/29/2008. From Signals to Packets. Lecture 6 Datalink Framing, Switching. Datalink Functions. Datalink Lectures. Character and Bit Stuffing.

1/29/2008. From Signals to Packets. Lecture 6 Datalink Framing, Switching. Datalink Functions. Datalink Lectures. Character and Bit Stuffing. /9/008 From Signals to Packets Lecture Datalink Framing, Switching Peter Steenkiste Departments of Computer Science and Electrical and Computer Engineering Carnegie Mellon University Analog Signal Digital

More information

Quest for High-Performance Bufferless NoCs with Single-Cycle Express Paths and Self-Learning Throttling

Quest for High-Performance Bufferless NoCs with Single-Cycle Express Paths and Self-Learning Throttling Quest for High-Performance Bufferless NoCs with Single-Cycle Express Paths and Self-Learning Throttling Bhavya K. Daya, Li-Shiuan Peh, Anantha P. Chandrakasan Dept. of Electrical Engineering and Computer

More information

Lecture 26: Interconnects. James C. Hoe Department of ECE Carnegie Mellon University

Lecture 26: Interconnects. James C. Hoe Department of ECE Carnegie Mellon University 18 447 Lecture 26: Interconnects James C. Hoe Department of ECE Carnegie Mellon University 18 447 S18 L26 S1, James C. Hoe, CMU/ECE/CALCM, 2018 Housekeeping Your goal today get an overview of parallel

More information

WHITE PAPER. Latency & Jitter WHITE PAPER OVERVIEW

WHITE PAPER. Latency & Jitter WHITE PAPER OVERVIEW Latency & Jitter In Networking Performance Evaluation OVERVIEW Latency and jitter are two key measurement parameters when evaluating and benchmarking the performance of a network, system or device. Different

More information

AllWave FIBER BENEFITS EXECUTIVE SUMMARY. Metropolitan Interoffice Transport Networks

AllWave FIBER BENEFITS EXECUTIVE SUMMARY. Metropolitan Interoffice Transport Networks AllWave FIBER BENEFITS EXECUTIVE SUMMARY Metropolitan Interoffice Transport Networks OFS studies and other industry studies show that the most economic means of handling the expected exponential growth

More information

CMSC 611: Advanced. Interconnection Networks

CMSC 611: Advanced. Interconnection Networks CMSC 611: Advanced Computer Architecture Interconnection Networks Interconnection Networks Massively parallel processor networks (MPP) Thousands of nodes Short distance (

More information

Lecture 13: Interconnection Networks. Topics: lots of background, recent innovations for power and performance

Lecture 13: Interconnection Networks. Topics: lots of background, recent innovations for power and performance Lecture 13: Interconnection Networks Topics: lots of background, recent innovations for power and performance 1 Interconnection Networks Recall: fully connected network, arrays/rings, meshes/tori, trees,

More information

Local Area Network Overview

Local Area Network Overview Local Area Network Overview Chapter 15 CS420/520 Axel Krings Page 1 LAN Applications (1) Personal computer LANs Low cost Limited data rate Back end networks Interconnecting large systems (mainframes and

More information

Routing Algorithms. Review

Routing Algorithms. Review Routing Algorithms Today s topics: Deterministic, Oblivious Adaptive, & Adaptive models Problems: efficiency livelock deadlock 1 CS6810 Review Network properties are a combination topology topology dependent

More information

Sharing Tunable Wavelength Converters in AWG-based IP Optical Switching Nodes

Sharing Tunable Wavelength Converters in AWG-based IP Optical Switching Nodes Sharing Tunable Wavelength Converters in AWG-based IP Optical Switching Nodes Achille Pattavina, Marica Rebughini, Antonio Sipone Dept. of Electronics and Information, Politecnico di Milano, Italy {pattavina}@elet.polimi.it

More information

Cisco Series Internet Router Architecture: Packet Switching

Cisco Series Internet Router Architecture: Packet Switching Cisco 12000 Series Internet Router Architecture: Packet Switching Document ID: 47320 Contents Introduction Prerequisites Requirements Components Used Conventions Background Information Packet Switching:

More information

Architectures and Performance of AWG-based. Optical Switching Nodes for IP Networks

Architectures and Performance of AWG-based. Optical Switching Nodes for IP Networks Architectures and Performance of AWG-based 1 Optical Switching Nodes for IP Networks Stefano Bregni, IEEE Senior Member, Achille Pattavina, IEEE Senior Member, Gianluca Vegetti Dept. of Electronics and

More information

6.9. Communicating to the Outside World: Cluster Networking

6.9. Communicating to the Outside World: Cluster Networking 6.9 Communicating to the Outside World: Cluster Networking This online section describes the networking hardware and software used to connect the nodes of cluster together. As there are whole books and

More information

Enhancing Bandwidth Utilization and QoS in Optical Burst Switched High-Speed Network

Enhancing Bandwidth Utilization and QoS in Optical Burst Switched High-Speed Network 91 Enhancing Bandwidth Utilization and QoS in Optical Burst Switched High-Speed Network Amit Kumar Garg and R S Kaler School of Electronics and Communication Eng, Shri Mata Vaishno Devi University (J&K),

More information

SAMBA-BUS: A HIGH PERFORMANCE BUS ARCHITECTURE FOR SYSTEM-ON-CHIPS Λ. Ruibing Lu and Cheng-Kok Koh

SAMBA-BUS: A HIGH PERFORMANCE BUS ARCHITECTURE FOR SYSTEM-ON-CHIPS Λ. Ruibing Lu and Cheng-Kok Koh BUS: A HIGH PERFORMANCE BUS ARCHITECTURE FOR SYSTEM-ON-CHIPS Λ Ruibing Lu and Cheng-Kok Koh School of Electrical and Computer Engineering Purdue University, West Lafayette, IN 797- flur,chengkokg@ecn.purdue.edu

More information

Name of Course : E1-E2 CFA. Chapter 15. Topic : DWDM

Name of Course : E1-E2 CFA. Chapter 15. Topic : DWDM Name of Course : E1-E2 CFA Chapter 15 Topic : DWDM Date of Creation : 28.03.2011 DWDM 1.0 Introduction The emergence of DWDM is one of the most recent and important phenomena in the development of fiber

More information

Topics for Today. Network Layer. Readings. Introduction Addressing Address Resolution. Sections 5.1,

Topics for Today. Network Layer. Readings. Introduction Addressing Address Resolution. Sections 5.1, Topics for Today Network Layer Introduction Addressing Address Resolution Readings Sections 5.1, 5.6.1-5.6.2 1 Network Layer: Introduction A network-wide concern! Transport layer Between two end hosts

More information

Module 15: Network Structures

Module 15: Network Structures Module 15: Network Structures Background Topology Network Types Communication Communication Protocol Robustness Design Strategies 15.1 A Distributed System 15.2 Motivation Resource sharing sharing and

More information

Chapter 15 Local Area Network Overview

Chapter 15 Local Area Network Overview Chapter 15 Local Area Network Overview LAN Topologies Bus and Tree Bus: stations attach through tap to bus full duplex allows transmission and reception transmission propagates throughout medium heard

More information

Fault Tolerant and Secure Architectures for On Chip Networks With Emerging Interconnect Technologies. Mohsin Y Ahmed Conlan Wesson

Fault Tolerant and Secure Architectures for On Chip Networks With Emerging Interconnect Technologies. Mohsin Y Ahmed Conlan Wesson Fault Tolerant and Secure Architectures for On Chip Networks With Emerging Interconnect Technologies Mohsin Y Ahmed Conlan Wesson Overview NoC: Future generation of many core processor on a single chip

More information

Optical Burst Switching (OBS): The Dawn of A New Era in Optical Networking

Optical Burst Switching (OBS): The Dawn of A New Era in Optical Networking Optical Burst Switching (OBS): The Dawn of A New Era in Optical Networking Presented by Yang Chen (LANDER) Yang Chen (Lander) 1 Outline Historical Review Burst reservation Burst assembly OBS node Towards

More information

On the Performance of a Large-Scale Optical Packet Switch Under Realistic Data Center Traffic

On the Performance of a Large-Scale Optical Packet Switch Under Realistic Data Center Traffic On the Performance of a Large-Scale Optical Packet Switch Under Realistic Data Center Traffic Speaker: Lin Wang Research Advisor: Biswanath Mukherjee Switch architectures and control Traffic generation

More information

Multiconfiguration Multihop Protocols: A New Class of Protocols for Packet-Switched WDM Optical Networks

Multiconfiguration Multihop Protocols: A New Class of Protocols for Packet-Switched WDM Optical Networks Multiconfiguration Multihop Protocols: A New Class of Protocols for Packet-Switched WDM Optical Networks Jason P. Jue, Member, IEEE, and Biswanath Mukherjee, Member, IEEE Abstract Wavelength-division multiplexing

More information

Developing flexible WDM networks using wavelength tuneable components

Developing flexible WDM networks using wavelength tuneable components Developing flexible WDM networks using wavelength tuneable components A. Dantcha 1, L.P. Barry 1, J. Murphy 1, T. Mullane 2 and D. McDonald 2 (1) Research Institute for Network and Communications Engineering,

More information

What Is Congestion? Effects of Congestion. Interaction of Queues. Chapter 12 Congestion in Data Networks. Effect of Congestion Control

What Is Congestion? Effects of Congestion. Interaction of Queues. Chapter 12 Congestion in Data Networks. Effect of Congestion Control Chapter 12 Congestion in Data Networks Effect of Congestion Control Ideal Performance Practical Performance Congestion Control Mechanisms Backpressure Choke Packet Implicit Congestion Signaling Explicit

More information

Novel flat datacenter network architecture based on scalable and flow-controlled optical switch system

Novel flat datacenter network architecture based on scalable and flow-controlled optical switch system Novel flat datacenter network architecture based on scalable and flow-controlled optical switch system Wang Miao, * Jun Luo, Stefano Di Lucente, Harm Dorren, and Nicola Calabretta COBRA Research Institute,

More information

POS on ONS Ethernet Cards

POS on ONS Ethernet Cards 20 CHAPTER This chapter describes packet-over-sonet/sdh (POS) and its implementation on ONS Ethernet cards. This chapter contains the following major sections: POS Overview, page 20-1 POS Interoperability,

More information

Media Access Control (MAC) Sub-layer and Ethernet

Media Access Control (MAC) Sub-layer and Ethernet Media Access Control (MAC) Sub-layer and Ethernet Dr. Sanjay P. Ahuja, Ph.D. Fidelity National Financial Distinguished Professor of CIS School of Computing, UNF MAC Sub-layer The MAC sub-layer is a sub-layer

More information

Adaptive Data Burst Assembly in OBS Networks

Adaptive Data Burst Assembly in OBS Networks Adaptive Data Burst Assembly in OBS Networks Mohamed A.Dawood 1, Mohamed Mahmoud 1, Moustafa H.Aly 1,2 1 Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt 2 OSA Member muhamed.dawood@aast.edu,

More information

A NOVEL DECENTRALIZED ETHERNET-BASED PASSIVE OPTICAL NETWORK ARCHITECTURE

A NOVEL DECENTRALIZED ETHERNET-BASED PASSIVE OPTICAL NETWORK ARCHITECTURE A NOVEL DECENTRALIZED ETHERNET-BASED PASSIVE OPTICAL NETWORK ARCHITECTURE A. Hadjiantonis, S. Sherif, A. Khalil, T. Rahman, G. Ellinas, M. F. Arend, and M. A. Ali, Department of Electrical Engineering,

More information

Worst-case Ethernet Network Latency for Shaped Sources

Worst-case Ethernet Network Latency for Shaped Sources Worst-case Ethernet Network Latency for Shaped Sources Max Azarov, SMSC 7th October 2005 Contents For 802.3 ResE study group 1 Worst-case latency theorem 1 1.1 Assumptions.............................

More information

What Is Congestion? Computer Networks. Ideal Network Utilization. Interaction of Queues

What Is Congestion? Computer Networks. Ideal Network Utilization. Interaction of Queues 168 430 Computer Networks Chapter 13 Congestion in Data Networks What Is Congestion? Congestion occurs when the number of packets being transmitted through the network approaches the packet handling capacity

More information

ECE 697J Advanced Topics in Computer Networks

ECE 697J Advanced Topics in Computer Networks ECE 697J Advanced Topics in Computer Networks Switching Fabrics 10/02/03 Tilman Wolf 1 Router Data Path Last class: Single CPU is not fast enough for processing packets Multiple advanced processors in

More information

REDUCING CAPEX AND OPEX THROUGH CONVERGED OPTICAL INFRASTRUCTURES. Duane Webber Cisco Systems, Inc.

REDUCING CAPEX AND OPEX THROUGH CONVERGED OPTICAL INFRASTRUCTURES. Duane Webber Cisco Systems, Inc. REDUCING CAPEX AND OPEX THROUGH CONVERGED OPTICAL INFRASTRUCTURES Duane Webber Cisco Systems, Inc. Abstract Today's Cable Operator optical infrastructure designs are becoming more important as customers

More information

INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY

INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY A PATH FOR HORIZING YOUR INNOVATIVE WORK REVIEW ON CAPACITY IMPROVEMENT TECHNIQUE FOR OPTICAL SWITCHING NETWORKS SONALI

More information

Achieving Lightweight Multicast in Asynchronous Networks-on-Chip Using Local Speculation

Achieving Lightweight Multicast in Asynchronous Networks-on-Chip Using Local Speculation Achieving Lightweight Multicast in Asynchronous Networks-on-Chip Using Local Speculation Kshitij Bhardwaj Dept. of Computer Science Columbia University Steven M. Nowick 2016 ACM/IEEE Design Automation

More information

SIMULATION ISSUES OF OPTICAL PACKET SWITCHING RING NETWORKS

SIMULATION ISSUES OF OPTICAL PACKET SWITCHING RING NETWORKS SIMULATION ISSUES OF OPTICAL PACKET SWITCHING RING NETWORKS Marko Lackovic and Cristian Bungarzeanu EPFL-STI-ITOP-TCOM CH-1015 Lausanne, Switzerland {marko.lackovic;cristian.bungarzeanu}@epfl.ch KEYWORDS

More information

BROADBAND AND HIGH SPEED NETWORKS

BROADBAND AND HIGH SPEED NETWORKS BROADBAND AND HIGH SPEED NETWORKS SWITCHING A switch is a mechanism that allows us to interconnect links to form a larger network. A switch is a multi-input, multi-output device, which transfers packets

More information

Prioritized Shufflenet Routing in TOAD based 2X2 OTDM Router.

Prioritized Shufflenet Routing in TOAD based 2X2 OTDM Router. Prioritized Shufflenet Routing in TOAD based 2X2 OTDM Router. Tekiner Firat, Ghassemlooy Zabih, Thompson Mark, Alkhayatt Samir Optical Communications Research Group, School of Engineering, Sheffield Hallam

More information