Achieving Energy-Proportionality in Fat-tree DCNs


Achieving Energy-Proportionality in Fat-tree DCNs

Qing Yi, Department of Computer Science, Portland State University, Portland, Oregon
Suresh Singh, Department of Computer Science, Portland State University, Portland, Oregon

Abstract—Fat-tree is a popular topology for data center networks (DCNs). Many recent papers propose new energy-efficient techniques that reduce the power consumption of fat-tree DCNs by dynamically powering off idle network switches and links. However, fat-tree DCNs adopting those techniques still consume a significant amount of energy even when lightly loaded. In this paper, we examine the approach of merging traffic in a fat-tree DCN in order to consolidate the traffic into fewer switches. We first derive an analytical model for the lower bound of active switches required in a fat-tree DCN as a function of load, assuming traffic merging. Then we formulate a power optimization model and propose a scalable heuristic algorithm to find the minimum subset of network switches and links that satisfies a variety of traffic patterns and loads in actual DCNs. Simulation results show that our approach can substantially reduce the number of active switches and lower the energy consumption of a fat-tree DCN when the load is light. We show that our solution achieves near energy proportionality.

I. INTRODUCTION

With the exponential growth of global Internet traffic in recent years, data center networks (DCNs) have increasingly adopted architectures capable of providing high communication bandwidth in order to accommodate heavy traffic between servers. However, in most data centers, the traffic load varies over time. To meet peak traffic requirements, most data centers are designed with redundant capacity, leaving large parts of the network underutilized for significant lengths of time.
Consequently, networking equipment becomes idle, but it still consumes energy and is a major contributor to the overall energy cost of a data center. In fact, the network normally consumes a significant fraction of the total electricity a data center uses []. During periods of low load, this fraction grows even larger [7], because servers can be turned to lower power states automatically while network switches cannot. Many researchers have studied the DCN energy consumption problem. One notable approach is ElasticTree [11], in which the researchers propose to force traffic in a network to the leftmost switches in a fat-tree topology in order to power off unused switches. In contrast to ElasticTree, [17] replaces larger switches with many smaller ones to enable better packing of traffic into fewer switches. A number of other approaches have been studied, including those focusing on link-rate adaptation. The approach in [17] is static and thus does not adapt to load or different loading patterns. Link-rate adaptation is very effective at minimizing link energy consumption but, given that the lion's share of energy is used by the switch itself, this technique is not very effective overall. The ElasticTree approach [11] is the most energy-efficient of all but is still sub-optimal because, as we show, for many light loading patterns a large number of switches still need to remain active.

Fig. 1. Fat-tree model.

We use the k-ary fat-tree shown in Fig. 1 as an example to illustrate the deficiency of the prior approaches and explain the motivation of our contribution. The k-ary fat-tree interconnects k^3/4 servers using three layers of switches. The edge-layer switches and aggregation-layer switches are organized in k pods. Each pod includes k/2 edge switches and k/2 aggregation switches, and each edge switch connects to k/2 servers. There are k^2/4 core switches, and each core switch has k links connecting it to the k pods.
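For concreteness, these component counts can be computed mechanically. The helper below is our own illustrative sketch of the standard k-ary fat-tree arithmetic just described, not code from the paper:

```python
def fat_tree_sizes(k: int) -> dict:
    """Component counts of a k-ary fat-tree (k must be even)."""
    assert k % 2 == 0, "k must be even"
    return {
        "pods": k,
        "edge_switches": k * (k // 2),         # k/2 edge switches per pod
        "aggregation_switches": k * (k // 2),  # k/2 aggregation switches per pod
        "core_switches": (k // 2) ** 2,        # k^2/4 core switches
        "servers": k ** 3 // 4,                # k/2 servers per edge switch
    }

print(fat_tree_sizes(4))
```

For example, a k = 4 fat-tree has 16 servers, 8 edge, 8 aggregation, and 4 core switches.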
In previous approaches, all the edge switches are always powered on: they are connected to servers and have to remain active to forward traffic upward and downward at all times. Even if a server has very little traffic going to the connecting edge switch, the switch is fully powered on although very lightly loaded. Our contribution in this paper is to enable powering off more switches and links by consolidating traffic at the edge and aggregation layers.

A. Our Approach: Merging

Consider an m-port switch connected to m servers. Assuming each server offers a load that is a fraction λ of link capacity, the total

traffic to this switch from the servers is λm. Normally the m switch interfaces remain active even when λ is small. If the traffic can be consolidated, we need at most ⌈λm⌉ interfaces to be active. When the load is small, more switch interfaces can be powered off. In other words, there are potential power savings if traffic can be merged. In previous papers [14], [18], we provided a hardware design of a device called a merge network. Rather than repeat that discussion here, we provide a functional model of what such a network does and then use it in the remainder of this paper. We illustrate an m × m merge network in Fig. 2. The merge network has m connections to the m servers and m connections to the m switch interfaces. A merge network has the property that it pushes all traffic from servers to the leftmost interface of the switch. If that interface is busy, then the traffic is forced to the next interface, and so on. This traffic merging behavior ensures that several switch interfaces can be put into a low power mode without compromising connectivity. Of course, because we are breaking the one-to-one association of a switch interface with a server interface, several layer-2 protocols will break. In the previous papers mentioned above, we addressed this issue as well and showed how the merge network and certain additional switch software can overcome this limitation. Some additional observations about the merge network are as follows: 1) The merge network is a fully analog device with no transceivers and, as a result, its power consumption is below one watt. 2) Merge networks do not cause any packet loss or increase in delay. 3) Consider the uplink from the servers to the merge network. All traffic coming into the merge network is output on the leftmost q ≤ m links connected to the q leftmost interfaces of the switch, where q = ⌈λm⌉ (assuming a normalized unit capacity for links).
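Functionally, the uplink consolidation reduces to the single expression q = ⌈λm⌉. A tiny sketch of this functional model (ours, assuming unit link capacity as above):

```python
import math

def active_uplink_interfaces(m: int, load: float) -> int:
    """Number of leftmost switch interfaces that must stay awake when an
    m-port merge network consolidates m server links, each offering
    'load' (expressed as a fraction of unit link capacity)."""
    assert 0.0 <= load <= 1.0
    return min(m, math.ceil(load * m))

# 24 servers at 10% load: only 3 of the 24 interfaces need to stay on
print(active_uplink_interfaces(24, 0.10))
```

At a load of 0.1 on 24 servers this gives 3 active interfaces, so the remaining 21 can be put into a low power mode.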
This is accomplished internally by sensing packets on links and automatically redirecting them to the leftmost free output of the merge network. The network can be visualized as a train switching station where trains are re-routed by switching the tracks (rather than store-and-forward as is done in hubs or switches). 4) On the downlink to the servers, traffic from the switch to the m servers is sent out along the leftmost r ≤ m switch interfaces to the merge network. The packets are then sent out along the m links attached to the servers from the output of the merge network. The manner in which this is accomplished is described in [14] (note that the challenge is to correctly route the packets flowing through the merge network to the appropriate destinations). We can connect a merge network to multiple switches. In this scenario, traffic is merged onto the interfaces of the leftmost switches, and the rightmost switches, whose interfaces are all idle, can be powered off, achieving more energy savings.

Fig. 2. Schematic of a merge network.

In this work, we apply merge networks at two locations within a pod of a fat-tree network: one between the servers and the edge switches, and the other between the edge switches and the aggregation switches. Fig. 3 shows a single pod of a fat-tree after applying merge networks. As shown, we utilize one k^2/4 × k^2/4 merge network to connect all the servers in a pod to the interfaces of the edge switches, and apply another merge network to connect the edge switches to the aggregation switches.

Fig. 3. Merge network applied to a pod in a fat-tree.

B. Contributions and Paper Organization

The primary contribution of this paper is to demonstrate that traffic merging yields near energy proportionality. The remainder of the paper is organized as follows. In Section II we introduce related work and background.
Section III presents the schematic of applying merge networks to a fat-tree network and derives the analytical model for the lower bound of energy consumption. Section IV formulates the power optimization model for computing routes with the goal of minimizing energy consumption. This model differs from those developed in previous papers because we also consider merge networks, and our minimization function includes the number of active interfaces as a parameter. Section V presents an algorithm that computes routes every second based on traffic load. In Section VI, we show the results of simulating realistic larger fat-tree DCNs and analyze the energy savings obtained when using merge networks, and Section VII summarizes the conclusions and the main contributions.

II. RELATED WORK

The rapid growth of Internet services and cloud applications requires DCNs to support higher bandwidth and

better scalability. During the past few years, many new DCN architectures have been proposed to address these challenges. For instance, hierarchical topologies, including fat-tree [3], Clos [5] and flattened butterfly [12], are designed to maximize the network cross-section bandwidth and optimize the cost-efficiency ratio. Some server-centric DCNs, such as DCell [8], [13] and BCube [9], use servers as network routing devices, aiming to obtain better scalability and fault tolerance. In general, these DCN topologies are intended to maximize network bandwidth and achieve optimal throughput and better scalability. With the growing size of data centers and rising energy costs, many have recently turned to the DCN energy efficiency problem. New energy-efficient network devices are used, including energy-proportional switches and links. Abts et al. [1] propose to dynamically adapt the link rate according to traffic intensity to save energy. The earlier work of Gupta et al. [10] inspired the study of energy-efficient DCN topologies through powering off idle interfaces or devices. For example, Heller et al. present ElasticTree [11], which adapts the network topology to varying traffic loads. For a given traffic load, ElasticTree assigns traffic flows to routes, thus finding a minimal subset of the network, and powers off all unnecessary switches and links to save energy. CARPO [16] consolidates negatively correlated flows onto a smaller set of links and powers off unused ones. Adnan and Gupta [2] propose a path consolidation algorithm that selects the path best overlapping with other paths and thus dynamically forms a minimum network topology. Recently, Widjaja et al. [17] compare different sizes of switches and conclude that it is more energy-efficient to use smaller-sized switches when the traffic is highly localized.
Our work complements prior work by applying merge networks to edge-layer and aggregation-layer switches to further reduce the number of active switches and achieve energy proportionality for fat-tree DCNs.

III. LOWER BOUND ON ENERGY CONSUMPTION

In this section, we compute the minimum number of active switches required under two models to support different traffic types and loadings. In the first model, we consider a fat-tree DCN without merge networks. Similar to ElasticTree, we force traffic to the left so that more aggregation and core switches can be powered off. The second model uses merge networks to further merge the traffic to the left, so that edge switches can also be idle and powered off.

A. Energy Minimization without Traffic Merging

In order to derive analytical expressions for the minimal energy consumption of fat-trees under different types of loadings, we use the fraction of active switches as the metric for energy consumption and assume an underlying routing protocol that pushes traffic to the left. We use three parameters to model different types of loadings. A packet from a server goes to another server connected to the same edge switch with probability p1, goes to a server in the same pod but under another edge switch with probability p2, and goes to a server in a different pod with probability p3 = 1 − p1 − p2. Thus p1 of the traffic is never seen by either the core or the aggregation switches, while p1 + p2 of the traffic is not seen by the core switches. By varying p1 and p2 we can model very different types of traffic. Let λ denote the average load offered by each server, expressed as a fraction of link speed (which we normalize to 1). Thus, the total load in the data center is λk^3/4. We have the following equalities for total traffic at the level of the edge switches, the pod, and the core switches:

Traffic per edge switch = λk/2
Traffic for all aggregation switches in a pod = (1 − p1)λk^2/4
Traffic for all core switches = (1 − p1 − p2)λk^3/4

Note that traffic flow is symmetric, and the numbers above correspond to both traffic into and out of a switch or switches.
We observe that all the edge switches need to remain active at all levels of load to ensure that servers have network connectivity. This gives us k^2/2 active edge switches. Within each pod we have total traffic equal to (1 − p1)λk^2/4 going into/from the k/2 aggregation switches from/to the edge switches. Given that each link has a normalized capacity of 1, and that there are k/2 interfaces per aggregation switch connected to the edge switches, we require at least ⌈(1 − p1)λk/2⌉ active aggregation switches per pod. Since there are k pods, the total number of active aggregation switches becomes k⌈(1 − p1)λk/2⌉. Finally, since every core switch is connected to an aggregation switch in each of the k pods, the number of active core switches we require is simply ⌈(1 − p1 − p2)λk^2/4⌉, where we divide the total traffic passing through the core by the number of links per switch (k) and round up. Therefore, the total number of active switches can be written as:

Active Switches = k^2/2 + k⌈(1 − p1)λk/2⌉ + ⌈(1 − p1 − p2)λk^2/4⌉

Fig. 4 plots the fraction of active switches as a function of load for five different scenarios when k = 12. The curve for p1 = 1 corresponds to the case when all traffic is between servers connected to the same edge switch. In other words, no traffic needs to flow to the aggregation or core switches. As expected, the graph stays flat. However, what is relevant here is that even at a light load of 0.1, all the edge switches are fully active. At this load value, each server generates 1/10th of the uplink capacity of traffic (similarly for downlink), but the energy consumed is the same as when the link is fully loaded. The total combined traffic from all 6 servers of an edge switch is 0.6, which is less than the capacity of a single link. The curve for p3 = 1 corresponds to the extreme case when all the traffic is destined for servers in a different pod. Hence the core and aggregation switches will be utilized. Contrasting this with the case discussed above, we observe that energy scales approximately linearly with load, if we discount the edge switches. This is the desired behavior for energy-proportional networking.
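The closed-form bound above can be evaluated directly. A small sketch of our own, using the notation of this section (edge switches always on; aggregation and core scaling with the load that reaches them):

```python
import math

def active_switches_no_merge(k: int, lam: float, p1: float, p2: float) -> int:
    """Lower bound on active switches in a k-ary fat-tree without
    traffic merging (Section III-A)."""
    edge = k * k // 2                                    # always powered on
    agg = k * math.ceil((1 - p1) * lam * k / 2)          # k pods
    core = math.ceil((1 - p1 - p2) * lam * k * k / 4)
    return edge + agg + core

# Full load, all traffic inter-pod: every switch in a k = 4 fat-tree is active
print(active_switches_no_merge(4, 1.0, 0.0, 0.0))
```

As a sanity check, at λ = 1 with p1 = p2 = 0 this returns all 20 switches of a k = 4 fat-tree, while at p1 = 1 only the 8 edge switches remain regardless of load.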

Fig. 4. Active switches for the model in Section III-A.

Fig. 5. Active switches for the model with traffic merging.

B. Energy Savings Due to Traffic Merging

To save energy used by the edge switches, we apply a k^2/4 × k^2/4 merge network between the k^2/4 servers of a pod and its k/2 edge switches. There are two consequences of applying the merge networks. First, the p2 traffic that goes to other subnets within the same pod no longer needs to go through the aggregation level. Instead, it is transferred directly to the destination servers. The traffic loading parameters thus change to p1' = p1 + p2 and p2' = 0. Accordingly, the number of active aggregation switches required in each pod is ⌈(1 − p1')λk/2⌉ = ⌈(1 − p1 − p2)λk/2⌉. Second, traffic from the servers is now sent to a merge network and consolidated onto the leftmost edge switches, and the idle edge switches can be put into a low power mode to save energy. Therefore, the number of active edge switches in one pod is ⌈(λk^2/4)/(k/2)⌉ = ⌈λk/2⌉, which changes with the traffic load. The total number of active switches after applying the merge networks can be written as:

Active Switches = k⌈λk/2⌉ + k⌈(1 − p1 − p2)λk/2⌉ + ⌈(1 − p1 − p2)λk^2/4⌉

Fig. 5 shows the fraction of active switches in a k = 12 fat-tree with merge networks for the same traffic models as Fig. 4. It is very illustrative of the benefits of traffic merging: the lighter the traffic load, the more the number of active switches is reduced. Notably, the flat line in Fig. 4 becomes approximately linear in the load after traffic merging.

IV. OPTIMIZATION POWER MODEL

We formulate a power model of network switches and links to compute the minimal power consumed by a DCN. We are given a network G(V, E), where V is the set of nodes and E is the set of links (u, v) connecting nodes u and v.
End hosts and switches are all considered nodes in the network, so V = V_h ∪ V_s, where V_h is the set of end hosts and V_s is the set of switches. Assume the power consumption of each switch and each link is P_s and P_l respectively. With n active switches and k_u active interfaces for each active switch u, the power optimization problem is expressed as:

Minimize P_total = Σ_{u∈V} k_u P_l + n P_s   (1)

Subject to:
∀s ∈ S: Σ_{w∈V_s'} g^i_{s,w} − Σ_{w∈V_s'} g^i_{w,s} = t^i_{s,d};  ∀d ∈ D: Σ_{w∈V_d'} g^i_{w,d} − Σ_{w∈V_d'} g^i_{d,w} = t^i_{s,d}   (2)
∀u ∉ S ∪ D: Σ_{w∈V_u} f_{u,w} − Σ_{w∈V_u} f_{w,u} = 0   (3)
∀(u, v) ∈ E: f_{u,v} ≤ c_{u,v} x_{u,v}   (4)
∀u ∈ V: Σ_{w∈V_u} f_{w,u} ≤ c_u y_u and Σ_{w∈V_u} f_{u,w} ≤ c_u y_u   (5)
∀(u, v) ∈ E: x_{u,v} = x_{v,u}   (6)
∀u ∈ V, w ∈ V_u: x_{u,w} ≤ y_u and x_{w,u} ≤ y_u   (7)
∀u ∈ V: y_u ≤ Σ_{w∈V_u} (x_{u,w} + x_{w,u})   (8)

We use binary variables y_u and x_{u,v} to represent the power state of node u and link (u, v) respectively. For example, if x_{u,v} = 1, link (u, v) is active; otherwise it is idle. We have k_u = Σ_{w∈V_u} x_{u,w} and n = Σ_{u∈V} y_u, where V_u is the set of switches connected to node u. g^i_{u,v} represents the individual flow of the ith traffic demand t^i_{s,d} routed through (u, v). f_{u,w} is the total flow assigned to (u, w), i.e., f_{u,w} = Σ_i g^i_{u,w}. S and D are the sets of source and destination nodes, and V_s' and V_d' are the sets of switches connected to source node s and destination node d, respectively. For given network topologies and traffic loads, the optimization model finds optimal flow assignments involving the minimum number of switches and links (n, k_u) that consume the minimum total power P_total. The optimization problem is a mixed integer linear program (MIP) that

is subject to capacity constraints, flow conservation, and demand satisfaction. Demand satisfaction (2) specifies that the traffic flow from/to a source/destination node equals the corresponding traffic demand. Flow conservation (3) ensures that the same amount of traffic enters an intermediate node and exits from it. The capacity constraint (4) makes sure flows go through active links and that the traffic flow does not surpass the link capacity. After applying the merge networks, the capacity of a node is not necessarily equal to the sum of the capacities of the links connected to it. Therefore, we define a capacity constraint c_u for each node u as in (5). In addition, the bidirectional link constraint (6) ensures that both directions of a link are powered on if there is a flow assigned to either direction of the link. Constraints (7) and (8) correlate the power state of each switch and its connected links. We implement the power optimization model for the fat-tree in CPLEX. For a given traffic matrix, the model outputs the active switches and links, plus the total flow assignment to each link. For simplicity, we allow flow splitting in our implementation.

V. GREEDY FLOW ASSIGNMENT ALGORITHM

The formal optimization model finds the optimal flow assignments for a given network topology and traffic load. However, the optimization problem for a large-scale DCN is NP-hard and cannot be solved in a reasonable time frame. We propose a heuristic greedy algorithm in order to find a near-optimal route assignment. Our greedy flow assignment algorithm is based on Dijkstra's shortest path algorithm. For each flow, the algorithm finds a route with sufficient bandwidth between the source node and the destination node at the lowest cost. The cost function is defined as the sum of the cost of switches and links along the route.
By carefully defining the cost value of each node and each link, our greedy algorithm finds the lowest-cost route for each flow incrementally, and ultimately obtains near-optimal routing for all the flows. The greedy algorithm is described in Algorithm 1. Each link and each node has a fixed capacity. We only assign a flow to a link when there is available bandwidth at that link and also at the source and destination nodes. Once a flow is assigned, the amount of traffic demand is subtracted from the available bandwidth of the link and the nodes on both ends. The link cost cost(u, v) is defined as a constant value for all links, while the node cost cost(v) is initialized to 1. Each link is counted in the cost of the route, so we are ensured to find the shortest route. Once node v has been used in a route, cost(v) is updated to 0. This ensures that a switch that has been used in a previous route has higher priority to be reused. As a result, we can achieve the minimum overall number of active switches. We use a link cost higher than the node cost in order to avoid detour routes between switches. The greedy algorithm is not optimal, but we verified that the results produced by the algorithm are very close to those from the CPLEX optimization solver of Section IV for all traffic types and loads. However, the optimization model can only scale to fat-tree networks of small k.

Algorithm 1 Flow Assignment algorithm
 1: function FLOWASSIGN(source, sink, demand)
 2:   for each vertex v in Graph do
 3:     dist[v] ← Infinity
 4:   dist[source] ← 0
 5:   insert (source, dist[source]) into Q
 6:   while Q is not empty do
 7:     u ← first pair in Q
 8:     if u == sink then
 9:       break
10:     for each neighbor v of u do
11:       if (capacity(u, v) != 0) and
12:          (capacity(v) != 0) then
13:         alt ← dist[u] + cost(v) + cost(u, v)
14:       else
15:         alt ← Infinity
16:       if alt < dist[v] then
17:         dist[v] ← alt
18:         previous[v] ← u
19:         update (v, dist[v]) in Q
20:   for v = sink; v != source; v = previous[v] do
21:     insert v into route
22:   return route
In the next part of this paper, we use this greedy algorithm to simulate larger scale fat-tree networks.

VI. SIMULATION RESULTS

We simulate a k = 12 fat-tree network consisting of 432 servers and 180 switches. We apply merge networks between the servers and the edge switches, and between the edge switches and the aggregation switches. The merge networks switch traffic flows to the leftmost switches in each pod. Flow splitting is allowed for simplicity. The experimental traffic traces are generated following the On/Off pattern derived from production data centers [15], [4]. The durations of the On/Off periods and the packet interarrival times follow a lognormal distribution. We generate different traffic types including Random, Stride(n), and Staggered(n), each of which has a different pattern of near and far traffic. For instance, a flow in Stride(n) goes from node i to node [(i + n) mod N] (N is the total number of servers). Source and destination nodes of the Random type are uniformly distributed. Staggered(n) is staggered-probability traffic and assigns fixed probabilities for traffic going to the same subnet and to the same pod. We generate 8 traffic suites with the parameters p1 and p2 shown in Table I. Flows in Stride(1) always go to the next server. In a k = 12 fat-tree, each edge switch connects to 6 servers, which form a subnet. Flows from the first 5 servers of a subnet stay within the subnet, while the flow from the 6th server travels to the next subnet or the next pod. Therefore, 5/6 of the traffic goes to the same subnet. Flows in Stride(6) always travel across subnets, and Stride(36) has all flows traveling to other pods.
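The route search of Algorithm 1 (Section V) can be sketched in executable form. This is our own reconstruction: the adjacency/capacity representation, the constant link cost of 2, and the capacity-commit step are illustrative assumptions, not the authors' implementation:

```python
import heapq

def flow_assign(graph, node_cap, link_cap, used, source, sink, demand):
    """Greedy lowest-cost route search in the spirit of Algorithm 1.
    graph: adjacency dict {u: [v, ...]}; link_cap/node_cap: residual
    capacities; used: switches already on some route (node cost 0).
    Hosts and switches are both nodes, as in the Section IV model."""
    LINK_COST = 2                       # constant, higher than node cost
    dist, prev = {source: 0}, {}
    pq, done = [(0, source)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u == sink:
            break
        if u in done:
            continue
        done.add(u)
        for v in graph.get(u, []):
            # only extend over links/nodes with enough residual capacity
            if link_cap.get((u, v), 0) < demand or node_cap.get(v, 0) < demand:
                continue
            node_cost = 0 if (v in used or v == sink) else 1
            alt = d + node_cost + LINK_COST
            if alt < dist.get(v, float("inf")):
                dist[v] = alt
                prev[v] = u
                heapq.heappush(pq, (alt, v))
    if sink not in dist:
        return None                     # no feasible route
    route, v = [sink], sink
    while v != source:
        v = prev[v]
        route.append(v)
    route.reverse()
    # commit: subtract the demand from residual capacities along the route
    for a, b in zip(route, route[1:]):
        link_cap[(a, b)] -= demand
        node_cap[b] -= demand
        used.add(b)
    return route
```

Because previously used nodes cost 0 instead of 1, later flows prefer switches that are already powered on, which is what concentrates traffic onto a minimal set of active switches.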

TABLE I
PROBABILITIES OF A FLOW GOING TO THE SAME SUBNET (p1), TO THE SAME POD (p2), AND TO A DIFFERENT POD (p3 = 1 − p1 − p2) FOR ALL TRAFFIC SUITES STUDIED

Traffic Suite   p1      p2      p3
Random          1.2%    7.0%    91.8%
Stride(1)       83.3%   13.9%   2.8%
Stride(6)       0%      83.3%   16.7%
Stride(36)      0%      0%      100%
Stride(216)     0%      0%      100%
Staggered(1)    100%    0%      0%
Staggered(2)    50%     30%     20%
Staggered(3)    20%     30%     50%

For each traffic suite above, we generate packet traces from each server with the load fraction varying from 0.1 to 0.7. The packet traces in every one-second interval are organized as a traffic matrix. In simulation, we feed the traffic matrix to the greedy algorithm, which outputs the number of active switches and the number of active interfaces.

A. Active switches

Fig. 6 compares the number of active switches at each level before and after applying traffic merging. As we can see, for all traffic suites, the number of active switches at the edge level is reduced significantly after applying merge networks. For aggregation-level switches, we observe an obvious reduction for Stride(6), Staggered(2) and Staggered(3). We notice from Table I that these three traffic suites have higher p2, and as we discussed in Section III-B, an amount p2 of the traffic is switched away from the aggregation level after traffic merging; that is, a substantial part of the traffic originally going to the aggregation layer is short-cut and transferred directly through the merge networks. Fig. 7 illustrates the fraction of total active switches of the DCN before and after using merge networks. The fraction of active switches is reduced to 20%–35% for light traffic loadings (0.1–0.2).

B. Energy cost

The discussion above focused on reducing the number of active switches. The overall energy cost of a DCN consists of the cost incurred by switches and links. However, the cost incurred by links is negligible and can be incorporated within the cost of the switch interfaces. Generally speaking, the energy cost of a switch can be roughly partitioned into the cost of the chassis and the interfaces.
As described in [17], [6], a reasonable approximation to the cost of a k-port switch is:

Switch Cost = C + k log k + k

The constant C accounts for static costs of the switch, such as the fan. The second term corresponds to the cost of the interconnection fabric within the switch, which is a significant contributor to energy consumption (typically 30%–40%). This cost scales as k log k for a k-port switch. The last term is the contribution to the cost from the active interfaces. This term folds into itself the cost of the linecards that the interfaces are on.

Fig. 6. Comparison of the number of active switches with vs. without traffic merging, for each traffic suite.

For the purposes of comparing the overall cost reduction of traffic merging, we set C to 5% of the maximum switch cost. That is:

C = 0.05 (k log k + k)

If the traffic load fraction going to a switch is λ, the merge network will switch the traffic to the leftmost m = ⌈λk⌉ interfaces. The cost of a switch with merge networks is thus:

Switch Cost = C + m log m + m

Fig. 8 shows the overall cost improvement over approaches without merge networks. It demonstrates that our traffic merging method can save up to 90% of the energy cost when the traffic load is low. Fig. 9 shows that traffic merging achieves energy efficiency that is much closer to ideal energy proportionality.

VII. CONCLUSIONS

We present the approach of traffic merging at edge-layer and aggregation-layer switches in fat-tree DCNs in order to save energy. We formulate the analytical power model for fat-tree DCNs with merge networks and find the lower bound

Fig. 7. Fraction of active switches (out of 180) before traffic merging (top) and after traffic merging (bottom).

Fig. 8. Reduction in total cost after using traffic merging.

Fig. 9. Fraction of total cost without traffic merging vs. using traffic merging.

of the minimum number of switches for different traffic loads. We simulate large fat-tree DCNs with the merge networks and present a heuristic greedy algorithm to find a near-optimal subset of switches for different traffic patterns and loads. The results show that our approach achieves up to 70%–90% total energy savings through traffic merging and achieves almost perfect energy proportionality.

REFERENCES

[1] Dennis Abts, Michael R. Marty, Philip M. Wells, Peter Klausler, and Hong Liu. Energy Proportional Datacenter Networks. In ISCA, 2010.
[2] Muhammad Abdullah Adnan and Rajesh Gupta. Path Consolidation for Dynamic Right-sizing of Data Center Networks. In Proceedings IEEE Sixth International Conference on Cloud Computing, 2013.
[3] Mohammad Al-Fares, Alexander Loukissas, and Amin Vahdat. A Scalable, Commodity Data Center Network Architecture. In SIGCOMM, pages 63-74, 2008.
[4] Theophilus Benson, Aditya Akella, and David A. Maltz. Network Traffic Characteristics of Data Centers in the Wild. In IMC, 2010.
[5] Charles Clos. A Study of Non-Blocking Switching Networks. The Bell System Technical Journal, 32(2):406-424, March 1953.
[6] V. Eramo, A. Germoni, A. Cianfrani, E. Miucci, and M. Listanti. Comparison in Power Consumption of MVMC and BENES Optical Packet Switches. In Proceedings IEEE NOC (Network on Chip), pages 5-8.
[7] Albert Greenberg, James Hamilton, David A. Maltz, and Parveen Patel. The Cost of a Cloud: Research Problems in Data Center Networks. In SIGCOMM CCR, pages 68-73, 2009.
[8] Chuanxiong Guo, Haitao Wu, Kun Tan, Lei Shi, Yongguang Zhang, and Songwu Lu. DCell: A Scalable and Fault-tolerant Network Structure for Data Centers. In SIGCOMM, pages 75-86, 2008.
[9] Chuanxiong Guo, Guohan Lu, Dan Li, Haitao Wu, Xuan Zhang, Yunfeng Shi, Chen Tian, Yongguang Zhang, and Songwu Lu. BCube: A High Performance, Server-centric Network Architecture for Modular Data Centers. In SIGCOMM, pages 63-74, 2009.
[10] Maruti Gupta and Suresh Singh. Greening of the Internet. In Proceedings of ACM SIGCOMM, 2003.
[11] Brandon Heller, Srini Seetharaman, Priya Mahadevan, Yannis Yiakoumis, Puneet Sharma, Sujata Banerjee, and Nick McKeown. ElasticTree: Saving Energy in Data Center Networks. In NSDI, 2010.
[12] John Kim, William J. Dally, and Dennis Abts. Flattened Butterfly: A Cost-Efficient Topology for High-Radix Networks. In ISCA, pages 126-137, 2007.
[13] Markus Kliegl, Jason Lee, Jun Li, Xinchao Zhang, Chuanxiong Guo, and David Rincon. Generalized DCell Structure for Load-Balanced Data Center Networks. In INFOCOM, 2010.
[14] Suresh Singh and Candy Yiu. Putting the Cart Before the Horse: Merging Traffic for Energy Conservation. IEEE Communications Magazine, June 2011.
[15] Theophilus Benson, Ashok Anand, Aditya Akella, and Ming Zhang. Understanding Data Center Traffic Characteristics. In WREN, 2009.
[16] Xiaodong Wang, Yanjun Yan, Xiaorui Wang, Kefa Lu, and Qing Cao. CARPO: Correlation-aware Power Optimization in Data Center Networks. In INFOCOM, pages 1125-1133, 2012.
[17] Indra Widjaja, Anwar Walid, Yanbin Luo, Yang Xu, and H. Jonathan Chao. Switch Sizing for Energy-Efficient Datacenter Networks. In Proceedings GreenMetrics 2013 Workshop (in conjunction with ACM Sigmetrics 2013), Pittsburgh, PA, June 2013.
[18] Candy Yiu and Suresh Singh. Merging Traffic to Save Energy in the Enterprise. In e-Energy, 2011.


More information

Dynamic Distributed Flow Scheduling with Load Balancing for Data Center Networks

Dynamic Distributed Flow Scheduling with Load Balancing for Data Center Networks Available online at www.sciencedirect.com Procedia Computer Science 19 (2013 ) 124 130 The 4th International Conference on Ambient Systems, Networks and Technologies. (ANT 2013) Dynamic Distributed Flow

More information

Joint Power Optimization Through VM Placement and Flow Scheduling in Data Centers

Joint Power Optimization Through VM Placement and Flow Scheduling in Data Centers Joint Power Optimization Through VM Placement and Flow Scheduling in Data Centers Dawei Li, Jie Wu Department of Computer and Information Sciences Temple University, Philadelphia, USA {dawei.li, jiewu}@temple.edu

More information

BCube: A High Performance, Servercentric. Architecture for Modular Data Centers

BCube: A High Performance, Servercentric. Architecture for Modular Data Centers BCube: A High Performance, Servercentric Network Architecture for Modular Data Centers Chuanxiong Guo1, Guohan Lu1, Dan Li1, Haitao Wu1, Xuan Zhang1;2, Yunfeng Shi1;3, Chen Tian1;4, Yongguang Zhang1, Songwu

More information

Two-Aggregator Topology Optimization without Splitting in Data Center Networks

Two-Aggregator Topology Optimization without Splitting in Data Center Networks Two-Aggregator Topology Optimization without Splitting in Data Center Networks Soham Das and Sartaj Sahni Department of Computer and Information Science and Engineering University of Florida Gainesville,

More information

Distributed Multipath Routing for Data Center Networks based on Stochastic Traffic Modeling

Distributed Multipath Routing for Data Center Networks based on Stochastic Traffic Modeling Distributed Multipath Routing for Data Center Networks based on Stochastic Traffic Modeling Omair Fatmi and Deng Pan School of Computing and Information Sciences Florida International University Miami,

More information

TCEP: Traffic Consolidation for Energy-Proportional High-Radix Networks

TCEP: Traffic Consolidation for Energy-Proportional High-Radix Networks TCEP: Traffic Consolidation for Energy-Proportional High-Radix Networks Gwangsun Kim Arm Research Hayoung Choi, John Kim KAIST High-radix Networks Dragonfly network in Cray XC30 system 1D Flattened butterfly

More information

FCell: Towards the Tradeoffs in Designing Data Center Network Architectures

FCell: Towards the Tradeoffs in Designing Data Center Network Architectures FCell: Towards the Tradeoffs in Designing Data Center Network Architectures Dawei Li and Jie Wu Department of Computer and Information Sciences Temple University, Philadelphia, USA {dawei.li, jiewu}@temple.edu

More information

L19 Data Center Network Architectures

L19 Data Center Network Architectures L19 Data Center Network Architectures by T.S.R.K. Prasad EA C451 Internetworking Technologies 27/09/2012 References / Acknowledgements [Feamster-DC] Prof. Nick Feamster, Data Center Networking, CS6250:

More information

Interconnection Networks: Topology. Prof. Natalie Enright Jerger

Interconnection Networks: Topology. Prof. Natalie Enright Jerger Interconnection Networks: Topology Prof. Natalie Enright Jerger Topology Overview Definition: determines arrangement of channels and nodes in network Analogous to road map Often first step in network design

More information

DISCO: Distributed Traffic Flow Consolidation for Power Efficient Data Center Network

DISCO: Distributed Traffic Flow Consolidation for Power Efficient Data Center Network DISCO: Distributed Traffic Flow Consolidation for Power Efficient Data Center Network Kuangyu Zheng, Xiaorui Wang, and Jia Liu The Ohio State University, Columbus, OH, USA Abstract Power optimization for

More information

Data Center Switch Architecture in the Age of Merchant Silicon. Nathan Farrington Erik Rubow Amin Vahdat

Data Center Switch Architecture in the Age of Merchant Silicon. Nathan Farrington Erik Rubow Amin Vahdat Data Center Switch Architecture in the Age of Merchant Silicon Erik Rubow Amin Vahdat The Network is a Bottleneck HTTP request amplification Web search (e.g. Google) Small object retrieval (e.g. Facebook)

More information

Tree-Based Minimization of TCAM Entries for Packet Classification

Tree-Based Minimization of TCAM Entries for Packet Classification Tree-Based Minimization of TCAM Entries for Packet Classification YanSunandMinSikKim School of Electrical Engineering and Computer Science Washington State University Pullman, Washington 99164-2752, U.S.A.

More information

DFFR: A Distributed Load Balancer for Data Center Networks

DFFR: A Distributed Load Balancer for Data Center Networks DFFR: A Distributed Load Balancer for Data Center Networks Chung-Ming Cheung* Department of Computer Science University of Southern California Los Angeles, CA 90089 E-mail: chungmin@usc.edu Ka-Cheong Leung

More information

Multi-resource Energy-efficient Routing in Cloud Data Centers with Network-as-a-Service

Multi-resource Energy-efficient Routing in Cloud Data Centers with Network-as-a-Service in Cloud Data Centers with Network-as-a-Service Lin Wang*, Antonio Fernández Antaº, Fa Zhang*, Jie Wu+, Zhiyong Liu* *Institute of Computing Technology, CAS, China ºIMDEA Networks Institute, Spain + Temple

More information

FlatNet: Towards A Flatter Data Center Network

FlatNet: Towards A Flatter Data Center Network FlatNet: Towards A Flatter Data Center Network Dong Lin, Yang Liu, Mounir Hamdi and Jogesh Muppala Department of Computer Science and Engineering Hong Kong University of Science and Technology, Hong Kong

More information

Towards Predictable + Resilient Multi-Tenant Data Centers

Towards Predictable + Resilient Multi-Tenant Data Centers Towards Predictable + Resilient Multi-Tenant Data Centers Presenter: Ali Musa Iftikhar (Tufts University) in joint collaboration with: Fahad Dogar (Tufts), {Ihsan Qazi, Zartash Uzmi, Saad Ismail, Gohar

More information

Ad hoc and Sensor Networks Topology control

Ad hoc and Sensor Networks Topology control Ad hoc and Sensor Networks Topology control Goals of this chapter Networks can be too dense too many nodes in close (radio) vicinity This chapter looks at methods to deal with such networks by Reducing/controlling

More information

New Fault-Tolerant Datacenter Network Topologies

New Fault-Tolerant Datacenter Network Topologies New Fault-Tolerant Datacenter Network Topologies Rana E. Ahmed and Heba Helal Department of Computer Science and Engineering, American University of Sharjah, Sharjah, United Arab Emirates Email: rahmed@aus.edu;

More information

Parallel Computing Platforms

Parallel Computing Platforms Parallel Computing Platforms Network Topologies John Mellor-Crummey Department of Computer Science Rice University johnmc@rice.edu COMP 422/534 Lecture 14 28 February 2017 Topics for Today Taxonomy Metrics

More information

Energy Aware Network Operations

Energy Aware Network Operations Energy Aware Network Operations Priya Mahadevan, Puneet Sharma, Sujata Banerjee, Parthasarathy Ranganathan HP Labs Email: {firstname.lastname}@hp.com Abstract Networking devices today consume a non-trivial

More information

OPTICAL NETWORKS. Virtual Topology Design. A. Gençata İTÜ, Dept. Computer Engineering 2005

OPTICAL NETWORKS. Virtual Topology Design. A. Gençata İTÜ, Dept. Computer Engineering 2005 OPTICAL NETWORKS Virtual Topology Design A. Gençata İTÜ, Dept. Computer Engineering 2005 Virtual Topology A lightpath provides single-hop communication between any two nodes, which could be far apart in

More information

Performing MapReduce on Data Centers with Hierarchical Structures

Performing MapReduce on Data Centers with Hierarchical Structures INT J COMPUT COMMUN, ISSN 1841-9836 Vol.7 (212), No. 3 (September), pp. 432-449 Performing MapReduce on Data Centers with Hierarchical Structures Z. Ding, D. Guo, X. Chen, X. Luo Zeliu Ding, Deke Guo,

More information

Lecture 2: Topology - I

Lecture 2: Topology - I ECE 8823 A / CS 8803 - ICN Interconnection Networks Spring 2017 http://tusharkrishna.ece.gatech.edu/teaching/icn_s17/ Lecture 2: Topology - I Tushar Krishna Assistant Professor School of Electrical and

More information

Interconnection Networks: Routing. Prof. Natalie Enright Jerger

Interconnection Networks: Routing. Prof. Natalie Enright Jerger Interconnection Networks: Routing Prof. Natalie Enright Jerger Routing Overview Discussion of topologies assumed ideal routing In practice Routing algorithms are not ideal Goal: distribute traffic evenly

More information

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks X. Yuan, R. Melhem and R. Gupta Department of Computer Science University of Pittsburgh Pittsburgh, PA 156 fxyuan,

More information

TCP improvements for Data Center Networks

TCP improvements for Data Center Networks TCP improvements for Data Center Networks Tanmoy Das and rishna M. Sivalingam Department of Computer Science and Engineering, Indian Institute of Technology Madras, Chennai 636, India Emails: {tanmoy.justu@gmail.com,

More information

An Efficient Algorithm for Solving Traffic Grooming Problems in Optical Networks

An Efficient Algorithm for Solving Traffic Grooming Problems in Optical Networks An Efficient Algorithm for Solving Traffic Grooming Problems in Optical Networks Hui Wang, George N. Rouskas Operations Research and Department of Computer Science, North Carolina State University, Raleigh,

More information

WDM Network Provisioning

WDM Network Provisioning IO2654 Optical Networking WDM Network Provisioning Paolo Monti Optical Networks Lab (ONLab), Communication Systems Department (COS) http://web.it.kth.se/~pmonti/ Some of the material is taken from the

More information

2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006

2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 The Encoding Complexity of Network Coding Michael Langberg, Member, IEEE, Alexander Sprintson, Member, IEEE, and Jehoshua Bruck,

More information

A SDN-like Loss Recovery Solution in Application Layer Multicast Wenqing Lei 1, Cheng Ma 1, Xinchang Zhang 2, a, Lu Wang 2

A SDN-like Loss Recovery Solution in Application Layer Multicast Wenqing Lei 1, Cheng Ma 1, Xinchang Zhang 2, a, Lu Wang 2 5th International Conference on Information Engineering for Mechanics and Materials (ICIMM 2015) A SDN-like Loss Recovery Solution in Application Layer Multicast Wenqing Lei 1, Cheng Ma 1, Xinchang Zhang

More information

arxiv: v1 [cs.ni] 11 Oct 2016

arxiv: v1 [cs.ni] 11 Oct 2016 A Flat and Scalable Data Center Network Topology Based on De Bruijn Graphs Frank Dürr University of Stuttgart Institute of Parallel and Distributed Systems (IPVS) Universitätsstraße 38 7569 Stuttgart frank.duerr@ipvs.uni-stuttgart.de

More information

Seminar on. A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm

Seminar on. A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm Seminar on A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm Mohammad Iftakher Uddin & Mohammad Mahfuzur Rahman Matrikel Nr: 9003357 Matrikel Nr : 9003358 Masters of

More information

Memory Placement in Network Compression: Line and Grid Topologies

Memory Placement in Network Compression: Line and Grid Topologies ISITA212, Honolulu, Hawaii, USA, October 28-31, 212 Memory Placement in Network Compression: Line and Grid Topologies Mohsen Sardari, Ahmad Beirami, Faramarz Fekri School of Electrical and Computer Engineering,

More information

Efficient Cluster Based Data Collection Using Mobile Data Collector for Wireless Sensor Network

Efficient Cluster Based Data Collection Using Mobile Data Collector for Wireless Sensor Network ISSN (e): 2250 3005 Volume, 06 Issue, 06 June 2016 International Journal of Computational Engineering Research (IJCER) Efficient Cluster Based Data Collection Using Mobile Data Collector for Wireless Sensor

More information

Reduction of Periodic Broadcast Resource Requirements with Proxy Caching

Reduction of Periodic Broadcast Resource Requirements with Proxy Caching Reduction of Periodic Broadcast Resource Requirements with Proxy Caching Ewa Kusmierek and David H.C. Du Digital Technology Center and Department of Computer Science and Engineering University of Minnesota

More information

Packet Classification Using Dynamically Generated Decision Trees

Packet Classification Using Dynamically Generated Decision Trees 1 Packet Classification Using Dynamically Generated Decision Trees Yu-Chieh Cheng, Pi-Chung Wang Abstract Binary Search on Levels (BSOL) is a decision-tree algorithm for packet classification with superior

More information

Performance Improvement of Hardware-Based Packet Classification Algorithm

Performance Improvement of Hardware-Based Packet Classification Algorithm Performance Improvement of Hardware-Based Packet Classification Algorithm Yaw-Chung Chen 1, Pi-Chung Wang 2, Chun-Liang Lee 2, and Chia-Tai Chan 2 1 Department of Computer Science and Information Engineering,

More information

Time Efficient Energy-Aware Routing in Software Defined Networks

Time Efficient Energy-Aware Routing in Software Defined Networks Efficient Energy-Aware Routing in Software Defined Networks Yu-Hao Chen, Tai-Lin Chin, Chin-Ya Huang, Shan-Hsiang Shen, Ruei-Yan Huang Dept. of Computer Science and Information Engineering, National Taiwan

More information

Chapter 3 : Topology basics

Chapter 3 : Topology basics 1 Chapter 3 : Topology basics What is the network topology Nomenclature Traffic pattern Performance Packaging cost Case study: the SGI Origin 2000 2 Network topology (1) It corresponds to the static arrangement

More information

Congestion Control in Datacenters. Ahmed Saeed

Congestion Control in Datacenters. Ahmed Saeed Congestion Control in Datacenters Ahmed Saeed What is a Datacenter? Tens of thousands of machines in the same building (or adjacent buildings) Hundreds of switches connecting all machines What is a Datacenter?

More information

The Encoding Complexity of Network Coding

The Encoding Complexity of Network Coding The Encoding Complexity of Network Coding Michael Langberg Alexander Sprintson Jehoshua Bruck California Institute of Technology Email: mikel,spalex,bruck @caltech.edu Abstract In the multicast network

More information

This is a repository copy of PON Data Centre Design with AWGR and Server Based Routing.

This is a repository copy of PON Data Centre Design with AWGR and Server Based Routing. This is a repository copy of PON Data Centre Design with AWGR and Server Based Routing. White Rose Research Online URL for this paper: http://eprints.whiterose.ac.uk/116818/ Version: Accepted Version Proceedings

More information

Dynamic Virtual Network Traffic Engineering with Energy Efficiency in Multi-Location Data Center Networks

Dynamic Virtual Network Traffic Engineering with Energy Efficiency in Multi-Location Data Center Networks Dynamic Virtual Network Traffic Engineering with Energy Efficiency in Multi-Location Data Center Networks Mirza Mohd Shahriar Maswood 1, Chris Develder 2, Edmundo Madeira 3, Deep Medhi 1 1 University of

More information

Computer Based Image Algorithm For Wireless Sensor Networks To Prevent Hotspot Locating Attack

Computer Based Image Algorithm For Wireless Sensor Networks To Prevent Hotspot Locating Attack Computer Based Image Algorithm For Wireless Sensor Networks To Prevent Hotspot Locating Attack J.Anbu selvan 1, P.Bharat 2, S.Mathiyalagan 3 J.Anand 4 1, 2, 3, 4 PG Scholar, BIT, Sathyamangalam ABSTRACT:

More information

An Efficient Bandwidth Estimation Schemes used in Wireless Mesh Networks

An Efficient Bandwidth Estimation Schemes used in Wireless Mesh Networks An Efficient Bandwidth Estimation Schemes used in Wireless Mesh Networks First Author A.Sandeep Kumar Narasaraopeta Engineering College, Andhra Pradesh, India. Second Author Dr S.N.Tirumala Rao (Ph.d)

More information

On Power Management Policies for Data Centers

On Power Management Policies for Data Centers 215 IEEE International Conference on Data Science and Data Intensive Systems On Power Management Policies for Data Centers Zygmunt J. Haas * and Shuyang Gu *School of Electrical and Computer Engineering,

More information

DevoFlow: Scaling Flow Management for High-Performance Networks

DevoFlow: Scaling Flow Management for High-Performance Networks DevoFlow: Scaling Flow Management for High-Performance Networks Andy Curtis Jeff Mogul Jean Tourrilhes Praveen Yalagandula Puneet Sharma Sujata Banerjee Software-defined networking Software-defined networking

More information

FAR: A Fault-avoidance Routing Method for Data Center Networks with Regular Topology

FAR: A Fault-avoidance Routing Method for Data Center Networks with Regular Topology FAR: A Fault-avoidance Routing Method for Data Center Networks with Regular Topology Yantao Sun School of Computer and Information Technology Beijing Jiaotong University Beijing 100044, China ytsun@bjtu.edu.cn

More information

Simulating DataCenter Network Topologies. Suraj Ketan Samal Upasana Nayak

Simulating DataCenter Network Topologies. Suraj Ketan Samal Upasana Nayak Simulating DataCenter Network Topologies Suraj Ketan Samal Upasana Nayak 1 Agenda Data Center Networks (DCN) Project Proposal (Our Work) Network Topologies & Properties Simulation using NS-3 Conclusion

More information

A Modified Heuristic Approach of Logical Topology Design in WDM Optical Networks

A Modified Heuristic Approach of Logical Topology Design in WDM Optical Networks Proceedings of the International Conference on Computer and Communication Engineering 008 May 3-5, 008 Kuala Lumpur, Malaysia A Modified Heuristic Approach of Logical Topology Design in WDM Optical Networks

More information

Greening Backbone Networks: Reducing Energy Consumption by Shutting Off Cables in Bundled Links

Greening Backbone Networks: Reducing Energy Consumption by Shutting Off Cables in Bundled Links Greening Backbone Networks: Reducing Energy Consumption by Shutting Off Cables in Bundled Links Will Fisher Princeton University wafisher@princeton.edu Martin Suchara Princeton University msuchara@princeton.edu

More information

DEPLOYMENT OF HYBRID MULTICAST SWITCHES IN ENERGY-AWARE DATA CENTER NETWORK: A CASE OF FAT-TREE TOPOLOGY

DEPLOYMENT OF HYBRID MULTICAST SWITCHES IN ENERGY-AWARE DATA CENTER NETWORK: A CASE OF FAT-TREE TOPOLOGY DEPLOYMENT OF HYBRID MULTICAST SWITCHES IN ENERGY-AWARE DATA CENTER NETWORK: A CASE OF FAT-TREE TOPOLOGY Tosmate Cheocherngngarn, Jean Andrian and Deng Pan Department of Electrical and Computer Engineering

More information

QUANTIZER DESIGN FOR EXPLOITING COMMON INFORMATION IN LAYERED CODING. Mehdi Salehifar, Tejaswi Nanjundaswamy, and Kenneth Rose

QUANTIZER DESIGN FOR EXPLOITING COMMON INFORMATION IN LAYERED CODING. Mehdi Salehifar, Tejaswi Nanjundaswamy, and Kenneth Rose QUANTIZER DESIGN FOR EXPLOITING COMMON INFORMATION IN LAYERED CODING Mehdi Salehifar, Tejaswi Nanjundaswamy, and Kenneth Rose Department of Electrical and Computer Engineering University of California,

More information

A Hybrid Approach to CAM-Based Longest Prefix Matching for IP Route Lookup

A Hybrid Approach to CAM-Based Longest Prefix Matching for IP Route Lookup A Hybrid Approach to CAM-Based Longest Prefix Matching for IP Route Lookup Yan Sun and Min Sik Kim School of Electrical Engineering and Computer Science Washington State University Pullman, Washington

More information

A report at Australia/China SKA Big Data Workshop

A report at Australia/China SKA Big Data Workshop Theme 1 Terabit Networking A report at Australia/China SKA Big Data Workshop Xiaoying Zheng (Shanghai Advanced Research Institute, Chinese Academy of Sciences, China) Paul Brooks (Trident Subsea Cable,

More information

A Congestion Contribution-based Traffic Engineering Scheme using Software-Defined Networking

A Congestion Contribution-based Traffic Engineering Scheme using Software-Defined Networking A Congestion Contribution-based Traffic Engineering Scheme using Software-Defined Networking Dongjin Hong, Jinyong Kim, and Jaehoon (Paul) Jeong Department of Electrical and Computer Engineering, Sungkyunkwan

More information

Network Traffic Characteristics of Data Centers in the Wild. Proceedings of the 10th annual conference on Internet measurement, ACM

Network Traffic Characteristics of Data Centers in the Wild. Proceedings of the 10th annual conference on Internet measurement, ACM Network Traffic Characteristics of Data Centers in the Wild Proceedings of the 10th annual conference on Internet measurement, ACM Outline Introduction Traffic Data Collection Applications in Data Centers

More information

DCCN: A Non-Recursively Built Data Center Architecture for Enterprise Energy Tracking Analytic Cloud Portal, (EEATCP)

DCCN: A Non-Recursively Built Data Center Architecture for Enterprise Energy Tracking Analytic Cloud Portal, (EEATCP) Computational and Applied Mathematics Journal 2015; 1(3): 107-121 Published online April 30, 2015 (http://www.aascit.org/journal/camj) DCCN: A Non-Recursively Built Data Center Architecture for Enterprise

More information

Towards Reproducible Performance Studies Of Datacenter Network Architectures Using An Open-Source Simulation Approach

Towards Reproducible Performance Studies Of Datacenter Network Architectures Using An Open-Source Simulation Approach Towards Reproducible Performance Studies Of Datacenter Network Architectures Using An Open-Source Simulation Approach Daji Wong, Kiam Tian Seow School of Computer Engineering Nanyang Technological University

More information

Lecture 3: Topology - II

Lecture 3: Topology - II ECE 8823 A / CS 8803 - ICN Interconnection Networks Spring 2017 http://tusharkrishna.ece.gatech.edu/teaching/icn_s17/ Lecture 3: Topology - II Tushar Krishna Assistant Professor School of Electrical and

More information

Utilizing Datacenter Networks: Centralized or Distributed Solutions?

Utilizing Datacenter Networks: Centralized or Distributed Solutions? Utilizing Datacenter Networks: Centralized or Distributed Solutions? Costin Raiciu Department of Computer Science University Politehnica of Bucharest We ve gotten used to great applications Enabling Such

More information

CONSTRUCTION AND EVALUATION OF MESHES BASED ON SHORTEST PATH TREE VS. STEINER TREE FOR MULTICAST ROUTING IN MOBILE AD HOC NETWORKS

CONSTRUCTION AND EVALUATION OF MESHES BASED ON SHORTEST PATH TREE VS. STEINER TREE FOR MULTICAST ROUTING IN MOBILE AD HOC NETWORKS CONSTRUCTION AND EVALUATION OF MESHES BASED ON SHORTEST PATH TREE VS. STEINER TREE FOR MULTICAST ROUTING IN MOBILE AD HOC NETWORKS 1 JAMES SIMS, 2 NATARAJAN MEGHANATHAN 1 Undergrad Student, Department

More information

Lecture 12: Interconnection Networks. Topics: communication latency, centralized and decentralized switches, routing, deadlocks (Appendix E)

Lecture 12: Interconnection Networks. Topics: communication latency, centralized and decentralized switches, routing, deadlocks (Appendix E) Lecture 12: Interconnection Networks Topics: communication latency, centralized and decentralized switches, routing, deadlocks (Appendix E) 1 Topologies Internet topologies are not very regular they grew

More information

Venice: Reliable Virtual Data Center Embedding in Clouds

Venice: Reliable Virtual Data Center Embedding in Clouds Venice: Reliable Virtual Data Center Embedding in Clouds Qi Zhang, Mohamed Faten Zhani, Maissa Jabri and Raouf Boutaba University of Waterloo IEEE INFOCOM Toronto, Ontario, Canada April 29, 2014 1 Introduction

More information

Energy-Aware Routing: a Reality Check

Energy-Aware Routing: a Reality Check 1 Energy-Aware Routing: a Reality Check Aruna Prem Bianzino 1, Claude Chaudet 1, Federico Larroca 2, Dario Rossi 1, Jean-Louis Rougier 1 1 Institut TELECOM, TELECOM ParisTech, CNRS LTCI UMR 5141, Paris,

More information

On Optimal Traffic Grooming in WDM Rings

On Optimal Traffic Grooming in WDM Rings 110 IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 20, NO. 1, JANUARY 2002 On Optimal Traffic Grooming in WDM Rings Rudra Dutta, Student Member, IEEE, and George N. Rouskas, Senior Member, IEEE

More information

Chapter 5: Analytical Modelling of Parallel Programs

Chapter 5: Analytical Modelling of Parallel Programs Chapter 5: Analytical Modelling of Parallel Programs Introduction to Parallel Computing, Second Edition By Ananth Grama, Anshul Gupta, George Karypis, Vipin Kumar Contents 1. Sources of Overhead in Parallel

More information

Minimization of Network Power Consumption with Redundancy Elimination

Minimization of Network Power Consumption with Redundancy Elimination Minimization of Network Power Consumption with Redundancy Elimination Frédéric Giroire 1, Joanna Moulierac 1, Truong Khoa Phan 1,andFrédéric Roudaut 2 1 Joint project MASCOTTE, I3S(CNRS-UNS), INRIA, Sophia-Antipolis,

More information

Interconnection networks

Interconnection networks Interconnection networks When more than one processor needs to access a memory structure, interconnection networks are needed to route data from processors to memories (concurrent access to a shared memory

More information

SPAIN: High BW Data-Center Ethernet with Unmodified Switches. Praveen Yalagandula, HP Labs. Jayaram Mudigonda, HP Labs

SPAIN: High BW Data-Center Ethernet with Unmodified Switches. Praveen Yalagandula, HP Labs. Jayaram Mudigonda, HP Labs SPAIN: High BW Data-Center Ethernet with Unmodified Switches Jayaram Mudigonda, HP Labs Mohammad Al-Fares, UCSD Praveen Yalagandula, HP Labs Jeff Mogul, HP Labs 1 Copyright Copyright 2010 Hewlett-Packard

More information

Quantifying Internet End-to-End Route Similarity

Quantifying Internet End-to-End Route Similarity Quantifying Internet End-to-End Route Similarity Ningning Hu and Peter Steenkiste Carnegie Mellon University Pittsburgh, PA 523, USA {hnn, prs}@cs.cmu.edu Abstract. Route similarity refers to the similarity

More information

Recall: The Routing problem: Local decisions. Recall: Multidimensional Meshes and Tori. Properties of Routing Algorithms

Recall: The Routing problem: Local decisions. Recall: Multidimensional Meshes and Tori. Properties of Routing Algorithms CS252 Graduate Computer Architecture Lecture 16 Multiprocessor Networks (con t) March 14 th, 212 John Kubiatowicz Electrical Engineering and Computer Sciences University of California, Berkeley http://www.eecs.berkeley.edu/~kubitron/cs252

More information

Fault Tolerant and Secure Architectures for On Chip Networks With Emerging Interconnect Technologies. Mohsin Y Ahmed Conlan Wesson

Fault Tolerant and Secure Architectures for On Chip Networks With Emerging Interconnect Technologies. Mohsin Y Ahmed Conlan Wesson Fault Tolerant and Secure Architectures for On Chip Networks With Emerging Interconnect Technologies Mohsin Y Ahmed Conlan Wesson Overview NoC: Future generation of many core processor on a single chip

More information

Surveying Formal and Practical Approaches for Optimal Placement of Replicas on the Web

Surveying Formal and Practical Approaches for Optimal Placement of Replicas on the Web Surveying Formal and Practical Approaches for Optimal Placement of Replicas on the Web TR020701 April 2002 Erbil Yilmaz Department of Computer Science The Florida State University Tallahassee, FL 32306

More information

Impact of Ethernet Multipath Routing on Data Center Network Consolidations
