Joint Power Optimization Through VM Placement and Flow Scheduling in Data Centers


Dawei Li, Jie Wu
Department of Computer and Information Sciences, Temple University, Philadelphia, USA

Zhiyong Liu, Fa Zhang
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, P. R. China

Abstract—Two important components that consume the majority of IT power in data centers are the servers and the Data Center Network (DCN). Existing works fail to fully utilize power management techniques on the servers and in the DCN at the same time. In this paper, we jointly consider VM placement on servers with scalable frequencies and flow scheduling in the DCN, to minimize the overall system's power consumption. Due to the convex relation between a server's power consumption and its operating frequency, we prove that, given the number of servers to be used, computation workloads should be allocated to the servers in a balanced way to minimize the power consumption on servers. To reduce the power consumption of the DCN, we further consider the flow requirements among the VMs during VM allocation and assignment. Also, after VM placement, flow consolidation is conducted to reduce the number of active switches and ports. We notice that choosing the minimum number of servers to accommodate the VMs may result in high power consumption on servers, due to the servers' increased operating frequencies. Choosing the number of servers purely based on the servers' power consumption reduces the power consumption on servers, but may increase the power consumption of the DCN. We propose to choose the optimal number of servers to be used based on the overall system's power consumption. Simulations show that our joint power optimization method reduces the overall power consumption significantly, and outperforms various existing state-of-the-art methods in terms of reducing the overall system's power consumption.
Index Terms—Data centers, data center networks, virtual machine (VM) placement, flow scheduling, joint power optimization.

I. INTRODUCTION

High power consumption in data centers has become a critical issue. As reported recently [1], the power consumption of data centers worldwide has already reached the level of an entire country, such as Argentina or the Netherlands. High power consumption in data centers not only results in high electricity bills and increased carbon dioxide emissions, but also increases the possibility of system failures, because the corresponding power distribution and cooling systems are approaching their peak capacity. Within a data center, IT equipment consumes about 40% of the total power. Two important components that consume the majority of IT power in data centers are the servers and the Data Center Network (DCN). In a typical data center, the servers take up the majority of the IT power (up to 90%) when they are fully utilized [2], [3]. As servers are becoming more and more power efficient [4], the DCN, which mainly includes the switches that interconnect the servers, can consume an amount of power comparable to that of the servers, i.e., 50% of the IT power, when servers are at typically low utilization levels.

© 2014 IEEE.

Fig. 1. Motivational example.

Most often, services are provided by data centers to customers through various virtualization technologies, and are presented as Virtual Machines (VMs). A set of VMs not only have computation requirements; they also require communications among themselves to complete the specified tasks. Intuitively, power management on the servers and in the DCN should be considered jointly to reduce the overall system's power consumption.

A. Motivational Example

We give an example that motivates our work in this paper. Some assumptions are not explicitly stated, and will be explained later.
We consider a two-level network with four switches and four servers; they are connected as in one pod of the classic Fat-Tree [5] architecture. We consider ideal servers whose power consumption is p(u) = p_v + u² [6], where u is the utilization assigned to the server, and p_v = 0.4 is the static power of the entire server. Each VM has a utilization requirement of 0.25. The only flows among the VMs are flow_{1,3} from VM1 to VM3, and flow_{2,4} from VM2 to VM4. Both flows have a bandwidth requirement of 1. All the network links are bidirectional, and both directions have the same capacity, 1. The power consumption of a switch is a constant power p_w = 0.2, plus the power consumption of its active ports; each active port consumes p_port = 0.2.

One extreme assignment is to distribute the VMs evenly among all the servers, as shown in Fig. 1(a); in this case, all four switches must be powered on to satisfy the communication requirements of the two flows. With four active ports on each ToR switch and two on each aggregate switch, the power consumption of the system is p_1 = 4(0.4 + 0.25²) + 4 × 0.2 + 12 × 0.2 = 5.05. Another extreme assignment is to power on just one server, as shown in Fig. 1(b). In this case, flow_{1,3} and flow_{2,4} are contained in the server and do not need to travel through the DCN. Thus, no switches are required to be powered on. The power consumption of the system is p_2 = 0.4 + 1² = 1.4. Between the two extreme assignments, we can choose to power on half of the servers. If we assign VM1 and VM2 to one server, and VM3 and VM4 to another, as shown in Fig. 1(c), only one switch and two of its ports need to be powered on; the power consumption is p_3 = 2(0.4 + 0.5²) + (0.2 + 2 × 0.2) = 1.9. Since VM1 and VM3 have a communication requirement, and VM2 and VM4 have a communication requirement, intuitively, we should assign VM1 and VM3 to one server, and assign VM2 and VM4 to the other server, as shown in Fig. 1(d); in this case, the two flows do not need to travel through the DCN, and the total power consumption is p_4 = 2(0.4 + 0.5²) = 1.3.

Notice that, even for this simple example, we have various options regarding which servers and switches to power on. General problems with large numbers of VMs, servers, and switches will be more difficult to solve. Notice also that Fig. 1(d) achieves the minimal power consumption; it provides two key intuitions: we should place VMs with large communication requirements close to each other, and care should be taken in deciding how many servers should be powered on to accommodate the VMs.

B. Contributions and Paper Organization

In this paper, we jointly consider VM placement on servers with scalable frequencies and flow scheduling/consolidation in the DCN, to optimize the overall system's power consumption. Our main contributions are as follows. To the best of our knowledge, the joint consideration of servers with scalable frequencies and flow scheduling in the DCN has received little research attention. Since modern power/energy-optimized servers are usually designed to have scalable frequencies, our work contributes to power optimization in practical data centers. We prove that, given the number of servers to be used, to achieve minimal power consumption on servers, the computation workloads of the VMs should be allocated to the chosen servers in a balanced way. This is due to the convex relation between a server's power consumption and its operating frequency. We further consider the flow requirements among the VMs during VM allocation and assignment, to reduce the power consumption of the DCN. Also, after VM placement, flow consolidation is conducted to reduce the number of active switches and ports.
We propose to choose the optimal number of servers used to accommodate the VMs based on the overall system power consumption, instead of just the power consumption on servers. Our proposal is based on the following two facts. Choosing the minimum number of servers may result in high power consumption on servers, due to the servers' increased operating frequencies. Choosing the number of servers purely based on the servers' power consumption reduces the power consumption on servers, but may increase the power consumption of the DCN, resulting in increased overall power consumption. We conduct various simulations to compare our joint power optimization method with several state-of-the-art ones. Results show that our joint method reduces the overall system's power consumption significantly, and outperforms existing state-of-the-art methods.

The rest of the paper is organized as follows. Related work is reviewed in Section II. The system model is described in Section III. In Section IV, we discuss our joint power optimization method for special cases, where servers have special operating frequencies and VMs have special computation requirements. In Section V, we consider the general cases. Simulations are conducted in Section VI. Conclusions are drawn in Section VII.

II. RELATED WORK

There have been tremendous existing works on power management on servers. A heuristic-based solution outline for the power-aware consolidation problem of virtualized clusters is presented in [7]. The problem of optimally allocating a power budget among servers in order to minimize mean response time is addressed in [8]. Reference [6] considers the relation between a server's performance and its power consumption; based on this relation, a mixed integer programming model is applied to improve power efficiency while providing performance guarantees.
In [9], the power consumption and application-level performance of servers are controlled in a holistic way, so that explicit guarantees on both can be achieved. A large number of works also consider power management in the DCN. [10] conducts traffic consolidation based on correlation-aware analysis of data center network traffic flows. [11] conducts energy-aware flow scheduling in a distributed way, which avoids the single-point-of-failure problem and has good scalability. [12] discusses how to reduce energy consumption in high-density DCNs from a routing perspective; the key idea is to use as few network devices as possible to provide the routing service, with little to no sacrifice of network performance. [13] presents a framework for DCN power efficiency; basically, it periodically tries to find a minimum-power network subset that satisfies current traffic conditions. [14] considers the power optimization of DCNs from a hierarchical perspective; it establishes a two-level power optimization model to reduce the power consumption of DCNs by switching off network switches and links while guaranteeing full connectivity and maximum link utilization. Quite a few works consider coordinated power management on both servers and DCNs. [15] presents a data center scheduling methodology that combines energy efficiency and network awareness, and aims to achieve a balance between individual job performance, traffic demands, and the energy consumed by the data center. The correlation analysis in [10] is extended to also consider the VM utilization levels, to ensure joint power optimization of the DCN and the servers [16]. [17] also considers joint power optimization through VM placement on servers and flow scheduling in the DCN; it assumes that each VM can only be assigned to a set of specified servers. [18] presents a general framework for joint power optimization on servers and the DCN.
The problem of achieving energy efficiency is characterized as a time-aware model, and is proven to be NP-hard. These existing works fail to fully utilize the power management techniques on servers and in the DCN at the same time. In this paper, we jointly consider VM placement on servers with scalable frequencies and flow scheduling in DCNs, to minimize the overall system's power consumption.

III. SYSTEM MODEL AND PROBLEM FORMULATION

A. Platform Model

We consider the Fat-Tree DCN architecture, where each switch has K ports. Each Top of Rack (ToR) switch connects to N_r = K/2 servers and K/2 aggregate switches. Each pod consists of K/2 ToR switches, and thus contains N_p = (K/2)² servers.

Fig. 2. The Fat-Tree architecture with K = 4. Rectangles represent switches, and circles represent servers.

The architecture consists of K pods. At the top of the architecture are (K/2)² core switches. The total number of servers is N = K³/4, and the total number of switches is W = 5K²/4. Fig. 2 shows a Fat-Tree architecture with K = 4.

1) Server Model: We assume that servers can work at a set of discrete frequencies {f_1, f_2, …, f_F}, where F is the total number of available frequencies. Without loss of generality, we assume that these frequency values are sorted in ascending order. All frequency values are normalized by the maximum frequency, f_F, i.e., f_F = 1. We adopt the power consumption model for servers in [6]. We denote the power consumption of a server running CPU-intensive services, with normalized CPU utilization u under frequency f, by p_v(u, f). Then, we have:

p_v(u, f) = p_v + u·f², (1)

where p_v is the static power that is independent of the CPU utilization and the operating frequency. Notice that p_v(u, f) represents the power consumption of the entire server, instead of just the power consumption of the CPU. p_v can consist of the static power of the CPU and the powers of various other devices, such as disks, RAM, and network interface cards. The term u·f² can be regarded as the dynamic power consumption, which is proportional to the square of the operating frequency. This power consumption model for the entire server has been verified to have high accuracy through various experiments [6].

2) Switch Model: We adopt the simplified model in [16]. The power consumption of a switch can be calculated as follows:

P^w = { p_w + n_a·p_port, if the switch is on; 0, if the switch is off. } (2)

where p_w is the switch chassis power, n_a is the number of active ports on the switch, and p_port is the power consumption of one port.
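As a concrete illustration, the two power models above can be sketched in a few lines of Python; the constants below are the motivational example's illustrative values, not measurements:

```python
P_V = 0.4     # server static power p_v (normalized, example value)
P_W = 0.2     # switch chassis power p_w
P_PORT = 0.2  # per-port power p_port

def server_power(u, f):
    """Eq. (1): whole-server power at normalized utilization u under frequency f."""
    if u == 0:
        return 0.0                  # an unused server is assumed powered off
    assert 0 < u <= 1 and f <= 1.0  # the workload must fit under frequency f
    return P_V + u * f ** 2

def switch_power(active_ports):
    """Eq. (2): chassis power plus per-port power; zero when the switch is off."""
    return P_W + active_ports * P_PORT if active_ports > 0 else 0.0
```

For instance, the assignment of Fig. 1(d) packs two VMs onto each of two servers running at f = 0.5 (so u = 1 under that frequency), giving 2 · server_power(1, 0.5) = 1.3 with no switch power, matching p_4.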
We do not consider link power consumption explicitly, because it can easily be incorporated into the switch ports' power consumption.

B. VM Model

We consider a set of small application-specific VMs, V = {v_1, v_2, …, v_M}, where M is the total number of VMs. The VM model is abstracted from [6]. The services that the VMs provide are CPU-intensive; each VM consumes a fixed amount of CPU time while the frequency of the CPU is kept fixed. In other words, each VM requires a normalized utilization under the maximum frequency of the server, u(v_i, f_F), i.e., u(v_i, 1). Then, the normalized utilization of v_i under frequency f is u(v_i, f) = u(v_i, f_F)·f_F/f = u(v_i, 1)/f. The normalization is conducted such that the amounts of workload that the VM completes during a certain amount of time at different frequencies are equal, i.e., u(v_i, f)·f = u(v_i, 1). Denote V_j = {v_{j,1}, v_{j,2}, …, v_{j,|V_j|}} as the set of VMs assigned to the jth server, and denote f_v(j) as the frequency of the jth server.

TABLE I. NOTATIONS
N, W: the numbers of servers and switches.
M: the number of virtual machines.
K: the number of ports on a switch.
V: the set of all VMs.
V_j: the set of VMs assigned to the jth server.
p_v: the constant (static) power of a server when it is active.
f_v(j): the jth server's execution frequency.
P_i^v: the power consumption of the ith server.
p^v(i): the total power consumption when i servers are chosen.
F: the number of available frequencies of every server.
p_w: the switch chassis power.
p_port: the per-port power consumption.
s_{i,j}: the status of the jth port of the ith switch.
u(v, f): VM v's normalized utilization under frequency f.
flow_{i,j}: the flow from VM v_i to VM v_j.
b(v_i, v_j): the bandwidth requirement of flow_{i,j}.

For the jth server to fulfill the execution requirements of the virtual machines assigned to it, the normalized utilization under f_v(j) should be less than or equal to 1.
If the assigned workload utilization is greater than 1, the server has to switch to a higher frequency level. Thus, we have the following requirement for all the servers:

Σ_{i=1}^{|V_j|} u(v_{j,i}, f_v(j)) ≤ 1, j = 1, 2, …, N. (3)

Given the assignment of VMs to servers, the power consumption of the jth server can be calculated as follows:

P_j^v = { p_v + Σ_{i=1}^{|V_j|} u(v_{j,i}, f_v(j))·(f_v(j))², if |V_j| ≥ 1; 0, if |V_j| = 0. } (4)

There may exist a flow from the ith VM to the jth VM, denoted by flow_{i,j}. We denote by b(v_i, v_j) the bandwidth requirement of flow_{i,j}. Notice that b(v_j, v_i) may not be equal to b(v_i, v_j), because the two flows, flow_{i,j} and flow_{j,i}, have different directions. Also, flow_{i,j} and flow_{j,i} are independent of each other, and they can take different routing paths. Given the assignment of VMs to servers, and given a routing path for each flow, the port status of each switch can be determined. Denote by s_{i,j} the status of the jth port of the ith switch, i = 1, 2, …, W, j = 1, 2, …, K. Specifically, s_{i,j} = 1 if and only if the jth port of the ith switch is active, and s_{i,j} = 0 if and only if the jth port of the ith switch is off. Thus, the power consumption of the ith switch is:

P_i^w = { p_w + (Σ_{j=1}^K s_{i,j})·p_port, if Σ_{j=1}^K s_{i,j} ≥ 1; 0, if Σ_{j=1}^K s_{i,j} = 0. } (5)

C. Problem Formulation

In a practical data center, for a given time period, we are given a set of VMs, V, and their communication bandwidth requirement matrix b_{M×M}. We need to consider the following issues: i) allocate the VMs to servers, ii) set the frequency of each server, and iii) decide which switches and ports to use to support the communication requirements among all the VMs. The constraints are that the VMs' execution and communication requirements are met. Our final goal is to minimize the overall power consumption on both the servers and the DCN. The problem can be formulated as follows:

min Σ_{j=1}^N P_j^v + Σ_{i=1}^W P_i^w (6)
s.t. Σ_{i=1}^{|V_j|} u(v_{j,i}, f_v(j)) ≤ 1, j = 1, 2, …, N; (7)
Σ_{j=1}^N |V_j| = M; (8)
Σ_{flow_{i,j} passing through link k} b(v_i, v_j) ≤ 1, for all links k in the DCN. (9)
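A hypothetical sanity check of constraints (7)–(9) in Python; the data layout (per-server utilization lists, per-link bandwidth lists) is our own illustrative encoding, not the paper's:

```python
def feasible(vms_on_server, freq, flows_on_link, m_total):
    """Check constraints (7)-(9). vms_on_server[j]: max-frequency
    utilizations u(v, 1) of the VMs on server j; freq[j]: chosen f_v(j);
    flows_on_link[k]: bandwidths of flows routed through link k
    (capacities normalized to 1)."""
    if sum(len(v) for v in vms_on_server) != m_total:  # constraint (8)
        return False
    for utils, f in zip(vms_on_server, freq):          # constraint (7)
        # u(v, f) = u(v, 1) / f must sum to at most 1 on each used server
        if utils and sum(u / f for u in utils) > 1.0:
            return False
    return all(sum(bws) <= 1.0 for bws in flows_on_link)  # constraint (9)
```

For example, Fig. 1(d)'s assignment (two VMs of utilization 0.25 on each of two servers at f = 0.5, no flows in the DCN) is feasible, while packing three of those VMs onto one server at f = 0.5 violates constraint (7).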

Equation (6) is the objective. Constraint (7) ensures that each server is assigned a workload with utilization no greater than 1 under the adopted frequency f_v(j). Constraint (8) guarantees that each VM is assigned to a certain server. Constraint (9) requires that the communication volume through each link does not exceed the capacity of the link. More detailed problem formulations have been presented in [16] and [17], and the problems have been shown to be NP-complete. Our problem is similar to those formulations in essence, and additionally considers frequency scaling capabilities on servers; thus, it is also NP-complete. We do not provide the detailed formulations here, and just introduce several important notations and the basic ideas. In the following section, we consider special cases, where servers have special discrete frequency levels, and VMs have special utilization requirements. We move on to the general cases in Section V.

IV. SPECIAL CASES

In this section, we assume that the difference between each server's two consecutive frequencies is uniform and equal to f_1; thus, f_i = i·f_1, i = 1, …, F. We also assume that each VM's normalized utilization under the maximum frequency is f_1. Thus, a server at frequency f_i can host i VMs. If a server is assigned a certain number (greater than zero) of VMs, we require the server to be on; otherwise, we do not need to use this server, and can put it in the off state, consuming zero power. We consider the following question: what is the optimal VM allocation to minimize the power consumption on servers, given the number of servers to be used?

A. Optimal VM Allocation Given the # of Servers to Be Used

We define the following property of a VM allocation:

Definition. A VM allocation is balanced if the numbers of VMs assigned to any two servers that are on differ by no more than 1; otherwise, the allocation is unbalanced.

Illustration.
For example, if there are 10 servers in total, the first 7 servers are assigned 6 VMs each, the eighth server is assigned 5 VMs, and the last two servers are off, then this allocation is balanced. However, if the eighth server is assigned 4 VMs, the allocation is unbalanced.

Theorem 1: An optimal VM allocation that minimizes the power consumption on servers is balanced.

Proof: We prove this by contradiction. Assume that the optimal allocation, A_opt, is unbalanced; then, there exist at least two servers, S_1 and S_2, whose assigned numbers of VMs, n_1 and n_2, respectively, differ by more than 1, i.e., |n_1 − n_2| > 1. Without loss of generality, we assume that n_1 > n_2 + 1. For the server with n_1 VMs, its optimal power consumption is p_1 = p_v + (n_1·f_1)², and for the server with n_2 VMs, its optimal power consumption is p_2 = p_v + (n_2·f_1)². Based on A_opt, we develop another assignment, A′, by moving one VM on server S_1 to server S_2, while keeping the VM assignments on all other servers unchanged. In A′, the power consumption of server S_1 is p′_1 = p_v + ((n_1 − 1)f_1)², and the power consumption of server S_2 is p′_2 = p_v + ((n_2 + 1)f_1)². The difference between the power consumption of the two servers S_1 and S_2 in A′ and in A_opt can be calculated as:

p′_1 + p′_2 − (p_1 + p_2) = ((n_1 − 1)f_1)² + ((n_2 + 1)f_1)² − ((n_1·f_1)² + (n_2·f_1)²) = 2f_1²(n_2 − n_1 + 1) < 0. (10)
Thus, the overall power consumption of A′ is less than that of A_opt, which contradicts the assumption that A_opt is an optimal allocation. Hence, the theorem is proven.

Algorithm 1 Special Case VM Grouping SCVG(V, n)
Input: The VM set, V, and the number of servers to use, n;
Output: A partition of the VMs into n groups, V^g_1, …, V^g_n;
1: V′ = ∅; V^g_i = ∅, 1 ≤ i ≤ n;
2: for j = 1, …, n do
3:   Find v ∈ V, such that v has the least weight with the VMs in V′;
4:   V = V − v; V′ = V′ + v; V^g_j = V^g_j + v;
5: while V ≠ ∅ do
6:   Find the groups that have the least number of VMs, V^g_{j_1}, …, V^g_{j_{n′}};
7:   for i = 1, …, n′ do
8:     Find v*_i ∈ V, such that v*_i has the greatest weight, r_i, with the VMs in V^g_{j_i}, among all VMs in V;
9:   v* = arg max_{v*_i} {r_i}, and let j_{i*} be the corresponding group;
10:  V = V − v*; V^g_{j_{i*}} = V^g_{j_{i*}} + v*;

B. Server-Level VM Grouping

Assume that we have chosen to use n servers. In the above subsection, we have shown that, in order to achieve the minimal power consumption on the n servers, the VM allocation should be balanced. In other words, we should allocate ⌈M/n⌉ VMs to each of the first (M mod n) servers, and allocate ⌊M/n⌋ VMs to each of the next n − (M mod n) servers. Since our goal is to minimize the system's overall power consumption, we also need to consider which VMs should be allocated to which servers, such that fewer switches and switch ports need to be active to support the communication requirements among the VMs, and the power consumption of the DCN is minimized. The intuition is to assign a group of VMs that have high communication requirements among themselves to the same server, and to assign VMs with low communication requirements to different servers. We define the (communication) weight between two VMs, v_i and v_j, by w(v_i, v_j), which is the sum of the bandwidth requirements of flow_{i,j} and flow_{j,i}:

w(v_i, v_j) = b(v_i, v_j) + b(v_j, v_i). (11)
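To make the greedy strategy concrete, Algorithm 1 can be sketched in Python roughly as follows; this is a sketch under our reading of the pseudocode, where weight(u, v) stands for w(u, v) in Eq. (11) and ties are broken arbitrarily:

```python
def scvg(vms, weight, n):
    """Greedy sketch of Algorithm 1 (SCVG): partition vms into n balanced
    groups while trying to keep heavily-communicating VMs together."""
    remaining = set(vms)
    groups = [[] for _ in range(n)]
    placed = []
    # Lines 2-4: seed each group with a VM of least weight to already-placed VMs.
    for g in groups:
        v = min(remaining, key=lambda x: sum(weight(x, p) for p in placed))
        remaining.remove(v)
        placed.append(v)
        g.append(v)
    # Lines 5-10: repeatedly add to a smallest group the VM most attracted to it.
    while remaining:
        smallest = min(len(g) for g in groups)
        tying = [g for g in groups if len(g) == smallest]
        best = max(((sum(weight(v, m) for m in g), g, v)
                    for g in tying for v in remaining),
                   key=lambda t: t[0])
        _, g, v = best
        remaining.remove(v)
        g.append(v)
    return groups
```

With four VMs where only the pairs (v1, v3) and (v2, v4) communicate (the Fig. 1 scenario), scvg keeps each communicating pair in one group, and the group sizes stay balanced by construction.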
Thus, we need to partition the VMs into n groups, V^g_1, V^g_2, …, V^g_n, such that the overall weight between VMs that are not assigned to the same group is minimized, and the numbers of VMs in the n groups differ by at most 1. Notice that each group V^g_i will be assigned to a server. The above VM grouping problem falls into the class of the balanced minimum k-cut problem, which is a known NP-complete problem [19]. We develop a heuristic algorithm to solve it. First, all groups are initialized as empty, i.e., V^g_i = ∅, 1 ≤ i ≤ n. Second, we add one VM to each of the n groups; during this process, the VMs are added sequentially. The VM to be added to the first group can be chosen randomly; after that, the VM is removed from the remaining VM set. To add a VM to the jth group, we choose the VM that has the minimum weight with the VMs that have already been allocated. The process continues until each of the n groups has one and only one VM. The intuition is that VMs that have a low weight between them should be allocated to different groups. Finally, we allocate the remaining

VMs to the groups one by one; in each step, we choose to add a VM to the group that has the least number of VMs. The VM to be added is chosen such that it has the greatest weight to the VMs in the group, among all remaining VMs. Since, in each step, there might be several groups having the same minimum number of VMs, to break the ties, we choose the group that achieves the maximum communication requirement with the corresponding VM among the tying groups. Algorithm 1 gives the details of our algorithm.

C. Rack-Level VM Grouping

Notice that, given the number of servers to be used, n, the minimum number of racks that need to be used is n_r = ⌈n/(K/2)⌉, where K/2 is the number of servers in a rack. In order to power on the minimum number of ToR switches, we choose to use exactly n_r racks. Another question still needs to be answered: how many servers should be powered on in each specific rack? The constraint is that the number of servers used in each rack is no greater than N_r. To avoid unnecessarily powering on aggregate switches due to unbalanced numbers of active servers in racks, we adopt another balanced assignment: the first (n mod n_r) racks power on ⌈n/n_r⌉ servers each, and the next n_r − (n mod n_r) racks power on ⌊n/n_r⌋ servers each. Next, we need to decide which groups from V^g_1, …, V^g_n should be assigned to the same rack. The intuition is similar: we intend to assign the groups with high communication requirements among themselves to the same rack, while assigning the groups with low communication requirements to different racks, to reduce the inter-rack communication. We define the weight between two groups, V^g_i and V^g_j, as follows:

w(V^g_i, V^g_j) = Σ_{v_k ∈ V^g_i, v_l ∈ V^g_j} w(v_k, v_l). (12)

Then, we need to partition the n groups into n_r balanced parts, V^r_1, …, V^r_{n_r}. The same algorithm as Algorithm 1 can be applied to partition the n groups into n_r parts. Notice that each part V^r_i will be assigned to a rack.

D.
Pod-Level VM Grouping

We choose the minimum number of pods that should be used, n_p = ⌈n_r/(K/2)⌉, where K/2 is the number of ToR switches in a pod. We need to decide which parts should be allocated to the same pod. To avoid unnecessarily powering on core switches due to unbalanced numbers of active racks in pods, we adopt another balanced allocation: the first (n_r mod n_p) pods power on ⌈n_r/n_p⌉ racks each, and the next n_p − (n_r mod n_p) pods power on ⌊n_r/n_p⌋ racks each. We further need to partition the n_r parts into n_p categories, V^p_1, …, V^p_{n_p}, such that the inter-category communication is minimized. We define the weight between two parts, V^r_i and V^r_j, as follows:

w(V^r_i, V^r_j) = Σ_{V^g_k ∈ V^r_i, V^g_l ∈ V^r_j} w(V^g_k, V^g_l). (13)

The same algorithm as Algorithm 1 can be applied to partition the n_r parts into n_p categories. Notice that each category V^p_i will be assigned to a pod.

E. VM Placement and Flow Scheduling

Since the Fat-Tree architecture is symmetric, after the above hierarchical VM grouping process, we can simply place the first category on the first pod, place the first part of the first category on the first rack of the first pod, and place the first group of each part on the first server of the corresponding rack. After this VM placement, we then consider assigning flows among VMs to switches, ports, and links, to meet the communication requirements among all VMs. We assume that a flow cannot be split to take different paths. Given the VM placement, each flow may have multiple paths to take. If a flow's source and destination VMs belong to different servers in the same rack, the path for the flow is unique. If a flow's source and destination VMs belong to different racks in the same pod, there are K/2 different shortest paths. If a flow's source and destination VMs belong to different pods, there are (K/2)² shortest paths.
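Choosing among these candidate paths is where consolidation happens. A first-fit sketch, in our own illustrative encoding (flows as (id, bandwidth) pairs, paths as tuples of link ids, capacities normalized to 1), assigns each flow to the leftmost candidate path that still has room:

```python
def assign_flows(flows, candidate_paths, capacity=1.0):
    """First-fit flow consolidation sketch: route each flow on its
    leftmost candidate path with enough residual capacity."""
    residual = {}  # link id -> remaining capacity
    routing = {}   # flow id -> chosen path
    for fid, bw in flows:
        for path in candidate_paths[fid]:  # paths ordered leftmost-first
            if all(residual.get(link, capacity) >= bw for link in path):
                for link in path:
                    residual[link] = residual.get(link, capacity) - bw
                routing[fid] = path
                break
        else:
            raise ValueError("no feasible path for flow " + repr(fid))
    return routing
```

Because every flow prefers the leftmost path, traffic piles onto the same switches as long as capacity allows, and a new path (hence new switch ports) is opened only when the preferred one is full.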
In order to reduce the power consumption for communications, a common intuition is to consolidate the flows onto a minimal number of switches and ports, such that the remaining network elements can be put into sleep mode for power saving. We develop a heuristic based on the greedy bin packing strategy: for each flow between a pair of VMs, assign the flow to its leftmost path that has enough capacity for the flow. We conduct this assignment until all the flows have been assigned. Upon finishing, we can obtain the numbers of active aggregate switches and active core switches, and the corresponding ports that should be active. Thus, the power consumption of the DCN can easily be obtained.

F. Our Overall Approach

In the previous subsections, we have assumed a given number of servers, n. In this subsection, we address how to choose n to minimize the overall system's power consumption. Given the number of VMs to assign, M (M ≤ NF), the minimum number of servers that should be used is n_min = ⌈M/F⌉, and the maximum number of servers that can be used is n_max = min{M, N}. If the number of used servers is n (n_min ≤ n ≤ n_max), according to Theorem 1, in a balanced VM allocation, the optimal power consumption of the n servers can be calculated as follows:

p^v(n) = (M mod n)(p_v + (⌈M/n⌉·f_1)²) + (n − (M mod n))(p_v + (⌊M/n⌋·f_1)²). (14)

A simple method to determine the number of servers to use is to base it purely on the power consumption of the servers, i.e., choosing the number that minimizes p^v(n). However, our goal is to minimize the power consumption of the overall system, instead of the power consumption of the servers only. We denote by p^w(n) the power consumption of the switches (if using n servers) after the hierarchical VM grouping and flow consolidation processes. We propose to determine the optimal number of servers to be used as follows:

n_opt = arg min_{i = n_min, …, n_max} {p^v(i) + p^w(i)}. (15)
Given a problem instance, we first determine n_opt through the hierarchical VM grouping and flow consolidation processes. Then, we take the corresponding VM grouping and flow consolidation strategies that minimize the overall system's power consumption. This completes our joint power optimization method for the overall problem in the special cases.

G. An Illustration Example

Assume that we have 32 VMs to be assigned to a Fat-Tree with K = 4, which can accommodate 16 servers. Assume that the optimal number of servers to be used is determined as n_opt = 8. The communication requirements among the VMs are

shown in Fig. 3(a), where thick lines represent high bandwidth requirements, while thin lines represent low bandwidth requirements. First, the 32 VMs should be partitioned into n_opt = 8 groups, such that the inter-group communication is minimized. After that, we need to partition the 8 groups into 4 parts, where each part will be assigned to a rack, such that the inter-part communication is minimized. Finally, we need to further partition the 4 parts into 2 categories, such that the inter-category communication is minimized.

Fig. 3. An illustration example. (a) The communication requirements among the VMs, based on all the flows among them; thick lines represent high bandwidth requirements, and thin lines represent low bandwidth requirements. (b) The final assignment of the VMs to the servers, racks, and pods.

V. GENERAL CASES

In general cases, servers may have an arbitrary set of discrete frequencies, and VMs can have different utilization requirements. Instead of considering the general cases directly, we first consider the ideal case, where servers can have continuous operating frequencies, and VMs can be split arbitrarily.

A. Optimal VM Allocation Given the # of Servers to Be Used

Assume that the total utilization of all the VMs is U. If we have decided to use n servers to host the VMs, the optimal workload allocation is to evenly allocate the workload of the VMs to the n servers. We have the following theorem.

Theorem 2: Given a set of VMs whose total utilization is U, the optimal way to allocate the workload to n servers is to allocate U/n to each of the servers, with each server operating at frequency (U/n)·f_F, i.e., U/n.

Proof: Assume that the ith server is assigned workload u_i ≤ 1, 1 ≤ i ≤ n.
If server i operates at frequency f_v(i), the assigned normalized utilization under frequency f_v(i) should be less than or equal to 1, i.e., u_i / f_v(i) <= 1. The power consumption of server i is P^v_i = p^v + u_i f_v(i). Since u_i <= f_v(i) <= f_F = 1, server i's power consumption is minimized when f_v(i) = u_i. Thus, the minimal power consumption of server i is P^v_i = p^v + u_i^2. The power consumption of all servers is p^v_g(n) = sum_{i=1}^{n} P^v_i = n p^v + sum_{i=1}^{n} u_i^2. Notice that sum_{i=1}^{n} u_i = U. According to Jensen's inequality, (sum_{i=1}^{n} u_i^2)/n >= (sum_{i=1}^{n} u_i)^2 / n^2. Thus, p^v_g(n) is minimized when u_1 = u_2 = ... = u_n = U/n. The minimal power consumption on the n servers is p^v_g(n) = n p^v + U^2/n.

B. Our Overall Approach

Theorem 2 indicates that, in the ideal situation, the workload of the VMs should be perfectly balanced when partitioned onto the servers. Since modern power-optimized servers usually have a considerable number of discrete frequencies, and we are considering small application-specific VMs, the theorem provides a practical reference for allocating VMs to servers. In a practical situation, where servers operate on a set of discrete frequencies and VMs cannot be split, achieving a balanced workload partition for a given number of servers is already NP-hard [20], even without considering the communications among VMs. We develop another heuristic, based on the worst-fit partitioning scheme [20], to achieve a balanced workload partition while also reducing the communication requirements among the partitions. The algorithm is no different in essence from Algorithm 1. The only difference is that, in each step, instead of choosing the group with the minimum number of assigned VMs, the algorithm for the general case chooses the group with the minimum assigned utilization, to which it adds the corresponding VM.
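The utilization-balanced step of this heuristic can be sketched as follows (a simplified illustration of our own; it captures only the worst-fit balancing rule, not the communication-aware group preference, and the function name is hypothetical):

```python
# Worst-fit partitioning sketch (illustrative, not the paper's exact
# algorithm): VMs are taken in decreasing utilization order, and each
# VM is added to the group that currently has the minimum assigned
# utilization, keeping the per-group workloads balanced.
def worst_fit_partition(vm_utils, n_groups):
    groups = [[] for _ in range(n_groups)]   # VM indices per group
    loads = [0.0] * n_groups                 # assigned utilization per group
    order = sorted(range(len(vm_utils)), key=lambda v: -vm_utils[v])
    for v in order:
        g = loads.index(min(loads))          # group with minimum utilization
        groups[g].append(v)
        loads[g] += vm_utils[v]
    return groups, loads

# Example: ten VMs with utilizations in [0.1, 0.3] spread over 4 groups.
utils = [0.3, 0.25, 0.2, 0.2, 0.15, 0.15, 0.1, 0.1, 0.3, 0.25]
groups, loads = worst_fit_partition(utils, 4)
```

With a total utilization of 2.0 over 4 groups, the resulting loads stay close to the ideal 0.5 per group, in line with the balancing goal of Theorem 2.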
After that, the hierarchical VM grouping procedures, which allocate groups to parts and parts to categories, and the flow consolidation process, are essentially the same as those described in Section IV for the special cases. We propose to determine the number of servers that minimizes the overall system power consumption as follows:

n_opt = arg min_{n_min <= i <= n_max} { p^v_g(i) + p^w_g(i) },  (16)

where p^v_g(i) and p^w_g(i) are the power consumption of the servers and the switches, respectively, after the hierarchical VM placement and flow consolidation processes for the general cases.

VI. SIMULATIONS

We conduct extensive simulations to evaluate the power reductions of our proposed joint power optimization method. We denote our overall method by J SN, because our method Jointly applies the power management techniques on the servers with Scalable frequencies and on the data center Network, to minimize the overall system's power consumption. We compare our method with the following important methods.

No SN: This method does not allow frequency scaling on servers, nor does it apply flow consolidation in the DCN. All servers operate at the maximum frequency. The power consumption of a server is the static power p^v plus the utilization assigned to the server times 1. Notice that we normalize all frequency values by the maximum frequency value f_F, and all power consumption values by the maximum dynamic power consumption, i.e., f_F^2 = 1. In this method, all switches in the DCN are on, and all ports on each switch are active. The power consumption of a switch is equal to p^w plus the power consumption of all its active ports.

No S: This method does not allow frequency scaling on servers; it does apply flow consolidation in the DCN. It chooses the minimum number of servers to accommodate all the VMs. All the chosen servers operate at the maximum frequency; the other servers are powered off and consume zero power.
Algorithms similar to ours are applied to conduct balanced VM allocation and assignment, and flow scheduling in the

[Fig. 4. Simulations for the special cases. (a) K = 6, M = 54. (b) K = 6, M = 81. (c) K = 6, M = 108. Fig. 5. Simulations for the general cases. (a) K = 6, M = 54. (b) K = 6, M = 81. (c) K = 6, M = 108.]

DCN. Notice that this method is the state-of-the-art workload consolidation method.

No N: This method does apply frequency scaling on servers; it does not apply flow consolidation in the DCN. Based on the power consumption characteristics of the servers, it chooses the optimal number of servers to power on, to minimize the power consumption on servers. However, all switches are powered on, and all ports are active.

S SN: This method applies both frequency scaling on servers and flow consolidation in the DCN. However, it first chooses the optimal number of servers to power on, to minimize the power consumption on servers, as in No N. After that, algorithms similar to ours are applied to conduct balanced VM allocation and assignment, and flow consolidation in the DCN.

A. Simulation Settings

We conduct simulations for both the special cases and the general cases, for Fat-Tree architectures with K = 6. For an architecture with switch port number K = 6, the total number of servers is N = 54.

1) The Special Cases: Each server has a set of five available frequencies: f_1 = 0.2, f_2 = 0.4, f_3 = 0.6, f_4 = 0.8, and f_5 = 1. Each VM's normalized utilization is 0.2. For each VM, we assume that it has a random number of flows going out of it to other VMs; the random numbers follow a distribution close to the normal distribution, with its center at K. To evaluate the performance of our proposed joint power optimization method under various computation workloads, we choose fixed values for p^v, p^w, and p^port. We set p^v = 0.2, i.e., the static power is 20% of the maximum dynamic power. We set p^w = 0.2 and p^port = 0.02. The settings are based on practical values, and similar settings have been applied in previous works []. As reported in [21], servers usually have low utilization in data centers, though they are designed to accommodate the worst cases.
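Under these settings, the power model of Section V can be evaluated directly. The sketch below (our own illustration; the function name is hypothetical) picks, for a given server load, the smallest available frequency that covers it and returns the resulting power p^v + u * f:

```python
# Per-server power under the special-case settings: five discrete
# frequencies, static power p_v = 0.2, and dynamic power u * f at the
# chosen frequency f (frequencies normalized so that f_F = 1).
FREQS = [0.2, 0.4, 0.6, 0.8, 1.0]
P_V = 0.2

def server_power(u):
    """Minimal power for normalized utilization u on one server."""
    if u == 0:
        return 0.0                       # unused servers are powered off
    f = min(f for f in FREQS if f >= u)  # lowest frequency covering u
    return P_V + u * f

# A server hosting two VMs (u = 0.4) runs at f = 0.4:
# power = 0.2 + 0.4 * 0.4 = 0.36
```

Because the per-server power grows with u * f, packing VMs onto fewer servers forces higher frequencies, which is exactly the tension Eq. (16) resolves.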
We vary the number of VMs to be equal to the number of servers, 1.5 times the number of servers, and 2 times the number of servers. These correspond to different average data center utilizations, namely 0.2, 0.3, and 0.4, respectively. To evaluate the performance of our method under different data center configurations, we fix the number of VMs at M = 108, vary p^v as 0.1, 0.2, 0.3, 0.4, and 0.5, and vary p^w as 0.2, 0.4, and 0.6.

2) The General Cases: We also design simulations for the general cases. To verify that our joint method applies to the general cases, we let the servers have the following set of operating frequencies: f_1 = 0.3, f_2 = 0.5, f_3 = 0.75, f_4 = 0.8, and f_5 = 1, instead of the previous case where f_i = i f_1. We randomly generate the VMs' utilizations within [0.1, 0.3], with a mean value of 0.2. To evaluate the performance of our method under various computation workloads, we choose fixed values for p^v, p^w, and p^port: p^v = 0.2, p^w = 0.2, and p^port = 0.02. The number of VMs is varied to be N, 3N/2, and 2N, respectively. To evaluate the performance of our method under different data center configurations, we fix M = 108 and p^port = 0.02, and vary p^v and p^w.

B. Simulation Results and Analysis

The simulation results for the special cases are shown in Fig. 4 and Table II. As we can see from Fig. 4, the servers consume the majority of the power, especially when frequency scaling on servers is not allowed. Thus, applying power management techniques on servers has great potential for power reduction. When frequency scaling is enabled on servers, the power consumption on servers is reduced from 21.6 (No SN) to 9.72 (No N), which is about a 55.0% reduction. When flow consolidation and power management on switches are involved, the power consumption on switches is reduced from 14.4 (No SN) to 3.9 (No S), which is about a 72.92% reduction.
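These headline numbers can be reproduced from the stated settings (a sketch under our reading of the model, for the K = 6, M = 54 configuration; the switch and port counts follow the standard K-ary fat-tree formulas):

```python
# Reproducing the special-case figures: K = 6, M = 54 VMs of
# utilization 0.2 each, p_v = 0.2, p_w = 0.2, p_port = 0.02.
K, M, u_vm = 6, 54, 0.2
N = K**3 // 4                                  # 54 servers in a K-ary fat-tree
p_v, p_w, p_port = 0.2, 0.2, 0.02

U = M * u_vm                                   # total utilization: 10.8
# No SN: all N servers at f_F = 1, consuming p_v + u each.
servers_no_sn = N * p_v + U                    # 21.6
# No N: 27 servers, two VMs each, running at f = 0.4.
n = 27
servers_no_n = n * (p_v + (U / n) * 0.4)       # 9.72
# No SN DCN: 5K^2/4 switches on, K active ports each.
switches = 5 * K**2 // 4                       # 45
dcn_no_sn = switches * (p_w + K * p_port)      # 14.4
```

The reductions then follow: (21.6 - 9.72)/21.6 = 55.0% on servers and (14.4 - 3.9)/14.4 = 72.92% on switches, matching the figures reported above.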
However, since No N and No S do not apply power management both on servers and in the DCN at the same time, they still have a large overall power consumption. S SN and J SN apply power management both on servers and in the DCN at the same time. However, S SN chooses the number

[TABLE II. Overall power consumption for different platform parameters (special cases): No SN, No S, No N, S SN, and J SN, for p^v in {0.1, ..., 0.5} and p^w in {0.2, 0.4, 0.6}. TABLE III. Overall power consumption for different platform parameters (general cases), with the same methods and parameter ranges.]

of servers to use so as to minimize the power consumption on servers only; its power consumption on switches is much greater than that of J SN, though its power consumption on servers is lower than that of J SN. Our J SN method chooses the number of servers to minimize the overall power consumption; thus, it achieves the minimum overall power consumption. From Table II, we can see that the power reduction achieved by J SN is more significant when the static power consumption p^v is smaller. Since modern power/energy-optimized servers, especially in data centers, tend to have low static power consumption, J SN can achieve a considerable power reduction in modern and future data centers. All simulation results show that our joint power optimization method achieves the minimum power consumption among all the methods; this is because our method always targets reducing the overall system's power consumption. The simulation results for the general cases are shown in Fig. 5 and Table III. The power consumption of the various methods demonstrates patterns similar to those in the special cases. These results verify that our joint power optimization method works well in the general cases.

VII. CONCLUSION

In this paper, we consider joint power optimization through VM placement on servers with scalable frequencies and flow scheduling in DCNs. Due to the convex relation between a server's power consumption and its operating frequency, we prove that, given the number of servers to be used, the computation workload should be allocated to the chosen servers in a balanced way, to achieve minimal power consumption on servers.
To reduce the power consumption in the DCN, we further consider the flow requirements among the VMs during VM partitioning and assignment. Also, after VM placement, flow consolidation is applied to reduce the number of active switches and ports. Extensive simulations show that our joint power optimization method outperforms various existing state-of-the-art methods in terms of saving the system's overall power consumption.

REFERENCES

[1] W. Forrest, J. Kaplan, and N. Kindler, "Data centers: how to cut carbon emissions and costs," McKinsey Bus. Technol., vol. 14, no. 6, 2008.
[2] D. Abts, M. Marty, P. Wells, P. Klausler, and H. Liu, "Energy proportional datacenter networks," in ISCA, 2010.
[3] B. Heller, S. Seetharaman, P. Mahadevan, Y. Yiakoumis, P. Sharma, S. Banerjee, and N. McKeown, "ElasticTree: Saving energy in data center networks," in 7th USENIX NSDI, 2010.
[4] L. A. Barroso and U. Holzle, "The case for energy-proportional computing," Computer, vol. 40, no. 12, 2007.
[5] M. Al-Fares, A. Loukissas, and A. Vahdat, "A scalable, commodity data center network architecture," in ACM SIGCOMM Conf. on Data Comm., 2008.
[6] V. Petrucci, E. Carrera, O. Loques, J. C. B. Leite, and D. Mosse, "Optimized management of power and performance for virtualized heterogeneous server clusters," in 11th IEEE/ACM CCGrid, May 2011.
[7] S. Srikantaiah, A. Kansal, and F. Zhao, "Energy aware consolidation for cloud computing," in 2008 Conference on Power Aware Computing and Systems, 2008.
[8] A. Gandhi, M. Harchol-Balter, R. Das, and C. Lefurgy, "Optimal power allocation in server farms," in Eleventh International Joint Conference on Measurement and Modeling of Computer Systems, 2009.
[9] X. Wang and Y. Wang, "Co-Con: Coordinated control of power and application performance for virtualized server clusters," in International Workshop on Quality of Service, July 2009.
[10] X. Wang, Y. Yao, X. Wang, K. Lu, and Q. Cao, "CARPO: Correlation-aware power optimization in data center networks," in IEEE INFOCOM, Mar. 2012.
[11] R. Liu, H. Gu, X. Yu, and X. Nian, "Distributed flow scheduling in energy-aware data center networks," IEEE Communications Letters, vol. 17, no. 4, Apr. 2013.
[12] Y. Shang, D. Li, and M. Xu, "Energy-aware routing in data center network," in the First ACM SIGCOMM Workshop on Green Networking, 2010.
[13] N. H. Thanh, B. D. Cuong, T. D. Thien, P. N. Nam, N. Q. Thu, T. T. Huong, and T. M. Nam, "ECODANE: A customizable hybrid testbed for green data center networks," in International Conference on Advanced Technologies for Communications, Oct. 2013.
[14] Y. Zhang and N. Ansari, "HERO: Hierarchical energy optimization for data center networks," in IEEE ICC, June 2012.
[15] D. Kliazovich, P. Bouvry, and S. Khan, "DENS: Data center energy-efficient network-aware scheduling," in IEEE/ACM Int'l Conference on Green Computing and Communications & Int'l Conference on Cyber, Physical and Social Computing, Dec. 2010.
[16] K. Zheng, X. Wang, L. Li, and X. Wang, "Joint power optimization of data center network and servers with correlation analysis," in IEEE INFOCOM, Mar. 2012.
[17] H. Jin, T. Cheocherngngarn, D. Levy, A. Smith, D. Pan, J. Liu, and N. Pissinou, "Joint host-network optimization for energy-efficient data center networking," in IEEE IPDPS, May 2013.
[18] L. Wang, F. Zhang, J. A. Aroca, A. Vasilakos, K. Zheng, C. Hou, D. Li, and Z. Liu, "GreenDCN: A general framework for achieving energy efficiency in data center networks," IEEE JSAC, vol. 32, no. 1, pp. 4-15, Jan. 2014.
[19] H. Saran and V. V. Vazirani, "Finding k-cuts within twice the optimal," in 32nd Annual Symposium on Foundations of Computer Science, Oct. 1991.
[20] H. Aydin and Q. Yang, "Energy-aware partitioning for multiprocessor real-time systems," in IPDPS, Apr. 2003.
[21] P. Padala, K. G. Shin, X. Zhu, M. Uysal, Z. Wang, S. Singhal, A. Merchant, and K. Salem, "Adaptive control of virtualized resources in utility computing environments," in 2nd ACM SIGOPS/EuroSys European Conference on Computer Systems, 2007.


Performance Improvement of Hardware-Based Packet Classification Algorithm

Performance Improvement of Hardware-Based Packet Classification Algorithm Performance Improvement of Hardware-Based Packet Classification Algorithm Yaw-Chung Chen 1, Pi-Chung Wang 2, Chun-Liang Lee 2, and Chia-Tai Chan 2 1 Department of Computer Science and Information Engineering,

More information

Virtual Machine Placement in Cloud Computing

Virtual Machine Placement in Cloud Computing Indian Journal of Science and Technology, Vol 9(29), DOI: 10.17485/ijst/2016/v9i29/79768, August 2016 ISSN (Print) : 0974-6846 ISSN (Online) : 0974-5645 Virtual Machine Placement in Cloud Computing Arunkumar

More information

Minimal Test Cost Feature Selection with Positive Region Constraint

Minimal Test Cost Feature Selection with Positive Region Constraint Minimal Test Cost Feature Selection with Positive Region Constraint Jiabin Liu 1,2,FanMin 2,, Shujiao Liao 2, and William Zhu 2 1 Department of Computer Science, Sichuan University for Nationalities, Kangding

More information

Review On Data Replication with QoS and Energy Consumption for Data Intensive Applications in Cloud Computing

Review On Data Replication with QoS and Energy Consumption for Data Intensive Applications in Cloud Computing Review On Data Replication with QoS and Energy Consumption for Data Intensive Applications in Cloud Computing Ms. More Reena S 1, Prof.Nilesh V. Alone 2 Department of Computer Engg, University of Pune

More information

A Hop-by-hop Energy Efficient Distributed Routing Scheme

A Hop-by-hop Energy Efficient Distributed Routing Scheme A Hop-by-hop Energy Efficient Distributed Routing Scheme Chenying Hou Technology, CAS University of Chinese Academy of Sciences houchenying@ict.ac.cn Lin Wang Technology, CAS University of Chinese Academy

More information

Multi-path based Algorithms for Data Transfer in the Grid Environment

Multi-path based Algorithms for Data Transfer in the Grid Environment New Generation Computing, 28(2010)129-136 Ohmsha, Ltd. and Springer Multi-path based Algorithms for Data Transfer in the Grid Environment Muzhou XIONG 1,2, Dan CHEN 2,3, Hai JIN 1 and Song WU 1 1 School

More information

Topic: Local Search: Max-Cut, Facility Location Date: 2/13/2007

Topic: Local Search: Max-Cut, Facility Location Date: 2/13/2007 CS880: Approximations Algorithms Scribe: Chi Man Liu Lecturer: Shuchi Chawla Topic: Local Search: Max-Cut, Facility Location Date: 2/3/2007 In previous lectures we saw how dynamic programming could be

More information

A scheduling model of virtual machine based on time and energy efficiency in cloud computing environment 1

A scheduling model of virtual machine based on time and energy efficiency in cloud computing environment 1 Acta Technica 62, No. 3B/2017, 63 74 c 2017 Institute of Thermomechanics CAS, v.v.i. A scheduling model of virtual machine based on time and energy efficiency in cloud computing environment 1 Xin Sui 2,

More information

Effective Memory Access Optimization by Memory Delay Modeling, Memory Allocation, and Slack Time Management

Effective Memory Access Optimization by Memory Delay Modeling, Memory Allocation, and Slack Time Management International Journal of Computer Theory and Engineering, Vol., No., December 01 Effective Memory Optimization by Memory Delay Modeling, Memory Allocation, and Slack Time Management Sultan Daud Khan, Member,

More information

Optimization of Heterogeneous Caching Systems with Rate Limited Links

Optimization of Heterogeneous Caching Systems with Rate Limited Links IEEE ICC Communication Theory Symposium Optimization of Heterogeneous Caching Systems with Rate Limited Links Abdelrahman M Ibrahim, Ahmed A Zewail, and Aylin Yener Wireless Communications and Networking

More information

Study of Load Balancing Schemes over a Video on Demand System

Study of Load Balancing Schemes over a Video on Demand System Study of Load Balancing Schemes over a Video on Demand System Priyank Singhal Ashish Chhabria Nupur Bansal Nataasha Raul Research Scholar, Computer Department Abstract: Load balancing algorithms on Video

More information

Research on Load Balancing in Task Allocation Process in Heterogeneous Hadoop Cluster

Research on Load Balancing in Task Allocation Process in Heterogeneous Hadoop Cluster 2017 2 nd International Conference on Artificial Intelligence and Engineering Applications (AIEA 2017) ISBN: 978-1-60595-485-1 Research on Load Balancing in Task Allocation Process in Heterogeneous Hadoop

More information

Traffic Grooming and Regenerator Placement in Impairment-Aware Optical WDM Networks

Traffic Grooming and Regenerator Placement in Impairment-Aware Optical WDM Networks Traffic Grooming and Regenerator Placement in Impairment-Aware Optical WDM Networks Ankitkumar N. Patel, Chengyi Gao, and Jason P. Jue Erik Jonsson School of Engineering and Computer Science The University

More information

Job Re-Packing for Enhancing the Performance of Gang Scheduling

Job Re-Packing for Enhancing the Performance of Gang Scheduling Job Re-Packing for Enhancing the Performance of Gang Scheduling B. B. Zhou 1, R. P. Brent 2, C. W. Johnson 3, and D. Walsh 3 1 Computer Sciences Laboratory, Australian National University, Canberra, ACT

More information

Network Topology Control and Routing under Interface Constraints by Link Evaluation

Network Topology Control and Routing under Interface Constraints by Link Evaluation Network Topology Control and Routing under Interface Constraints by Link Evaluation Mehdi Kalantari Phone: 301 405 8841, Email: mehkalan@eng.umd.edu Abhishek Kashyap Phone: 301 405 8843, Email: kashyap@eng.umd.edu

More information

Load Balancing in Distributed System through Task Migration

Load Balancing in Distributed System through Task Migration Load Balancing in Distributed System through Task Migration Santosh Kumar Maurya 1 Subharti Institute of Technology & Engineering Meerut India Email- santoshranu@yahoo.com Khaleel Ahmad 2 Assistant Professor

More information

DESIGN AND ANALYSIS OF ALGORITHMS. Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES

DESIGN AND ANALYSIS OF ALGORITHMS. Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES DESIGN AND ANALYSIS OF ALGORITHMS Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES http://milanvachhani.blogspot.in USE OF LOOPS As we break down algorithm into sub-algorithms, sooner or later we shall

More information

Achieving Energy-Proportionality in Fat-tree DCNs

Achieving Energy-Proportionality in Fat-tree DCNs Achieving Energy-Proportionality in Fat-tree DCNs Qing Yi Department of Computer Science Portland State University Portland, Oregon 977 Email: yiq@cs.pdx.edu Suresh Singh Department of Computer Science

More information

Online Optimization of VM Deployment in IaaS Cloud

Online Optimization of VM Deployment in IaaS Cloud Online Optimization of VM Deployment in IaaS Cloud Pei Fan, Zhenbang Chen, Ji Wang School of Computer Science National University of Defense Technology Changsha, 4173, P.R.China {peifan,zbchen}@nudt.edu.cn,

More information

CHAPTER 6 ORTHOGONAL PARTICLE SWARM OPTIMIZATION

CHAPTER 6 ORTHOGONAL PARTICLE SWARM OPTIMIZATION 131 CHAPTER 6 ORTHOGONAL PARTICLE SWARM OPTIMIZATION 6.1 INTRODUCTION The Orthogonal arrays are helpful in guiding the heuristic algorithms to obtain a good solution when applied to NP-hard problems. This

More information

Using Hybrid Algorithm in Wireless Ad-Hoc Networks: Reducing the Number of Transmissions

Using Hybrid Algorithm in Wireless Ad-Hoc Networks: Reducing the Number of Transmissions Using Hybrid Algorithm in Wireless Ad-Hoc Networks: Reducing the Number of Transmissions R.Thamaraiselvan 1, S.Gopikrishnan 2, V.Pavithra Devi 3 PG Student, Computer Science & Engineering, Paavai College

More information

Connected Components of Underlying Graphs of Halving Lines

Connected Components of Underlying Graphs of Halving Lines arxiv:1304.5658v1 [math.co] 20 Apr 2013 Connected Components of Underlying Graphs of Halving Lines Tanya Khovanova MIT November 5, 2018 Abstract Dai Yang MIT In this paper we discuss the connected components

More information

Surveying Formal and Practical Approaches for Optimal Placement of Replicas on the Web

Surveying Formal and Practical Approaches for Optimal Placement of Replicas on the Web Surveying Formal and Practical Approaches for Optimal Placement of Replicas on the Web TR020701 April 2002 Erbil Yilmaz Department of Computer Science The Florida State University Tallahassee, FL 32306

More information

AUTO-SCALING OF NETWORK RESOURCES

AUTO-SCALING OF NETWORK RESOURCES AUTO-SCALING OF NETWORK RESOURCES Sabidur Rahman Nov. 4, 2016 Friday Group Meeting, Netlab 1 Agenda Define auto-scaling Network Function Virtualization Motivation Literature review Problem statement and

More information

A PMU-Based Three-Step Controlled Separation with Transient Stability Considerations

A PMU-Based Three-Step Controlled Separation with Transient Stability Considerations Title A PMU-Based Three-Step Controlled Separation with Transient Stability Considerations Author(s) Wang, C; Hou, Y Citation The IEEE Power and Energy Society (PES) General Meeting, Washington, USA, 27-31

More information

Prefix Computation and Sorting in Dual-Cube

Prefix Computation and Sorting in Dual-Cube Prefix Computation and Sorting in Dual-Cube Yamin Li and Shietung Peng Department of Computer Science Hosei University Tokyo - Japan {yamin, speng}@k.hosei.ac.jp Wanming Chu Department of Computer Hardware

More information

CQNCR: Optimal VM Migration Planning in Cloud Data Centers

CQNCR: Optimal VM Migration Planning in Cloud Data Centers CQNCR: Optimal VM Migration Planning in Cloud Data Centers Presented By Md. Faizul Bari PhD Candidate David R. Cheriton School of Computer science University of Waterloo Joint work with Mohamed Faten Zhani,

More information

Diamond: Nesting the Data Center Network with Wireless Rings in 3D Space

Diamond: Nesting the Data Center Network with Wireless Rings in 3D Space Diamond: Nesting the Data Center Network with Wireless Rings in 3D Space Yong Cui 1, Shihan Xiao 1, Xin Wang 2, Zhenjie Yang 1, Chao Zhu 1, Xiangyang Li 1,3, Liu Yang 4, and Ning Ge 1 1 Tsinghua University

More information

A New Combinatorial Design of Coded Distributed Computing

A New Combinatorial Design of Coded Distributed Computing A New Combinatorial Design of Coded Distributed Computing Nicholas Woolsey, Rong-Rong Chen, and Mingyue Ji Department of Electrical and Computer Engineering, University of Utah Salt Lake City, UT, USA

More information

Benefits of Coded Placement for Networks with Heterogeneous Cache Sizes

Benefits of Coded Placement for Networks with Heterogeneous Cache Sizes Benefits of Coded Placement for Networks with Heterogeneous Cache Sizes Abdelrahman M. Ibrahim, Ahmed A. Zewail, and Aylin Yener ireless Communications and Networking Laboratory CAN Electrical Engineering

More information

Group Secret Key Generation Algorithms

Group Secret Key Generation Algorithms Group Secret Key Generation Algorithms Chunxuan Ye and Alex Reznik InterDigital Communications Corporation King of Prussia, PA 9406 Email: {Chunxuan.Ye, Alex.Reznik}@interdigital.com arxiv:cs/07024v [cs.it]

More information

Insensitive Traffic Splitting in Data Networks

Insensitive Traffic Splitting in Data Networks Juha Leino and Jorma Virtamo. 2005. Insensitive traffic splitting in data networs. In: Proceedings of the 9th International Teletraffic Congress (ITC9). Beijing, China, 29 August 2 September 2005, pages

More information

Maximum Coverage Range based Sensor Node Selection Approach to Optimize in WSN

Maximum Coverage Range based Sensor Node Selection Approach to Optimize in WSN Maximum Coverage Range based Sensor Node Selection Approach to Optimize in WSN Rinku Sharma 1, Dr. Rakesh Joon 2 1 Post Graduate Scholar, 2 Assistant Professor, Department of Electronics and Communication

More information

Directional Sensor Control for Maximizing Information Gain

Directional Sensor Control for Maximizing Information Gain Directional Sensor Control for Maximizing Information Gain Shankarachary Ragi a, Hans D. Mittelmann b, and Edwin K. P. Chong a a Department of Electrical and Computer Engineering, Colorado State University,

More information

Efficient Load Balancing and Disk Failure Avoidance Approach Using Restful Web Services

Efficient Load Balancing and Disk Failure Avoidance Approach Using Restful Web Services Efficient Load Balancing and Disk Failure Avoidance Approach Using Restful Web Services Neha Shiraz, Dr. Parikshit N. Mahalle Persuing M.E, Department of Computer Engineering, Smt. Kashibai Navale College

More information

Toward Low Cost Workload Distribution for Integrated Green Data Centers

Toward Low Cost Workload Distribution for Integrated Green Data Centers Toward Low Cost Workload Distribution for Integrated Green Data Centers 215 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or

More information

Heuristic Algorithms for Multiconstrained Quality-of-Service Routing

Heuristic Algorithms for Multiconstrained Quality-of-Service Routing 244 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL 10, NO 2, APRIL 2002 Heuristic Algorithms for Multiconstrained Quality-of-Service Routing Xin Yuan, Member, IEEE Abstract Multiconstrained quality-of-service

More information

Topology Control in 3-Dimensional Networks & Algorithms for Multi-Channel Aggregated Co

Topology Control in 3-Dimensional Networks & Algorithms for Multi-Channel Aggregated Co Topology Control in 3-Dimensional Networks & Algorithms for Multi-Channel Aggregated Convergecast Amitabha Ghosh Yi Wang Ozlem D. Incel V.S. Anil Kumar Bhaskar Krishnamachari Dept. of Electrical Engineering,

More information

Fractal Graph Optimization Algorithms

Fractal Graph Optimization Algorithms Fractal Graph Optimization Algorithms James R. Riehl and João P. Hespanha Abstract We introduce methods of hierarchically decomposing three types of graph optimization problems: all-pairs shortest path,

More information

Delay-minimal Transmission for Energy Constrained Wireless Communications

Delay-minimal Transmission for Energy Constrained Wireless Communications Delay-minimal Transmission for Energy Constrained Wireless Communications Jing Yang Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, M0742 yangjing@umd.edu

More information

Efficient Migration A Leading Solution for Server Consolidation

Efficient Migration A Leading Solution for Server Consolidation Efficient Migration A Leading Solution for Server Consolidation R Suchithra, MCA Department Jain University Bangalore,India N.Rajkumar, PhD. Department of Software Engineering Ramakrishna College of Engineering

More information

8. CONCLUSION AND FUTURE WORK. To address the formulated research issues, this thesis has achieved each of the objectives delineated in Chapter 1.

8. CONCLUSION AND FUTURE WORK. To address the formulated research issues, this thesis has achieved each of the objectives delineated in Chapter 1. 134 8. CONCLUSION AND FUTURE WORK 8.1 CONCLUSION Virtualization and internet availability has increased virtualized server cluster or cloud computing environment deployments. With technological advances,

More information

Rollout Algorithms for Logical Topology Design and Traffic Grooming in Multihop WDM Networks

Rollout Algorithms for Logical Topology Design and Traffic Grooming in Multihop WDM Networks Rollout Algorithms for Logical Topology Design and Traffic Grooming in Multihop WDM Networks Kwangil Lee Department of Electrical and Computer Engineering University of Texas, El Paso, TX 79928, USA. Email:

More information

OPTIMIZING ENERGY CONSUMPTION OF CLOUD COMPUTING SYSTEMS

OPTIMIZING ENERGY CONSUMPTION OF CLOUD COMPUTING SYSTEMS School of Innovation Design and Engineering Västerås, Sweden Thesis for the Degree of Master of Science in Computer Science with Specialization in Embedded Systems 15.0 credits OPTIMIZING ENERGY CONSUMPTION

More information

VM Migration Planning in Software-Defined Data Center Networks

VM Migration Planning in Software-Defined Data Center Networks 206 IEEE 8th International Conference on High Performance Computing and Communications; IEEE 4th International Conference on Smart City; IEEE 2nd International Conference on Data Science and Systems Migration

More information