Reducing Peak Power Consumption in Data Centers

Thesis

Presented in partial fulfillment of the requirements for the Degree Master of Science in the Graduate School of The Ohio State University

By

George Michael Green, B.A., M.A., J.D.

Graduate Program in Computer Science and Engineering

The Ohio State University

2013

Master's Thesis Committee:

Rajiv Ramnath, Advisor
Jayashree Ramanathan

Copyright by George Michael Green 2013

Abstract

Data centers are assets of high value in the enterprise, but they are currently being devalued because their actual lifetimes are falling well short of their predicted lifetimes, owing to rapid growth in peak power consumption that enterprises have not been able to control effectively. To address this devaluation of the data center, it is argued, based on a review of relevant research, that enterprises must reduce the growth of peak power consumption both by increasing server utilization percentages during peak power demand periods and by decreasing idle penalties. These objectives can be accomplished by using a combination of server consolidation, more effective collocation of applications, server virtualization, dynamic voltage and frequency scaling, and CPU sleep states.

Acknowledgments

First and foremost, I wish to acknowledge and thank my advisor, Dr. Rajiv Ramnath, for his guidance and support during the writing of this thesis, and throughout my studies as a graduate student in computer science and engineering. I have benefited greatly by having had the opportunity to study under him, and no student could hope to have a better advisor. I also wish to thank the other member of my thesis committee, Dr. Jay Ramanathan, both for her willingness to serve on my committee, and for her teaching and support in the very valuable classes that I took with her. I also wish to thank my family for their love and support. Most especially I wish to thank my parents, who always both encouraged me and supported me in whatever I have pursued.

Vita

Prerequisite coursework in statistics, mathematics, and computer science at The Ohio State University for entry into the graduate program in Computer Science and Engineering

2008 to present: Graduate student, Department of Computer Science and Engineering, The Ohio State University

Fields of Study

Major Field: Computer Science and Engineering

Table of Contents

Abstract
Acknowledgments
Vita
Table of Contents
List of Tables
List of Figures
Chapter 1: Business Problem
    Fixed Power Capacity
    The Capacity Management Problem
    Foreshortening of Projected Data Center Lifetime
Chapter 2: Problem Analysis
    Research Focuses on Average or Total Power Rather than Peak Power
    The Idle Server Problem
    Relationship between CPU Utilization and Power Consumed
    Need to Increase Performance Per Watt (PPW)
    Problems with Existing Approaches
        Inability to determine precisely how power is being used
        Inability to increase PPW while complying with SLAs
        Inability to reduce the rate of growth of power demand
        Inability to improve the fit between collocated applications and hardware
Chapter 3: Previous Best Approaches to Managing Data Center Power
    Server Consolidation
    Virtualization
    Dynamic Voltage and Frequency Scaling (DVFS)
    Processor Sleep States
    Sleep State Latencies
    Application Aware Power Management of Idle States
    Improvements in Processors
Chapter 4: Conclusion and Future Research
    Raising Average and Peak Server Utilization Levels
    Reduction of Idle Penalties
References

List of Tables

Table 1. Energy Losses in Server CPUs at Various Levels of Utilization
Table 2. Energy Losses in Server CPUs at Various Levels of Utilization Versus Newer Hardware with Lower Idle Power Consumption

List of Figures

Figure 1. Percentage of Data Centers that Anticipate the Need for Additional Capacity
Figure 2. Power Used by Typical Server CPUs at Various Levels of Utilization, Compared with Energy Proportional Performance

Chapter 1: Business Problem

Fixed Power Capacity

Data centers (DCs) are one of the largest capital investments that modern enterprises make. From a business point of view, the ability of the enterprise to make accurate projections about the lifetime of its capital investments, especially the largest ones, is critical to the financial health of the business. In turn, the financial health of the business directly affects its prospects of continuing commercial success. For these reasons, the ability to accurately project DC lifetime must be among the highest priorities for the modern enterprise. As the enterprise attempts to project DC lifetime as accurately as possible, the most significant constraint on this lifetime is the fixed power capacity of the DC. Simply put, the enterprise must be able to project how long

the DC will be able to operate without the demand for peak power exceeding the fixed power capacity of the DC. There are various reasons why the power capacity of a DC is fixed, but in short, the electrical infrastructure external to the DC, through which electrical power is delivered, cannot easily be modified to provide additional power capacity after the DC has been constructed, so the power capacity of the DC is effectively fixed at the beginning of its life. Before undertaking the construction of a new DC, the enterprise projects future IT needs and the accompanying demand for DC power. Based on these projections, and the desired life of the DC, the enterprise selects the size and power capacity of the new DC. For example, suppose the enterprise's current use of peak power is approaching 100 kilowatts, and its current DC has a power capacity equal to that amount. In order to ensure the availability of IT service without interruption, the enterprise will need to construct a new DC, which must be completed before the total power demand of the enterprise's current DC(s) reaches the total power capacity of the current DC(s). It is usually the case that the enterprise can project the short-term rate of growth of power

demand more reliably than it can predict the long-term rate of growth. If the enterprise projects that the demand for peak power will grow by 15 per cent in the following year, then an additional 15 kilowatts of peak power capacity will be required at the end of that time. Normally, unless there is some specific reason to suppose otherwise, managers will also assume that for every year in the foreseeable future, the rate of increase in the demand for peak power will be 15 per cent. On this basis, if the enterprise intends to construct a new DC that will meet its needs for 10 years, the required peak power capacity of the new DC will be 100 kW × (1.15)^10 ≈ 405 kW. It is important to observe that even relatively small errors in the estimate of annual peak power growth may result in a significant foreshortening of the life of the new DC. Moreover, such errors are quite common in the current context of rapid growth in peak power demand, as discussed further below. To illustrate the significance of a relatively small error in estimating future power needs, suppose that, in the example above, the actual rate of growth in peak power for the first year of the new DC's life turns out to be 20 per cent per year; that is, suppose that the enterprise DC managers underestimate the following

year's power growth by 5 kilowatts, which is only 5% of the DC's peak power capacity before the new DC is constructed. It is rare for the growth rate of peak power to fall from year to year, so if that average rate of 20% growth continues for the life of the DC, the new DC's peak power capacity of roughly 405 kW will be reached in approximately 7.7 years, a reduction of nearly one-fourth in the lifetime of the DC. This example illustrates why even a small error in estimating growth in peak power demand is such a significant problem for the enterprise. The more difficult it is to predict the future growth of peak power demand, the more difficult it is to make accurate predictions of DC lifetime. It might be thought that a solution to this problem is for managers to be conservative in projecting peak power growth rates. However, the larger the DC and the greater its power capacity (and hence its expected lifetime), the greater the cost not only of constructing and financing it, but also of operating it; in this sense, being excessively conservative in predicting the rate of peak power growth is not really a solution, but rather the trading of one problem for a different, but equally troublesome, one. For this reason, in the final analysis,

the enterprise must be able to make accurate predictions about the lifetime of the DC in order to do an accurate cost-benefit analysis of the capital cost of the DC versus its long-term value to the enterprise. As a result of the fixed power capacity of the DC, in order to provide an accurate prediction of data center lifetime, the enterprise must be able to predict how peak power consumption in the data center will increase over time. Simply put, data center lifetime depends on peak power consumption, because once the peak demand for DC power exceeds the data center's fixed power capacity, the DC can no longer meet the IT needs of the enterprise without building one or more additional DCs. Construction of new DC facilities, of course, requires additional capital investment, so the original problem starts anew. If the peak power demand of the DC does not reach its fixed power capacity earlier than the projected time, then the enterprise has been successful in predicting the lifetime of the DC, and the financial planning which was based on the prediction is also validated. If the peak power consumption in the DC grows more rapidly than expected, however, there will be a corresponding shortening of DC lifetime, the financial planning which was

based on the inaccurate prediction of DC lifetime will be unsound, and the enterprise will suffer a loss, which will usually be significant because of the size of the capital investment involved.

The Capacity Management Problem

As discussed above, if the enterprise cannot accurately predict DC lifetime, it cannot do effective financial planning with respect to DC capital investment. The recent reality with respect to growth in peak power demand, however, is that enterprises are not predicting or controlling it effectively. According to one survey of 150 data centers [1], power draw increased more than 15 times between 1990 and 2005, which represents an average compound growth of roughly 20% per year. Moreover, in the more recent past, enterprises have been even less successful in combating the increasing demand for power. From 2011 to 2012, data center power demand grew by 63% globally [2]. As a result of power demand growing much faster than anticipated, in a 2009 survey of 150 DCs, even enterprises which had recently added DC capacity were being forced to

consider adding additional capacity in an attempt to keep up with the unabated growth in power demand, as illustrated in Figure 1.

Figure 1. Percentage of Data Centers that Anticipate the Need for Additional Capacity [3].

To make matters even worse, the alarming rate of growth in power requirements has occurred despite the application of various best practices, including server virtualization and consolidation, dynamic voltage and frequency scaling (DVFS), dynamic migration of server workloads, and the use of CPU sleep states. There are two principal reasons that best practices have made little difference in the rate at which the demand for DC power has grown. The first reason is that there has been relatively little emphasis on reducing

peak power, either in research or in industry. While this seems odd given the importance of reducing peak power in order to extend DC lifetime, DC managers tend to focus primarily on the availability of DC services. Even when the demand for services is relatively low, managers have a tendency to overprovision available DC hardware, and the power it uses, in order to reduce the probability that service level agreements (SLAs) will not be met. When service demand is at its peak, such overprovisioning also leads to increased peak power demand, and in this way contributes to reduction in DC lifetime. The second reason that best practices have not reduced peak power demand is that all of the best practices mentioned above focus on reducing power consumption when the demand for DC services is relatively low, and not when the demand for service is at a peak. Because DC managers tend to be much more concerned about the risk of not meeting SLAs than about how much power is being used, significant numbers of servers tend to be idle during periods of low demand, consuming power while doing little or no useful IT work much of the time. This has led to a concern with what can be called the idle server problem, or low average physical server utilization, and it has

received attention in the literature. Although raising average server utilization can decrease total power consumption in the DC, this approach has been advocated only during periods of lower service demand, and not during periods of peak demand. By focusing on reducing power use when service demand is relatively low, managers minimize the probability of violating SLAs while achieving power savings that can be quite significant. The benefit of such power savings is generally a reduction in average power use in the data center. This reduction certainly lowers operating expenses for the DC, but unfortunately it does not contribute to extending data center lifetime, because it does not reduce peak power demand, and generally does not contribute to reducing its rate of growth either. A brief examination follows of some of the best practices which have been employed, to illustrate why they do not provide a reduction in peak power use or its rate of growth. Server virtualization and consolidation can be used to consolidate applications running on virtual servers, deployed across a number of different physical servers, onto fewer physical servers, so that some physical servers are left with no applications running on them, and as a

result, those physical servers can be put into sleep states which consume less power, or even be powered down completely. This strategy can certainly save power, but by its nature it is used at times when the demand for the services represented by the applications running on the virtual servers is low enough that the servers can be consolidated onto fewer machines. This kind of strategy, therefore, is not useful for reducing power use when the demand for services is at its highest, but that is exactly the situation during peak power demand. Dynamic voltage and frequency scaling (DVFS) is also designed to save power when the demand for CPU cycles is relatively low, so that voltage can be scaled down, and CPU frequency reduced, without reducing service availability below acceptable levels. Scaling voltage and frequency on CPUs that have this capability, in order to match lower demand for IT services, can result in significant power savings, because when demand for service is low, idle servers running at full voltage can consume significant amounts of power while doing little or no useful IT work. Thus, by scaling down the CPU voltage and frequency of servers during periods of low demand for service, SLAs can be met while saving significant amounts of power. This strategy is aimed at

reducing wasted power during periods of low demand, and can result in significant reductions in average power use in the DC, thus lowering operating expenses. As is the case with server virtualization and consolidation, however, this strategy does not seem applicable to reducing peak power use, nor has it been applied in that way in the literature. Dynamic migration of server workloads, another best practice which is used to reduce data center power use, has benefits similar to those of server virtualization and consolidation, which were discussed above. There is one benefit of dynamic migration, however, which is distinct from the general benefits of virtualization and consolidation. Dynamic migration can be used to reduce not only the power used for IT, but also the demand for cooling power in the DC. Through the use of server virtualization and consolidation, average power use in the DC can be reduced, as explained above. However, when workloads are consolidated, if care is not taken to distribute the allocated workloads evenly across physical servers in the DC, hotspots may develop. In the case where the DC does not employ localized cooling, which can provide variable amounts of cooling to different areas of

the DC, depending on the cooling needs of each part of the DC, hotspots can significantly increase the amount of cooling power used. The reason for the increase in power is that the cooling provided to the whole DC must be increased in order to provide sufficient cooling in the area of the hotspot or hotspots. If localized cooling is used, hotspots are less of an issue, because the increased local cooling provided to alleviate the hotspot can be more precisely matched to the need for cooling, without wasting power by providing increased cooling to areas of the DC which do not require it. Because it is used along with server virtualization and consolidation, dynamic migration can contribute to a reduction in average power use in the DC, but does not provide a reduction in peak power demand. During periods of the highest demand, a large percentage of the hardware in the DC is being used, and usually at a large percentage of its capacity, so hotspots, which result from large differences in the IT power being used by hardware in different areas of the DC, do not generally occur. CPU sleep states, which allow a CPU to enter a state of reduced power use when it is idle, can also be used to decrease the average power consumption in

the DC, by allowing CPUs in idle servers to consume less power during periods of low service demand in the DC. As is the case for the other best practices discussed above, such sleep states do not contribute to reduction in peak power demand, because when service demand is high, servers in the DC are not idle for extended periods of time, so their CPUs cannot conserve power by entering sleep states.

Foreshortening of Projected Data Center Lifetime

The result of the inability of the enterprise to control the rapid growth of the demand for peak power in its DC, coupled with the fixed power capacity of the DC, is a serious business problem. Enterprises are struggling to accurately predict DC lifetime, and the trend illustrated in Figure 1 above is that enterprises are overestimating the lifetimes of their DCs, even ones that have been constructed in the recent past. DC lifetimes of 10 to 15 years have been the norm in the past, but the unabated growth in power demand over the last 15 to 20 years has changed the landscape dramatically. The 63% annual rate of

growth from 2011 to 2012 in DC power cited above [2] would require that new data centers be constructed with a power capacity of 130 times the current DC power requirements of the enterprise, for a lifetime of 10 years, or 1500 times the current power requirements, for a lifetime of 15 years. These calculations of required power capacities for new data centers also assume that the 63% annual rate of growth will not continue to increase in the future, but the history of the growth in power demand over the past 15 to 20 years shows that there is no good basis for such an optimistic assumption. Such enormous data center power capacities will require extremely large investments of capital, and in the long term will be difficult to sustain from a business point of view. Measures to use DC power more effectively, specifically during periods of peak demand, are urgently needed. Chapter 2 provides an analysis of the problem. Chapter 3 examines the most promising approaches to reducing the rapid growth in peak power demand in enterprise data centers. Chapter 4 provides conclusions, and recommendations for future research.
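The projection arithmetic used throughout this chapter can be sketched in a few lines of Python. This is an illustrative sketch, not code from the thesis; the function names are invented here.

```python
import math

def required_capacity(current_peak_kw, annual_growth, years):
    """Peak power capacity needed if peak demand compounds annually."""
    return current_peak_kw * (1 + annual_growth) ** years

def lifetime_years(capacity_kw, current_peak_kw, annual_growth):
    """Years until compounding peak demand reaches the fixed capacity."""
    return math.log(capacity_kw / current_peak_kw) / math.log(1 + annual_growth)

# The earlier example: 100 kW today, 15% projected growth, 10-year design life.
capacity = required_capacity(100, 0.15, 10)        # ~405 kW

# If actual growth turns out to be 20%, the same capacity is exhausted sooner.
actual_life = lifetime_years(capacity, 100, 0.20)  # ~7.7 years

# The 63% annual growth cited above implies enormous capacity multiples.
ten_year = required_capacity(1, 0.63, 10)          # ~130x current demand
fifteen_year = required_capacity(1, 0.63, 15)      # ~1500x current demand
```

The sketch reproduces the chapter's figures: a 5-percentage-point underestimate of growth cuts a 10-year design life to about 7.7 years.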

Chapter 2: Problem Analysis

Research Focuses on Average or Total Power Rather than Peak Power

The power consumed in a data center can be viewed in many ways, but past research has tended to focus on analyzing and reducing average power (which is directly related to total power), rather than peak power. Researchers are often not explicit about which of these two types of power consumption they are concerned with, but as discussed further in the review of the research in Chapter 3, most research can be viewed as being concerned with one of the two, and the overwhelming majority of past research develops strategies for data center power reduction which are aimed at decreasing average, or total, power. While reducing average or total power consumption is beneficial, the benefit is limited to lowering operating costs, because reducing the total power consumed reduces the total cost for electrical power in the DC. Reducing

average or total power does not, however, generally contribute to reducing peak power, and therefore does not extend DC lifetime. To increase DC lifetime, researchers must develop methods to meet the challenge of reducing the amount of power consumed under conditions of peak demand for services, while still complying with relevant SLAs. Further, it can be argued that, even though reducing average power consumption does not typically decrease peak power consumption, the converse is not true: methods which lower peak power consumption will generally also enable further reductions in power consumption during non-peak demand periods. This is true because any method which reduces power use during periods of more stringent demand for services, and the power they require, will also be capable of reducing power use during periods of lower demand. This point is further explored in Chapter 3, below.

The Idle Server Problem

There is a specific reason that it is often difficult to determine whether a given piece of research is aimed at reducing average power, peak power, or perhaps

both. Much of the research done on reducing data center power consumption in the past has focused in some way on what can be called the idle server problem. As commonly observed in the literature, data center servers are typically highly overprovisioned; i.e., managers keep significantly more servers on at any given time than are needed to provide the required level of service. Average server utilization in data centers is typically less than 30% [4, 5], but idle servers are usually not turned off, and utilize 60% or more of peak power when in the idle state [4]. This overprovisioning approach results from a strong preference for service availability as opposed to conservation of power. In short, the more servers there are that are powered on in the data center, the lower the danger that SLAs will not be met; at the same time, however, keeping a much larger number of servers running than needed to provide the required level of service significantly increases total power consumption. Another widely cited result of overprovisioning is the low level of average server utilization, as cited above. According to a 2012 report in The New York Times: Energy efficiency varies widely from company to company. But at the request of The Times, the consulting firm McKinsey & Company analyzed energy use by data centers and found that, on average,

they were using only 6 percent to 12 percent of the electricity powering their servers to perform computations. The rest was essentially used to keep servers idling and ready in case of a surge in activity that could slow or crash their operations [6]. The 6 to 12 percent of average power cited here as being used for computations attests to the vast amounts of power being wasted in data centers due to low levels of server utilization. Since managers have been much more concerned in the past with service availability than with reducing power consumption, the low levels of average server utilization have been considered acceptable. As attention has shifted to power consumption as a significant concern in data center management, researchers have sought ways to reduce overprovisioning and increase average server utilization without incurring detrimental reductions in service availability. Researchers have attempted to address this idle server problem in a number of different ways, which are reviewed in Chapter 3, below. It should be noted, however, that strategies for addressing the idle server problem do not generally reduce peak power consumption in the data center, and thus do not extend data center lifetime. This is true because such strategies focus on reducing overprovisioning, and such reduction can be done effectively during

periods of low service demand, but not during periods of high service demand, i.e., the times during which peak power demand is occurring. In this sense, solutions to the idle server problem generally offer reductions in total power consumption, or average power consumption, which, as pointed out above, reduces operating costs, but does not extend data center lifetime. In one sense, though, as further discussed below, solutions to the idle server problem and solutions to the peak power management problem do have a very significant point in common. Addressing the idle server problem requires reducing the amount of time that servers which are powered on spend idling; this in turn increases average server utilization during low demand periods. As discussed below, the same kind of increase in server utilization levels is exactly what is required to reduce wasted power during periods of peak demand. The challenge is that of meeting SLAs while increasing average server utilization during peak demand periods.
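The claim that raising utilization during peak periods reduces peak power can be illustrated with a back-of-the-envelope sketch. The numbers below are assumptions made for illustration: identical servers, and idle power equal to 60% of rated power, the figure cited above.

```python
import math

def server_power(utilization, idle_fraction=0.60):
    """Fraction of a server's rated power drawn at a given utilization,
    under a linear two-mode model (full power while executing, idle
    power otherwise)."""
    return utilization + (1 - utilization) * idle_fraction

def fleet_peak_power(demand, target_utilization, idle_fraction=0.60):
    """Total power (in units of one server's rated power) needed to serve
    a peak demand measured in whole-server equivalents of CPU work."""
    # Tiny epsilon guards against floating-point rounding in the division.
    servers = math.ceil(demand / target_utilization - 1e-12)
    return servers * server_power(target_utilization, idle_fraction)

# Serving 24 servers' worth of peak work:
low = fleet_peak_power(24, 0.30)   # 80 servers at 30% utilization -> 57.6
high = fleet_peak_power(24, 0.80)  # 30 servers at 80% utilization -> 27.6
```

Running the same peak workload at 80% utilization instead of 30% uses fewer servers and, in this model, less than half the peak power, which is the point the chapter develops.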

Relationship between CPU Utilization and Power Consumed

This thesis focuses on reducing data center power consumption attributable to CPUs in servers. While other IT hardware in the DC, such as network devices, data storage, and memory, also consumes power, server CPUs consume a significant portion of the power used by IT equipment in a data center. An important part of the DC power puzzle is that IT hardware is generally not energy proportional. If a device is energy proportional, then the amount of power it dissipates is proportional to its utilization as a percentage of its full utilization. In other words, an energy proportional device utilized at half of its full utilization level consumes half of its full power. One of the greatest challenges in DC power management is that IT equipment is generally not energy proportional; i.e., at utilization levels of less than 100 per cent, the percentage of maximum power that a digital device uses is typically significantly greater than the percentage of utilization.
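This gap between power and utilization can be quantified with a simple linear two-mode model. The sketch below mirrors the worked example developed in this section, and assumes idle power equal to 60% of rated power (the figure used in the surrounding discussion); the function names are illustrative.

```python
def power_fraction(utilization, idle_fraction=0.60):
    """Fraction of rated power consumed: full power for `utilization`
    of the time, idle power for the rest."""
    return utilization * 1.0 + (1 - utilization) * idle_fraction

def wasted_fraction(utilization, idle_fraction=0.60):
    """Share of the consumed power that does no useful work."""
    consumed = power_fraction(utilization, idle_fraction)
    return (consumed - utilization) / consumed

at_30 = power_fraction(0.30)    # 0.72 -> 72% of rated power at 30% utilization
waste_30 = wasted_fraction(0.30)  # ~0.583 -> about 58% of consumed power wasted
waste_80 = wasted_fraction(0.80)  # ~0.13 -> losses persist even at 80%
```

Evaluating `wasted_fraction` at 10% steps reproduces the loss figures shown in Table 1 below.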

Another way of saying this is that power consumption decreases more slowly than utilization does. With respect to CPUs, this can be understood by considering the meaning of CPU utilization. Operating systems typically provide some way of obtaining CPU utilization as a percentage. A CPU utilization of, say, 30% can be understood as follows. At any given time, the CPU is either executing programs, whether user programs or system programs, or it is idling. When it is idling, CPU cycles are still consumed, i.e., the CPU is still running, but it is doing nothing useful. A CPU which is idling does not consume the same amount of power as a CPU which is executing programs, but it generally consumes a significant percentage of that power, typically between 50 and 60% [4, 7, 31]. Thus, at any given time, the CPU can be in one of two power consumption modes; it is either executing programs, and consuming 100% of its rated power, or it is idling, and consuming a significant fraction of its rated power, for the sake of example, say 60%. If the operating system reports a CPU utilization of 30%, this means that, for the time interval to which the report applies, the CPU was executing programs 30% of the time, and was idling 70% of the time. The average power consumption during this period would be (0.30 × rated power) + (0.70 × idle power), which,

assuming an idle power equal to 60% of the rated power, equals 72% of rated power. This example illustrates why a lack of energy proportionality is a major concern in power management in data centers. In this example, 42/72, or about 58%, of the power that is consumed is wasted; i.e., it is not contributing to the work being done. In general, since server CPUs are not energy proportional, at utilization levels of less than 100%, some percentage of the power that is consumed will be wasted in this sense. To illustrate the significance of the lack of energy proportionality more clearly, Table 1 below shows the percentage of energy lost at various utilization levels for a server CPU which consumes 60% of maximum power at idle. If a server consumes a somewhat lower percentage of full power at idle, the energy losses will be somewhat lower, but the differences are not very significant. The table and graph illustrate that, when servers are utilized at lower rates, energy losses are significant. In order to minimize the power needed during peak demand periods, and thereby to extend data center lifetime as much as possible, power losses must be minimized as much as possible. It follows that lower rates of server utilization must be avoided during peak demand

Table 1: Energy Losses in Server CPUs at Various Levels of Utilization
(values computed for a server CPU whose idle power is 60% of full power)

CPU Utilization (%)   Full Power Consumed (%)   Power Wasted (%)   Energy Proportional Consumption (%)
 10                    64                        84                 10
 20                    68                        71                 20
 30                    72                        58                 30
 40                    76                        47                 40
 50                    80                        38                 50
 60                    84                        29                 60
 70                    88                        20                 70
 80                    92                        13                 80
 90                    96                         6                 90
100                   100                         0                100

periods.

Figure 2: Power Used by Typical Server CPUs at Various Levels of Utilization, Compared with Energy Proportional Performance

Put differently, during periods of peak service demand, servers must be utilized at the highest levels of utilization possible while still complying with SLAs. Only in this way can energy consumption be minimized during

peak demand periods, which is the key to extending data center lifetime. It is noteworthy that even a CPU utilization level as high as 80% still involves an energy loss of roughly 13% of the power consumed, which is significant. If data center lifetime is to be extended as much as possible, average utilization levels lower than this must be avoided if at all possible.

Need to Increase Performance Per Watt (PPW)

As introduced in the last section, a notion of power efficiency is useful in analyzing power use in data centers. Power efficiency can be viewed as how much work can be done for the power that is consumed. Viewed in this way, it can only be sensibly employed as a relative measure. If two ways of executing a given program with a given input use different amounts of power, then the one which uses less power is more power efficient. Of course, since latency is also important in data center performance, we must also consider execution time as an aspect of satisfactory performance. Along these lines, if we view issues of power use in a DC in terms of the relevant SLAs for the applications

36 involved, we require some metric which quantifies performance (units of execution time) per unit of power (watts), which is typically denoted performance per watt (PPW). The definition of PPW which is used here is the following [7]: PPW = 1 Tavg * Pavg Where Tavg denotes the average execution time for a given computational task, and Pavg denotes the average power, in watts, required to complete the task. This metric only makes sense, of course, to compare power used and execution time for a given task. In this sense, this is a relative measure. In this thesis, no attempt is made to provide an absolute measure, but rather, the definition of PPW given above is a comparative measure which can be used to quantify which of two executions of a given application with a given input uses less power, or which of two executions of a given application with a given input that use the same amount of power provides better performance. Such a notion also allows quantification of how large the relative power reduction or improvement in performance is. 26
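As a concrete illustration of the comparative use of PPW, the sketch below computes PPW for two hypothetical executions of the same task with the same input; the times and wattages are invented for illustration only:

```python
# Comparative PPW metric: PPW = 1 / (Tavg * Pavg). Larger is better.
# Since Tavg * Pavg is the total energy (joules) for the task, the metric
# favors whichever execution consumes less total energy.

def ppw(t_avg_seconds, p_avg_watts):
    """Performance per watt for one execution of a fixed task."""
    return 1.0 / (t_avg_seconds * p_avg_watts)

# Invented measurements for two executions of the same workload:
run_a = ppw(t_avg_seconds=120.0, p_avg_watts=200.0)   # faster, draws more power
run_b = ppw(t_avg_seconds=150.0, p_avg_watts=140.0)   # slower, draws less power

# Relative PPW improvement of B over A, as a percentage:
improvement = 100.0 * (run_b - run_a) / run_a
print(f"run B improves PPW by {improvement:.1f}%")
```

Here run B uses 21,000 J against run A's 24,000 J, so B has the higher PPW despite its longer execution time; whether B is acceptable still depends on the relevant SLA.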

To reduce data center power consumption during periods of peak demand for services, PPW must be maximized. One important way to increase PPW is to reduce, or ideally minimize, CPU idling, and thus raise CPU utilization. One way to minimize CPU idling is to deploy on a single server multiple applications that tend to have complementary requirements for CPU cycles; that is, when one of the deployed applications needs CPU cycles, the other applications deployed on the server do not. Under such an approach, when a given application needs CPU cycles they are available, and the likelihood that some application needs CPU cycles at any given time is as high as possible. In this way, the CPU is less likely to idle. While this approach to reducing CPU idling, and thereby raising PPW, is a good one in theory, finding applications with complementary patterns of demand for CPU cycles is a challenging problem.

The applications run in data centers are generally considered to be of two types, namely, interactive applications and batch applications. While this classification is not a strict dichotomy, for the present discussion it is sufficient to treat these two types as though they were essentially distinct and non-overlapping. Interactive applications have complex patterns of demand for CPU cycles, because the demand depends not only on computational characteristics of the applications themselves, but also on the behavior of their users. From a performance point of view, interactive applications must achieve response times that are acceptable to human users. The number of CPU cycles required to provide such response times, however, depends not only on the characteristics of the application itself, but also on the number of users using the application at any given time. Predicting the number of users is not difficult on a coarse time scale, say a day or a week, but doing so on a fine time scale, say seconds or milliseconds, is much more difficult. Reducing CPU idling and increasing CPU utilization, however, requires accurate predictions on just such fine time scales: because the CPU demand of an interactive application cannot be predicted accurately enough at this granularity, the probability of under-utilization (excessive idling) or over-utilization (loss in performance) during any particular period is too high. In short, the difficulty of accurately predicting user demand on sufficiently fine time scales makes consistently high levels of CPU utilization, without unacceptable loss in performance, too difficult to achieve for interactive applications.

For batch applications, on the other hand, the picture is very different. Their demand pattern for CPU cycles can be predicted quite accurately, because batch applications tend to consume a definite number of CPU cycles, with relatively little variation [8]. Also, because batch applications, by definition, do not involve interaction with users, their performance demands are more flexible: in general, as long as the entire batch application runs within a certain period of time, the performance will be acceptable.

Based on these characteristics of interactive and batch applications, one idea for increasing CPU utilization has been to change the problem slightly. Rather than searching for applications which already have complementary demand patterns, some researchers have asked whether there are sets of applications which could be made to have complementary demand patterns while still meeting the relevant SLAs. The characteristics of interactive and batch applications can be exploited fruitfully here: by deploying interactive and batch applications on the same server, the CPU can service the demand from the interactive applications during periods when they require it, and service the batch applications during periods when they do not. By giving the two types of applications the right priority of access to CPU cycles, the SLAs for both can be met, and CPU utilization can be increased significantly. This approach is considered more fully in Chapter 3, below, but it holds the promise of significantly increasing CPU utilization while still meeting the relevant SLAs for the applications involved. This, in turn, holds out the promise of significant increases in PPW during peak service demand periods in data centers.
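The priority scheme described above can be illustrated with a toy simulation: the interactive workload has strict priority for each scheduling slot (so its SLA is unaffected), and a batch workload, assumed to always have work queued, fills whatever slots remain. The 35% demand probability and the slot granularity are invented for illustration:

```python
import random

# Toy illustration of collocating an interactive and a batch application.
# The interactive app always wins a slot when it has demand; the batch
# job runs in every remaining slot. All figures here are invented.
random.seed(1)
SLOTS = 1000

interactive_busy = 0   # slots consumed by the interactive application
batch_done = 0         # slots of batch work completed in the gaps
for _ in range(SLOTS):
    if random.random() < 0.35:   # bursty interactive demand arrives
        interactive_busy += 1    # interactive always wins the slot
    else:
        batch_done += 1          # otherwise the batch job runs

util_alone = 100 * interactive_busy / SLOTS      # dedicated server
util_together = 100 * (interactive_busy + batch_done) / SLOTS
print(f"interactive app alone:      {util_alone:.0f}% utilization")
print(f"collocated with batch work: {util_together:.0f}% utilization")
```

Because the batch job is assumed to always have queued work, the collocated server idles in no slot at all; in practice batch work is finite and its own deadline must still be met, which is what makes the priority assignment nontrivial.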

Problems with Existing Approaches

In this section, various deficiencies in the current approaches to power management, especially with respect to management of peak power, are identified.

Inability to determine precisely how power is being used

Data center managers typically have a good deal of general information about variables relevant to power use, such as average server utilization, total power used over a significant period of time (a week or a month), and the power use profiles of individual types of servers, in terms of how much power is used at a given utilization level. More detailed information about how power is used, in particular how individual applications use power, is lacking. If data center managers had such information, they could use it to increase PPW, by seeking ways to collocate applications with complementary demands for power, as explained above, in order to increase CPU utilization while still meeting the SLAs of the applications involved. Since managers typically lack such information, it is difficult for them to develop strategies to increase PPW while still meeting relevant SLAs. Thus, one important requirement for increasing PPW is acquiring information about how applications use power, and employing this information to develop application collocation strategies which increase server CPU utilization while meeting SLAs.

Inability to increase PPW while complying with SLAs

None of the research initiatives pursued to improve data center power management, including server consolidation, virtualization, dynamic voltage and frequency scaling (DVFS), CPU sleep states, continuous monitoring and redeployment (dynamic migration), heterogeneity awareness, and strategic replacement of hardware, has been able to significantly increase PPW while complying with SLAs. The reasons why these various approaches have been unable to increase PPW are discussed in detail in Chapter 3, below.

As argued above, a significant improvement in PPW, in particular during peak service demand periods, is the only way to reduce excessive power losses in the data center and thereby slow the rapid growth in power demand. To be sure, all of the research approaches mentioned above seek to increase PPW by reducing power losses in some way; none of them, however, can by itself significantly increase PPW during peak demand periods, which, as argued above, is the only way to extend data center lifetime.

Inability to reduce the rate of growth of power demand

Because none of the approaches to data center power management pursued so far has been able to significantly increase PPW during peak service demand periods, energy losses during peak demand periods remain high. As the demand for power grows, these energy losses cause the data center to fall behind in its ability to keep pace with constantly, and rapidly, increasing power demands. As long as this remains the case, data centers will struggle to meet the demand for power, and data center lifetimes will be foreshortened, as argued above. If, on the other hand, strategies can be found to reduce power losses and to increase energy proportionality, data centers will be in a much better position to stay ahead of the increasing demands for power, and to keep data center lifetime as close as possible to projected values.

Inability to improve the fit between collocated applications

The applications deployed on servers in modern data centers are typically not able to keep the server CPU busy constantly. As a result, running only one application, or a small number of applications, tends to result in low rates of CPU utilization, which leads to the significant power losses explained earlier. To raise CPU utilization levels and increase energy proportionality, a number of applications can be collocated on the server; of course, the SLAs of the applications must still be met. The problem of choosing which applications to collocate in this manner is not a trivial one. Research to date has not succeeded in discovering the characteristics of applications that are relevant to successful collocation aimed at increasing energy proportionality. Further work is needed to identify those characteristics, and to develop methods for profiling applications in terms of them. As explained above, one promising strategy for increasing energy proportionality and reducing power losses is to collocate batch and interactive applications; it is likely that other characteristics are also relevant to collocation decisions. Since collocation is one of the principal strategies for increasing CPU utilization and raising energy proportionality, there is a critical need for research addressing these issues.

Chapter 3: Previous Best Approaches to Managing Data Center Power

In this chapter, previous approaches to managing data center power are reviewed. As discussed in Chapter 2, above, it is often not clear whether particular studies are attempting to reduce average power, peak power, or, in some sense, both. For each approach, and the studies which utilize it, this question is analyzed.

Server Consolidation

To increase the average level of utilization for servers in the data center, and thereby save power, one of the first approaches that presents itself is to combine the applications running on various under-utilized servers onto a smaller number of servers which can service the requests for those applications while spending less time idling. This process of combining applications from a larger number of physical servers onto a smaller number of physical servers is called consolidation. Server consolidation has been used as a response to server sprawl, a phenomenon in which a larger number of servers than needed consumes more space and energy than necessary to service the workload in the data center [9]. Because server sprawl leads to lower average server utilization, it reduces energy proportionality in the data center and increases the amount of power required to service workloads. Not surprisingly, consolidation has been shown to increase server utilization and to reduce power costs [10].

While the idea of consolidating under-utilized servers appears beneficial, the more difficult question is the extent to which servers can be consolidated to garner the benefit of power savings while still meeting SLAs. This question is related to another, namely, which types of applications should be consolidated onto a single server. Although combining applications onto a smaller number of servers can increase average utilization levels, during periods of peak service demand the power demand patterns of the applications consolidated on a single server may combine into a peak power demand that is too high, leaving the server unable to service the demand of all of the applications within its peak power capacity. For these reasons, simply consolidating applications without considering their service and power demand patterns is likely to result in a less than optimal pattern of power use, or in a failure of service. In this sense, understanding the power usage characteristics of the consolidated workload is critical to making good consolidation decisions.

Subramanian et al. [10] develop an algorithm which, given a set of server workloads, determines the number of servers required, the frequency of each server, and the correspondence between workloads and servers. This work makes use not only of consolidation, but also of dynamic voltage and frequency scaling (DVFS), which is further discussed in a later section of this chapter. The authors model the consolidation problem as one of variable-sized bin packing, such that the total power used by all the servers is minimized, the workloads requiring service in any given time interval are packed onto the number of servers required, and each server's frequency is set at the minimum required to service its workload. Given this formulation, the authors prove that the problem is NP-hard, by providing a polynomial-time reduction of the partition problem, a known NP-hard problem, to the variable-sized bin packing formulation of the consolidation problem. They also provide an approximation algorithm for the consolidation problem so modeled, which runs in O(n² log n) time for a problem of n workloads. The algorithm is run every T units of time, and T can be chosen small enough to allow the algorithm to adjust the consolidation scheme in response to changes in demand for the various workloads over time. The authors provide an approximation ratio for their algorithm, characterizing the ratio between the power consumed by their algorithm's solution and that of the optimal consolidation. Finally, they provide experimental results showing the power savings of consolidation for various types of consolidated workloads.

Although the algorithm's O(n² log n) time complexity makes it feasible for realistic data center workloads, the approximation ratio provided as an upper bound on the departure of the algorithm's output from an optimal solution is not very meaningful. Even in the best case, i.e., when all of the workloads to be consolidated have exactly the same mean and standard deviation, if the number of workloads n is large (even a value of 50, which is extremely small for the number of workloads in a modern data center, where a value in the thousands would be more typical), the minimum value of the approximation ratio is about 2, meaning the algorithm may output a packing of the workloads that uses twice as much power as the optimal solution. Such a solution would still lose 50% of the power used for computation (Information Technology, or IT) in the data center, which does not offer a meaningful degree of energy proportionality. The experimental results are much more promising, though it should be noted that they are based on experiments with only a small number of workloads, from one to five. The results do demonstrate the potentially very significant benefits of consolidation, which is shown to save 41.3% of power for an exponential distribution, and 60.7% for a Pareto distribution, compared to running the same workloads without consolidation. Despite the rather weak bound provided by the approximation ratio cited earlier, the authors show that the experimental results of their algorithm come within 6.42% of the optimum, i.e., minimum, power use.

The approach developed in this research can clearly reduce both average and peak power use in the data center, since the algorithm can be applied to minimize power use during any time interval, whether a period of low service demand or of the highest service demand. The authors' use of frequency variability as a way both of reducing instantaneous power consumption and of reducing server idling is a valuable technique. A limitation of the approach, however, is that it does not consider the variability of a given workload's demand for service, or the types of workloads involved, when making consolidation choices. As discussed further below, considering the service demand pattern of a workload and its service requirements, i.e., whether it is a batch or interactive workload, can be exploited to increase server utilization and increase energy proportionality.
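To make the bin-packing view of consolidation concrete, here is a deliberately simplified first-fit-decreasing sketch. It is not the algorithm of Subramanian et al. (it ignores frequency scaling and the periodic re-run every T time units), and the workload demands and server capacity are invented for illustration:

```python
# Simplified illustration of consolidation as bin packing: first-fit
# decreasing assigns workloads, each with a mean CPU demand, to as few
# servers as possible without exceeding any server's capacity.

def consolidate(demands, capacity):
    """Assign each workload demand to a server using first-fit decreasing."""
    servers = []      # each entry is a server's currently used capacity
    assignment = {}   # workload index -> server index
    # Place the largest workloads first (classic FFD heuristic):
    for wid, d in sorted(enumerate(demands), key=lambda x: -x[1]):
        for sid, used in enumerate(servers):
            if used + d <= capacity:   # fits on an existing server
                servers[sid] += d
                assignment[wid] = sid
                break
        else:                          # no existing server fits: open one
            servers.append(d)
            assignment[wid] = len(servers) - 1
    return servers, assignment

demands = [0.6, 0.3, 0.5, 0.2, 0.4, 0.1]   # fraction of one server's CPU
servers, assignment = consolidate(demands, capacity=1.0)
print(f"{len(demands)} workloads packed onto {len(servers)} servers")
```

First-fit decreasing is a classic bin-packing heuristic with a known constant-factor approximation guarantee; since the consolidation problem as the authors formulate it is NP-hard, some such heuristic or approximation algorithm is unavoidable at data center scale.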


Two-Stage Power Distribution: An Essential Strategy to Future Proof the Data Center Two-Stage Power Distribution: An Essential Strategy to Future Proof the Data Center Adapting to Change Concern over the pace of technology change is increasing among data center managers. This was reflected

More information

Predicting Messaging Response Time in a Long Distance Relationship

Predicting Messaging Response Time in a Long Distance Relationship Predicting Messaging Response Time in a Long Distance Relationship Meng-Chen Shieh m3shieh@ucsd.edu I. Introduction The key to any successful relationship is communication, especially during times when

More information

CHAPTER 6 STATISTICAL MODELING OF REAL WORLD CLOUD ENVIRONMENT FOR RELIABILITY AND ITS EFFECT ON ENERGY AND PERFORMANCE

CHAPTER 6 STATISTICAL MODELING OF REAL WORLD CLOUD ENVIRONMENT FOR RELIABILITY AND ITS EFFECT ON ENERGY AND PERFORMANCE 143 CHAPTER 6 STATISTICAL MODELING OF REAL WORLD CLOUD ENVIRONMENT FOR RELIABILITY AND ITS EFFECT ON ENERGY AND PERFORMANCE 6.1 INTRODUCTION This chapter mainly focuses on how to handle the inherent unreliability

More information

Cloud Computing Capacity Planning

Cloud Computing Capacity Planning Cloud Computing Capacity Planning Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF Introduction One promise of cloud computing is that virtualization

More information

Using Simulation to Understand Bottlenecks, Delay Accumulation, and Rail Network Flow

Using Simulation to Understand Bottlenecks, Delay Accumulation, and Rail Network Flow Using Simulation to Understand Bottlenecks, Delay Accumulation, and Rail Network Flow Michael K. Williams, PE, MBA Manager Industrial Engineering Norfolk Southern Corporation, 1200 Peachtree St., NE, Atlanta,

More information

CHAPTER 6 ENERGY AWARE SCHEDULING ALGORITHMS IN CLOUD ENVIRONMENT

CHAPTER 6 ENERGY AWARE SCHEDULING ALGORITHMS IN CLOUD ENVIRONMENT CHAPTER 6 ENERGY AWARE SCHEDULING ALGORITHMS IN CLOUD ENVIRONMENT This chapter discusses software based scheduling and testing. DVFS (Dynamic Voltage and Frequency Scaling) [42] based experiments have

More information

Application generators: a case study

Application generators: a case study Application generators: a case study by JAMES H. WALDROP Hamilton Brothers Oil Company Denver, Colorado ABSTRACT Hamilton Brothers Oil Company recently implemented a complex accounting and finance system.

More information

Business Case for the Cisco ASR 5500 Mobile Multimedia Core Solution

Business Case for the Cisco ASR 5500 Mobile Multimedia Core Solution Business Case for the Cisco ASR 5500 Mobile Multimedia Core Solution Executive Summary The scale, use and technologies of mobile broadband networks are changing rapidly. Mobile broadband growth continues

More information

THE REAL ROOT CAUSES OF BREACHES. Security and IT Pros at Odds Over AppSec

THE REAL ROOT CAUSES OF BREACHES. Security and IT Pros at Odds Over AppSec THE REAL ROOT CAUSES OF BREACHES Security and IT Pros at Odds Over AppSec EXECUTIVE SUMMARY Breaches still happen, even with today s intense focus on security. According to Verizon s 2016 Data Breach Investigation

More information

SIX Trends in our World

SIX Trends in our World Data Center Infrastructure for Cloud and Energy Efficiency SIX Trends in our World Connectivity Simplicity StruxureWare TM for data centers TuanAnh Nguyen, Solution Engineer Manager, Vietnam Schneider

More information

Updating the contents and structure of Computer Engineering Larry Hughes Electrical and Computer Engineering Dalhousie University 18 November 2016

Updating the contents and structure of Computer Engineering Larry Hughes Electrical and Computer Engineering Dalhousie University 18 November 2016 Introduction Updating the contents and structure of Computer Engineering Larry Hughes Electrical and Computer Engineering Dalhousie University 8 November 06 The Department of Electrical and Computer Engineering

More information

Choosing the Best Network Interface Card for Cloud Mellanox ConnectX -3 Pro EN vs. Intel XL710

Choosing the Best Network Interface Card for Cloud Mellanox ConnectX -3 Pro EN vs. Intel XL710 COMPETITIVE BRIEF April 5 Choosing the Best Network Interface Card for Cloud Mellanox ConnectX -3 Pro EN vs. Intel XL7 Introduction: How to Choose a Network Interface Card... Comparison: Mellanox ConnectX

More information

Recording end-users security events: A step towards increasing usability

Recording end-users security events: A step towards increasing usability Section 1 Network Systems Engineering Recording end-users security events: A step towards increasing usability Abstract D.Chatziapostolou and S.M.Furnell Network Research Group, University of Plymouth,

More information

Bring Your Own Device (BYOD)

Bring Your Own Device (BYOD) Bring Your Own Device (BYOD) An information security and ediscovery analysis A Whitepaper Call: +44 345 222 1711 / +353 1 210 1711 Email: cyber@bsigroup.com Visit: bsigroup.com Executive summary Organizations

More information

Overcoming Rack Power Limits with VPS Dynamic Redundancy and Intel RSD

Overcoming Rack Power Limits with VPS Dynamic Redundancy and Intel RSD Overcoming Rack Power Limits with VPS Dynamic Redundancy and Intel RSD Summary This paper describes how SourceMix, a dynamic redundancy technology from VPS, allows Intel Rack Scale Design (Intel RSD) customers

More information

Backup and Recovery: New Strategies Drive Disk-Based Solutions

Backup and Recovery: New Strategies Drive Disk-Based Solutions I D C E X E C U T I V E B R I E F Backup and Recovery: New Strategies Drive Disk-Based Solutions Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com December

More information

STORAGE CONSOLIDATION AND THE SUN ZFS STORAGE APPLIANCE

STORAGE CONSOLIDATION AND THE SUN ZFS STORAGE APPLIANCE STORAGE CONSOLIDATION AND THE SUN ZFS STORAGE APPLIANCE A COST EFFECTIVE STORAGE CONSOLIDATION SOLUTION THAT REDUCES INFRASTRUCTURE COSTS, IMPROVES PRODUCTIVITY AND SIMPLIFIES DATA CENTER MANAGEMENT. KEY

More information

Good Technology State of BYOD Report

Good Technology State of BYOD Report Good Technology State of BYOD Report New data finds Finance and Healthcare industries dominate BYOD picture and that users are willing to pay device and service plan costs if they can use their own devices

More information

Enabling Hybrid Cloud Transformation

Enabling Hybrid Cloud Transformation Enterprise Strategy Group Getting to the bigger truth. White Paper Enabling Hybrid Cloud Transformation By Scott Sinclair, ESG Senior Analyst November 2018 This ESG White Paper was commissioned by Primary

More information

> Executive summary. Implementing Energy Efficient Data Centers. White Paper 114 Revision 1. Contents. by Neil Rasmussen

> Executive summary. Implementing Energy Efficient Data Centers. White Paper 114 Revision 1. Contents. by Neil Rasmussen Implementing Energy Efficient Data Centers White Paper 114 Revision 1 by Neil Rasmussen Click on a section to jump to it > Executive summary Electricity usage costs have become an increasing fraction of

More information

The Top Five Reasons to Deploy Software-Defined Networks and Network Functions Virtualization

The Top Five Reasons to Deploy Software-Defined Networks and Network Functions Virtualization The Top Five Reasons to Deploy Software-Defined Networks and Network Functions Virtualization May 2014 Prepared by: Zeus Kerravala The Top Five Reasons to Deploy Software-Defined Networks and Network Functions

More information

THE CUSTOMER SITUATION. The Customer Background

THE CUSTOMER SITUATION. The Customer Background CASE STUDY GLOBAL CONSUMER GOODS MANUFACTURER ACHIEVES SIGNIFICANT SAVINGS AND FLEXIBILITY THE CUSTOMER SITUATION Alliant Technologies is a Premier Service Provider for Red Forge Continuous Infrastructure

More information

Categorizing Migrations

Categorizing Migrations What to Migrate? Categorizing Migrations A version control repository contains two distinct types of data. The first type of data is the actual content of the directories and files themselves which are

More information

Backup 2.0: Simply Better Data Protection

Backup 2.0: Simply Better Data Protection Simply Better Protection 2.0: Simply Better Protection Gain Net Savings of $15 for Every $1 Invested on B2.0 Technologies Executive Summary Traditional backup methods are reaching their technology end-of-life.

More information

Achieving Best in Class Software Savings through Optimization not Negotiation

Achieving Best in Class Software Savings through Optimization not Negotiation Achieving Best in Class Software Savings through Optimization not Negotiation August 10, 2012 Agenda Introduction Industry Trends Best in Class Software Asset Management How good is best in class? How

More information

Cost Model Energy Benefits DirectAire & SmartAire Overview & Explanation

Cost Model Energy Benefits DirectAire & SmartAire Overview & Explanation Cost Model Energy Benefits DirectAire & SmartAire Overview & Explanation A cost model (See figure 1) has been created to provide the user a simplified method for directly comparing the energy cost of a

More information

2010 Web Analytics Progress and Plans in BtoB Organizations: Survey Report

2010 Web Analytics Progress and Plans in BtoB Organizations: Survey Report 2010 Web Analytics Progress and Plans in BtoB Organizations: Survey Report page 1 Web Analytics Association 2010 Web Analytics Progress and Plans in BtoB Organizations: Survey Report Prepared by the Web

More information

` 2017 CloudEndure 1

` 2017 CloudEndure 1 ` 2017 CloudEndure 1 Table of Contents Executive Summary... 3 Production Machines in the Organization... 4 Production Machines Using Disaster Recovery... 5 Workloads Primarily Covered by Disaster Recovery...

More information

Why Enterprises Need to Optimize Their Data Centers

Why Enterprises Need to Optimize Their Data Centers White Paper Why Enterprises Need to Optimize Their Data Centers Introduction IT executives have always faced challenges when it comes to delivering the IT services needed to support changing business goals

More information

DATA CENTER COLOCATION BUILD VS. BUY

DATA CENTER COLOCATION BUILD VS. BUY DATA CENTER COLOCATION BUILD VS. BUY Comparing the total cost of ownership of building your own data center vs. buying third-party colocation services Executive Summary As businesses grow, the need for

More information

AMS Behavioral Modeling

AMS Behavioral Modeling CHAPTER 3 AMS Behavioral Modeling Ronald S. Vogelsong, Ph.D. Overview Analog designers have for many decades developed their design using a Bottom-Up design flow. First, they would gain the necessary understanding

More information

The Growing Impact of Mobile Messaging

The Growing Impact of Mobile Messaging The Growing Impact of Mobile Messaging An Osterman Research White Paper Published November 2007 Osterman Research, Inc. P.O. Box 1058 Black Diamond, Washington 98010-1058 Phone: +1 253 630 5839 Fax: +1

More information

EXECUTIVE REPORT. 4 Critical Steps Financial Firms Must Take for IT Uptime, Security, and Connectivity

EXECUTIVE REPORT. 4 Critical Steps Financial Firms Must Take for IT Uptime, Security, and Connectivity EXECUTIVE REPORT 4 Critical Steps Financial Firms Must Take for IT Uptime, Security, and Connectivity When Millions of Dollars of Financial Transactions are On the Line, Downtime is Not an Option The many

More information

Oracle Database 10g Resource Manager. An Oracle White Paper October 2005

Oracle Database 10g Resource Manager. An Oracle White Paper October 2005 Oracle Database 10g Resource Manager An Oracle White Paper October 2005 Oracle Database 10g Resource Manager INTRODUCTION... 3 SYSTEM AND RESOURCE MANAGEMENT... 3 ESTABLISHING RESOURCE PLANS AND POLICIES...

More information

HPC Solutions in High Density Data Centers

HPC Solutions in High Density Data Centers Executive Report HPC Solutions in High Density Data Centers How CyrusOne s Houston West data center campus delivers the highest density solutions to customers With the ever-increasing demand on IT resources,

More information

Sample Exam. Advanced Test Automation - Engineer

Sample Exam. Advanced Test Automation - Engineer Sample Exam Advanced Test Automation - Engineer Questions ASTQB Created - 2018 American Software Testing Qualifications Board Copyright Notice This document may be copied in its entirety, or extracts made,

More information

Why Continuity Matters

Why  Continuity Matters Why Email Continuity Matters Contents What is Email Continuity and Why it Matters........................... 1 Challenges to Email Continuity................................... 2 Increasing Email Management

More information

MAXIMIZING ROI FROM AKAMAI ION USING BLUE TRIANGLE TECHNOLOGIES FOR NEW AND EXISTING ECOMMERCE CUSTOMERS CONSIDERING ION CONTENTS EXECUTIVE SUMMARY... THE CUSTOMER SITUATION... HOW BLUE TRIANGLE IS UTILIZED

More information

Automated, Real-Time Risk Analysis & Remediation

Automated, Real-Time Risk Analysis & Remediation Automated, Real-Time Risk Analysis & Remediation TABLE OF CONTENTS 03 EXECUTIVE SUMMARY 04 VULNERABILITY SCANNERS ARE NOT ENOUGH 06 REAL-TIME CHANGE CONFIGURATION NOTIFICATIONS ARE KEY 07 FIREMON RISK

More information

Survey Highlights Need for Better Server Energy Efficiency

Survey Highlights Need for Better Server Energy Efficiency Survey Highlights Need for Better Server Energy Efficiency DATA CENTER INEFFICIENCIES RAISE IT COSTS AND RISKS CIOs and other IT leaders have probably had their fill of hearing from corporate executives

More information

Supplementary File: Dynamic Resource Allocation using Virtual Machines for Cloud Computing Environment

Supplementary File: Dynamic Resource Allocation using Virtual Machines for Cloud Computing Environment IEEE TRANSACTION ON PARALLEL AND DISTRIBUTED SYSTEMS(TPDS), VOL. N, NO. N, MONTH YEAR 1 Supplementary File: Dynamic Resource Allocation using Virtual Machines for Cloud Computing Environment Zhen Xiao,

More information

HARNESSING CERTAINTY TO SPEED TASK-ALLOCATION ALGORITHMS FOR MULTI-ROBOT SYSTEMS

HARNESSING CERTAINTY TO SPEED TASK-ALLOCATION ALGORITHMS FOR MULTI-ROBOT SYSTEMS HARNESSING CERTAINTY TO SPEED TASK-ALLOCATION ALGORITHMS FOR MULTI-ROBOT SYSTEMS An Undergraduate Research Scholars Thesis by DENISE IRVIN Submitted to the Undergraduate Research Scholars program at Texas

More information

MediaTek CorePilot 2.0. Delivering extreme compute performance with maximum power efficiency

MediaTek CorePilot 2.0. Delivering extreme compute performance with maximum power efficiency MediaTek CorePilot 2.0 Heterogeneous Computing Technology Delivering extreme compute performance with maximum power efficiency In July 2013, MediaTek delivered the industry s first mobile system on a chip

More information

Simplified. Software-Defined Storage INSIDE SSS

Simplified. Software-Defined Storage INSIDE SSS Software-Defined Storage INSIDE SSS Overcome SDS Challenges Page 2 Simplified Choose the Right Workloads for SDS Using Microsoft Storage Spaces Page 7 The need for agility, scalability, and cost savings

More information

Best Practices. Deploying Optim Performance Manager in large scale environments. IBM Optim Performance Manager Extended Edition V4.1.0.

Best Practices. Deploying Optim Performance Manager in large scale environments. IBM Optim Performance Manager Extended Edition V4.1.0. IBM Optim Performance Manager Extended Edition V4.1.0.1 Best Practices Deploying Optim Performance Manager in large scale environments Ute Baumbach (bmb@de.ibm.com) Optim Performance Manager Development

More information

EC121 Mathematical Techniques A Revision Notes

EC121 Mathematical Techniques A Revision Notes EC Mathematical Techniques A Revision Notes EC Mathematical Techniques A Revision Notes Mathematical Techniques A begins with two weeks of intensive revision of basic arithmetic and algebra, to the level

More information

Ch 4 : CPU scheduling

Ch 4 : CPU scheduling Ch 4 : CPU scheduling It's the basis of multiprogramming operating systems. By switching the CPU among processes, the operating system can make the computer more productive In a single-processor system,

More information

SQL Server 2008 Consolidation

SQL Server 2008 Consolidation Technology Concepts and Business Considerations Abstract The white paper describes how SQL Server 2008 consolidation provides solutions to basic business problems pertaining to the usage of multiple SQL

More information

4. Write sets of directions for how to check for direct variation. How to check for direct variation by analyzing the graph :

4. Write sets of directions for how to check for direct variation. How to check for direct variation by analyzing the graph : Name Direct Variations There are many relationships that two variables can have. One of these relationships is called a direct variation. Use the description and example of direct variation to help you

More information

WWW. FUSIONIO. COM. Fusion-io s Solid State Storage A New Standard for Enterprise-Class Reliability Fusion-io, All Rights Reserved.

WWW. FUSIONIO. COM. Fusion-io s Solid State Storage A New Standard for Enterprise-Class Reliability Fusion-io, All Rights Reserved. Fusion-io s Solid State Storage A New Standard for Enterprise-Class Reliability iodrive Fusion-io s Solid State Storage A New Standard for Enterprise-Class Reliability Fusion-io offers solid state storage

More information

The Value of Automated Penetration Testing White Paper

The Value of Automated Penetration Testing White Paper The Value of Automated Penetration Testing White Paper Overview As an information security expert and the security manager of the company, I am well aware of the difficulties of enterprises and organizations

More information

Credit Union Cyber Crisis: Gaining Awareness and Combatting Cyber Threats Without Breaking the Bank

Credit Union Cyber Crisis: Gaining Awareness and Combatting Cyber Threats Without Breaking the Bank Credit Union Cyber Crisis: Gaining Awareness and Combatting Cyber Threats Without Breaking the Bank Introduction The 6,331 credit unions in the United States face a unique challenge when it comes to cybersecurity.

More information

!! What is virtual memory and when is it useful? !! What is demand paging? !! When should pages in memory be replaced?

!! What is virtual memory and when is it useful? !! What is demand paging? !! When should pages in memory be replaced? Chapter 10: Virtual Memory Questions? CSCI [4 6] 730 Operating Systems Virtual Memory!! What is virtual memory and when is it useful?!! What is demand paging?!! When should pages in memory be replaced?!!

More information

Code Harvesting with Zeligsoft CX

Code Harvesting with Zeligsoft CX Code Harvesting with Zeligsoft CX Zeligsoft November 2008 Code Harvesting with Zeligsoft CX Code harvesting with component modeling increases software reuse and improves developer efficiency for embedded

More information

GET CLOUD EMPOWERED. SEE HOW THE CLOUD CAN TRANSFORM YOUR BUSINESS.

GET CLOUD EMPOWERED. SEE HOW THE CLOUD CAN TRANSFORM YOUR BUSINESS. GET CLOUD EMPOWERED. SEE HOW THE CLOUD CAN TRANSFORM YOUR BUSINESS. Cloud computing is as much a paradigm shift in data center and IT management as it is a culmination of IT s capacity to drive business

More information

Image resizing and image quality

Image resizing and image quality Rochester Institute of Technology RIT Scholar Works Theses Thesis/Dissertation Collections 2001 Image resizing and image quality Michael Godlewski Follow this and additional works at: http://scholarworks.rit.edu/theses

More information

Solving Exchange and.pst Management Problems in Microsoft Environments An Osterman Research White Paper

Solving Exchange and.pst Management Problems in Microsoft Environments An Osterman Research White Paper Solving Exchange and.pst Management Problems in Microsoft Environments An Osterman Research White Paper Table of Contents Why You Should Read This White Paper Problems in Managing Exchange and.pst Files

More information