Data Center Energy and Cost Saving Evaluation


Available online at www.sciencedirect.com

ScienceDirect
Energy Procedia 75 (2015) 1255-1260

The 7th International Conference on Applied Energy - ICAE2015

Data Center Energy and Cost Saving Evaluation

Z. Song a, X. Zhang b,*, C. Eriksson a
a Mälardalen University, 72123 Västerås, Sweden
b ABB AB, Corporate Research, 72178 Västerås, Sweden

Abstract

In data centers, about 40% of the total energy is consumed for cooling the IT equipment. Cooling costs are thus one of the major contributors to the total electricity bill of large data centers. This paper studies two factors affecting data center cooling energy consumption, namely air flow management and data center location selection. A unique rack layout with a vertical cooling air flow is proposed. Two cooling systems, the computer room air conditioning (CRAC) cooling system and the airside economizer (ASE), have been studied. Based on these two cooling systems, four cities have been selected from worldwide data center locations. A number of energy efficiency metrics are explored for data center cooling, such as power usage effectiveness (PUE), coefficient of performance (COP) and chiller hours. By analyzing the effects of chiller hours and economizer hours, comparative economic results of cooling power consumption are provided for both systems. The results show that cooling efficiency and operating costs vary significantly with climate conditions, energy prices and cooling technologies. As the climate condition is the major factor affecting the airside economizer, employing the airside economizer in a cold climate yields much lower energy consumption and operating costs.

© 2015 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of Applied Energy Innovation Institute.

Keywords: Data center; Cooling; Energy efficiency

* Corresponding author. Tel.: +46-21-323062. E-mail address: Xiaojing.zhang@se.abb.com.

doi:10.1016/j.egypro.2015.07.178

1. Introduction

The worldwide energy consumption of data centers has increased dramatically, now accounting for about 1.3% of the world's electricity usage [1]. Data centers have evolved significantly during the past decades by adopting more efficient technologies and practices in data center infrastructure management (DCIM). It is estimated that the market will grow by about 5 percent yearly to reach $152 billion in 2016 [2]. Meanwhile, there is a trend to build mega-data centers with capacities over 40 MW. Energy efficiency is an important issue in data centers for minimizing environmental impact, lowering the cost of energy consumption and optimizing data center operation performance.
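The two efficiency metrics used throughout the following sections, the power usage effectiveness (PUE) and the coefficient of performance (COP), follow their standard definitions: the PUE is the ratio of total facility energy to IT equipment energy, and the COP of a cooling plant is the ratio of heat removed to electrical input,

\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}}, \qquad \mathrm{COP} = \frac{Q_{\text{heat removed}}}{W_{\text{electrical input}}}.

A PUE close to 1 means that nearly all electricity reaches the IT load, so even a small reduction in PUE represents a large absolute energy saving at data center scale, while a higher COP means the cooling plant removes more heat per kilowatt-hour of electricity it consumes.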

A modern data center has a large room with many rows of racks filled with a huge number of servers and other IT equipment used for processing, storing and transmitting digital information. A large amount of heat is generated by these thousands of servers and other IT devices. To maintain the reliability of the servers and other IT equipment in the data center, it is important to maintain proper temperature and humidity conditions. Fig. 1 reports the results of an investigation of 10 randomly selected data centers. It reveals the representative power consumption distribution and its variation, showing that between 30% and 55% of the total energy is consumed for cooling the IT equipment. Cooling and ventilation systems consume on average about 40% of the total energy used. Therefore, how to reduce power consumption and power costs, increase cooling efficiency and maximize availability must be taken into account before building a data center.

Fig. 1. Data center power consumption distribution and variation.

2. Data center ventilation and air flow management through vertical air flow

A modern data center has a large room with many rows of racks filled with servers and other IT equipment used for processing, storing and transmitting digital information. Typically, servers and other IT equipment lie horizontally in the racks. In a conventional data center with the hot aisle/cold aisle method, cold air generated by the cooling system is supplied through a plenum chamber under the floor of the room. At the sides of the racks, perforated air flow panels are arranged on the floor to let the cold air in. Ideally, the cold air flows up from the air flow panels, entering the tiny spaces between the servers from one side and leaving from the other side. Thus, the cold air forms a cold corridor on one side and a warm corridor forms on the other side; the intention is to isolate cold and hot corridors from each other and to eliminate the possibility that hot air exhausted from one rack enters the inlet of another rack. This works because the cold air is heated while flowing between the servers.

This arrangement has several disadvantages, mainly because the air flow cannot be well controlled. In particular, the cold air and warm air mix above the racks, since the cold air flows upward much more easily than it flows between the servers, which makes cooling inefficient. A higher pressure drop is introduced by the horizontal flow through the servers and by the flow through the perforated panels, which results in large flow resistance and limits heat transfer between the cooling air and the servers. This consequently leads to higher energy consumption of the cooling system.

To solve these problems, data center racks with vertically placed servers are proposed, as shown in Fig. 2 (a). Instead of the traditional horizontally placed servers with tiny spaces between them, all servers are here placed vertically with an optimal spacing between servers of less than 30 mm. This limit balances data center space utilization against the size of the cooling air channel for heat transfer. Such server racks can be placed either in a closed cabinet or in open space.

Natural convection is well utilized through the vertical air flow. The server racks can also be divided into two sections to avoid large temperature differences between the servers at the bottom and those at the top of the racks. Instead of supplying air through corridors between racks, cooling air is supplied directly from the bottom of the racks, as shown in Fig. 2 (b). Mixing of cold and warm streams is thus avoided. Perforated tiles are no longer needed, which results in reduced flow resistance and lower fan power consumption. Through natural buoyancy due to air density differences, heat transfer is enhanced by the cooling air flowing upwards and ventilation is increased. To achieve distributed cooling control, a number of air supply pipes are placed under the floor of the data center, each with an adjustable inlet opening. The ventilation system consists of main fans and a number of sub-fans, depending on the size of the data center room, which provide cooling air to several rack zones. The flow pattern can be well controlled through the sub-fans and the opening adjustment. The fan speeds and the cooling air supply openings are determined to minimize the actual total power used for ventilation while satisfying server temperature and hot-spot-free demands. The cooling air comes either from outside fresh air, an external cooling station or CRAC units, with a separate control system or a control system integrated into the data center infrastructure management.

(a) Data center rack layout with vertically placed servers. (b) Air flow illustration in the data center vertical server rack layout.
Fig. 2. Data center vertical server rack layout with adjustable inlet (racks above a cold plenum).

3. Effect of data center location selection

3.1 CRAC and air-side economizer case study scenarios

A typical data center cooling solution uses computer room air conditioning (CRAC) with equipment such as air handling units (AHU), chillers and cooling towers, which supply cold air through the plenum under the floor up to each rack and recirculate the expelled hot air back to the cooling equipment. An air-side economizer uses low temperature ambient air from outside to cool the internal servers directly whenever the outside temperature is lower than the set-point temperature of the return air in the data center. A computer room air handler (CRAH) is used when the outside temperature is not suitable for cooling. As an air-side economizer does not include humidity control, a dedicated humidification system is needed to stabilize fluctuations in relative humidity (RH). The air-side economizer can offer more hours of economization, which is more energy efficient and reduces the cooling load and cooling costs for data centers.

This work covers three scenarios: the baseline; the air-side economizer with a COP of 2.1; and the air-side economizer with a COP of 4.0. A representative data centre in the US [3] with conventional CRAC cooling is defined as the cooling solution for the baseline scenario. The data center ventilation layout uses the conventional hot aisle/cold aisle arrangement for all scenarios. Under the airside economizer scenarios, the COP was assumed to be 2.1 and 4.0, respectively [4]. The baseline CRAC fan power is 232 kW and the ASE fan power is 410 kW, since the outside air temperature is often higher than the CRAC cooling air temperature and the ASE therefore requires more fan power to keep sufficient heat transfer. Four cities with very different climate conditions are selected as data center locations for comparison.
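As a first-order illustration of what the two COP assumptions imply, the electrical power drawn by the mechanical cooling plant for a given IT heat load can be approximated by the textbook relation

P_{\text{cooling}} \approx \frac{Q_{\mathrm{IT}}}{\mathrm{COP}},

so raising the COP from 2.1 to 4.0 reduces the chiller electricity needed to remove the same heat load by roughly a factor of 1.9. This is a rough sanity check rather than a formula taken from [4].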

3.2 Analysis of chiller hours, economizer hours and humidity

A chiller hour is recorded whenever mechanical cooling is required to maintain the maximum allowable IT intake temperature. An economizer hour is recorded to determine the number of hours in which the outside air conditions meet the required data center conditions. Both chiller hours and economizer hours apply to two regions: the region where the data center is cooled entirely by chillers, and the transitional region where part of the cooling load is met by the free cooling system and the remainder is met by chillers. For the baseline scenario, the chiller hours are equal to the annual operation hours, since the operation makes practically no use of the outdoor climatic conditions and the data center is cooled by the mechanical cooling system all year round.

Based on the ASHRAE recommended thermal environment range for data centers [5] and an analysis of 10 years of temperature and humidity statistics for the four cities, the economizer hours and chiller hours for each data center under the ASE design scenario are calculated and shown in Table 1; the corresponding humidification requirements and costs are given in Table 2. Luleå has the special advantage of zero chiller hours, meaning that no mechanical cooling equipment is required. Seattle is the most ASE-favourable city in the US because fewer chiller hours are needed there.

Humidity control is another concern in data center operation. Humidity should be controlled according to the most restrictive ASHRAE recommended range, i.e. from a 5.5 °C dew point to 60% relative humidity and a 15 °C dew point. The baseline uses a water-cooled chiller plant to cool water through heat exchangers, which maintains more constant humidity levels in the data center. The primary disadvantage of airside economization is the lack of humidity control. The extra energy used for humidification may offset part of the energy savings from the ASE, and the additional costs for humidification therefore have to be considered in the ASE benefit evaluation.

Table 1. Economizer hours and chiller hours for the air-side economizer

                              New York City, USA   Seattle, USA   Houston, USA   Luleå, Sweden
Total yearly hours            8760                 8760           8760           8760
Economizer hours              8215                 8704           7049           8760
Chiller hours (T > 27 °C)     551                  62             1717           0

Table 2. Humidification costs for the ASE system

                                      New York City, USA   Seattle, USA   Houston, USA   Luleå, Sweden
kW required for humidification [6]    128                  128            128            128
Cost of humidification ($/kWh)        0.08                 0.08           0.05           0.05
Hours/year for humidification         6048                 4704           4704           6046
Annual humidification costs (k$)      62                   48             30             39
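The hour counts in Table 1 and the costs in Table 2 follow from a straightforward calculation, illustrated by the sketch below. The hourly temperature series is a hypothetical input and the function names are placeholders; the transitional region where chillers and free cooling share the load is ignored, while the 27 °C intake limit, the 128 kW humidification load and the electricity prices are the values quoted above.

def economizer_and_chiller_hours(hourly_outdoor_temps_c, max_intake_c=27.0):
    """Count free-cooling (economizer) hours and mechanical-cooling (chiller) hours."""
    temps = list(hourly_outdoor_temps_c)
    chiller_hours = sum(1 for t in temps if t > max_intake_c)
    economizer_hours = len(temps) - chiller_hours  # simplified: transitional region ignored
    return economizer_hours, chiller_hours

def annual_humidification_cost_kusd(humidifier_kw, hours_per_year, price_usd_per_kwh):
    """Electricity cost (k$) of humidifying the supply air while the economizer runs."""
    return humidifier_kw * hours_per_year * price_usd_per_kwh / 1000.0

# New York City entry of Table 2: 128 kW for 6048 h at 0.08 $/kWh is about 62 k$ per year.
print(round(annual_humidification_cost_kusd(128, 6048, 0.08)))  # -> 62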

3.3 Energy consumption comparison of cooling scenarios

The cooling power consumption and the annual operation costs in each design scenario are calculated based on the chiller hours and the PUEs shown in Fig. 3 and Fig. 4. The energy prices in the four cities have been taken into account. The lower energy price of $0.05 per kWh in Houston and Luleå proves to be an advantage compared with the price of $0.08 per kWh in New York City and Seattle. In the baseline scenario, the ratio of total facility energy consumption to server energy consumption (the PUE) was 1.55, which is the same for all four data centers. The PUE values of the airside economizer scenarios are lower than the baseline, which shows that the airside economizer can reduce energy consumption in most cases. Moreover, even a minor change in the PUE represents substantial energy savings.

Since the cooling system operation in the baseline scenario is assumed to be independent of the climatic conditions, the cooling power consumption is the same for all four data centers and equals the electricity used to operate the cooling system for the whole year. Cooling energy consumption is assumed to be proportionally correlated to the chiller hours and economizer hours. Total electricity consumption for cooling is calculated using the full cooling power load and 8760 hours per year. Cooling equipment depreciation and IT downtime were not considered.

The results in Fig. 4 (a) show that airside economization consumes less electric energy than the baseline scenario. Airside economization in Luleå provides the largest energy savings due to the cold climate, while Houston provides the least. Table 1 also shows that Luleå has zero chiller hours and the highest economizer hours, while Houston has the most chiller hours and the fewest economizer hours. Thus, the cooling energy consumption of the four data centers is related to their different climate conditions. The ASE system reduces cooling costs by taking advantage of cool outdoor air to condition indoor spaces. Among the three cooling design scenarios, the airside economizer saves energy compared to the baseline, and the ASE 2 scenario, with its higher COP and lower PUE, saves even more energy than ASE 1. The ASE could save up to 35% of power costs compared to the baseline case. However, a number of factors affect data center operation costs, notably weather conditions, local electricity prices and operating policies. In Luleå, the cool climate can contribute to the ASE for all hours of the year. Furthermore, since Luleå enjoys the lowest energy price and the lowest industrial tax rate, it is clearly the most cooling-efficient site for operating data centers. As the use of an ASE brings an associated concern about moisture and humidity, the cost of humidification is also an issue that should not be neglected. Although the cost of humidification is a significant expense, the results for the ASE scenario in Luleå show that the humidification cost amounts to 27% of the total cooling energy saving.

Fig. 3. Case study of data center PUE.
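The cost calculation described above can be summarized in a short sketch, following the paper's simplifications of a full cooling load for all 8760 hours of the year. The 1 MW IT load and the ASE PUE of 1.20 used in the example are hypothetical placeholders, and the sketch additionally assumes that the non-IT overhead implied by the PUE is dominated by cooling; the baseline PUE of 1.55 and the electricity prices are the values quoted in the text.

HOURS_PER_YEAR = 8760

def annual_cooling_cost_kusd(it_load_kw, pue, price_usd_per_kwh):
    """Approximate annual cooling cost (k$) from the PUE, treating the
    non-IT overhead implied by the PUE as cooling energy."""
    cooling_kw = it_load_kw * (pue - 1.0)
    return cooling_kw * HOURS_PER_YEAR * price_usd_per_kwh / 1000.0

# Hypothetical 1 MW IT load:
baseline_nyc = annual_cooling_cost_kusd(1000, 1.55, 0.08)  # baseline PUE, about 385 k$/yr
ase_lulea = annual_cooling_cost_kusd(1000, 1.20, 0.05)     # assumed ASE PUE, about 88 k$/yr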

Fig. 4. Case study of data center cooling power consumption and annual operation costs: (a) cooling power consumption; (b) operation costs.

4. Conclusions

Data center ventilation and air flow management with vertically placed server racks provide efficient heat transfer with a reduced air flow pressure drop and contribute to data center energy saving. A number of factors affect data center total energy consumption and costs, in particular weather conditions, local electricity prices, operating policies and the cooling solution design. For an air-side economizer system, the major factor is the climate condition, i.e. the location selection. Both temperature and humidity have to be considered. Although the cost of humidification is a significant expense, the current study shows that the benefits from a cold climate are much larger than the costs of humidity regulation.

References

[1] Koomey J. G., Growth in Data Center Electricity Use 2005 to 2010. Analytics Press, Oakland, 2011.
[2] Kovar J. F., The 2014 Data Center 100, CRN, 2014.
[3] Shehabi A., Ganguly S., Traber K., Price H., Horvath A., Nazaroff W. W., Gadgil A. J., Energy Implications of Economizer Use in California Data Centers, ACEEE Conference Proceedings, Monterey, CA, 2008.
[4] Derrick PE J., Joy D., STULZ Water-Side Economizer Solutions, STULZ Data Center Design Guide, 2014.
[5] ASHRAE TC 9.9, 2011 Thermal Guidelines for Data Processing Environments - Expanded Data Center Classes and Usage Guidance, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., 2011.
[6] Newcombe L., World IT Environmental Range Analysis, The Chartered Institute for IT, 2011.

Biography

Xiaojing Zhang received an M.S. degree in environmental engineering in China in 1983 and a Ph.D. degree in energy technology from KTH, Stockholm, in 1997. He joined ABB in 1998, and his career includes 15 years of industrial experience in the process industry. Dr. Zhang is currently a principal scientist at ABB AB, Corporate Research, Sweden.