New Techniques for Energy-Efficient Data Center Cooling

Applications for Large and Small Data Centers

By Wally Phelps, Director of Engineering, AdaptivCOOL, www.adaptivcool.com

Copyright Degree Controls, Inc. AdaptivCOOL and Demand Based Cooling are trademarks of Degree Controls, Inc. All rights reserved.

Electronic Environments Corporation, 410 Forest Street, Marlborough, MA 01752, (800) 342-5332, www.eecnet.com

Introduction

A recent IT industry study identified power and cooling as the top two concerns of data center operators. Microsoft and Google, for example, are building ultra-large data centers in the Northwest, where electricity costs are low. Few companies that rely on their IT infrastructure, however, have the freedom or budget to relocate to low-cost electricity regions; most must look for other ways to reduce power and cooling costs. Recent advances in airflow control for data centers improve cooling and lower energy costs, with utility savings paying for the upgrades in a year or less. The new airflow control technology applies to data centers of any size, from 1,000 sq. ft. to 100,000 sq. ft., and the solutions are scalable, so they are cost effective.

Problem Statement

Data center professionals have traditionally been forced to focus their attention on introducing and maintaining high-availability hardware and software under strict deadlines, often with little advance notice. Now, with both power densities and energy costs rising, cooling and power require more systematic attention. Evaluations conducted by DegreeC on more than a dozen data centers show that many of the causes of poor cooling are widespread. In most cases these problems are easily corrected without disrupting IT operations.

Common Problems of Data Center Cooling

Most data centers run far below their Network Critical Physical Infrastructure (NCPI) capacity for cooling. While a certain margin is desirable for safety and redundancy, running a data center at 30-50% of capacity, as many do, causes significant waste and results in improper cooling. The following are the common causes of poor and inefficient cooling, found in all but the newest and fully airflow-engineered data centers.

1. Mixing. When hot and cool airstreams combine, they raise rack intake temperatures, which can lead to server failures. They also lower the temperature of the air returning to the Computer Room Air Conditioners (CRACs). Both conditions are undesirable.

2. Recirculation. Hot air from server exhausts is drawn back into server intakes instead of returning to the CRAC. This overheats the equipment at the top of the racks.

3. Short Circuiting. Cool air returns directly to the CRAC before doing any cooling work, which reduces the CRAC return air temperature.

4. Leakage. Cool air is delivered in an uncontrolled way, usually through cable cutouts. This air is wasted and also reduces the CRAC return air temperature.

5. Dehumidification. When the cooling coils run below the dew point of the return air, the CRAC condenses moisture out of the air. Re-humidifying is expensive and reduces overall cooling capacity.

6. Underfloor Obstructions. Cables, pipes, conduits, and junction boxes impede the normal flow of air and cause unpredictable flow through perforated tiles.

7. Underfloor Vortex. Poorly planned CRAC placement reduces floor pressure and produces insufficient cooling in the affected locations.

8. Venturi Effect. Racks placed too close to a CRAC do not get enough cooling because the cool air exiting the CRAC at high velocity limits flow from the perforated tile. In extreme cases air can be drawn from the room down into the underfloor plenum.

9. Poor Return Path. Hot air that has no clear route back to the CRAC causes mixing, higher rack temperatures, and lower CRAC return temperatures.

10. Low CRAC Return Temperature. A common result of many of the conditions above, it reduces the efficiency of the cooling system. It may also trick the system into thinking the room is too cool and throttle back. (A worked example of this effect follows the next section.)

11. Defective Cooling System Operation. Many cooling systems are improperly configured or operated. Common conditions include incorrect setpoints, chilled water loops not operating as intended, or CRACs in need of maintenance.

12. Rack Orientation. Legacy equipment often uses non-standard airflow paths, or rows of racks may not be arranged in a hot aisle/cold aisle configuration.

13. External Heat and Humidity. Data centers are often affected by outside weather conditions if they aren't protected within the climate-controlled confines of a larger facility.

Cause-and-Effect Interactions

Knowing the common causes of poor cooling is the starting point for solving the problem. Without a fundamental understanding of the interactions (Fig. 1), typical remedies such as moving tiles, reconfiguring racks, or changing operating points can have unintended and potentially disastrous effects on cooling in the data center. By carefully studying the cause-and-effect interactions in the data center (Fig. 1) and determining root cause (Fig. 2), one can develop a systematic approach that can be applied to any size data center. Although each data center is unique, the root causes are similar, and so are their solutions.

Fig. 1) Simplified interaction diagram.

Fig. 2) Simplified fishbone diagram, a structured problem-solving tool used to help determine root cause.

New Analytic Tool

AdaptivCOOL's products and services solve data center cooling problems in a structured fashion. A key tool used to understand the complex interactions is Computational Fluid Dynamics (CFD). The CFD technique produces a computer simulation of airflow and thermal conditions within the data center and allows rapid testing of a variety of different parameters. (Please see Fig. 3.)

Fig. 3) CFD results of a data center with a failed CRAC (in the lower left-hand corner) correlate well with actual measurements.
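Several of the problems listed above (mixing, short circuiting, leakage, and low CRAC return temperature) share one mechanism: they shrink the temperature difference between the air returning to the CRAC and the air it supplies, which directly shrinks the sensible cooling the unit delivers. As a rough illustration, the Python sketch below applies the standard sensible-heat rule of thumb for air, Q [BTU/hr] ≈ 1.08 × CFM × ΔT [°F]; the airflow and temperature figures in it are assumed values for illustration, not measurements from any site discussed in this paper.

```python
# Illustrative only: sensible cooling delivered by a CRAC as a function of
# return-air temperature, using the rule-of-thumb relation for air:
#   Q [BTU/hr] ~= 1.08 * airflow [CFM] * (T_return - T_supply) [deg F]
# The airflow and temperatures below are assumed values, not site data.

def sensible_capacity_btu_hr(cfm: float, t_return_f: float, t_supply_f: float) -> float:
    """Approximate sensible cooling (BTU/hr) for a given airflow and delta-T."""
    return 1.08 * cfm * (t_return_f - t_supply_f)

CFM = 12_000          # assumed CRAC airflow
T_SUPPLY_F = 60.0     # assumed supply (discharge) air temperature

for t_return in (75.0, 70.0, 65.0):  # well-separated vs. increasingly mixed return air
    q = sensible_capacity_btu_hr(CFM, t_return, T_SUPPLY_F)
    print(f"Return {t_return:.0f} F -> {q:,.0f} BTU/hr (~{q / 12_000:.1f} tons)")
```

Under these assumptions, letting the return temperature fall from 75°F to 65°F cuts the delivered sensible capacity by roughly two thirds even though the fans and compressors keep running, which is why rooms running at 30-50% of rated cooling capacity can still exhibit hot spots.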

CFD analysis is extremely useful if the input data is accurate and if experienced hands analyze the output. CFD skills improve dramatically with experience: repeated applications of the technique in varied conditions increase the user's knowledge base significantly, and with each new simulation the solution comes closer to measured conditions. This progressive enrichment has built the AdaptivCOOL CFD Knowledgebase. Attempting CFD analysis without accurate input, analysis, or verification skills produces results that correlate poorly with measured values, and such off-target solutions can be hazardous to live data centers. Fig. 4 illustrates a typical project sequence.

Fig. 4) Typical AdaptivCOOL data center project.

One of the benefits of rigorously feeding experience back into the knowledgebase is the ability to remediate lower-level issues without necessarily performing a complete CFD analysis. The AdaptivCOOL CFD Knowledgebase allows rapid and cost-effective solutions in smaller data centers and at sites that have a specific hot spot while the rest of the data center operates normally.

Results at Two Data Centers

Two data centers at opposite ends of the size spectrum illustrate how CFD analysis and the AdaptivCOOL CFD Knowledgebase can be used in different ways.

Site A is a 9,000 sq. ft. build-out for a major international telecom company. The site consists of an existing 3,300 sq. ft. installation with 152 racks that AdaptivCOOL had optimized earlier using CFD analysis of the existing infrastructure. The results of this previous optimization were added to the knowledgebase.

Fig. 5) Data center map showing the original space, on the left, and the proposed expansion space, the shaded area to the right.

Due to the planned expansion and the addition of 250 racks (shown in Fig. 5), the owner commissioned a new CFD analysis to determine the best locations for new racks and CRACs (please see Figs. 6 and 7). This study produced a detailed list of engineering recommendations before construction began. A few examples follow.

1. Improved Siting. The best locations for CRACs in the expansion space centralized them, instead of distributing them around the perimeter of the room as the original plan did. Cooling is more consistent this way, and a CRAC failure does not disrupt an entire section of the room. This layout also means CRACs can be added incrementally as required, and redundant CRACs can be turned off and kept in reserve.

2. Turning Liabilities into Assets. The improved rack layout uses existing support columns as passive dams in several cold aisles. This step allows higher rack density.

3. Improving Airflow. DC power supply cables and cable trays designated for installation under the raised floor would have obstructed underfloor airflow. Lowering these barriers by two inches creates consistent flow through the perforated tiles.

Fig. 6) Analysis of the original plan for the expansion.

Fig. 7) Optimized layout for the expansion area. The area to the right has more uniform cooling.

Without an accurate CFD analysis of the facility, the expanded area would have inherited many of the same problems the original facility had. In the best case this would have caused inefficient cooling and required more CRACs and electricity than necessary. In the worst case the load would not have been cooled properly regardless of installed CRAC capacity, and the space could not have been utilized fully. The cost of the CFD analysis was quickly recovered through lower utility bills, less infrastructure, better space utilization, smoother commissioning of the site, and improved cooling redundancy.

Site B is the primary data center for a technology leasing corporation and consists of 1,400 sq. ft. of raised floor housing 24 racks and several dozen PCs. This relatively small site has experienced hot spots, especially when the main building air conditioning is shut down. Site B had installed three auxiliary spot coolers to supplement its two 10-ton CRACs, even though on paper one CRAC could handle the load. The initial audit found no major hot spots, but it did find a large swing between the coldest (66°F) and warmest (76°F) intakes. Most of the cool air was delivered through floor cutouts, usually in the wrong places. In addition, the CRAC setpoints were set very low (68°F). (Please see Fig. 8.)

Fig. 8) Small Site B - baseline measurements.

Based on experience derived from previous projects, the solution could be developed without extensive CFD analysis. First, airflow was engineered to the rack intakes. Then the CRAC setpoints were raised to normal levels. (Please see Figs. 9, 10, and 11.)
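The very low 68°F setpoints found in the Site B audit matter because of problem 5 above: when the cooling coil runs cold relative to the moisture content of the return air, part of the CRAC's capacity goes into condensing water instead of cooling IT equipment. The sketch below uses the standard Magnus approximation for dew point to show the comparison; the temperature, humidity, and coil figures are assumptions chosen for illustration, not readings from the Site B audit.

```python
import math

# Illustrative only: estimate the dew point of CRAC return air with the Magnus
# approximation and compare it to an assumed cooling-coil surface temperature.
# If the coil runs below the return air's dew point, the CRAC dehumidifies,
# spending capacity on condensing water rather than on sensible cooling.

A, B = 17.62, 243.12  # Magnus constants (degrees C)

def dew_point_f(temp_f: float, rh_percent: float) -> float:
    """Dew point (deg F) from dry-bulb temperature (deg F) and relative humidity (%)."""
    t_c = (temp_f - 32.0) / 1.8
    gamma = math.log(rh_percent / 100.0) + A * t_c / (B + t_c)
    dp_c = B * gamma / (A - gamma)
    return dp_c * 1.8 + 32.0

# Assumed values for illustration (not measurements from the paper):
RETURN_AIR_F = 72.0    # mixed return air temperature
RETURN_RH_PCT = 50.0   # return air relative humidity
COIL_SURFACE_F = 48.0  # approximate coil surface temperature at a low setpoint

dp = dew_point_f(RETURN_AIR_F, RETURN_RH_PCT)
print(f"Return air dew point: {dp:.1f} F")
if COIL_SURFACE_F < dp:
    print("Coil below dew point -> unintended dehumidification (latent load)")
else:
    print("Coil above dew point -> all capacity goes to sensible cooling")
```

With these assumed numbers the return air's dew point works out to roughly 52°F, so a coil running near 48°F would be condensing moisture continuously. Raising the setpoints, as was done at Site B, keeps the coil above the dew point more of the time, which is consistent with the reduced compressor and dry-cooler duty cycle reported below.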

Fig. 9) AdaptivCOOL CFD Knowledgebase solution for small- to medium-sized data centers and isolated hot spots.

As a further demonstration of the efficiency gained by applying the AdaptivCOOL CFD Knowledgebase, one of the CRACs was deliberately turned off. Temperatures throughout the room rose by 2°F and stabilized. While this test proved there was enough margin to run with one CRAC, doing so was advised against because there is no automatic backup if the running CRAC fails.

Fig. 10) Small Site B - after AdaptivCOOL optimization.

Fig. 11) Temperature distributions.

Cooling electricity savings at this site are around 20%, or in the range of $3,000-$4,000 per year depending on the going electricity rate. The savings come from reducing the duty cycle of the CRAC compressors and the dry cooler. ROI in this case is 10 months or less due to the low installed cost.

Summary

While server heat density and energy costs continue to soar, there is relief for both: engineering airflow eliminates hot spots while saving energy. Furthermore, recent advances in this technology make it affordable and cost effective for nearly any size data center.
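As a closing illustration of the payback arithmetic quoted for Site B, the short sketch below computes a simple payback period from the reported savings range. The installed cost used here is a hypothetical placeholder, since the paper states only that payback is 10 months or less.

```python
# Illustrative only: simple payback on an airflow-remediation project.
# The annual savings range is from the Site B case study; the installed cost is
# an assumed placeholder (the paper quotes only the resulting payback period).

INSTALLED_COST_USD = 2_500.0              # hypothetical installed cost
ANNUAL_SAVINGS_USD = (3_000.0, 4_000.0)   # reported cooling-electricity savings range

for savings in ANNUAL_SAVINGS_USD:
    months = 12.0 * INSTALLED_COST_USD / savings
    print(f"Savings ${savings:,.0f}/yr -> simple payback of about {months:.1f} months")
```

Under that assumed cost, the payback lands between roughly 7.5 and 10 months across the reported savings range, in line with the figure cited above.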