Cooling Solutions & Considerations for High Performance Computers


White Paper 6, Revision 1
By Rich Whitmore

Executive summary

The recent advances in high performance computing have driven innovations not only in the computer hardware market but also in several supporting industries such as storage, power and cooling. All of these technologies play an important support role in the HPC industry, and perhaps none has received more focus recently than cooling.

Introduction

As the uses for High Performance Computing (HPC) continue to expand, the requirements for operating these advanced systems have also grown. The drive for higher-efficiency data centers is closely tied to a building's Power Usage Effectiveness (PUE), defined as (Total Facility Energy) / (IT Equipment Energy). HPC clusters and high-density compute racks are consuming power densities of up to 100 kW per rack, sometimes higher, with an estimated average density of 35 kW per rack. Building owners, co-location facilities, enterprise data centers, web-scale companies, governments, universities and national research laboratories are struggling to upgrade cooling infrastructure, not only to remove the heat generated by these new computer systems but also to reduce or eliminate their effect on building energy footprints and PUE. The rapid adoption of Big Data HPC systems in industries such as oil and gas research, financial trading, web marketing and others further highlights the need for efficient cooling, because the majority of the world's computer rooms and data centers are neither equipped nor prepared to handle the heat loads generated by current and next-generation HPC systems. If one considers that 100% of the power consumed by an HPC system is converted to heat energy, it is easy to see why removing this heat in an effective and efficient manner has become a focal point of the industry.
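As a simple illustration of the PUE definition above, the short sketch below compares a facility using a conventional cooling plant with one whose cooling power has been reduced. All load figures are hypothetical and chosen only to show the arithmetic.

def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_kw

# Hypothetical 1 MW IT load (illustrative numbers only).
it_load_kw = 1000.0
cooling_kw = 400.0          # assumed legacy air-cooling plant draw
other_overhead_kw = 100.0   # assumed UPS losses, lighting, etc.

legacy = pue(it_load_kw + cooling_kw + other_overhead_kw, it_load_kw)   # 1.50

# If a more efficient cooling approach cut cooling power to 150 kW:
efficient = pue(it_load_kw + 150.0 + other_overhead_kw, it_load_kw)     # 1.25

print(f"Legacy PUE: {legacy:.2f}, efficient-cooling PUE: {efficient:.2f}")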

Investigating Cooling Solutions for High Performance Computer Systems

Submersion Cooling

New high performance computer chips give an HPC system designer the capability to develop specialized clusters that can reach 100 kW per rack and exceed almost all server cooling methods currently available. Submersion cooling systems offer baths or troughs filled with a specially designed non-conductive dielectric fluid, allowing entire servers to be submerged in the fluid without risk of electrical conductance across the computer circuits. These highly efficient systems can remove up to 100% of the heat generated by the HPC system.

Figure: The Submersion Cooling Bath

The heat, once transferred to the dielectric fluid, can then be easily removed via a heat exchanger, pump and closed-loop cooling system. To apply this approach, the traditional data center is typically renovated to accept the new submersion cooling system. Legacy cooling equipment such as CRACs, raised flooring and vertical server racks is replaced by the submersion baths and an updated closed-loop warm-water cooling system. The baths lie horizontally on the floor, which provides a new vantage point for IT personnel, although at the cost of valuable square footage. Servers are modified, either by the owner or a third party, by removing components that would be negatively affected by the dielectric fluid, such as hard drives and other components that might not be warranted by original equipment manufacturers (OEMs). Special consideration should be paid to future server refresh options, because such a monumental shift in infrastructure greatly limits future OEM server choices and restricts a room dedicated to submersion cooling to that technology only. While submersion cooling offers extreme efficiency for the world's most extreme HPC systems, the general scarcity of such systems, combined with the required infrastructure upgrades and maintenance challenges, poses an issue for market-wide acceptance at this time.

The worldwide High Performance Computing (HPC) market is expected to grow at an 8.3% CAGR, reaching $44 billion in 2020, and to generate $220 billion in revenues over the period 2015-2020.
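To make the closed-loop heat removal described above more concrete, the following back-of-the-envelope sketch estimates the dielectric-fluid flow needed to carry away the heat of a fully loaded 100 kW bath. The fluid properties and temperature rise are assumptions typical of mineral-oil-class fluids, not figures for any particular product.

# Rough flow-rate estimate for a submersion cooling loop (illustrative only).
CP_FLUID_KJ_PER_KG_K = 1.9   # specific heat of the dielectric fluid, assumed
DENSITY_KG_PER_L = 0.85      # fluid density, assumed

def required_flow_lps(heat_kw, delta_t_k):
    """Volume flow (L/s) needed to carry heat_kw with a delta_t_k fluid temperature rise."""
    mass_flow_kg_s = heat_kw / (CP_FLUID_KJ_PER_KG_K * delta_t_k)
    return mass_flow_kg_s / DENSITY_KG_PER_L

flow = required_flow_lps(heat_kw=100.0, delta_t_k=10.0)   # 100 kW bath, 10 K fluid rise
print(f"~{flow:.1f} L/s (~{flow * 15.85:.0f} GPM) of dielectric fluid")   # ~6.2 L/s (~98 GPM)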

Direct to Chip, On-Chip Cooling

Direct to Chip, or On-Chip, cooling technology has made significant advances recently in the HPC industry. Small heat sinks are attached directly to the computer CPUs and GPUs, creating highly efficient, close-coupled server cooling. Up to 70% of the heat from the servers is collected by the Direct to Chip heat sinks and transferred through a system of small capillary tubes to a Coolant Distribution Unit (CDU). The CDU then transfers the heat to a separate closed-loop cooling system that rejects it from the computer room. The balance of the heat, 30% or more, is rejected to the existing room cooling infrastructure.

Diagram A

The warm-water cooling systems commonly used for Direct to Chip cooling are generally systems that do not utilize a refrigeration plant, such as closed-loop dry coolers (similar to large radiators) and cooling towers. They have been classified by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) as producing W3 or W4 water temperatures, that is, water ranging from 2°C to 46°C (36°F to 115°F). These systems draw significantly less energy than a typical refrigerated chiller system and provide adequate heat removal for direct to chip cooling systems, which can operate with cooling water supply temperatures in the W3-W4 range.

Direct to chip cooling solutions can also be used to reclaim low-grade water heat that, if repurposed and used correctly, can improve overall building efficiency and PUE. The advantages of this form of heat recovery are limited by the ability of the building's HVAC system to accept it. HVAC building design varies around the world. Many parts of Europe can benefit from low-grade heat recovery because of the widespread use of water-based terminal units in buildings. In contrast, most North American HVAC designs use central forced-air heating and cooling systems with electric reheat terminal boxes, leaving little use for low-grade heat recovery from a direct to chip or on-chip cooling system. The feasibility of distributing reclaimed warm water should also be studied against the building's hydronic infrastructure prior to use.
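As a rough illustration of why warm (W3/W4) supply water is sufficient, and of the grade of heat available for recovery, the sketch below estimates the water temperature rise across a direct-to-chip loop serving one rack. The rack power, capture fraction, loop flow and supply temperature are assumed values chosen only to show the calculation.

# Return-water temperature estimate for a direct-to-chip loop (illustrative only).
CP_WATER_KJ_PER_KG_K = 4.186   # specific heat of water
DENSITY_WATER_KG_PER_L = 1.0   # approximate density of water

def loop_return_temp_c(rack_kw, capture_fraction, flow_lps, supply_temp_c):
    """Return-water temperature for a warm-water loop serving one rack."""
    heat_to_water_kw = rack_kw * capture_fraction
    mass_flow_kg_s = flow_lps * DENSITY_WATER_KG_PER_L
    delta_t = heat_to_water_kw / (mass_flow_kg_s * CP_WATER_KJ_PER_KG_K)
    return supply_temp_c + delta_t

# 35 kW rack, 70% captured at the chip, 0.75 L/s loop flow, 32 degC (W3-range) supply:
t_return = loop_return_temp_c(35.0, 0.70, 0.75, 32.0)
print(f"Return water ~{t_return:.1f} degC")   # ~39.8 degC of low-grade recoverable heat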

A recent study performed by Ernest Orlando Lawrence Berkeley National Laboratory, titled Direct Liquid Cooling for Electronic Equipment, concluded that the best cooler performance achieved by a market-leading direct to chip cooling system reached 70% under optimized laboratory conditions. This leaves an interesting and possibly counterproductive result for such systems, because large amounts of heat from the computer systems must still be rejected to the surrounding room, which must then be cooled by more traditional, less efficient means such as Computer Room Air Conditioners (CRACs) and/or Computer Room Air Handlers (CRAHs) (Diagram A).

To better understand the net result of deploying a direct or on-chip cooling system, one must consider the HPC cluster as part of the overall building energy footprint, which can then be tied directly to the building PUE. Given that a 35 kW HPC rack with direct to chip cooling will reject at least 10.5 kW (30%) of heat to the computer room, and that an average HPC cluster consists of six compute racks (excluding high-density storage arrays), a direct to chip or on-chip cooling system will reject at least 60 kW of heat load to a given space. Rejecting this residual heat by the most common method, CRAC or CRAH units, results in a significant setback to the original efficiency gains.

Additional challenges arise from the infrastructure required inside the data center, and more importantly inside the server rack itself, for an on-chip cooling system. In order to bring warm-water cooling to the chip level, water must be piped inside the rack through a large number of small hoses, which in turn feed the small direct to chip heat exchangers/pumps. While each is small, at the scale at which they are installed the IT staff is left looking at the back of a rack filled with large numbers of hoses, as well as a distribution header connecting to the inlet and outlet water of the cooling system. Direct to chip cooling systems are tied directly to the motherboards of the HPC cluster and are designed to be more or less permanent. The average HPC cluster is refreshed (replaced) every 3-5 years, typically based on demand or budget. With that in mind, one must consider the cost of replacing the direct to chip or on-chip cooling infrastructure along with the server refresh.

Direct to chip cooling offers significant advances in efficiently cooling today's high performance computer clusters; however, once put into the context of a larger computer room or building, one must consider the net result of overall building performance, cost implications and useful life on total ROI.
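The residual-heat arithmetic above can be made explicit. A minimal sketch using the capture fraction, rack density and cluster size cited in the text:

# Residual room heat load for a direct-to-chip cooled cluster (figures from the text).
RACK_KW = 35.0            # average rack density cited above
CAPTURE_FRACTION = 0.70   # best-case heat capture at the chip
RACKS_PER_CLUSTER = 6     # typical cluster size cited above

residual_per_rack_kw = RACK_KW * (1.0 - CAPTURE_FRACTION)         # 10.5 kW
residual_cluster_kw = residual_per_rack_kw * RACKS_PER_CLUSTER    # 63 kW

# Equivalent CRAC/CRAH duty (1 ton of refrigeration ~ 3.517 kW):
tons = residual_cluster_kw / 3.517
print(f"{residual_cluster_kw:.0f} kW (~{tons:.0f} tons) still handled by room air cooling")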

Active Rear Door Heat Exchangers

Active Rear Door Heat Exchangers (ARDH) have grown in popularity among those manufacturing and using HPC clusters and high-density server racks. An ARDH's ability to remove 100% of the heat from the server rack with little infrastructure change offers advantages in system efficiency and convenience. These systems are commonly rack-agnostic and replace the rear door of any industry-standard server rack. They utilize an array of high-efficiency fans and tempered cooling water to remove heat from the computer system. The Electronically Commutated (EC) fans match the servers' airflow (CFM) to ensure all heat is removed from the servers. An ARDH uses clean water or a glycol mix between 57°F and 75°F (roughly 14°C to 24°C), which is often readily available in most data centers and, if not, can be produced by a chilled-water plant, closed-loop cooling systems such as cooling towers and dry fluid coolers, or a combination of these systems.

Utilizing an ARDH allows high-density server racks to be installed in existing computer rooms, such as co-location facilities or legacy data centers, with virtually no infrastructure changes and zero effect on surrounding computer racks. Active Rear Door Heat Exchangers are capable of removing up to 75 kW per compute rack, giving users substantial headroom to scale as clusters go through multiple refresh cycles. Once deployed, these systems also offer advantages to owners by monitoring internal server rack temperatures and external room temperatures, ensuring that a heat-neutral environment is maintained.

Recent laboratory tests by server manufacturers have found that the addition of an ARDH actually reduces the fan power consumption of the computers inside the rack, more than offsetting the minimal power consumption of the ARDH fan array. While counterintuitive at first glance, in-depth study has shown that the ARDH fans assist the servers' fans, allowing them to draw less energy and perform better even at high-density workloads. Tests also show that hardware performance improves, resulting in increased server life expectancy. ARDHs offer full access to the rear of the rack and can be installed in both top- and bottom-feed water configurations, offering further flexibility to integrate into new or existing facilities with or without raised floors.
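To put the 75 kW figure in context, the following sketch estimates the server airflow an ARDH's fan array must match and the cooling-water flow it needs at the water temperatures quoted above. The air and water temperature rises are assumptions chosen only for illustration.

# Rough air-side and water-side sizing for a rear-door heat exchanger (illustrative only).
CP_WATER = 4.186   # kJ/(kg*K), specific heat of water
CP_AIR = 1.006     # kJ/(kg*K), specific heat of air
AIR_DENSITY = 1.2  # kg/m^3, typical room air

def server_airflow_cfm(heat_kw, air_delta_t_k):
    """Airflow the door fans must match to carry heat_kw at the given air temperature rise."""
    m3_per_s = heat_kw / (AIR_DENSITY * CP_AIR * air_delta_t_k)
    return m3_per_s * 2118.9   # m^3/s -> CFM

def water_flow_gpm(heat_kw, water_delta_t_k):
    """Cooling-water flow needed to absorb heat_kw at the given water temperature rise."""
    lps = heat_kw / (CP_WATER * water_delta_t_k)   # ~1 kg/L for water
    return lps * 15.85                              # L/s -> US GPM

# 75 kW rack, 20 K air rise across the servers, 57 degF supply warming ~10 degF (5.6 K):
print(f"~{server_airflow_cfm(75.0, 20.0):.0f} CFM of air")   # ~6,600 CFM
print(f"~{water_flow_gpm(75.0, 5.6):.0f} GPM of water")      # ~51 GPM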

Conclusion

As densities in high performance computer systems continue to increase, the method used to cool them has become increasingly important. The continued expansion of HPC and high-density servers into ever-broadening applications will see these types of computers installed in more traditional data centers, bringing overall data center PUE into focus where, in the past, HPC clusters have been largely overlooked or exempt from the conversation because of their more limited use in government labs and large research facilities. Several reliable technologies are currently available to cool today's HPC and high-density server systems; one must select an efficient and practical system that works within the associated building's cooling infrastructure, future refresh strategy and budget. The engineering and design of cooling for these advanced computers should take place prior to, or in parallel with, purchasing the computer system, as the cooling system itself is now often leveraged to ensure optimal and guaranteed computer performance.

About the author

Rich Whitmore is the President and CEO of Motivair Corporation, manufacturer of mission-critical chillers and the ChilledDoor rack cooling system. A graduate in Mechanical Engineering from the Rochester Institute of Technology, he has focused his career on cooling some of the world's most advanced data centers and computers, as well as pioneering advances in high-efficiency, close-coupled server cooling systems.

References

ASHRAE TC 9.9, Thermal Guidelines for Data Processing Environments, 3rd Edition.
Henry Coles and Steve Greenberg, Direct Liquid Cooling for Electronic Equipment, Ernest Orlando Lawrence Berkeley National Laboratory.
Worldwide High Performance Computing (HPC) Market Forecast 2015-2020, Market Research Media, 2014.

Contact us

For feedback and comments about the content of this white paper: info@motivaircorp.com
For information regarding the ChilledDoor Rack Cooling System: www.chilleddoor.com
For information regarding Motivair Corporation: www.motivaircorp.com

© 2015 Motivair Corporation. All rights reserved.