Case Study: Energy Systems Integration Facility (ESIF) at the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) in Golden, Colorado
OFFICE | HIGH BAY LABORATORIES | DATA CENTER
Case Study Roadmap
- Steve: NREL introduction
- Steve: Performance specs and integrated process design; holistic approach to data center design and integration
- Peter: Cooling and energy recovery
- Steve: Preliminary (early) data
- All: Panel discussion
NREL Snapshot
- The only national laboratory dedicated solely to energy efficiency and renewable energy
- Leading clean-energy innovation for 37 years
- ~2,300 total staff in world-class facilities
- Campus is a living model of sustainable energy; ~34,000 visitors in FY2013
- Located in Golden, Colorado
- Owned by the Department of Energy; operated by the Alliance for Sustainable Energy
Scope of Mission
- Energy Efficiency: residential buildings, commercial buildings, data centers, personal and commercial vehicles
- Renewable Energy: solar, wind and water, biomass, hydrogen, geothermal
- Systems Integration: grid infrastructure, distributed energy interconnection, battery and thermal storage, transportation
- Market Focus: private industry, federal agencies, Defense Dept., state/local government, international
ESIF: Energy Systems Integration Facility
- New 185,000 s.f. research facility: office space for 220, high bay and laboratory space, data center
- Integrated chips-to-bricks approach
- Process: design-build with performance specs
- LEED Platinum: a significant achievement for a building with large lab and data center loads
- Planning started in 2006
NREL Data Center (National Renewable Energy Laboratory, Steve Hammond)
- Showcase facility: 10 MW, 10,000 s.f.
- Leverages the favorable climate: evaporative cooling, NO mechanical cooling
- Waste heat captured and used to heat labs and offices
- World's most energy-efficient data center, PUE 1.06!
- High-performance computing: petascale+ HPC capability in 2013; 20-year planning horizon, 5 to 6 HPC generations
- Insight Center: scientific data visualization, collaboration and interaction
- Lower CapEx and lower OpEx
- Leveraged expertise in energy-efficient buildings to focus on a showcase data center; integrated chips-to-bricks approach
Critical Data Center Specs
- Warm-water cooling, 75°F (24°C): water is a much better working fluid than air; pumps trump fans
- Utilize high-quality waste heat, 95°F (35°C) or warmer
- 90%+ of IT heat load to liquid; up to 10% of IT heat load to air
- High-voltage power distribution (480 VAC); eliminate conversions
- Think outside the box: don't be satisfied with an energy-efficient data center nestled on a campus surrounded by inefficient laboratory and office buildings. Innovate, integrate, optimize.
- Dashboards report instantaneous, seasonal, and cumulative PUE values
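The "pumps trump fans" claim follows directly from fluid properties. A minimal sketch of the comparison, using standard room-temperature values for water and air (property values are textbook numbers, not from the slides):

```python
# Sketch: volumetric flow needed to carry 1 MW of heat at a ~20 F (11.1 K)
# temperature rise, water vs. air. Fluid properties are rough standard values.

def flow_for_heat(q_watts, cp_j_per_kg_k, rho_kg_per_m3, dt_kelvin):
    """Volumetric flow (m^3/s) needed to carry q_watts at a dt_kelvin rise."""
    mass_flow = q_watts / (cp_j_per_kg_k * dt_kelvin)  # kg/s, from q = m*cp*dT
    return mass_flow / rho_kg_per_m3

Q = 1.0e6   # 1 MW IT heat load
DT = 11.1   # ~20 F temperature rise, in kelvin

water = flow_for_heat(Q, cp_j_per_kg_k=4186, rho_kg_per_m3=997, dt_kelvin=DT)
air   = flow_for_heat(Q, cp_j_per_kg_k=1005, rho_kg_per_m3=1.2,  dt_kelvin=DT)

print(f"water: {water * 1000:.1f} L/s")  # ~21.6 L/s
print(f"air:   {air:.1f} m^3/s")         # ~74.7 m^3/s
print(f"air needs ~{air / water:.0f}x the volumetric flow")
```

The three-orders-of-magnitude gap in required volumetric flow is why moving heat with pumps costs so much less than moving it with fans.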
Systems Integration: Designing for Energy Reuse

HIGH PERFORMANCE COMPUTING DATA CENTER
- IT load grows from 1 MW up to 10 MW
- Achieving energy-efficiency goals requires:
  - Evaporative hydronic cooling (not compressor-based)
  - Maximize direct heat to water; minimize use of air for cooling
  - Maximize energy reuse in the form of low-grade heating

MUTUALLY BENEFICIAL RELATIONSHIP

OFFICE SPACE and HIGH BAY LABORATORIES
- Require large volumes of ventilation (outside) air
- Achieving energy-efficiency goals requires:
  - Maximize evaporative cooling; minimize compressor use
  - Utilize energy recovery to reduce operational energy use
  - Provide for static-pressure reset and exhaust-stack velocity turn-down via wind anemometer control
COOLING AND ENERGY RECOVERY
Liquid Cooling Technologies: Direct-Contact Liquid Cooling

TYPICAL SYSTEM CONFIGURATION
- A cooling distribution unit (CDU) isolates the computer cooling liquid (water) from the central building systems
- The CDUs circulate water to liquid-cooled local heat sinks (multiple per server)
- A distribution manifold brings water to each server (radiant piping technology)
- Supports higher-density solutions with a minimal amount of air cooling required
- Water is in direct contact with electronic equipment (no heat-pipe interface)
- Higher water-quality requirements
- Higher water-damage risk due to multiple system connections and distribution piping
Benefits of Liquid Cooling: Thermal Stability

DESIGN CONSIDERATIONS
- Due to the high and variable heat loads within the servers, manufacturers favor a stable, consistent temperature profile
- Cooling distribution units (CDUs) act as a buffer to central building systems and give the user control over operating set points
- Dedicated central systems serving the data center need to provide BOTH energy efficiency and temperature stability
NREL Energy Systems Integration Facility: Design Conditions and Capacity

Liquid Cooling
- 100% of total capacity
- 75°F chilled-water supply; minimum 95°F return temperature

Design Capacity
- Day one: 1.0 MW IT load
- Final: 10 MW IT load

Air Cooling
- 20% of total capacity (10% capacity day one)
- 80°F inlet air; 25% RH minimum; 60% RH (42°F dew point) maximum
- 100°F return temperature
NREL Energy Systems Integration Facility [image slide]
45°F Chilled Water System: 29% of the year in full water-side economizer
75°F Chilled Water System: 100% of the year in full water-side economizer
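The economizer-hours comparison above can be sketched as a simple screen over hourly wet-bulb temperatures: cooling towers produce water at roughly ambient wet-bulb plus an approach, so full economizer operation is possible whenever that sum is at or below the chilled-water setpoint. The logic, approach value, and wet-bulb profile below are all illustrative assumptions, not NREL's analysis or Golden, CO weather data:

```python
import math

def economizer_fraction(wet_bulb_f_hours, setpoint_f, approach_f=7.0):
    """Fraction of hours where tower water (wet-bulb + approach) meets setpoint."""
    ok = [wb for wb in wet_bulb_f_hours if wb + approach_f <= setpoint_f]
    return len(ok) / len(wet_bulb_f_hours)

# Synthetic hourly wet-bulb profile (NOT real weather data; illustration only)
wb = [45 + 15 * math.sin(2 * math.pi * h / 8760) for h in range(8760)]

print(f"45 F setpoint: {economizer_fraction(wb, 45.0):.0%} of hours")
print(f"75 F setpoint: {economizer_fraction(wb, 75.0):.0%} of hours")
```

Even with toy data, the qualitative result matches the slides: raising the chilled-water setpoint from 45°F to 75°F moves the economizer from part-year to year-round operation.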
| System | Cooling Load (kW, total) | Cooling Load (kW, per rack) | % Water-cooled | Supply Water Temp (°F) | Water ΔT (°F) | Water Press. Drop (PSI) | Air Inlet/Outlet Temp (°F) |
|---|---|---|---|---|---|---|---|
| System I | 942.9 | 112 | 96.5% | 70-75 | 25-30 (100°F RWT) | 20 | 80/- |
| System II | 398 | 51 | 86% | 75 | 10 (85°F RWT) | 15 | 80/87 |
| System III | 216 | 100 | 91% | 75 | 27.1 (102.1°F RWT) | 21 | 80/96.5 |
| System IV | 416-506 | 44.3-55.3 | 100% | 75 | 20+ | 14.5 | -/- |
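The table's load and ΔT columns imply a water flow per rack via q = m·cp·ΔT. A minimal sketch for System III (water properties are standard assumed values; the conversion helper is mine, not from the source):

```python
# Sketch: back out the per-rack water flow implied by the comparison table.
CP_WATER = 4186.0    # J/(kg K), assumed standard value
RHO_WATER = 997.0    # kg/m^3, assumed standard value
LPS_TO_GPM = 15.85   # liters/second -> US gallons/minute

def rack_flow_gpm(load_kw, delta_t_f):
    """Water flow (gpm) needed to absorb load_kw at a delta_t_f rise."""
    delta_t_k = delta_t_f * 5.0 / 9.0
    mass_flow = load_kw * 1000.0 / (CP_WATER * delta_t_k)      # kg/s
    return mass_flow / RHO_WATER * 1000.0 * LPS_TO_GPM         # gpm

# System III: 100 kW per rack at a 27.1 F rise (75 F in, 102.1 F out)
print(f"{rack_flow_gpm(100, 27.1):.1f} gpm per rack")
```

Note the trade-off visible in the table: System II's small 10°F ΔT would need roughly three times the flow per kW of System III, and returns lower-grade (85°F) heat for recovery.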
Heat of Vaporization & Evaporative Cooling
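Evaporative cooling works because water's heat of vaporization is enormous: every kilogram evaporated carries away roughly 2.26 MJ. A rough sketch of the water consumption implied by rejecting the day-one 1 MW load purely by evaporation (h_fg is a standard textbook value; drift and blowdown are ignored, so this understates real tower consumption):

```python
# Sketch: water evaporated to reject a given heat load.
H_FG = 2.26e6   # J/kg, latent heat of vaporization of water (assumed value)

def evap_rate_kg_per_s(q_watts):
    """Evaporation rate needed to reject q_watts entirely as latent heat."""
    return q_watts / H_FG

q = 1.0e6  # 1 MW rejected
rate = evap_rate_kg_per_s(q)
print(f"{rate:.2f} kg/s  ~= {rate * 86400 / 1000:.0f} m^3/day")
```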
NREL Energy Systems Integration Facility: Heat Recovery System, Recovered Heat to Lab and Office AHUs
NREL-ESIF Data Center Energy Recovery at 1 MW IT Load: Annual Heat Demand & Recovery

We want the bytes AND the BTUs!

- Energy targets: PUE 1.06, EUE 0.9
- Current design: PUE 1.05, EUE 0.7

Energy Usage Effectiveness (EUE) = (Total Data Center Annual Energy - Total Energy Recovered) / Total IT Equipment Annual Energy

[Chart residue removed: paired bar charts of annual energy (GWh/yr) and heat (MBtu) comparing data center equipment load and thermal energy recovered against campus hot-water heat demand, heat recovered, and heat exported to campus]
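The EUE definition above can be made concrete with a year of steady 1 MW IT load. The recovered-energy figure below is hypothetical, chosen only to show how a PUE 1.06 facility reaches the EUE 0.9 target:

```python
# Sketch of the PUE and EUE arithmetic, with illustrative numbers.

def pue(total_mwh, it_mwh):
    """Power usage effectiveness: total facility energy over IT energy."""
    return total_mwh / it_mwh

def eue(total_mwh, recovered_mwh, it_mwh):
    """Energy usage effectiveness: PUE credited for recovered energy."""
    return (total_mwh - recovered_mwh) / it_mwh

it = 8760.0            # MWh/yr at a steady 1 MW IT load
total = 1.06 * it      # facility energy at the PUE 1.06 target
recovered = 0.16 * it  # hypothetical recovery needed to hit EUE 0.9

print(f"PUE = {pue(total, it):.2f}")
print(f"EUE = {eue(total, recovered, it):.2f}")
```

Because recovered heat is subtracted from the numerator, EUE can drop below 1.0: the building ensemble beneficially uses more energy than the data center wastes.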
THE BOTTOM LINE
ESIF HPC Data Center PUE Calculation: NREL ESIF Data Center Energy Efficiency Metrics

POWER USAGE EFFECTIVENESS (PUE)
Data center energy efficiency is benchmarked using an industry-standard metric, power usage effectiveness (PUE), defined as follows:

PUE = Total Facility Power / IT Equipment Power

Total Facility Power = Lighting & Plug Power + Cooling Loads + Pump Loads + HVAC Loads + IT Equipment Power
IT Equipment Power = total power used to manage, process, store, or route data within the data center

ENERGY USAGE EFFECTIVENESS (EUE)
Energy usage effectiveness (EUE) is a metric of recovered energy beneficially used outside of the data center (heating), defined as follows:

EUE = (Total Facility Energy - Recovered Energy) / IT Equipment Energy

Total Facility Energy = Lighting & Plug Energy + Cooling Loads + Pump Loads + HVAC Loads + IT Equipment Energy
IT Equipment Energy = total energy used to manage, process, store, or route data within the data center

[One-line diagram residue removed: the slide's metering diagram traces power from the data center's service-entrance switchboard through the UPS (UPM1 & UPM2), transformer, distribution boards (DSB-*), PDUs, and cooling distribution racks (CDU-1 through CDU-6), plus cooling towers CT-602A-D, tower-water pumps P-602A/B, energy-recovery pumps P-604A/B, air handlers AHU-DC1/DC2 and MAU-DC1, lighting and plug panels, automatic transfer switches, and the stand-by generator, with electrical power meters (EPM) at each branch and recovered energy measured at heat exchanger HX-605A]
Preliminary ESIF HPC Data Center PUE

Measured loads (kW):
- IT equipment: 636
- Pumps: 34
- Fan walls: 7
- Lighting, plug load, misc.: 6
- Evaporative cooling towers: 2

PUE = (6 + 2 + 34 + 7 + 636) / 636 = 1.077!
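The preliminary measurement above is just the PUE definition applied to the metered subsystem loads:

```python
# The slide's preliminary PUE, computed from the metered loads (kW).
loads_kw = {
    "lighting_plug_misc": 6,
    "evap_cooling_towers": 2,
    "pumps": 34,
    "fan_walls": 7,
}
it_kw = 636

total_kw = it_kw + sum(loads_kw.values())
print(f"PUE = {total_kw} / {it_kw} = {total_kw / it_kw:.3f}")  # -> 1.077
```

Note that pumps dominate the 49 kW of overhead, which is still far cheaper than running fans and compressors for an equivalent air-cooled load.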
Green Data Center Bottom Line
- Energy recovery: heat ESIF offices, labs, and ventilation air (save $200K/year)
- NO mechanical chillers; evaporative water towers
- Costs less to build, costs less to operate

Comparison of ESIF (PUE 1.06) vs. an efficient PUE 1.3 data center:

CapEx savings, no chillers ($1.5K/ton):
- Initial build (600 tons): $0.9M
- 10-yr growth (2,400 tons): $3.6M
- 10-year savings: $4.5M

OpEx at 10 MW IT load ($1M/MW-year):
- Utilities at PUE 1.3: $13M/yr
- Utilities at PUE 1.06: $10.6M/yr
- Annual savings: $2.4M; 10-year savings: $24M (excludes heat-recovery benefit)
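The bottom-line figures follow from the slide's two unit costs, $1.5K per ton of avoided chiller capacity and $1M per MW-year of facility energy:

```python
# Sketch of the slide's savings arithmetic.

# CapEx: chiller capacity avoided, at $1.5K per ton
tons_initial, tons_growth = 600, 2400
cost_per_ton = 1500  # $
capex_savings = (tons_initial + tons_growth) * cost_per_ton

# OpEx at 10 MW IT load, $1M per MW-year of facility energy
it_mw, cost_per_mw_year = 10, 1.0e6
annual_13  = it_mw * 1.30 * cost_per_mw_year   # PUE 1.3 facility
annual_106 = it_mw * 1.06 * cost_per_mw_year   # PUE 1.06 facility
annual_savings = annual_13 - annual_106

print(f"CapEx savings:        ${capex_savings / 1e6:.1f}M")
print(f"Annual OpEx savings:  ${annual_savings / 1e6:.1f}M")
print(f"10-year OpEx savings: ${annual_savings * 10 / 1e6:.0f}M")
```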
What's Next: Systems Integration, Demand Response, and Energy Management

MAXIMIZE ENERGY EFFICIENCY & REUSE
- Maximize beneficial daylighting; minimize lighting loads
- Active radiant (chilled) beams for perimeter cooling and heating
- Underfloor air, natural ventilation, and solar-powered relief fans

ENERGY MANAGEMENT & DEMAND RESPONSE
- Go beyond power capping
- Energy management: alter workload to meet opportunity; alter workload to minimize impact