IBM Systems and Technology Group Lab Services
Energy Efficient Data Centers
Steven Ahladas, IBM Corp (ahladas@us.ibm.com)
Keeping our clients in the race.
All trends going the same way
- IT servers were an asset; they are becoming a commodity.
- Energy was a commodity for servers; it is becoming a cost.
But data center energy cost now appears as a critical limitation to server expansion: heat growth can become a growth killer by stopping the introduction of new servers (data center room limitations).
Energy economics around IT servers must be rebalanced with new energy-conscious criteria to stop the cost increase:
1) Stop the trend
2) Reduce power consumption
[Chart: worldwide new server spending vs. power and cooling spending (US$B) and installed base (M units), 1996-2010. Source: IDC Presentation, "The Impact of Power and Cooling on Data Center Infrastructure," Doc #201722, May 2006]
Energy/Performance Metrics
Server performance is increasing much faster than server power.
[Chart: annual change (%) in performance (capacity for storage), power, perf/space, and perf/watt for High End z, xSeries 1U, p550, and high-end storage. Source: Roger Schmidt]
Comparison of Typical Server Utilization Rates
[Chart: used vs. wasted capacity for mainframe, UNIX, and x86 servers]
Energy Management Policy Example: Server Consolidation Conserves Energy
Before: APP 1 through APP 8 each run on a dedicated system (System 1 through System 8), each 10% busy and drawing 1 kW. Total power: 8 kW.
After advanced virtualization: all eight applications run on one system, 70% busy and drawing 4 kW. Total power: 4 kW.
Server consolidation exploiting virtualization is the most effective tool for reducing energy costs.
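The consolidation arithmetic on this slide can be sketched in a few lines; the per-server and consolidated power draws are the slide's own example figures, not measurements.

```python
# Consolidation example from the slide: eight lightly loaded systems
# versus one virtualized system carrying the same aggregate work.
n_systems = 8
power_each_kw = 1.0          # each standalone server: ~1 kW at 10% busy
consolidated_power_kw = 4.0  # one virtualized server: 4 kW at 70% busy

before_kw = n_systems * power_each_kw
savings_kw = before_kw - consolidated_power_kw
savings_pct = 100 * savings_kw / before_kw

print(f"Before: {before_kw:.0f} kW, after: {consolidated_power_kw:.0f} kW")
print(f"Savings: {savings_kw:.0f} kW ({savings_pct:.0f}%)")
```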
Cool Blue: Virtualization on IBM Systems and Storage
- This shift has eliminated one floor of servers, cut power and cooling costs by 80 percent, and reduced the administrative staff (and costs) by 10 administrators.
- No other platform has demonstrated vertical scaling (scale-up) savings as clearly as the mainframe. Virtualization was first introduced by IBM in the 1960s to allow the partitioning of large mainframe environments; System z virtualization remains the gold standard today.
- IBM SAN Volume Controller (SVC) for storage virtualization increases storage utilization rates by 30% while reducing the growth rate of storage capacity by 20%.
- VMware, Microsoft Virtual Server, and Xen offerings are available for IBM System x and IBM BladeCenter.
Annual power cost per logical server:
  Dell PowerEdge 1950               $920.70
  Dell PowerEdge 1950, virtualized  $353.14
  Dell PowerEdge 1955               $550.06
  Dell PowerEdge 1955, virtualized  $207.57
  IBM System z BC S07, small        $127.23
  IBM System z BC S07, expanded      $73.06
  IBM System z EC S18                $32.80
Source: "Avoiding the $25 Million Server: Data Center Power, Cooling, and Growth," Sine Nomine Associates, 4/6/07
Cool Blue: Storage Virtualization
Pool and share I/O resources for improved utilization and reduced energy needs.
Before: separate EMC, IBM, HP, Hitachi, and IBM arrays with a combined utilization of 50%.
After IBM SAN Volume Controller: combined utilization of 90%.
Cool Blue: Deploy Efficient IBM System Storage
10-year TCO example using blended tape and disk best practices:
- Deploy more power-efficient storage
- Utilize storage more efficiently
Scenario: 250 TB of storage, 25% growth over 10 years.
Reduce Consumption Through Provisioning
De-provision standby servers: keep 875 servers provisioned and active, and move the 125 de-provisioned servers to standby. The power savings come from running those servers in standby mode.
Energy Efficient Data Centers
[Chart: histogram of data center Power Usage Effectiveness (total building power / IT equipment power) across surveyed data centers, with bins near 1.53, 1.81, 2.10, 2.38, and above, plus a cumulative-percentage curve]
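Power Usage Effectiveness is simply total facility power divided by IT equipment power; a minimal sketch, using Data Center B's figures from the next slide (1700 kW total, 63% computer loads) as the worked example:

```python
# PUE: total facility power divided by power delivered to IT equipment.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Data Center B: 1700 kW total, of which 63% (~1071 kW) reaches the computers.
print(f"PUE = {pue(1700, 1700 * 0.63):.2f}")  # ~1.59, mid-pack in the histogram
```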
DATA CENTER ENERGY EFFICIENCIES
Data Center A (total power = 580 kW): computer loads 38%, HVAC 54%, UPS losses 6%, lighting 2%
Data Center B (total power = 1700 kW): computer loads 63%, HVAC chilled water plant 14%, HVAC air movement 9%, UPS losses 13%, lighting 1%
Data Center A spends 54% of its power on HVAC; Data Center B only 23%. Data Center A could possibly save ~180 kW.
Those that did many things well still had room to improve.
From the LBNL data center energy study, W. Tschudi
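The ~180 kW figure follows directly from the two HVAC shares: if Data Center A (54% HVAC of 580 kW) could match Data Center B's 23% HVAC share, the difference is the saving.

```python
# Potential HVAC savings for Data Center A if it matched Data Center B's
# HVAC fraction (chilled water plant 14% + air movement 9% = 23%).
total_a_kw = 580
hvac_share_a = 0.54
hvac_share_b = 0.23

savings_kw = total_a_kw * (hvac_share_a - hvac_share_b)
print(f"Potential savings: ~{savings_kw:.0f} kW")  # ~180 kW, as on the slide
```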
TYPICAL RAISED FLOOR DATA CENTERS
GOAL: provide the specified inlet temperatures to every IT rack as efficiently as possible.
3D Temperature Distributions of an IBM Supercomputer
[Figure: measured 3D temperature maps of a data center, shown as horizontal slices at heights z = 0.5 to 8.5 feet; temperatures range from 13.0 C to 54.5 C, with cold aisles, hot aisles, and a hot spot at the long aisle]
Vertical Temperature Gradients
[Chart: inlet air temperature (C) vs. height (feet) for 12 racks; inlet temperatures rise with height, approaching or exceeding the upper specification near the top of the rack]
COOLING EFFICIENCY
#1: INTERMIXING between hot and cold air locally increases inlet air temperatures, forcing excessively low CRAC discharge temperatures; this hurts cooling production efficiency.
#2: RECIRCULATION causes small differences between return and CRAC discharge temperatures, so CRAC units are not fully utilized; this hurts cooling delivery efficiency.
PERFORATED TILES
- In many places, perforated tiles do not support the required cooling capacity.
- Use high-throughput tiles (at least 50% open).
- Remove dampers to enhance airflow (300 cfm carries roughly 2 kW of cooling).
[Chart: static pressure (in. H2O) vs. airflow (CFM) for a 56% open Tate GrateAire tile, with and without a 100% open damper]
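The "300 cfm ~ 2 kW" rule of thumb follows from the sensible-heat equation for air; the ~20 F temperature rise across the server assumed below is my illustrative value, not stated on the slide.

```python
# Sensible cooling carried by an airflow, using the common approximation
# Q[BTU/hr] = 1.08 * cfm * dT[degF], with 1 kW = 3412 BTU/hr.
def airflow_cooling_kw(cfm: float, delta_t_f: float = 20.0) -> float:
    return 1.08 * cfm * delta_t_f / 3412.0

print(f"{airflow_cooling_kw(300):.1f} kW")  # about 1.9 kW, matching the slide
```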
UNDER-FLOOR BLOCKAGE
Examples of blockage: cables, water pipes.
- Avoid any blockage in the critical path to the perforated tiles.
- Route cabling and similar obstructions under the hot aisles.
- Consider deploying under-floor barriers to limit airflow to the zone that needs cooling: air escaping the zone lowers underfloor air pressure and ends up as untargeted room airflow.
LEAKS AND CABLE CUTOUTS
- Block cable cutouts using brushes, bags, mats, or pillows.
- Add perforated tiles and use higher-throughput tiles.
- Ensure good sealing between the tiles.
LEAKAGE in Sample Raised Floors

Room                               #1               #2               #3               #4
# CRACs                            11               7                17               35
Total flow / cooling (1)           67,500 cfm       42,700 cfm       84,000 cfm       194 kcfm
                                   450 kW (130 t)   285 kW (82 t)    560 kW (162 t)   1294 kW (375 t)
# perf. tiles                      118              43               24               185
Perf. tile flow / cooling (1)      29,500 cfm       11,000 cfm       6,000 cfm        46.5 kcfm
                                   200 kW (58 t)    73 kW (21 t)     40 kW (12 t)     310 kW (90 t)
Ceiling tile flow / cooling (1)    1,900 cfm        2,000 cfm        NA               NA
                                   12 kW (3 t)      13 kW (4 t)
Not-targeted flow / cooling (2)    38,000 cfm       31,700 cfm       78,000 cfm       137.5 kcfm
                                   250 kW (72 t)    211 kW (61 t)    520 kW (151 t)   918 kW (266 t)
                                   (57%)            (74%)            (93%)            (71%)
Total HW / room power              300/360 kW       110/140 kW       175/230 kW       585/730 kW
                                   (87/104 t)       (32/41 t)        (51/68 t)        (170/212 t)
Margin                             90 kW (26 t)     145 kW (42 t)    330 kW (96 t)    564 kW (164 t)
                                   (25%)            (103%)           (143%)           (77%)

(1) At 60 F (16 C) CRAC discharge temperature
(2) Cable cutouts, other losses
H. Hamman, 2/2007
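The derived columns of this table can be checked from the raw figures: the not-targeted fraction is not-targeted cfm over total cfm, and the margin is cooling capacity minus room power (the slide's 57% for room #1 vs. a computed 56% is a rounding difference). A sketch for two of the rooms:

```python
# Reproducing two derived columns of the leakage table: the not-targeted
# airflow fraction and the cooling margin (capacity minus room power).
rooms = {
    "#1": {"total_cfm": 67500, "not_targeted_cfm": 38000,
           "capacity_kw": 450, "room_power_kw": 360},
    "#3": {"total_cfm": 84000, "not_targeted_cfm": 78000,
           "capacity_kw": 560, "room_power_kw": 230},
}
for name, r in rooms.items():
    leak_pct = 100 * r["not_targeted_cfm"] / r["total_cfm"]
    margin_kw = r["capacity_kw"] - r["room_power_kw"]
    margin_pct = 100 * margin_kw / r["room_power_kw"]
    print(f"{name}: {leak_pct:.0f}% not targeted, "
          f"margin {margin_kw} kW ({margin_pct:.0f}%)")
```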
Understanding Where Costs Hide is Critical
The Rear Door Heat eXchanger can remove over 50% of a rack's heat output:
- No new fans or electricity needed
- Attaches to the back of the rack (adds about 5 inches of depth)
- No rearrangement of the data center
- Cost effective: roughly $286 per kW of cooling
The Cool Blue heat exchanger adds cooling capacity at ~1/4 the cost of traditional methods.
[Diagrams: normal vs. improved server operation on a raised floor; underfloor chilled air rises through perforated tiles and cable openings, flows cold into the rack front and hot out the back; the improved case adds a rear door heat exchanger with water lines on an IBM Enterprise Rack]
Motivation for Liquid Cooling
- Increased heat removal performance: liquids have superior thermal properties compared to air.
- Design flexibility: sensible heat transport to locations with available space; long-distance transport is possible via a secondary loop feeding a centralized, efficient water-to-water heat exchanger. (Air-side heat transport is limited by fin efficiency.)
- Disadvantage: increased complexity.

                            Air      H2O
Thermal conductivity        0.0245   0.6     W/(m*K)
Volumetric heat capacity    1.27     4176    kJ/(m^3*K)
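The table above makes the case quantitatively; dividing the two volumetric heat capacities shows how much more heat a given volume of water carries per kelvin of temperature rise than the same volume of air:

```python
# Volumetric heat capacities from the slide's table.
air_cv_kj_m3k = 1.27     # air, near room conditions
water_cv_kj_m3k = 4176   # water

ratio = water_cv_kj_m3k / air_cv_kj_m3k
print(f"Water carries ~{ratio:.0f}x more heat per unit volume per kelvin")
```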
Data Center Power Distribution Systems & Efficiency
Power conversion and distribution losses mean one watt of chip power requires this much power at the facility level:

Stage                              Std Raised Floor   480 V AC Servers   400 V DC Servers
Feeder (11.2 kV AC)                1.79 W             1.70 W             1.57 W
Transformer (to 480 V AC)          1.75 W             1.67 W             1.54 W
UPS system                         1.58 W             1.50 W             1.46 W
Raised floor transformer           1.52 W             1.50 W             1.46 W
Distribution (all combined)        1.47 W             1.47 W             1.40 W
Server bulk power supply           1.25 W             1.25 W             1.25 W
VRM (12 V DC to 1 V DC load)       1.00 W             1.00 W             1.00 W

- Standard raised floor: UPS double-converts 480 V AC to 400 V DC and back; distribution steps 480 V AC down to 208 V AC for the servers.
- 480 V AC servers: 480 V AC delivered directly to the servers, saving about 5%.
- 400 V DC servers: the UPS output stays at 400 V DC, converted directly to 12 V DC in the server, saving about 12%, plus capital/cost-of-acquisition savings.
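The headline savings percentages follow from the feeder-level entries of the table: compare facility watts per chip watt across the three distribution schemes.

```python
# Facility watts required per 1 W of chip power, cumulative by stage,
# taken from the slide's three columns (feeder first, load last).
std_watts = [1.79, 1.75, 1.58, 1.52, 1.47, 1.25, 1.00]    # 208 V AC raised floor
ac480_watts = [1.70, 1.67, 1.50, 1.50, 1.47, 1.25, 1.00]  # 480 V AC servers
dc400_watts = [1.57, 1.54, 1.46, 1.46, 1.40, 1.25, 1.00]  # 400 V DC servers

for name, chain in [("480 V AC", ac480_watts), ("400 V DC", dc400_watts)]:
    savings_pct = 100 * (std_watts[0] - chain[0]) / std_watts[0]
    print(f"{name}: {chain[0]} W vs {std_watts[0]} W per chip watt "
          f"(~{savings_pct:.0f}% less)")
```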
Thank You! Questions?
Steve Ahladas, Data Center Services: ahladas@us.ibm.com