MSYS 4480 AC Systems, Winter 2015. Module 12: Data Centers. By Satwinder Singh. 14/05/2015
Data Centers
Data Centers are specialized environments that safeguard your company's most valuable equipment and intellectual property. Data Centers house the devices that:
- Process your business transactions
- Host your website
- Process and store your intellectual property
- Maintain your financial records
- Route your e-mails
Computer Rooms and Data Centres
Server Farms
- Google, The Dalles, OR: 6,500 m²
- Yahoo, Quincy, WA: 13,000 m²
- Microsoft, Quincy, WA: 43,600 m², 48 MW
- Cloud computing
Sea Containers
Data Centre
What are they? Space for computer equipment:
- House it
- Power it
- Cool it
Why should I care? It is more than just a room with servers:
- Heart of the organization
- Highest investment density ($/sq.ft)
- Highest power density (W/sq.ft)
What is in a typical rack?
- Rack of state-of-the-art servers: 20 kW @ 10 cents/kWh = $17,000 per year
- High-density rack: 10 kW @ 10 cents/kWh = $8,500 per year
- Average rack: 5 kW @ 10 cents/kWh = $4,250 per year
A Data Center can have HUNDREDS of these racks.
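These figures follow from a simple energy-cost calculation: rack power x hours per year x tariff. A minimal sketch assuming continuous (24x7) operation at the slide's 10 cents/kWh:

```python
# Minimal sketch of the per-rack annual energy cost figures above.
# Assumes continuous operation at nameplate load and a flat tariff.

HOURS_PER_YEAR = 8760
TARIFF = 0.10  # $/kWh, from the slide

racks = {
    "state-of-the-art rack": 20,  # kW
    "high-density rack": 10,
    "average rack": 5,
}

for name, kw in racks.items():
    annual_cost = kw * HOURS_PER_YEAR * TARIFF
    print(f"{name}: {kw} kW -> ${annual_cost:,.0f} per year")

# 20 kW -> $17,520, 10 kW -> $8,760, 5 kW -> $4,380; the slide rounds these
# to $17,000, $8,500 and $4,250.
```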
Data Center Types

| Type | Size | Power range | Cooling |
|---|---|---|---|
| Wiring Closets | 1-3 Racks | 0.5-3 kW | 1-2 ton split AC unit |
| Computer Rooms | 1-5 Racks | 3-9.9 kW | Split AC / high-precision ceiling AC |
| Small Data Centres | 5-20 Racks | 10-40 kW | Raised floor, overhead ducting, hot/cold aisle |
| Medium Data Centres | 20-100 Racks | 40-200 kW | Raised floor, overhead ducting, hot/cold aisle |
| Large Data Centres | Over 100 Racks | Over 200 kW | Raised floor, hot/cold aisle, supplemental cooling |
| Server Farms | 20,000 ft² and over | MW range | Raised floor, hot/cold aisle, supplemental cooling |
| Sea Containers | - | 700 kW | Cold aisle and hot-air plenum, overhead cooling |
Data Centers Components
Main Components (more than 20 elements):
1. Architectural
2. Mechanical
3. Electrical
4. Communications
5. Security
6. Structural
7. Housekeeping
Moore's Law: the observation that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented.
Moore's Law
Data Center Energy Hogs
Data centers are energy-intensive facilities:
- 10-100x more energy intensive than an office
- Server racks well in excess of 30 kW
- Surging demand for data storage
- EPA estimate: 3% of U.S. electricity
- Power and cooling constraints in existing facilities
Data Centers: High Energy Users
- Energy-intensive facilities
- Server racks now designed for more than 25 kW, projected to double every 5 years
- Surging demand for data storage
- Typical facility ~1 MW (could be more)
- US energy usage for data centers: 1.5% of total, projected to double in the next 5 years
- Rising cost of ownership: cost of utilities is surpassing IT equipment cost
Example: Data Centre, 15 ft x 15 ft = 625 ft²
IT load:
- 2 racks of state-of-the-art servers @ 20 kW = 40 kW ($34,000/yr)
- 4 high-density racks @ 10 kW = 40 kW ($34,000/yr)
- 14 average racks @ 5 kW = 70 kW ($60,000/yr)
Total connected IT load: 150 kW
Annual energy consumption: 1,314,000 kWh
Annual cost: $130,000 @ 10 cents per kWh
Non-UPS load (lighting, cooling, etc.):
- Cooling IT: 42 tons
- Cooling UPS: 3 tons
- Lighting: 3 tons
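A minimal sketch reproducing the example's load, energy, and cost figures, with a rough cooling check (1 ton of refrigeration ≈ 3.517 kW is an assumed conversion, not from the slide):

```python
# Minimal sketch of the 15 ft x 15 ft example data center above.
HOURS_PER_YEAR = 8760
TARIFF = 0.10  # $/kWh

# (rack count, kW per rack) for each rack class in the example
rack_groups = [(2, 20), (4, 10), (14, 5)]

total_it_kw = sum(count * kw for count, kw in rack_groups)   # 150 kW
annual_kwh = total_it_kw * HOURS_PER_YEAR                    # 1,314,000 kWh
annual_cost = annual_kwh * TARIFF                            # ~$131,400

print(f"Connected IT load: {total_it_kw} kW")
print(f"Annual IT energy:  {annual_kwh:,.0f} kWh")
print(f"Annual IT cost:    ${annual_cost:,.0f}")

# Rough cooling check: 1 ton of refrigeration = 3.517 kW of heat removal
print(f"IT cooling load:   {total_it_kw / 3.517:.0f} tons")  # ~43 tons vs 42 on the slide
```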
How is a Data Centre different from an Office?
- Annual energy costs: 15 times as high (up to 40 times)
- Power density: office space 3-5 W/ft²; data center 25 W/ft² to 60 W/ft² (high, increasing, and doubling every 5 years)
- 24x7 operation
- Value of facility: at least 10 times (could be 30 times)
Safe Temperature Limits
Environmental Conditions
Data center equipment's environmental conditions should fall within the ranges established by ASHRAE, as published in the Thermal Guidelines.
Equipment Environmental Specification
Allowable / Recommended
2011 ASHRAE Thermal Guidelines
2011 ASHRAE Allowable Ranges
Cooling in Data Centers
Psychrometric Bin Analysis
Estimated Savings
Most common configuration (Chilled Air) Raised Floor Courtesy: ASHRAE Datacom Equipment Power Trends and Cooling Applications
Managing Supply & Return Using ceiling as return air plenum (Chilled Air) Courtesy: ASHRAE Datacom Equipment Power Trends and Cooling Applications
High Density Spots Overhead Cooling Units + Raised Floor (Liquid Cooling + Chilled Air) Courtesy: ASHRAE Datacom Equipment Power Trends and Cooling Applications
Close-Coupled DC Cooling
Manage Airflow
Airflow Management
Airflow Management
Air Management
Isolate Cold and Hot Aisles
Natural Ventilation
Liquid Cooling Overview
- Water and other liquids (dielectrics, glycols and refrigerants) may be used for heat removal.
- Liquids typically use LESS transport energy (14.36 air-to-water horsepower ratio in the slide's example; see the sketch after this list).
- Liquid-to-liquid heat exchangers have closer approach temperatures than liquid-to-air (coils), yielding increased economizer hours.
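A minimal sketch of why liquids need less transport energy: for the same heat load and temperature rise, Q = m_dot x cp x dT gives water a far smaller mass and volume flow than air. The load, dT, and fluid properties below are illustrative assumptions, not the ASHRAE example behind the 14.36 ratio:

```python
# Minimal sketch: heat transport by air versus water for the same load.
Q = 20_000.0   # heat load to remove, W (one high-density rack, assumed)
dT = 10.0      # fluid temperature rise across the equipment, K (assumed)

# Approximate fluid properties near room temperature
air = {"rho": 1.2, "cp": 1006.0}      # kg/m^3, J/(kg*K)
water = {"rho": 997.0, "cp": 4180.0}

for name, f in (("air", air), ("water", water)):
    m_dot = Q / (f["cp"] * dT)        # mass flow from Q = m_dot * cp * dT, kg/s
    v_dot = m_dot / f["rho"]          # volumetric flow, m^3/s
    print(f"{name:5s}: {m_dot:6.3f} kg/s  {v_dot * 1000:8.2f} L/s")

# Roughly: air needs ~1.99 kg/s (~1657 L/s) versus water ~0.48 kg/s (~0.48 L/s).
# Fan/pump power scales with volumetric flow times pressure rise, so the vastly
# smaller volume flow is what drives the large air-to-water transport-power
# ratios cited on the slide.
```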
Metrics in Data Centers (Historic)
Load intensity:
- Data Center floor area: ft²
- Total load density: W/ft²
- Computing load density: W/ft²
- HVAC load density: W/ft²
Data Center electrical power demand:
- UPS loss: kW
- Computer load (UPS power): kW
- HVAC chilled water plant: kW
- HVAC: kW
- Lighting: kW
HVAC chiller plant:
- Chiller efficiency: kW/ton
- Chilled water plant efficiency: kW/ton
- Chiller load: tons
- Data Center load: tons
HVAC air systems:
- Central air handling fan power: cfm/kW
- Air handler fan efficiency
- External temperature and humidity
Design data:
- Design basis for computer load: kW/ft²
- Design basis for chilled water ΔT, air-side HVAC, and % RH
- UPS systems
- Flow rate
Metrics in Data Centers
- Power Usage Effectiveness (PUE) = total facility power / IT equipment power. A PUE of 3 is equal to a 33% DCiE.
- Data Center Infrastructure Efficiency (DCiE) = 1/PUE.
- Computer Power Consumption Index (CPCI): the fraction of total data center power that is used by the computers.
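A minimal sketch of the PUE/DCiE arithmetic above, using the 150 kW IT load from the earlier example and an assumed total facility power:

```python
# Minimal sketch of the PUE / DCiE relationship described above.
it_power_kw = 150.0        # IT (UPS output) load, from the earlier example
facility_power_kw = 450.0  # assumed total facility power: IT + cooling + lighting + losses

pue = facility_power_kw / it_power_kw   # Power Usage Effectiveness
dcie = 1 / pue                          # Data Center Infrastructure Efficiency

print(f"PUE  = {pue:.2f}")              # 3.00
print(f"DCiE = {dcie:.0%}")             # 33%, matching "PUE of 3 = 33% DCiE"
```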
Data Center Efficiency Metric
- Power Usage Effectiveness (PUE) is an industry-standard data center efficiency metric: the ratio of total power drawn by the facility (including infrastructure such as pumps, lights, fans, power conversions, and UPS losses) to the power used by the compute equipment.
- Not perfect; some folks play games with it.
- A 2011 survey estimates the industry average is 1.8.
- In a typical data center, nearly half of the power goes to things other than compute capability.
PUE
PUE
Power Usage Effectiveness (PUE): EPA estimated PUE values for 2011 (Figure 1):
- Current Trends: 1.9
- Improved Operations: 1.7
- Best Practices: 1.3
- State of the Art: 1.2
PUE Less than 1
Re-using waste heat from the data center elsewhere can render the PUE below 1.0.
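One common way to credit exported waste heat is the Green Grid's Energy Reuse Effectiveness (ERE) metric, which subtracts reused energy from total facility energy before dividing by IT energy; that is how the ratio can fall below 1.0. A minimal sketch with assumed annual energy figures:

```python
# Minimal sketch of crediting reused waste heat (the Green Grid ERE formulation).
# All energy figures below are illustrative assumptions, in MWh per year.

it_energy = 1000.0        # energy delivered to IT equipment
total_facility = 1400.0   # total energy entering the facility (PUE = 1.4)
reused_heat = 500.0       # assumed waste heat exported to, e.g., district heating

pue = total_facility / it_energy                   # 1.40 (by definition never below 1.0)
ere = (total_facility - reused_heat) / it_energy   # 0.90 once reuse is credited

print(f"PUE = {pue:.2f}, ERE = {ere:.2f}")
```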
PUE Less than 1
Best Practices in DC Cooling
1. Reduce the IT load: virtualization and consolidation (up to 80% reduction).
2. Implement a contained hot-aisle and cold-aisle layout: curtains, equipment configuration, blanking panels, cable entrance/exit ports.
3. Install economizers (air or water) and evaporative cooling (direct or indirect).
4. Raise the discharge air temperature.
5. Install VFDs on fans, pumps, chillers, and towers.
6. Reuse data center waste heat if possible.
7. Expand the humidity range and improve humidity control (or disconnect it). Raise the chilled-water (if used) set point: increasing chilled-water temperature by 1°F reduces chiller energy use by about 1.4% (see the sketch after this list).
8. Central plants: use a central plant (e.g., chiller/CRAHs vs. CRAC units); use centralized controls on CRAC/CRAH units to prevent simultaneous humidifying and dehumidifying.
9. Install high-efficiency equipment, including UPS, power supplies, etc.
10. Move chilled water as close to the server as possible (direct liquid cooling).
11. Consider a centralized high-efficiency water-cooled chiller plant: air cooled ≈ 2.9 COP, water cooled ≈ 7.8 COP.
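A minimal sketch of the chilled-water setpoint rule of thumb in item 7; the baseline chiller energy and the size of the setpoint raise are illustrative assumptions, not figures from the slides:

```python
# Minimal sketch: each 1 F increase in chilled-water setpoint cuts chiller energy
# roughly 1.4% (rule of thumb from the slide). All inputs are assumptions.

baseline_chiller_kwh = 500_000   # assumed annual chiller energy, kWh
savings_per_degF = 0.014         # 1.4% per degree F
setpoint_raise_degF = 5          # assumed raise, e.g. 44 F -> 49 F supply water

# Compound the per-degree saving; a simple linear estimate is also common.
fraction_remaining = (1 - savings_per_degF) ** setpoint_raise_degF
saved_kwh = baseline_chiller_kwh * (1 - fraction_remaining)

print(f"Estimated annual chiller savings: {saved_kwh:,.0f} kWh "
      f"({(1 - fraction_remaining) * 100:.1f}%)")
# ~6.8% for a 5 F raise under these assumptions.
```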
Availability
Modularity
Tier Level (Reliability): Infrastructure Tiers
The higher the availability you want your Data Center to achieve, the more layers of infrastructure it must have.
- N capacity is the amount of infrastructure required to support all servers or networking devices in the Data Center, assuming that the space is filled to maximum capacity and all devices are functioning. N is most commonly used when discussing standby power, cooling, and the room's network.
- N+1 infrastructure can support the Data Center at full server capacity and includes an additional component.
- 2N, alternately called a system-plus-system design, involves fully doubling the required number of infrastructure components.
- Even higher tiers exist or can be created: 3N, 4N, and so on.
Tier Level (Reliability)
- Tier I: a single path for power and cooling distribution, without redundant components, providing 99.671% availability.
- Tier II: a single path for power and cooling distribution, with redundant components, providing 99.741% availability.
- Tier III: multiple power and cooling distribution paths, but only one path active; has redundant components and is concurrently maintainable, providing 99.982% availability.
- Tier IV: multiple active power and cooling distribution paths, with redundant components; fault tolerant, providing 99.995% availability.
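A minimal sketch converting these availability percentages into annual downtime hours (8,760 hours per year, continuous operation assumed):

```python
# Minimal sketch: convert tier availability percentages into annual downtime.
HOURS_PER_YEAR = 8760

tiers = {"Tier I": 99.671, "Tier II": 99.741, "Tier III": 99.982, "Tier IV": 99.995}

for tier, availability_pct in tiers.items():
    downtime_hrs = HOURS_PER_YEAR * (1 - availability_pct / 100)
    print(f"{tier}: {downtime_hrs:.1f} h/yr of downtime")

# Roughly: Tier I ~28.8 h, Tier II ~22.7 h, Tier III ~1.6 h, Tier IV ~0.4 h,
# which lines up with the annual IT downtime row in the classification table below.
```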
Data Centers Tier Classification

| | Tier I | Tier II | Tier III | Tier IV |
|---|---|---|---|---|
| Number of delivery paths | Only 1 | Only 1 | 1 active, 1 passive | 2 active |
| Redundant components | N | N+1 | N+1 | 2(N+1) or S+S |
| Support space to raised floor ratio | 20% | 30% | 80-90% | 100% |
| Initial watts/sq. ft. | 20-30 | 40-50 | 40-60 | 50-80 |
| Ultimate watts/sq. ft. | 20-30 | 40-50 | 100-150 | 150+ |
| Raised floor height | 12" | 18" | 30-36" | 30-36" |
| Floor loading (lb/sq. ft.) | 85 | 100 | 150 | 150+ |
| Utility voltage | 208, 480 | 208, 480 | 12-15 kV | 12-15 kV |
| Months to implement | 3 | 3 to 6 | 15 to 20 | 15 to 20 |
| Year first deployed | 1965 | 1970 | 1985 | 1995 |
| Construction $/sq. ft. raised floor * | $450 | $600 | $900 | $1,100+ |
| Annual IT downtime due to site | 28.8 hrs | 22.0 hrs | 1.6 hrs | 0.4 hrs |
| Site availability | 99.671% | 99.749% | 99.982% | 99.995% |

* Excludes land and abnormal civil cost. Assumes a minimum of 15,000 sq. ft. of raised floor in an architecturally plain one-story building, fitted out for the initial capacity but with the backbone designed to reach the ultimate capacity with the installation of additional components. Make adjustments for NYC, Chicago, and other high-cost areas.
Data Center Space Classifications

| Numerical ranking | Terminology | Summary definition | Uptime Institute equivalent |
|---|---|---|---|
| 1 | Uncontrolled Availability | Shared building power and cooling; no generator | - |
| 2 | Managed Availability | Dedicated utility power; unconditioned power to load; dedicated, non-redundant HVAC; no / shared generator | - |
| 3 | Controlled Availability (99.671%) | Dedicated utility power; uninterruptible power system (UPS); dedicated, non-redundant HVAC; generator | Tier I (Non-Redundant) |
| 4 | Moderate Availability (99.749%) | Dedicated utility power; redundant UPS systems; dedicated, redundant HVAC; dedicated generator | Tier II (Redundant) |
| 5 | High Availability (99.982%) | Dedicated utility power; redundant UPS systems; dedicated, redundant HVAC; dedicated, redundant generators | Tier III (Concurrently Maintainable) |
| 6 | Maximum Availability (99.995%) | Dedicated, redundant utility power; redundant UPS systems; dedicated, redundant HVAC; dedicated, redundant generators; fully automated systems fail-over | Tier IV (Fault Tolerant) |