Green Sustainable Data Centres. Checklist - EU Code of Conduct on Data Centres. Annex


This course is produced under the authority of e-infranet: http://e-infranet.eu/

Course team
prof. dr. Colin Pattinson, Leeds Beckett University (United Kingdom), course chairman and author of Chapters 1 and 7
prof. dr. Ilmars Slaidins, Riga Technical University (Latvia), assessment material development: Study Guide
dr. Anda Counotte, Open Universiteit (The Netherlands), distance learning material development, editor-in-chief
dr. Paulo Carreira, IST, Universidade de Lisboa (Portugal), author of Chapter 8
Damian Dalton, MSc, University College Dublin (Ireland), author of Chapters 5 and 6
Johan De Gelas, MSc, University College of West Flanders (Belgium), author of Chapters 3 and 4
dr. César Gómez-Martin, CénitS - Supercomputing Center and University of Extremadura (Spain), author of Checklist Data Centre Audit
Joona Tolonen, MSc, Kajaani University of Applied Sciences (Finland), author of Chapter 2

Program direction
prof. dr. Colin Pattinson, Leeds Beckett University (United Kingdom)
prof. dr. Ilmars Slaidins, Riga Technical University (Latvia)
dr. Anda Counotte, Open Universiteit (The Netherlands)

Hosting and lay-out
http://portal.ou.nl/web/green-sustainabledata-centres
Arnold van der Leer, MSc
Maria Wienbröker-Kampermann
Open Universiteit in the Netherlands

This course is published under a Creative Commons Licence, see http://creativecommons.org/

First edition 2014

Content

Study Guide annex: Checklist - EU Code of Conduct on Data Centres

Introduction
1 Data Centre Utilisation, Management and Planning
  1.1 Involvement of Organisational Groups
  1.2 General Policies
  1.3 Resilience Level and Provisioning
2 IT Equipment and Services
  2.1 Selection and Deployment of New IT Equipment
  2.2 Deployment of New IT Services
  2.3 Management of Existing IT Equipment and Services
  2.4 Data Management
3 Cooling
  3.1 Air Flow Management and Design
  3.2 Cooling Management
  3.3 Temperature and Humidity
  3.4 Cooling Plant - High Efficiency Cooling Plant
  3.5 Computer Room Air Conditioners
4 Data Centre Power Equipment
  4.1 Selection and Deployment of New Power Equipment
5 Other Data Centre Equipment
  5.1 General Practices
6 Monitoring
  6.1 Energy Use and Environmental Measurement
  6.2 Energy Use and Environmental Collection and Logging
  6.3 Energy Use and Environmental Reporting

Annex: Checklist - EU Code of Conduct on Data Centres
César Gómez-Martin, University of Extremadura

INTRODUCTION

On completion of this checklist, learners should be able to:
- Identify and recognise data centre energy efficiency best practices within the Code of Conduct.
- Define the extent to which best practice is in place in their data centre.

Expected minimum practices
Practices are marked in the expected column as:

Category | Description
Entire Data Centre | Expected to be applied to all existing IT, Mechanical and Electrical equipment within the data centre
New Software | Expected during any new software install or upgrade
New IT Equipment | Expected for new or replacement IT equipment
New build or retrofit | Expected for any data centre built or undergoing a significant refit of the M&E equipment

Value of practices
Each practice has been assigned a qualitative value to indicate the level of benefit to be expected. These values range from 1 to 5, with 5 indicating the maximum value. All practices outlined in this document are mandatory for a green and sustainable data centre, but data centre managers are encouraged to implement the most valuable practices before the less valuable ones.

Marking practice status
The learner should mark each practice in the checklist form as one of:

Mark | Description
No mark | Not implemented.
Committed Date | Not yet implemented, but a programme is in place to implement the practice by the specified date.
I | Implemented practice.
E | Endorsed practice: this practice cannot be implemented by the student because it is outside their area of responsibility, but it is endorsed to their suppliers or customers.
I & E | This practice is partially within the control of the applicant. The applicant has implemented the practice as far as practical and endorsed the practice to their customers or suppliers.
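To make the marking scheme concrete, the sketch below is a minimal illustration in Python of how a single checklist entry could be recorded during an audit. The class and field names are hypothetical, and the value shown for the example practice is illustrative, not taken from the Code of Conduct.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Possible status marks, following the marking scheme described above.
class Mark(Enum):
    NOT_IMPLEMENTED = "No mark"
    COMMITTED_DATE = "Committed Date"
    IMPLEMENTED = "I"
    ENDORSED = "E"
    IMPLEMENTED_AND_ENDORSED = "I & E"

@dataclass
class Practice:
    number: str          # e.g. "3.1.2"
    name: str            # e.g. "Rack air flow management - Blanking Plates"
    category: str        # "Entire Data Centre", "New Software", "New IT Equipment" or "New build or retrofit"
    value: int           # qualitative benefit from 1 (lowest) to 5 (highest)
    mark: Mark = Mark.NOT_IMPLEMENTED
    committed_date: Optional[str] = None   # only meaningful when mark == Mark.COMMITTED_DATE

# Example entry: blanking plates installed throughout the data floor
# (the value of 3 is illustrative only).
blanking = Practice("3.1.2", "Rack air flow management - Blanking Plates",
                    "Entire Data Centre", value=3, mark=Mark.IMPLEMENTED)
```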

1 Data Centre Utilisation, Management and Planning

1.1 INVOLVEMENT OF ORGANISATIONAL GROUPS

1.1.1 Group involvement
Establish an approval board containing representatives from all disciplines (software, IT, M&E). Require the approval of this group for any significant decision to ensure that the impacts of the decision have been properly understood and an effective solution reached.

1.2 GENERAL POLICIES

1.2.1 Consider the embedded energy in devices
Carry out an audit of existing equipment to maximise any unused existing capability by ensuring that all areas of optimisation, consolidation and aggregation are identified prior to new material investment.

1.3 RESILIENCE LEVEL AND PROVISIONING

1.3.1 Build resilience to business requirements
Only the level of resilience, and therefore availability, actually justified by business requirements and impact analysis should be built, or purchased in the case of a colocation customer.

1.3.2 Consider multiple levels of resilience
It is possible to build a single data centre to provide multiple levels of power and cooling resilience to different floor areas.

1.3.3 Lean provisioning of power and cooling for a maximum of 18 months of data floor capacity
The provisioning of excess power and cooling capacity in the data centre drives substantial fixed losses and is unnecessary. A rough illustration of this effect follows this section.

1.3.4 Design to maximise the part load efficiency once provisioned
The design of all areas of the data centre should maximise the achieved efficiency of the facility under partial fill and variable IT electrical load.
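As a rough, hypothetical illustration of why lean provisioning matters (practice 1.3.3), the sketch below assumes that fixed no-load losses scale with installed UPS and cooling capacity. The loss fractions and capacities are illustrative assumptions for the sake of the example, not figures from the Code of Conduct.

```python
# Illustrative only: fixed losses assumed proportional to installed capacity.
# The 2% UPS and 1.5% cooling no-load loss fractions are assumptions for this
# example, not values taken from the Code of Conduct.

HOURS_PER_YEAR = 8760

def annual_fixed_loss_kwh(installed_capacity_kw: float,
                          ups_noload_fraction: float = 0.02,
                          cooling_noload_fraction: float = 0.015) -> float:
    """Energy lost per year simply by having the capacity installed and energised."""
    fixed_loss_kw = installed_capacity_kw * (ups_noload_fraction + cooling_noload_fraction)
    return fixed_loss_kw * HOURS_PER_YEAR

it_load_kw = 400        # current data floor load
lean_build_kw = 600     # roughly 18 months of forecast growth
full_build_kw = 2000    # day-one build-out of the final design capacity

print(round(annual_fixed_loss_kwh(lean_build_kw)))   # ~184,000 kWh/year
print(round(annual_fixed_loss_kwh(full_build_kw)))   # ~613,000 kWh/year

# Provisioning the full design capacity up front roughly triples the fixed
# losses while serving the same 400 kW of IT load.
```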

2 IT Equipment and Services

2.1 SELECTION AND DEPLOYMENT OF NEW IT EQUIPMENT

2.1.1 IT hardware - Power
Include the energy efficiency performance of the IT device as a high priority decision factor in the tender process. This may be through the use of Energy Star or SPECpower.

2.1.2 New IT hardware - Restricted (legacy) operating temperature and humidity range
Where no equipment of the type being procured meets the operating temperature and humidity range of practice 2.1.3, equipment supporting at a minimum the restricted (legacy) range of 15 C - 32 C (59 F - 89.6 F) inlet temperature, 20% to 80% relative humidity and a maximum dew point below 17 C (62.6 F) may be procured.

2.1.3 New IT hardware - Expected operating temperature and humidity range
Include the operating temperature and humidity ranges at the air intake of new equipment as high priority decision factors in the tender process.

2.1.4 Select equipment suitable for the data centre - Air flow direction
When selecting equipment for installation into racks, ensure that the air flow direction matches the air flow design for that area. This is commonly front to rear or front to top.

2.1.5 Enable power management features
Formally change the deployment process to include the enabling of power management features on IT hardware as it is deployed. This includes BIOS, operating system and driver settings.

2.1.6 Provision to the as configured power
Provision power and cooling only to the as-configured power draw capability of the equipment, not the PSU or nameplate rating.

2.1.7 Energy Star compliant hardware
The Energy Star labelling programs for IT equipment should be used as a guide to server selection where and when available for that class of equipment.

2.1.8 Energy & temperature reporting hardware
Select equipment with power and inlet temperature reporting capabilities, preferably reporting energy used as a counter in addition to power as a gauge. (A minimal polling sketch follows this section.)

2.1.9 Select free standing equipment suitable for the data centre - Air flow direction
When selecting equipment which is free standing or supplied in custom racks, the air flow direction of the enclosures should match the air flow design in that area of the data centre. This is commonly front to rear or front to top.

2.1.10 IT equipment power against inlet temperature
When selecting new IT equipment, require the vendor to supply at minimum either the total system power or the cooling fan power for temperatures covering the full allowable inlet temperature range for the equipment, under 100% load on a specified benchmark such as SPECpower (http://www.spec.org/power_ssj2008/).
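For practice 2.1.8, the sketch below is a minimal, hypothetical example of polling a server's power draw and inlet temperature over its out-of-band management interface. It assumes a Redfish-capable BMC; the address, credentials and resource paths are illustrative, and real endpoints and property names vary by vendor and Redfish schema version.

```python
import requests

# Illustrative BMC address and credentials; replace with real values.
BMC = "https://10.0.0.42"
AUTH = ("monitor", "example-password")

def read_power_watts(chassis: str = "1") -> float:
    # Older Redfish schemas expose instantaneous power under the Power resource;
    # newer BMCs may use /PowerSubsystem or /Sensors instead.
    r = requests.get(f"{BMC}/redfish/v1/Chassis/{chassis}/Power",
                     auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()["PowerControl"][0]["PowerConsumedWatts"]

def read_inlet_temp_c(chassis: str = "1") -> float:
    r = requests.get(f"{BMC}/redfish/v1/Chassis/{chassis}/Thermal",
                     auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    # Pick the sensor the vendor labels as the intake/inlet sensor.
    for sensor in r.json()["Temperatures"]:
        if "Inlet" in sensor.get("Name", ""):
            return sensor["ReadingCelsius"]
    raise RuntimeError("No inlet temperature sensor found")

if __name__ == "__main__":
    print(read_power_watts(), "W at", read_inlet_temp_c(), "C inlet")
```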

2.2 DEPLOYMENT OF NEW IT SERVICES

2.2.1 Deploy using Grid and Virtualisation technologies
Processes should be put in place to require senior business approval for any new service that requires dedicated hardware and will not run on a resource sharing platform.

2.2.2 Reduce IT hardware resilience level
Determine the business impact of service incidents for each deployed service and deploy only the level of hardware resilience actually justified.

2.2.3 Reduce hot/cold standby equipment
Determine the business impact of service incidents for each IT service and deploy only the level of Business Continuity / Disaster Recovery standby IT equipment and resilience that is actually justified by the business impact.

2.2.4 Select efficient software
Make the energy use performance of the software a primary selection factor.

2.2.5 Develop efficient software
Make the energy use performance of the software a major success factor of the project.

2.3 MANAGEMENT OF EXISTING IT EQUIPMENT AND SERVICES

2.3.1 Decommission unused services
Completely decommission and remove the supporting hardware for unused services.

2.3.2 Audit of existing IT environmental requirements
Identify the allowable intake temperature and humidity ranges for existing installed IT equipment.

2.4 DATA MANAGEMENT

2.4.1 Data management policy
Develop a data management policy to define which data should be kept, for how long and at what level of protection.

3 Cooling

3.1 AIR FLOW MANAGEMENT AND DESIGN

3.1.1 Design - Contained hot or cold air
There are a number of design concepts whose basic intent is to contain and separate the cold air from the heated return air on the data floor: hot aisle containment; cold aisle containment; contained rack supply, room return; room supply, contained rack return (including rack chimneys); contained rack supply, contained rack return. This action is expected for air cooled facilities over 1 kW per square metre power density.

3.1.2 Rack air flow management - Blanking Plates
Installation of blanking plates where there is no equipment, to reduce hot air re-circulating through gaps in the rack.

3.1.3 Rack air flow management - Other openings
Installation of aperture brushes (draught excluders) or cover plates to cover all air leakage opportunities in each rack.

3.1.4 Raised floor air flow management
Close all unwanted apertures in the raised floor. Review placement and opening factors of vented tiles to reduce bypass.

3.1.5 Design - Hot/cold aisle
As the power densities and air flow volumes of IT equipment have increased, it has become necessary to ensure that equipment shares an air flow direction, within the rack, in adjacent racks and across aisles.

3.1.6 Equipment segregation
Deploy groups of equipment with substantially different environmental requirements and/or equipment airflow direction in a separate area. Where the equipment has different environmental requirements it is preferable to provide separate environmental controls.

3.1.7 Provide adequate free area on rack doors
Solid doors often impede the cooling airflow and may promote recirculation within the enclosed cabinet, further increasing the equipment intake temperature. Where doors are necessary, they can be replaced with partially perforated doors to ensure adequate cooling airflow.

3.1.8 Separate environmental zones
Where a data centre houses both IT equipment compliant with the extended range of practice and other equipment which requires more restrictive temperature or humidity control, separate areas should be provided.

3.1.9 Separate environmental zones - Colocation or Managed Service Provider
Customers requiring extremely tight environmental control or items such as legacy equipment should not compromise the entire data centre for specific items of equipment.

3.2 COOLING MANAGEMENT

3.2.1 Review of cooling before IT equipment changes
The availability of cooling, including the placement and flow of vented tiles, should be reviewed before each IT equipment change to optimise the use of cooling resources.

3.2.2 Review of cooling strategy
Periodically review the IT equipment and cooling deployment against strategy.

3.2.3 Effective regular maintenance of cooling plant
Effective regular maintenance of the cooling system, in order to conserve or achieve a like-new condition, is essential to maintain the designed cooling efficiency of the data centre.

3.3 TEMPERATURE AND HUMIDITY

3.3.1 Review and if possible raise target IT equipment intake air temperature
Data centres should be designed and operated at their highest efficiency to deliver intake air to the IT equipment within the temperature range of 10 C to 35 C (50 F to 95 F). (A simple intake-range check is sketched after this section.)

3.3.2 Review and increase the working humidity range
Reduce the lower humidity set point(s) of the data centre within the ASHRAE Class A2 range (20% relative humidity) to remove de-humidification losses.

3.3.3 Review and if possible raise chilled water temperature
Review and if possible increase the chilled water temperature set points to maximise the use of free cooling economisers and reduce compressor energy consumption.

3.3.4 Industrial Space
The data centre should be considered as an industrial space, designed, built and operated with the single primary objective of delivering high availability IT services reliably and efficiently.
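As a small illustration of practices 2.3.2 and 3.3.1, the sketch below flags IT intake readings that fall outside a target range. The sensor data is hypothetical and the thresholds simply reuse the 10 C - 35 C range quoted above; in practice the range should be the most restrictive one identified by the audit of installed equipment.

```python
# Target intake range from practice 3.3.1; tighten to the most restrictive
# range identified by the audit of installed equipment (practice 2.3.2).
INTAKE_MIN_C = 10.0
INTAKE_MAX_C = 35.0

# Hypothetical readings: rack identifier -> measured intake temperature in C.
intake_readings = {
    "rack-A01": 22.4,
    "rack-A02": 27.9,
    "rack-B07": 36.2,   # likely a recirculation or containment problem
    "rack-C03": 9.1,    # overcooled: wasted cooling energy
}

def out_of_range(readings: dict) -> dict:
    """Return only the racks whose intake temperature is outside the target range."""
    return {rack: t for rack, t in readings.items()
            if not (INTAKE_MIN_C <= t <= INTAKE_MAX_C)}

print(out_of_range(intake_readings))
# {'rack-B07': 36.2, 'rack-C03': 9.1}
```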

3.4 COOLING PLANT - HIGH EFFICIENCY COOLING PLANT

3.4.1 Chillers with high COP
Where refrigeration is installed, make the Coefficient of Performance (COP) of chiller systems through their likely working range a high priority decision factor during procurement of new plant. (A worked COP example follows this section.)

3.4.2 Cooling system operating temperatures
Evaluate the opportunity to decrease the condensing temperature or increase the evaporating temperature; reducing the delta T between these temperatures means less work is required in the cooling cycle and hence improved efficiency.

3.4.3 Efficient part load operation
Optimise the facility for the partial load it will experience for most of its operational time rather than the maximum load, e.g. sequence chillers, or operate cooling towers with shared load for increased heat exchange area.

3.4.4 Variable speed drives for compressors, pumps and fans
Reduce energy consumption for these components in the part load condition, where they operate for much of the time.

3.4.5 Select systems which facilitate the use of economisers
Cooling designs should be chosen which allow the use of as much free cooling as possible according to the physical site constraints and any local climatic or regulatory conditions that may be applicable.

3.4.6 Do not share data centre chilled water system with comfort cooling
In buildings which are principally designed to provide an appropriate environment for IT equipment, and that have cooling systems designed to remove heat from technical spaces, do not share chilled water systems with human comfort cooling in other parts of the building.
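To make practices 3.4.1 and 3.4.2 concrete, the sketch below works through the standard definition COP = cooling output / electrical input, together with the ideal (Carnot) limit COP_ideal = T_evap / (T_cond - T_evap) with temperatures in kelvin, which is what makes a smaller lift more efficient. The plant figures are illustrative assumptions, not Code of Conduct values.

```python
# COP = useful cooling delivered / electrical energy consumed.
# The Carnot limit shows why a smaller lift (condensing minus evaporating
# temperature) improves the achievable efficiency.

def carnot_cop(evap_c: float, cond_c: float) -> float:
    t_evap = evap_c + 273.15
    t_cond = cond_c + 273.15
    return t_evap / (t_cond - t_evap)

# Illustrative comparison: raising the chilled water (evaporating) temperature
# and lowering the condensing temperature reduces the lift.
print(round(carnot_cop(evap_c=5, cond_c=45), 1))    # ~7.0
print(round(carnot_cop(evap_c=15, cond_c=35), 1))   # ~14.4

# Real chillers achieve only a fraction of the Carnot limit, but the trend
# holds. For the actual COP: a chiller delivering 500 kW of cooling while
# drawing 100 kW of electrical power has COP = 500 / 100 = 5.0.
```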

3.5 COMPUTER ROOM AIR CONDITIONERS

3.5.1 Variable Speed Fans
Many old CRAC units operate fixed speed fans which consume substantial power and obstruct attempts to manage the data floor temperature.

3.5.2 Do not control humidity at CRAC unit
The only humidity control that should be present in the data centre is on fresh make-up air coming into the building, not on re-circulating air within the equipment rooms. Humidity control at the CRAC unit is unnecessary and undesirable.

4 Data Centre Power Equipment

4.1 SELECTION AND DEPLOYMENT OF NEW POWER EQUIPMENT

4.1.1 Modular UPS deployment
It is now possible to purchase modular (scalable) UPS systems across a broad range of power delivery capacities.

4.1.2 High efficiency UPS
High efficiency UPS systems should be selected, of any technology, including electronic or rotary, to meet site requirements. (A short worked example of the impact of UPS efficiency follows this section.)

4.1.3 Use efficient UPS operating modes
UPS should be deployed in their most efficient operating modes, such as line interactive.

4.1.4 Elimination of Isolation Transformers
Isolation transformers in power distribution to IT equipment down to 110 V are typically not required in Europe and should be eliminated from designs, as they introduce additional transformer losses unnecessarily.

5 Other Data Centre Equipment

5.1 GENERAL PRACTICES

5.1.1 Turn off Lights
Lights should be turned off, preferably automatically, whenever areas of the building are unoccupied, for example using switches which turn off lighting a specified time after manual activation.

5.1.2 Low energy lighting
Low energy lighting systems should be used in the data centre.
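To illustrate why practices 4.1.2 and 4.1.3 matter, the sketch below compares the annual energy lost in the UPS at two efficiency levels for a given IT load. The load and efficiency figures are illustrative assumptions, not requirements from the Code of Conduct.

```python
# Annual UPS losses for a constant IT load at two illustrative efficiencies.
HOURS_PER_YEAR = 8760

def annual_ups_loss_kwh(it_load_kw: float, efficiency: float) -> float:
    """Input energy minus delivered energy over a year, assuming constant load."""
    input_kw = it_load_kw / efficiency
    return (input_kw - it_load_kw) * HOURS_PER_YEAR

it_load_kw = 500
print(round(annual_ups_loss_kwh(it_load_kw, 0.94)))   # ~279,574 kWh/year
print(round(annual_ups_loss_kwh(it_load_kw, 0.97)))   # ~135,464 kWh/year

# Moving from 94% to 97% efficiency saves roughly 144,000 kWh per year at
# this load, before counting the reduced cooling needed to remove the loss.
```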

6 Monitoring

6.1 ENERGY USE AND ENVIRONMENTAL MEASUREMENT

6.1.1 Incoming energy consumption meter
Install metering equipment capable of measuring the total energy use of the data centre, including all power conditioning, distribution and cooling systems.

6.1.2 IT energy consumption meter
Install metering equipment capable of measuring the total energy delivered to IT systems, including power distribution units.

6.2 ENERGY USE AND ENVIRONMENTAL COLLECTION AND LOGGING

6.2.1 Periodic manual readings
Entry level energy, temperature and humidity (dry bulb temperature, relative humidity and dew point temperature) reporting can be performed with periodic manual readings of measurement and metering equipment. This should occur at regular times, ideally at peak load.

6.2.2 Achieved economised cooling hours - new build DC
Require collection and logging of full economiser, partial economiser and full mechanical hours throughout the year.

6.3 ENERGY USE AND ENVIRONMENTAL REPORTING

6.3.1 Written report
Entry level reporting consists of periodic written reports on energy consumption and environmental ranges. This should include determining the averaged DCiE or PUE over the reporting period. (A minimal PUE calculation is sketched below.)

6.3.2 Achieved economised cooling hours - new build DC
Require reporting to log full economiser, partial economiser and full mechanical hours throughout the year.
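As a minimal sketch of the averaged PUE/DCiE reporting mentioned in practice 6.3.1, the example below derives both metrics from start- and end-of-period readings of the meters described in 6.1.1 and 6.1.2. All meter readings are illustrative.

```python
# PUE  = total facility energy / IT energy   (over the reporting period)
# DCiE = IT energy / total facility energy = 1 / PUE

def period_energy_kwh(start_reading: float, end_reading: float) -> float:
    """Energy consumed over the reporting period from cumulative meter readings."""
    return end_reading - start_reading

# Illustrative cumulative meter readings (kWh) at the start and end of a month.
facility_kwh = period_energy_kwh(1_204_500, 1_302_900)   # incoming meter (6.1.1)
it_kwh       = period_energy_kwh(  801_200,   863_400)   # IT meter (6.1.2)

pue = facility_kwh / it_kwh
dcie = 1 / pue
print(f"PUE = {pue:.2f}, DCiE = {dcie:.0%}")
# PUE = 1.58, DCiE = 63%
```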