Reference Design Quicktour: Design Summary


Quicktour Design Summary

[Facility layout diagram: Outdoor Heat Rejection; Mixed Use or Purpose Built Data Center Facility; Indoor/Outdoor Power Generation; Mechanical Room; Electrical Room; Data Center Whitespace; Reclaimed Whitespace; UPS Room; Battery Room]

December 25, 2009

Power Available for IT (kW): 1000
Target Availability: Tier 2
Rack Density (kW/rack): 8
Validated Availability (%): 99.7885
Annualized PUE (@ 100% load): 1.35
Cost ($): $4,297,000
Total Data Center Footprint (sq ft): 11,650
Whitespace Footprint (sq ft): 6,350
Number of IT Racks: 108
Number of Networking Racks: 22
Total U-Space: 5,421
InRow Cooling Redundancy: N+1
Whitespace Cooling Medium: Chilled Water
Mechanical Space Footprint (sq ft): 2,850
Primary Chilled Water Loops: 1
Secondary Chilled Water Loops: 1
Heat Rejection Method: Chilled Water with Cooling Tower
Electrical Space Footprint (sq ft): 2,450
Onsite Power Generation: Y
Minutes of UPS Runtime: 6

This design showcases a 1 MW mid-range data center with design add-ons that improve overall availability. Because its primary focus is high value, the architecture uses additional hardware in a redundant configuration to minimize downtime, which increases the validated availability of the entire data center. The underlying architecture illustrates the primary requirements for the electrical, mechanical and whitespace assets that provide a path toward the target availability while following recommended best practices in data center design.

The design has a high-density profile and a Tier 2 target availability, and it incorporates a layer of redundancy across the power and cooling branches of the data center. Redundant UPSs and cooling technologies not only increase availability but also simplify maintenance. The higher densities supported within the IT racks also yield a substantial savings in whitespace footprint.

The design works well in environments where change is constant, processes are less structured, flexibility is a requirement and budgets are tight. Typically, it fits larger retail, educational (universities, colleges, etc.), small finance such as local banks, local government and healthcare environments. With redundancy built into the mechanical architecture, the design is attractive to sites looking to pilot virtualization or other dynamic IT technologies. It can also be a purpose-built data center integrated into a mixed-use building that houses other departments or offices.
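As a quick sanity check on the summary table, the validated availability figure can be converted into expected annual downtime. A minimal sketch in Python (the helper name and the use of an average 8,766-hour year are our conventions; the 99.7885% figure comes from the table above):

```python
HOURS_PER_YEAR = 8766  # average year length (365.25 days x 24 h)

def annual_downtime_hours(availability_pct: float) -> float:
    """Expected hours of downtime per year for a given availability."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

# Validated availability from the design summary: 99.7885%
print(f"{annual_downtime_hours(99.7885):.1f} h/yr")  # ~18.5 hours per year
```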

Quicktour

Schneider Electric's architects, engineers and data center managers developed this design after significant design inputs and discussion. The expertise of these individuals spans the breadth of the electrical, mechanical, whitespace, controls and systems domains. Unifying these domains has forced realistic thinking into a design that is intended to deliver value to our joint customers. The end-to-end data center management suite, which acts as a layer over the entire data center, provides optimized monitoring, reporting and control functionality within the individual spaces. Schneider Electric provides intelligent devices such as sensors, meters and controllers, as well as the expertise to implement management throughout the data center.

Benefits

Whitespace

IT rooms, also referred to as data center whitespace, are typically the crown jewel of the entire building. Built using the innovative InfraStruXure architecture, the whitespace has been designed as a high-power-density critical environment. It supports a total power capacity of about one megawatt. This design showcases all the components typically housed on the data center floor, such as racks, coolers, power distribution units and cooling distribution units, supporting functional areas dedicated to IT, networking and storage.

Each row within the whitespace has been reduced to its simplest form so that it is repeatable in a standardized and predictable manner. This enables customers to deploy data centers in a fraction of the time and helps eliminate the waste associated with over-sizing.

Each high-density zone within the whitespace has been designed to support a high-availability design requirement. A zone consists of two rows of racks with integrated power and cooling distribution. Each row has been designed with chilled water-based, close-coupled InRow RC air conditioners that enhance the efficiency of the overall space. By increasing the density, the design processes the same IT load in 108 racks instead of 284, reducing the space requirement by almost half. This translates into significant cost savings, especially in areas where real estate costs are exorbitant.

To maintain thermal continuity without over-sizing, the design incorporates redundant (N+1) chilled water-based, close-coupled InRow RC air conditioners. Close-coupling these redundant coolers boosts heat-handling capability at the source. Coolant is supplied to the InRow RC units by a set of cooling distribution units located along the walls. These intelligent air conditioners monitor temperature at each rack to ensure that the critical IT equipment within the space receives the supply of cool air necessary for smooth operation.

Power is handled by a high-efficiency UPS located in the electrical room and distributed by stationary power distribution units at the end of each row. These units are placed in a modular fashion to prevent downtime and distribute power along and across rows, by ladders and cables, to rack-mount PDUs located within each rack. All rack-mount PDUs in the design are of the metered variety in support of a high-availability design requirement. Metering also enables remote monitoring of the units for efficiency tracking.

The security of the room is maintained at multiple points. At the rack level, access is controlled inexpensively by a door lock and sensor. The sensor enables alerting when a rack is physically accessed, reducing manual control of keying hardware.
Maintaining security at the rack level is considered a best practice as well as a basic-level requirement. At the room level, basic-level security cameras are utilized for monitoring.

The glue that holds the whitespace infrastructure equipment together is the management layer. The whitespace management appliance of choice in this design is InfraStruXure Central. This appliance contains software that monitors everything located within the data center. Using a variety of protocols, InfraStruXure Central is able to communicate with whitespace components as well as components located off the whitespace. With the ability to monitor, manage and automate, InfraStruXure Central has become the preferred portal for data center management for many IT and data center managers. Advanced reporting and visibility features further enhance this tool.

Overall, this whitespace design maximizes the value and performance of the data center. Designed for customers looking to step into the realm of large data centers, this design, complete with high-density capability, scale and availability, provides a solid footing on entry.
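The 108-versus-284 rack comparison above follows directly from the density figures. A minimal sketch (the 8 kW/rack density and 108-rack count come from the design summary; the ~3 kW/rack legacy density is back-solved here for illustration):

```python
import math

def racks_needed(it_load_kw: float, density_kw_per_rack: float) -> int:
    """IT racks required to host a given load at a given per-rack density."""
    return math.ceil(it_load_kw / density_kw_per_rack)

it_load_kw = 108 * 8  # 864 kW across the 108 IT racks in this design

print(racks_needed(it_load_kw, 8.0))   # 108 racks at the design density
print(racks_needed(it_load_kw, 3.05))  # 284 racks at an assumed legacy density
```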

Quicktour

Mechanical Space

The mechanical space is the thermal vein that runs through the entire data center. It comprises multiple rooms housing different types of equipment, all engineered to optimize energy efficiency, availability, ease of maintenance and control. The rooms include a variety of components, including chillers, economizers, heat rejection and piping loops, to provide a complete cooling solution for today's mission-critical data center. The mechanical space integrates with the electrical space and whitespace to provide the necessary cooling and heat rejection for those spaces. Conversely, the electrical space provides the critical power to keep the mechanical plant up and running.

The mechanical space supports cooling for a 1 MW data center design. The design affords optimal availability levels with a double-ended chilled water loop, and most downstream components are configured for an N+1 level of redundancy.

The design achieves highly efficient heat rejection by utilizing two 340-ton water-cooled chillers in an N+1 configuration and redundant cooling towers. The chillers are sized to run at partial load and have variable-speed compressors, allowing them to operate at peak efficiency and provide energy savings throughout the life of the data center.

A pair of plate-frame heat exchangers with three-way valves is provided between the chillers and cooling towers for improved efficiency. This type of economization eliminates the need for glycol-based antifreeze protection, allowing for easier year-round operation. Economizers reduce energy consumption and thereby operating cost, providing significant energy savings over the life of the data center, relative to geographic location and the number of available free-cooling days.

Multiple pairs of chilled and condenser water pumps in an N+1 redundant configuration provide better control, fault tolerance and efficiency. The secondary chilled water pumps operate at variable speed, allowing flow rate to be adjusted to peak and off-peak cooling demands. VFD control of the pumps allows more precise maintenance of the desired pressure and rate of flow.

Redundant chilled water stratified thermal storage tanks provide the necessary ride-through time for unplanned downtime of the chiller. These tanks are fabricated with a steel inner lining and an internal diffuser for thermal stratification, which provides longer ride-through times. Additionally, redundant city water storage tanks provide up to four hours of make-up condenser water in case of the loss of the main water supply.

Wall-mounted ultrasonic humidifiers are installed throughout the whitespace to regulate humidity levels. InRow cooling units are configured with N+1 redundancy in the whitespace using a close-coupled cooling architecture.
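As a rough check on the chiller plant sizing described above, capacity in refrigeration tons can be compared against the design heat load. A minimal sketch (1 ton = 3.517 kW is the standard conversion; the 15% allowance for non-IT heat is our illustrative assumption, not a figure from this design):

```python
KW_PER_TON = 3.517  # one refrigeration ton removes 3.517 kW of heat

it_load_kw = 1000.0
design_heat_kw = it_load_kw * 1.15  # assumed allowance for UPS losses, lights, etc.

chiller_capacity_kw = 340 * KW_PER_TON  # one 340-ton chiller, ~1196 kW

# With two 340-ton chillers in an N+1 arrangement, a single chiller can
# carry the full design load; the second is redundant.
assert chiller_capacity_kw >= design_heat_kw
print(f"one chiller: {chiller_capacity_kw:.0f} kW vs. load: {design_heat_kw:.0f} kW")
```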

Quicktour

Electrical Space

The electrical space supplies power to all of the critical and non-critical components of the data center. It is a combination of multiple rooms housing a variety of electrical equipment, including UPS, switchgear, batteries and PDUs, all engineered to provide the highest levels of reliability and performance. Additionally, the electrical space is outfitted with mechanical equipment to maintain optimal ambient operating conditions. These critical pieces of the data center are integrated to provide a high level of availability without compromising efficiency.

This electrical space supports a 1 MW data center designed as the high-value architecture offering a high-quality infrastructure. The electrical architecture has been dictated by UPS and distribution characteristics. The UPS choices for this design are the world's largest and most efficient modular scalable UPS, the Symmetra MW, and the reliable MGE Galaxy 5000, deployed in support of the whitespace and mechanical loads, respectively.

The Symmetra MW UPS has been configured as an N+1 redundant unit. It provides reliable, conditioned power to downstream APC InfraStruXure power distribution units (PDUs), which in turn power the IT load. The MGE Galaxy 5000 UPS gives equipment like the chilled water pumps an extended ride-through time for the cooling system. A strategy that includes soft starters and VFDs also prevents over-sizing of the mechanical UPS.

The Square D switchgear and boards have been designed primarily around criticality. The use of switchboards in the design allows easy scaling up to the designed maximum capacity, allowing for growth. Additionally, the draw-out breakers in the PZ4 switchgear allow for ease of maintenance.

A network of intelligent devices and power meters has been applied across the electrical space. These management devices, on a seamless communication platform, facilitate power quality monitoring and predictive maintenance of the electrical assets as a whole. Every component in this design is built and tested to the applicable ANSI, NEMA, UL or IEEE standards. For additional detailed design information, please see the One Line Schematics associated with the space.
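The six minutes of UPS runtime listed in the design summary implies a particular amount of stored energy. A minimal sketch (the 95% inverter efficiency is our illustrative assumption):

```python
def ups_energy_kwh(load_kw: float, runtime_min: float, inverter_eff: float = 0.95) -> float:
    """Usable stored energy needed to ride a load through a given runtime."""
    return load_kw * (runtime_min / 60) / inverter_eff

# 1 MW of IT load and 6 minutes of runtime, per the design summary
print(f"{ups_energy_kwh(1000, 6):.0f} kWh")  # ~105 kWh of usable stored energy
```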

Quicktour

Overview: Power Usage Effectiveness (PUE)

How is data center efficiency defined?

Power usage effectiveness (PUE)* is an industry-standard metric for measuring efficiency in the data center. The PUE ratio is the total power coming into the data center divided by the power going to the IT equipment. For example, if it takes 1.5 times as much energy in total (IT load plus supporting load) to operate a data center as is required for just the IT equipment, the PUE is 1.5. Smaller PUE values indicate a more efficient data center.

Why is the PUE presented as an annualized average here?

A PUE ratio, like any other index, can be shown as either an instantaneous or an average value. A collection of instantaneous data over a period of time can be very helpful in displaying the efficiency trend of a data center at the operation stage. Such information helps an IT manager decide on load shifting and a facility manager plan for peak usage; together, they can maximize the energy bill savings under the electricity contract with the utility company. An annualized PUE ratio, on the other hand, considers the average weather conditions throughout the year, which have a direct impact on the performance of both power and cooling systems during a yearly cycle. The validated model takes into account the differences in equipment configurations, individual equipment efficiency, and the estimated free-cooling hours per year. In this case, average weather in Chicago, IL was selected, which provides about 2,400 hours per year of 100% free cooling. The annualized PUE can best help a data center manager make trade-offs between efficiency and systems architecture at the design evaluation and planning stage.

Why is the PUE a curve against the IT load, rather than a fixed value?

The PUE ratio is a simplified system performance index based on aggregated performance results from dozens of electrical and mechanical components. PUE is calculated from the power used for primary functions as well as the efficiency losses while equipment is operating. The efficiency loss of both types of equipment is not constant across the operating load. Schneider Electric has engineered the reference design architecture to operate at optimal performance at 100% of the designed load, with a provision of at least 20% buffer capacity in all electrical equipment. Even at 80% load, the efficiency performance is still expected to be in the top 10% of all new data centers through 2015, according to APC's forecast. For more information about the PUE estimation model, please consult APC White Paper #113.

*Data center infrastructure efficiency (DCiE) is another well-known industry-standard metric. DCiE is the percentage of IT load power divided by total data center power, and is the inverse of PUE (1/PUE). Higher DCiE values indicate a more efficient data center.
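The PUE and DCiE arithmetic described above is simple to state in code. A minimal sketch (the 1,350 kW total facility load is illustrative, chosen to match this design's annualized PUE of 1.35, not a measurement):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT power; lower is better, 1.0 is ideal."""
    return total_facility_kw / it_load_kw

def dcie(total_facility_kw: float, it_load_kw: float) -> float:
    """DCiE = IT power / total facility power, i.e. 1/PUE, as a percentage."""
    return 100 * it_load_kw / total_facility_kw

print(pue(1350, 1000))             # 1.35
print(f"{dcie(1350, 1000):.1f}%")  # 74.1%
```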

Quicktour

System Planning Sequence

What is a reference design?

Given the IT parameters of criticality, capacity, and growth plan, there are potentially thousands of ways the physical infrastructure system could be designed, but there is a much smaller number of good designs. A library of these optimal (i.e., recommended and proven) designs can be used to quickly narrow down the possibilities. Much like a catalog of kitchen designs at a home improvement store, reference designs provide a choice of general architecture for the design of the system. Reference designs can help in zeroing in or ruling out, by presenting designs that may be hard to articulate or perhaps haven't been thought of. Reference designs have most of the system engineering built in, but with enough variability to satisfy the specific requirements of a range of user projects.

A reference design is an actual system design that is a prototype or shorthand representation of a collection of key attributes of a hypothetical user design. A reference design embodies a specific combination of attributes including criticality features, power density, scalability features, and instrumentation level. Reference designs have a practical range of power capacity for which they are suited. The great power of reference designs is that they provide a shortcut to effective evaluation of alternative designs without the time-consuming process of actual specification and design (tasks #4 and #5 in this planning sequence). High-quality decisions can be made quickly and effectively.

Choosing the physical room

Selecting the physical room where the system will be installed allows for consideration of room characteristics that might constrain the choice of reference design. Examples of possible constraints are room size, location of doors, location of support columns, floor strength, and ceiling height. For example, suppose a reference design is not an exact physical fit in the user's room: racks may need to be added or removed. Such a reference design might have a cost attribute allowing estimation of the cost adjustment needed to scale the design to fit a specific floor space.

Choosing a reference design

The reference design is chosen to support the criticality, power capacity, and growth plan determined in task #1 of the planning sequence. A reference design will have characteristics that make it more or less adaptable in each of these areas. For example, a reference design may be designed for one criticality level, or it may allow modifications to adjust the criticality up or down. A reference design may be able to support a range of power loads. A reference design may or may not be scalable. Scalability of the reference design determines how well it can support the phase-in steps of a growth plan. Scalable reference designs may differ in the granularity (step size) of their scalability, making them more or less adaptable to a specific growth plan. The IT load profile (constructed from the four parameters of the growth plan) will suggest a rough idea of phase-in steps. The reference design is then chosen to have a scalability that supports that general phase-in concept. Since the actual phase-in steps will be row-based, the row-by-row details of the phase-in plan are finalized later in the planning sequence, after the row layout of the room has been determined.

A library of reference designs will be particularly useful if it has a software tool to assist in selecting an appropriate reference design.
Input to such a reference design selector would be the foundational IT parameters established in the previous Determine IT Parameters task: criticality, capacity, and growth plan. Other essential requirements, such as type of cooling or power density, may also need to be included to further narrow down the possible reference designs. The automatically selected reference design(s) can then be reviewed for additional considerations that may not be handled by the selector tool, such as location of doorways, support columns, or other significant constraints.
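A selector tool of the kind described above is, at its core, a filter over a library of design records. A minimal sketch (the record fields and sample entries are our illustrative assumptions, not an actual design catalog):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ReferenceDesign:
    name: str
    tier: int            # target availability tier
    capacity_kw: int     # maximum supported IT load
    cooling: str         # e.g. "chilled water", "direct expansion"
    scalable: bool

LIBRARY = [
    ReferenceDesign("RD-1000-CW", tier=2, capacity_kw=1000, cooling="chilled water", scalable=True),
    ReferenceDesign("RD-200-CW", tier=3, capacity_kw=200, cooling="chilled water", scalable=False),
    ReferenceDesign("RD-84-DX", tier=3, capacity_kw=84, cooling="direct expansion", scalable=False),
]

def select(tier: int, load_kw: int, cooling: Optional[str] = None) -> List[ReferenceDesign]:
    """Narrow the library to designs meeting the foundational IT parameters."""
    return [d for d in LIBRARY
            if d.tier >= tier and d.capacity_kw >= load_kw
            and (cooling is None or d.cooling == cooling)]

print([d.name for d in select(tier=2, load_kw=800, cooling="chilled water")])
# ['RD-1000-CW']
```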

Quicktour

Your Next Steps

Need more information on controlling, monitoring, managing and maintaining your small or medium data center? Want to get more details, or explore Schneider Electric solutions in greater depth? APC online tools, research guides, and online courses give you all the background you need to make informed decisions and smart investments.

CHECK OUT... APC online resources

APC has developed a wealth of online tools to help you clarify your requirements and explore your options:

Data Center Selector: The Schneider Electric Data Center Selector cuts right to the chase, presenting a wide spectrum of data center architectures that best meet your needs, saving you time and hassle.

TradeOff Tools: These easy-to-use, Web-based applications enable data center professionals to experiment with various design scenarios, including virtualization, efficiency, and capital costs. tools.apc.com

READ... Data Center Science Center research

APC has spent $90 million researching solutions to the most pressing customer problems. Take advantage of more than 100 must-read white papers, practical how-to guides, and other thought-leadership publications from the world's leading R&D center on power, cooling, and physical infrastructure issues.

Implementing Energy Efficient Data Centers (#114)
Ten Cooling Solutions to Support High-Density Server Deployment (#42)
Rack Powering Options for High Density (#29)

Browse the entire APC library of research at our Information Center at www.apc.com

INTERACT... APC Online Discussion Forums

Get answers, help others, or simply explore the blog posts of our virtual community members. Go to www.apc-forums.com

LEARN... Data Center University online education

Data Center University (DCU) courses offer industry-leading education for IT professionals and deliver real-world expertise, where and when you need it. For more information, go to www.datacenteruniversity.com