University of Florida Data Center at Eastside Campus: Key Facts


1 University of Florida Data Center at Eastside Campus: Key Facts
Design: KlingStubbins Architectural Engineering; Philadelphia, PA
Construction: Whiting-Turner; Baltimore, MD
Commissioning: Hanson Professional Services; Springfield, IL
Budget: $15.7 Million
Final Construction Cost: approximately $14 Million
Gross Size: 25,390 square feet
Data Hall 1 (University Systems): 5,000 square feet, Near Tier 3; On-Line Courses (Sakai), Web Servers, E-mail, PeopleSoft, Hyperion, and other Administrative Systems, plus Hosting Services
Data Hall 2 (High-Performance Research Computing): 5,000 square feet, Tier 1
Initial Power Capacity: 675 kW (300 kW Near Tier 3, 375 kW Tier 1)
Ultimate Power Capacity: 2,250 kW (750 kW Near Tier 3, 1,500 kW Tier 1)
Storm-Rated for Hurricane Category 3 (129 mph)
2.5-Megawatt Diesel Backup Generator with 72-hour Fuel Tank (at full load)

2 University of Florida Data Center at Eastside Campus: Purposes
Secondary site for University Systems, providing redundancy, continuity of operations, and disaster recovery
Significantly expanded capacity for the Research Computing/HPC Center
Facility to accommodate consolidation of college and department servers from various campus buildings into a central site, allowing better power management of campus buildings, reduction in overall power usage, and consequent university budget savings
Growth/expansion of existing services (University Systems)
Long-range provision for growth/expansion of all services via both additional floor space and increased usage (power) densities

3 Estimated Annual Operating Cost
As facility usage increases, so does Operations & Maintenance cost, the largest component of which will be electric power.
[Chart: Total Annual Operating Cost, Annual Power Cost, and Annual Operating & Maintenance Cost (vertical axis, $1,000,000 to $6,000,000) versus Total Net IT kW (Tier 1 + Tier 3), up to the Tier 3 maximum build-out. Excludes salaries; includes depreciation/lifecycle replacement for infrastructure equipment.]

4 Growth Projections: High-Performance Research Computing Data Hall
[Chart: projected HPC need/demand in kilowatts over the coming years (updated Feb 2013), plotted against the data hall's 1,500 kW maximum capacity.]

5 Growth Projections: University Administrative Systems & Hosting Data Hall
[Chart: projected demand in kilowatts over the coming years, plotted against the data hall's 750 kW maximum capacity. Series include Total Enterprise Need/Demand, OSG & Virtual Hosting, ERP (PeopleSoft, Cognos, Hyperion, etc.), Networking, Mainframe, UFAD/Exchange, Co-Lo Hosting, and the Historical Trend (8%/year).]

6 Connectivity
2 separate, independent, fully-redundant high-speed fiber-optic pathways back to Main UF Campus:
The red path is 96 strands (48 pairs) of fiber-optic cable, and was built by UF specifically for this data center.
The green path is 48 strands (24 pairs) of dark fiber, on a 25-year lease from GRUCom.
The redundant connection is essential to UFDC, ensuring that no backhoe accident or other mishap can disrupt communications and, consequently, the facility's ability to provide mission-critical services.
[Map: the two fiber routes between UFDC and the UF Core on Main Campus. Paths shown are approximate.]

7 Data Center Capacity: How Big Is It?
The UFDC has two Data Halls (Server Rooms) of 5,000 square feet each, for a total of 10,000 square feet. Each room can hold about 100 racks of servers. An additional 15,390 square feet is needed for the support spaces (the Electrical Rooms, Mechanical Room, and Equipment Yard), in addition to the relatively small human spaces.
However, floor space is only one of the 3 dimensions which describe a data center's capacity: Floor-Space, Power, and Cooling. Power and Cooling are directly related: all the power that goes into the room ends up as HEAT, which must be removed by the cooling systems.
UFDC Power & Cooling Capacity:
                      HPC / Research Systems    University Systems    Total
Day 1                 375 kW                    300 kW                675 kW
Ultimate Capacity     1,500 kW                  750 kW                2,250 kW

8 Data Center Reliability: N+1 and Concurrently Maintainable
One of the most important characteristics of a data center is Reliability: its ability to keep its payload equipment running, supplied with adequate power and cooling.
N+1: The primary infrastructure systems at UFDC are configured with N+1 Redundancy. This means that if it takes N components to fully meet the needs, then N+1 units (one extra) are provided. This allows for continued operation if one unit fails, or one unit needs to be taken off-line for maintenance. For the University Systems Data Hall (Data Hall 1), the electrical, cooling, and networking systems are all N+1 Redundant. UFDC Data Hall 2, used by the UFIT High-Performance Research Computing group, was not designed to have N+1 Redundancy, because their research work does not have as stringent up-time requirements as the Student and Administrative Systems. Instead, the emphasis in this room is on greater expandability of CAPACITY (kilowatts of computing payload). (A small N+1 sizing example appears below.)
Concurrently Maintainable: Not all UFDC infrastructure is suitable for N+1 Redundancy. In some cases it would be prohibitively expensive to do N+1 (for example, the backup power generator, or the chilled-water piping). In those cases, we have designed the systems to be Concurrently Maintainable; that is, it is possible to work on the system, or part of the system, without taking that (entire) system off-line.
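A minimal sketch of the N+1 sizing rule described above, using figures quoted elsewhere in this document (675 kW day-1 load; roughly 945 kW, i.e. 270 tons, per chiller):

```python
import math

def units_to_install(load_kw: float, unit_capacity_kw: float) -> int:
    """N+1 redundancy: install enough units (N) to carry the load, plus one spare."""
    n = math.ceil(load_kw / unit_capacity_kw)  # N units needed to meet the load
    return n + 1                               # the +1 covers a failure or a maintenance outage

# The 675 kW day-1 load and a ~945 kW (270-ton) chiller give N = 1,
# so N+1 = 2 chillers are installed (matching the two-chiller design).
print(units_to_install(675, 945))  # -> 2
```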

9 Energy-Efficiency
Consideration has been given at every step in the design and construction process to making UFDC, as much as practical, a Good Energy Citizen:
Chillers set to run 10 F warmer than typical; preliminary estimates suggest this may yield up to 2X normal efficiency
Overhead power & cabling eliminates underfloor airflow congestion, reducing the fan-power requirement
High-efficiency, variable-speed fans in the CRAHs
High-efficiency Trane chillers, 32% more efficient than the US Government Recommended Standard
Hot-aisle/cold-aisle layout using under-floor and over-ceiling plenums to maximize hot/cold-air separation, minimizing mixing and maximizing return-air temperature for better cooling efficiency
Room design allows for chimney racks ducted directly into the ceiling plenum for high power-density applications
Taps into the chilled water distribution system allow for future use of closely-coupled cooling systems, for efficiently handling high power-density applications
Incremental expansion is more efficient, due to the operational characteristics of the devices; they are more efficient at near-full rated capacity
PUE Projections: Data Center efficiency is frequently described in terms of Power Usage Effectiveness (PUE): PUE = Total Power Consumption / IT Power Consumption. A PUE of 1.0 would be perfect, and is impossible in practice: some power is required for cooling, lighting, etc., and some is lost to overall inefficiencies of the equipment. UFDC is projected to have an annual PUE of 1.39, which is 12.3% better than the ASHRAE Baseline model. (A brief worked example of the formula follows below.)
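A brief worked example of the PUE formula; the 938 kW facility draw below is a hypothetical figure chosen to match the projected 1.39 PUE, not a measured UFDC value:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_load_kw

# Hypothetical figures: if the IT load were 675 kW and the total facility draw 938 kW,
# PUE would be about 1.39, matching the projected annual value quoted above.
print(round(pue(938, 675), 2))  # -> 1.39
```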

10 How Much Heat? How Much Cooling?
The fundamental law of Data Centers is: all the power you put into the room is converted to heat, which must be removed.
A single rack of servers or storage can require anywhere from 3.5 kW (3,500 Watts) to as much as 28 kW. 1 Ton of cooling = approximately 3.5 kW, so a rack of servers or storage requires from 1 to 8 (or more) Tons of A/C. With floor space for 100+ racks per room, this adds up to hundreds of Tons (see the conversion sketch below).
UFDC has 2 air-cooled Trane water chillers, each of which can provide up to 270* Tons (945 kW) of cooling. Because this is an N+1 design, only one of the chillers is needed (initially) for cooling; the other is on stand-by/back-up.
Future Growth & Expansion: When the needs of the facility exceed 270 Tons, a third chiller will be added, increasing the capacity to 270 Tons + 270 Tons = 540 Tons (1,890 kW), with the third chiller being the "+1" back-up unit. Finally, when demand exceeds 540 Tons, the 4th chiller will be added, raising the N+1 capacity to 810 Tons (2,835 kW).
*The chillers are rated for 300 Tons, but we will only run them at a maximum of 90% rated load, to improve reliability.
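A minimal sketch of the cooling arithmetic above, using the document's approximate conversion of 3.5 kW per ton:

```python
import math

KW_PER_TON = 3.5  # approximate conversion used in this document

def tons_of_cooling(it_load_kw: float) -> float:
    """Cooling required, in tons, for a given IT load (all power in becomes heat)."""
    return it_load_kw / KW_PER_TON

def operating_chillers_needed(load_tons: float, tons_per_chiller: float = 270.0) -> int:
    """Chillers needed to carry the load; one more is kept as the N+1 spare."""
    return math.ceil(load_tons / tons_per_chiller)

# A 28 kW rack needs about 8 tons of cooling; the 675 kW day-1 IT load needs
# about 193 tons, which a single 270-ton chiller can carry (the second is the +1 spare).
print(round(tons_of_cooling(28)))                       # -> 8
print(operating_chillers_needed(tons_of_cooling(675)))  # -> 1
```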

11 Future Growth & Expansion
Throughout the data center, you will see signs like this one, pointing out design/engineering decisions which provide a path for future growth. The UF Data Center is designed and engineered to meet today's needs today, but to be expandable/upgradeable to over 3X its original (current) capacity.
We chose to signal these features in green, not only to represent GROWTH, but because these design choices make UFDC more ENERGY-EFFICIENT. The large infrastructure components, such as Uninterruptible Power Supplies, Chillers, Pumps, and Computer Room Air Handlers (CRAHs), perform at higher efficiency when run near their maximum rated loads. By only having enough of these to handle the current load, and adding more as required for growth, the University not only saves initial construction costs, but also has a more efficient data center throughout its lifetime.

12 How It Works: Heat Removal (Cross-Section of a Data Hall)
[Diagram: cross-section of a data hall showing the airflow loop between the under-floor cold air supply, the racks, the above-ceiling hot air return, and the CRAHs.]
Cold air is supplied under the raised floor and comes up through vented tiles in the Cold Aisles, where IT equipment racks stand face to face. The IT equipment draws in cold air at the front and blows hot air out the rear into the Hot Aisles (racks back to back, solid floor tiles). The hot exhaust air rises to vents in the ceiling and returns through the above-ceiling plenum to the Computer Room Air Handlers (CRAHs), which draw it in via ducts and cool it again.
Maximizing the separation between the cold air supply and the hot air exhaust improves efficiency by reducing the amount of power needed to cool the equipment.

13 About Data Center Tiers
The data center industry generally recognizes four classes of facility, based on their level of resistance to outages. UFDC is a hybrid Tier 3 / Tier 1 design.
Tier 4: 2 independent utility paths; 2N power and cooling systems; able to sustain a 96-hour power outage; stringent site selection; 24/7 onsite maintenance staff. Tier 4 is VERY EXPENSIVE to implement.
Tier 3: 2 utility paths; N+1 power and cooling systems; able to sustain a 72-hour power outage; redundant service providers; careful site selection planning; allows for concurrent maintenance. UFDC Data Hall 1, used by UFIT-CNS to host UF administrative systems (including Student Systems, Sakai, PeopleSoft, E-mail, Web servers, etc.), is considered a NEAR-Tier 3 facility: there are not 2 utility paths available at this site, and site selection was pre-determined, based on the fact that UF already owned the land.
Tier 2: some redundancy in power and cooling systems; generator backup; able to sustain a 24-hour power outage; minimal thought to site selection. The Bryant Hall/SSRB Data Center and CSE Data Center on UF Main Campus are generally considered to be Tier 2+.
Tier 1: numerous single points of failure; no generator; unable to sustain more than a 10-minute power outage. UFDC Data Hall 2, used by UFIT High-Performance Research Computing, was designed to be a Tier 1 facility, because their research work does not have as stringent up-time requirements as the Student and Administrative Systems. Instead, the emphasis in this room is on greater expandability of CAPACITY (kilowatts of computing payload).

14 Network Entrance Room 2: The red route
96-strand optical fiber to UF Main Campus
Wholly owned by UF
Installed by USI, under the direction of UFIT/CNS
On Main Campus, this route terminates in the SSRB data center
[Diagram: the red fiber route from UFDC to the UF Core.]

15 Network Entrance Room 1: The green route
48-strand optical fiber to UF Main Campus
25-year lease from GRUCom
Dark fiber, allowing UF to use any technologies needed
On Main Campus, this route splits into two separate 24-strand runs, one to SSRB and one to Centrex, making it dually connected to the UF Core to provide additional redundancy
[Diagram: the green fiber route from UFDC to the UF Core.]

16 Data Hall 1: University Administrative Systems and Hosting
5,000 square feet
Space for approximately 100 standard racks of computing equipment, in addition to the networking racks, plus an additional row reserved for non-standard size/shape equipment (at the rear of the room)
Initial power capacity = 300 kW
Ultimate power capacity = 750 kW
Ultimate power density = 150 Watts/square foot (~7.5 kW/rack) (see the arithmetic sketch below)
Dual-feed, fully-redundant power to all points, coming from 2 separate UPS systems
Power from GRU, backed by a 2,500 kW diesel generator with a minimum 72 hours of fuel
Dual-feed, fully-redundant networking to all points, connected back to main UF Campus via 2 physically disparate fiber paths
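A quick, illustrative check of the Data Hall 1 density figures listed above:

```python
# Data Hall 1 density check (figures from the list above).
ultimate_kw = 750.0
floor_area_sqft = 5_000.0
racks = 100

print(ultimate_kw * 1000 / floor_area_sqft)  # -> 150.0 Watts per square foot
print(ultimate_kw / racks)                   # -> 7.5 kW per rack
```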

17 Data Hall 1 Networking
[Diagram: the structured-cabling layout of Data Hall 1. Two Main Distribution Areas connect to UFDC's redundant cross-town fiber routes; each row of racks is served by a Horizontal (Row) Distribution Area, repeated for additional rows and racks.]

18 Data Hall 2: High-Performance Research Computing
5,000 square feet
Space for approximately 154 standard racks of computing, storage, and networking equipment
Initial power capacity = 375 kW
Ultimate power capacity = 1,500 kW
Overall average ultimate power density = 9.7 kW/rack, or 300 Watts/square foot (see the arithmetic sketch below)
Connected to the Main UF Campus Research Network via a 200* gigabit/second fiber-optic link, and from there via Florida LambdaRail to the Internet2 Innovation Platform
*Initially 100 Gbps, increasing to 200 Gbps by mid-2013
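A quick, illustrative check of the Data Hall 2 density figures listed above:

```python
# Data Hall 2 density check (figures from the list above).
ultimate_kw = 1_500.0
floor_area_sqft = 5_000.0
racks = 154

print(ultimate_kw * 1000 / floor_area_sqft)  # -> 300.0 Watts per square foot
print(round(ultimate_kw / racks, 1))         # -> 9.7 kW per rack
```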

19 PDU (Power Distribution Unit)
Feeds power to the computing and networking equipment via the overhead power BUSWAY system. The units in this room range in size/capacity from 150 kVA (roughly 142,000 watts) to 300 kVA (~284,000 watts) each.
As part of the Tier 3 reliability design, each equipment row is fed by 2 power Busways, each connected to separate Power Distribution Units (PDUs), helping to ensure that no row is dependent on a single Busway, or a single PDU.
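The kVA figures above convert to watts at an implied power factor of roughly 0.95; a minimal, illustrative sketch of that conversion (the power factor is inferred from the quoted numbers, not a stated specification):

```python
def kva_to_watts(kva: float, power_factor: float = 0.95) -> float:
    """Convert apparent power (kVA) to real power (watts) at an assumed power factor."""
    return kva * 1000 * power_factor

print(kva_to_watts(150))  # -> 142500.0 (the document quotes "roughly 142,000 watts")
print(kva_to_watts(300))  # -> 285000.0 (the document quotes "~284,000 watts")
```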

20 Busway Power Distribution Systems (the large aluminum rectangular bars running overhead)
"Track-Lighting on Steroids": allows for quick connection of new equipment (using specialized connectors), with no electrician needed.
Each equipment rack contains an internal power-distribution system for the equipment in that rack, so only the racks themselves are connected to the Busways, eliminating the need for many long power cables.
As part of the Tier 3 reliability design, each equipment row is fed by 2 power BUSWAYs, each connected to separate Power Distribution Units (PDUs), helping to ensure that no row is dependent on a single Busway, or a single PDU.
To further capitalize on this redundancy principle, most individual servers and network devices have 2 separate power supplies, so that each device can be connected to BOTH Busways feeding its row.

21 Mechanical Room
This room contains:
Pumps which move the chilled water from the chillers (in the equipment yard) to the CRAHs
Substantial piping and control valves for the chilled water system
Air handler for the human-spaces of the data center

22 Pumps 1 & 2
These pumps direct chilled water from the 2 chillers (in the equipment yard) to the Computer Room Air Handlers (CRAHs) which cool the data halls. The pumps are sized to match the capacity of the chillers; each chiller requires one pump to handle the chilled water it produces. The piping system is designed so that the pumps are not specifically coupled to any chiller; any combination of pumps can work in conjunction with any combination of chillers to supply the needs of the data halls.
Each pump can handle the entire Phase 1 requirements of the Data Center; like the chillers, we have N+1 redundant capacity in pumps. During normal operation, the pumps are rotated, to achieve even wear, provide routine exercise of each unit, and give each unit the opportunity to undergo scheduled maintenance. For improved reliability, they will be operated at a maximum of 90% of rated capacity.

23 Future Growth & Expansion: Pads & Connections for Future Pumps 3 & 4
When the Data Center load increases to the point where a single chiller can no longer meet the demand for chilled water, a third and, eventually, a fourth chiller and pump will be added, to maintain N+1 redundancy.

24 Concurrently Maintainable Chilled Water Loop: How It Works
We need to be able to isolate any component, including any stretch of pipe, for maintenance, without shutting down the chilled water feed (or return)!
The solution is that both the Feed and Return lines are LOOPS: water travels in both directions around each loop, crossing from the Feed loop to the Return loop through the CRAH units. And valves. A LOT of VALVES!
[Diagram: the two chiller-and-pump pairs supplying the chilled H2O Feed loop, with CRAH units bridging the Feed and Return loops.]

25 Electrical Room 1
This room contains:
Main Power Switchboards
Uninterruptible Power Supply (UPS) systems for the Data Halls
2 CRAH units to handle the heat produced by the electrical systems

26 Main Electrical Switchboard 1
Size/Capacity: 480 Volt; 2,500 Amp
Function: Divides the main utility (GRU) electric power current into smaller currents for further distribution and provides switching, current protection, and metering for these various currents. Switchboards distribute power to transformers, panelboards, control equipment, and ultimately to system loads.
Redundancy: The University Systems Data Hall (Data Hall 1) is fed by two separate Main Electrical Switchboards, MSB1 and MSB3. These connect to alternate PDUs (Power Distribution Units) within the Data Hall, feeding alternate BUSWAYs, such that each row of equipment is fed by two separate Busways, fed by two separate PDUs, fed by two separate UPSs, fed by two separate MSBs. This configuration provides the maximum possible protection against failure of the electrical distribution system, since each IT equipment rack is fed by two independent paths.
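As an illustrative sketch only, the two-path feed described above can be modeled as a tiny check; the pairing of MSB1 with UPS1 and MSB3 with UPS3, and the PDU/Busway names, are assumptions for the example, not equipment labels from the facility:

```python
# Each Data Hall 1 rack is fed by two independent chains: MSB -> UPS -> PDU -> Busway.
# Component pairings and names below are assumed for illustration.
PATH_A = ["MSB1", "UPS1", "PDU-A", "Busway-A"]
PATH_B = ["MSB3", "UPS3", "PDU-B", "Busway-B"]

def rack_has_power(failed_components: set) -> bool:
    """The rack stays up as long as at least one full path is free of failures."""
    path_a_ok = not any(c in failed_components for c in PATH_A)
    path_b_ok = not any(c in failed_components for c in PATH_B)
    return path_a_ok or path_b_ok

print(rack_has_power({"UPS1"}))           # True: path B is still intact
print(rack_has_power({"UPS1", "PDU-B"}))  # False: both paths are now broken
```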

27 Main Electrical Switchboard 2
Size/Capacity: 480 Volt; 2,500 Amp
Function: Divides the main utility (GRU) electric power current into smaller currents for further distribution and provides switching, current protection, and metering for these various currents. Switchboards distribute power to transformers, panelboards, control equipment, and ultimately to Data Hall Power Distribution Units (PDUs) and system loads.
This MSB feeds: the High-Performance Research Computing Data Hall (Data Hall 2), and much of the people space.

28 Main Electrical Switchboard 3
Size/Capacity: 480 Volt; 2,500 Amp
Function: Divides the main utility (GRU) electric power current into smaller currents for further distribution and provides switching, current protection, and metering for these various currents. Switchboards distribute power to transformers, panelboards, control equipment, and ultimately to system loads.
Redundancy: The University Systems Data Hall (Data Hall 1) is fed by two separate Main Electrical Switchboards, MSB1 and MSB3. These connect to alternate PDUs (Power Distribution Units) within the Data Hall, feeding alternate BUSWAYs, such that each row of equipment is fed by two separate Busways, fed by two separate PDUs, fed by two separate UPSs, fed by two separate MSBs. This configuration provides the maximum possible protection against failure of the electrical distribution system, since each IT equipment rack is fed by two independent paths.

29 UPS 1
Size/Capacity: 500 kVA (450 kilowatts); approximately 7 minutes of battery runtime @ 400 kW
Function:
Filters utility (GRU) power, to provide smooth, constant power to IT equipment in Data Hall 1 (University Administrative Systems)
Supplies momentary backup power in the event of a brief power loss ("flicker")
Supplies backup power until the generator starts up (which takes about 15 seconds) in the event of a longer power outage
Connected to backup generator

30 UPS 2
Size/Capacity: 625 kVA (562 kilowatts); approximately 5 minutes of battery runtime @ 500 kW
Function:
Filters utility (GRU) power, to provide smooth, constant power to IT equipment in Data Hall 2 (High-Performance Research Computing)
Supplies momentary backup power in the event of a brief power loss ("flicker")
Not connected to backup generator

31 UPS 3
Size/Capacity: 500 kVA (450 kilowatts); approximately 7 minutes of battery runtime @ 400 kW
Function:
Filters utility (GRU) power, to provide smooth, constant power to IT equipment in Data Hall 1 (University Administrative Systems)
Supplies momentary backup power in the event of a brief power loss ("flicker")
Supplies backup power until the generator starts up (which takes about 15 seconds) in the event of a longer power outage
Connected to backup generator

32 Future Growth & Expansion: Electrical Room 2
Provides expansion space with pre-built connections for the addition of a 2nd electrical feed from GRU, via:
3 additional pad-mounted transformers (see the empty pads in the Equipment Yard)
3 additional Main Switchboards (in this room)
3 additional UPS systems (in this room)
...essentially duplicating the equipment and capacity currently in Electrical Room 1.

33 Connection Point for Temporary External Generator
In the event of a failure of the permanent diesel backup power generator (in the equipment yard), a portable, trailer-mounted generator can be parked outside and connected to the data center electrical system at this point, to take over until the main generator is back in service.

34 Back-Up Electric Power Generator
Engine Configuration: V-20 (20 cylinders)
Engine Size (Displacement): 5,822 cubic inches
Engine Power: 3,673 brake horsepower
Generator Output: 2,500,000 Watts (2.5 Megawatts)
Fuel/Run-Time: 12,100-gallon diesel fuel tank; 72 hours of run-time at full load (168 gallons/hour; 2.8 gallons/minute) (see the fuel arithmetic sketch below)
Algae-X Fuel-Polishing System processes 600 gallons/hour, filtering the entire tank volume each 24 hours
In the event of a power outage, the large Uninterruptible Power Supply (UPS) systems in the Electrical Room can provide power for at least 5 MINUTES (assuming the data center is fully loaded; longer at partial load). The backup generator takes approximately 15 SECONDS to start and begin delivering power. The generator feeds power to the chiller(s), pumps, and the Data Hall 1 UPSs and CRAH (cooling) units.
Restrictions: Does not supply power to Data Hall 2 (High-Performance Research Computing); the emphasis in that room is instead on greater expandability of CAPACITY (kilowatts of computing payload).
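A minimal sketch of the generator fuel arithmetic quoted above (illustrative only):

```python
tank_gallons = 12_100
runtime_hours_full_load = 72
polishing_gal_per_hour = 600

burn_gal_per_hour = tank_gallons / runtime_hours_full_load    # ~168 gallons/hour at full load
burn_gal_per_minute = burn_gal_per_hour / 60                  # ~2.8 gallons/minute
hours_to_polish_tank = tank_gallons / polishing_gal_per_hour  # ~20 hours, i.e. within each 24-hour day

print(round(burn_gal_per_hour), round(burn_gal_per_minute, 1), round(hours_to_polish_tank, 1))
# -> 168 2.8 20.2
```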

35 Load Bank
Often described as a "giant hair-dryer", the Load Bank is largely just heating coils and a large fan, serving as a dummy load for testing the generator.
Capacity: 800 kW
Function: To allow regular, periodic testing of the diesel backup generator under load, without actually taking the data center off main power. The 800 kW capacity of the load bank is approximately 32% of the maximum rated output of the generator.

36 Chillers 1 & 2
Capacity: 270 Tons of cooling each (964 kW)
Function: Provide chilled water to the Computer Room Air Handlers (CRAHs).
For initial operations, the entire facility can be supported by a single chiller. Consequently, following the N+1 Redundancy principle, we have 2 chillers, so that if one fails or needs to be taken off-line for maintenance, the other can support the data center's needs.
UFDC's Trane RTAC300 chillers use Trane's high-efficiency configuration, providing a 32% energy-efficiency improvement over ASHRAE Standard 90.1, adopted by the US Government as its recommended efficiency goal for modern chillers.
During normal operation, the chillers are rotated, to achieve even wear, provide routine exercise of each unit, and give each unit the opportunity to undergo scheduled maintenance. The chillers are nominally rated at 300 Tons, but for improved reliability they will be operated at a maximum of 90% capacity.

37 Future Growth & Expansion: Pads & Connections for Future Chillers 3 & 4
When the Data Center load increases to the point where a single chiller can no longer meet the demand for chilled water, a third and, eventually, a fourth chiller and pump will be added. Each additional chiller provides 270 Tons of cooling capacity, sufficient for an additional 964 kilowatts of IT equipment. When fully expanded, the chiller bank will provide 3 x 270 Tons = 810 Tons (3 x 964 kW = 2,892 kW) of cooling, with the 4th chiller being the "+1" (of N+1) needed to meet the data center's reliability requirements.

38 Configuration/Setup Room
Provides an environment as close as possible to that of the main Data Halls: temperature, humidity, etc.
Overhead power Busway and data cabling mimic the main rooms
Allows racks of equipment to be built, configured, and tested prior to moving them into place
Minimizes the need for people to work inside the Data Halls, reducing the chance of mistakes/accidents with production IT equipment
Helps keep the Data Halls clean and dust-free, by keeping packing materials out of the production rooms
