Navigating the Pros and Cons of Structured Cabling vs. Top of Rack in the Data Center

Executive Summary

There is no single end-all cabling configuration for every data center, and CIOs, data center professionals and IT managers need to examine the pros and cons of each solution based on their specific needs. This paper focuses on the many factors to consider when evaluating top of rack (ToR) and structured cabling configurations. The discussion includes the impact of those configurations on total management; scalability and upgrades; interoperability; equipment, maintenance and cabling costs; port utilization; and power consumption and cooling requirements.

Bandwidth demand and scalability, server virtualization, high-performance switching capabilities and higher densities necessitate a careful look at the various configurations available for cabling a data center. Today's cabling configuration choices are also shaped by the need to lower power consumption and ensure efficient cooling of critical equipment, as well as by budget constraints and management structure.

Introduction

As data centers become more complex, cabling system design and topology become critical. When the first data centers were built, end-user terminals were connected via point-to-point connections. This was a viable option for small computer rooms with no foreseeable need for growth or reconfiguration. As computing needs increased and new equipment was added, these point-to-point connections resulted in cabling chaos, with the associated complexity and higher cost. In response, data center standards such as TIA-942-A and ISO 24764 recommended a hierarchical structured cabling infrastructure for connecting equipment. Instead of point-to-point connections, structured cabling uses distribution areas that provide flexible, standards-based connections between equipment, such as connections from switches to servers, servers to storage devices and switches to switches (see Figure 1).
Figure 1: TIA-942-A Data Center Topology (similar to ISO 24764)

With today's high-performance servers and virtualization, more applications can be delivered from a single rack of servers than ever before. In response, several switch manufacturers recommend a Top of Rack (ToR) configuration in which smaller (1RU to 2RU) edge switches are placed at the top of each server rack (or cabinet) and connect directly to the servers in the rack via short preterminated small form-factor pluggable (e.g., SFP+ and QSFP) twinaxial cable assemblies, active optical cable assemblies or RJ-45 modular patch cords. ToR significantly increases the number of switches and reduces the initial amount of structured cabling. It is often recommended for its rack-at-a-time deployment, ability to limit the use of copper cabling to within racks, support for east-west (i.e., server-to-server) traffic and rack-level management capabilities.

Both TIA-942-compliant structured cabling and ToR have advantages and disadvantages. When selecting the cabling configuration that best meets the needs of the data center, it is important to examine the impact that structured cabling and ToR have on overall total cost of operations, as well as other trade-offs.

Manageability Considerations

Before choosing a cabling infrastructure, data center professionals should consider operational structure and policies. With structured cabling, patch panels that mirror switch ports and server ports connect to corresponding panels in one or more central patching areas or zones via permanent (or fixed) links. Also referred to as distribution areas, these patching areas may be located at the end or in the middle of a row of cabinets. Moves, adds and changes (MACs) are accomplished by repositioning patch cord or fiber jumper connections at the central patching area. The fixed portion of the channel remains unchanged, and switches and equipment are left untouched and secure.
This creates an any-to-all configuration where any switch port can be connected to any equipment port. Furthermore, structured cabling can be field terminated to any length to maintain a clean, slack-free appearance.
In a ToR configuration, switches at the top of each rack connect directly to the servers in the same rack, requiring all changes to be made within each individual rack (or cabinet). This eliminates the use of central patching and reduces the amount of structured cabling in the data center (see Figure 2). MACs in a ToR configuration can be more complicated and time consuming, especially in large data centers with hundreds of cabinets. Changes must be made in individual racks or cabinets rather than at one convenient central patching area, and identifying the specific rack or cabinet requiring the change can itself be a complicated process.

ToR can be a solution for data centers where individual racks of servers and their corresponding switches need to be managed as their own entity or segregated by application. However, ToR does not allow network administrators to keep switches separate from the servers, which can be problematic when network and server groups manage their equipment separately and when switches must remain protected for security purposes.

Figure 2: Structured Cabling vs. ToR Topology. ToR eliminates central patching in the distribution area. Note: Designs based on a three-tier switch architecture.

Scalability and Upgrade Considerations

Cabling system design can limit or enhance the ability to squeeze the most from racks, servers and ports. A widespread switch upgrade with ToR impacts many more switches than with structured cabling and requires equipment at the network core to have the port densities and bandwidth capacity to support the increased number of switches. An individual switch upgrade with structured cabling can increase connection speeds to multiple servers across several racks in the data center; an upgrade to an individual ToR switch improves connection speed only for the servers in that rack. In addition, ToR switches using short-distance small form-factor pluggable twinaxial cable assemblies cannot support autonegotiation.
Because the twisted-pair cabling used with structured cabling is backwards compatible, it supports autonegotiation, where individual ports can switch between 10 gigabit and 1 gigabit operation depending on the connected equipment. Autonegotiation enables partial switch or server upgrades on an as-needed basis, allowing a cost-effective migration over time. Without autonegotiation, a switch upgrade requires all equipment connected to that switch to be upgraded simultaneously, incurring the full upgrade cost at once.
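The upgrade economics described above can be sketched numerically. This is a minimal illustration, not a figure from the paper: the $15,000 switch price matches the 32-port switch in Figure 3, and the $600 NIC price sits within the $400 to $800 range quoted later, but the batch sizes are assumptions.

```python
# Sketch: why autonegotiation changes upgrade economics. With backward-
# compatible twisted pair, a new 10G switch can still serve remaining 1G
# servers, so server NICs can be replaced in batches; without
# autonegotiation, every attached server must be upgraded at once.
switch_cost = 15_000   # per Figure 3 (32-port 10G switch)
nic_cost = 600         # assumed; within the paper's $400-$800 range
servers = 14           # servers per cabinet in the paper's example

# Without autonegotiation: one up-front bill for switch plus all NICs.
all_at_once = switch_cost + nic_cost * servers

# With autonegotiation: same total, spread across budget cycles
# (switch first, then two batches of seven server NICs).
phased = [switch_cost, nic_cost * 7, nic_cost * 7]

print(all_at_once)   # 23400
print(sum(phased))   # 23400 (same total, smaller initial outlay)
print(phased[0])     # 15000 initial spend
```

The total is identical either way; the difference is that autonegotiation lets the spend follow actual need rather than landing in a single budget period.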
Because ToR small form-factor pluggable twinaxial cable assemblies are typically more expensive than copper patch cords in a structured cabling system, costs can escalate even further during upgrades, as some equipment vendors require the use of their cable assemblies and thereby force cable upgrades along with equipment upgrades. The core physical layer infrastructure, or the fixed portion of the cabling channel, is typically installed once, as long as the minimum standards-recommended fiber and copper cabling is used.

In a ToR configuration using small form-factor pluggable twinaxial cable assemblies, the distance between the switches and the servers is limited to 7 meters in passive mode. While this is not a problem if each rack will always be managed as an individual unit, these short lengths can restrict the location of equipment if needs change. Structured cabling lengths can be up to 100 meters, allowing flexible equipment placement.

Interoperability Concerns

Open systems enable more choices and a competitive marketplace. Interoperability and the open systems concept are the bedrock of cabling industry standards. Data center managers expect and value interoperability to fully leverage their existing cabling investment by ensuring performance and a competitive market regardless of which vendors' equipment and cable designs are selected. Unfortunately, some switch vendors now require their proprietary cable assemblies for connecting ToR switches to servers when using small form-factor pluggable twinaxial cable assemblies. Some ToR switches are designed to check vendor security IDs on cables and either display errors or prevent ports from functioning when connected to an unsupported vendor ID. While this helps ensure that vendor-approved cable assemblies are used with corresponding electronics, it can limit data center design options by locking data center managers into a proprietary solution.
This is a substantial change from the industry standards-based fiber and copper connectivity successfully deployed in data centers for decades. Third-party independent testing by the University of New Hampshire InterOperability Laboratory (UNH-IOL) shows that passive small form-factor pluggable twinaxial cable assemblies from cabling vendors pass interoperability testing with several vendors' ToR switches that are designed to display errors. These tests demonstrate that proprietary cables are not necessarily required. This may not be the case, however, for switches designed to prevent ports from functioning altogether when connected to an unsupported vendor ID.

Maintenance, Equipment and Cabling Cost

The choice of cabling system has a major impact on cost. With a ToR switch in every cabinet (or two for dual primary and secondary networks), the total number of switch ports depends on the total number of cabinets in the data center rather than on the actual number of switch ports needed to support the equipment. This can nearly double the number of switches and power supplies required compared to structured cabling. Unlike passive structured cabling, ToR switches require power and ongoing maintenance. For example, based on an actual 39-cabinet data center using separate dual networks, the cost of equipment and maintenance for ToR is more than twice that of structured cabling (see Figure 3).

The use of distribution areas with structured cabling also impacts cabling costs. ToR saves on structured cabling costs, but the added cost of cable assemblies can negate that savings, especially when switch vendors require their proprietary cable assemblies. As shown in Figure 3, the cost of the ToR cable assemblies for each used port is more than twice the cost of structured cabling.
Even if the cost of ToR cable assemblies is not a consideration, structured cabling costs represent only about five percent of the total equipment and maintenance savings realized with structured cabling. Structured cabling also typically carries a 15- to 25-year warranty (depending on the manufacturer), as opposed to the average 90-day warranty on cable assemblies offered by most electronics vendors.
Equipment (Unit Price)                   | ToR Units | ToR Price  | SC Units | SC Price   | Savings
32-port 10G ToR Switches ($15,000)       | 78        | $1,170,000 | 35       | $525,000   | $645,000
Redundant Power Supplies ($500)          | 78        | $39,000    | 35       | $17,500    | $21,500
SFP+ Uplinks ($1,500)                    | 312       | $468,000   | 140      | $210,000   | $258,000
Port Aggregation Switches ($25,000)      | 10        | $250,000   | 5        | $125,000   | $125,000
SFP+ Modules ($5,000)                    | 80        | $400,000   | 40       | $200,000   | $200,000
Redundant Power Supplies ($500)          | 10        | $5,000     | 5        | $2,500     | $2,500
Core Switches ($80,000)                  | 2         | $160,000   | 2        | $160,000   | $0
Redundant Power Supplies ($7,500)        | 2         | $15,000    | 2        | $15,000    | $0
Fiber Cards for Uplinks ($70,000)        | 4         | $280,000   | 2        | $140,000   | $140,000
Cabling Total                            |           | $240,000   |          | $110,000   | $130,000
Equipment Total (excluding software)     |           | $2,787,000 |          | $1,395,000 | $1,392,000
3 Years Maintenance                      |           | $1,200,000 |          | $570,000   | $630,000
TOTAL                                    |           | $4,227,000 |          | $2,075,000 | $2,152,000

Figure 3: ToR vs. Structured Cabling Cost Comparison (based on MSRP at time of print) for an actual 39-cabinet data center (assumes average 5 to 6kW per cabinet, dual network, redundant power supplies, 14 servers per cabinet, four uplinks per switch, 2.5-meter SFP+ direct attach cable assemblies for each used ToR port, and Category 6A UTP for structured cabling).

Furthermore, a ToR design can add to server costs. Servers with twisted-pair network interface cards (NICs) are typically less expensive than those with SFP+ or QSFP NICs, and many server platforms ship with twisted-pair copper NICs native to the motherboard. With ToR deployments supported by small form-factor pluggable twinaxial cable assemblies, SFP+ or QSFP NICs would need to be purchased for those connections, at roughly $400 to $800 per NIC.

Switch Port Utilization

Structured cabling maximizes port accessibility and utilization. Studies by DatacenterDynamics and Gartner show that the average power supplied to a server cabinet ranges between 5 and 6kW. Under this assumption, a cabinet holds approximately 14 servers.
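As a sanity check, the Figure 3 line items can be totaled programmatically. Unit prices and ToR counts come from the table; the structured-cabling unit counts (35, 140, 40) are inferred from the listed unit prices and extended totals.

```python
# Line items from the 39-cabinet comparison:
# (unit price, ToR units, structured-cabling units).
items = {
    "32-port 10G ToR switch":      (15_000,  78,  35),
    "Redundant PSU (edge)":        (500,     78,  35),
    "SFP+ uplink":                 (1_500,  312, 140),
    "Port aggregation switch":     (25_000,  10,   5),
    "SFP+ module":                 (5_000,   80,  40),
    "Redundant PSU (aggregation)": (500,     10,   5),
    "Core switch":                 (80_000,   2,   2),
    "Redundant PSU (core)":        (7_500,    2,   2),
    "Fiber card for uplinks":      (70_000,   4,   2),
}

tor_equipment = sum(price * tor for price, tor, _ in items.values())
sc_equipment = sum(price * sc for price, _, sc in items.values())

# Cabling and three-year maintenance figures from the table:
tor_total = tor_equipment + 240_000 + 1_200_000
sc_total = sc_equipment + 110_000 + 570_000

print(tor_equipment, sc_equipment)  # 2787000 1395000
print(tor_total, sc_total)          # 4227000 2075000
```

The computed totals match the table's $2,787,000 vs. $1,395,000 equipment figures and the overall $4,227,000 vs. $2,075,000, a $2,152,000 difference.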
Server switch port demand is therefore typically lower than the number of ports available on a ToR switch. For example, with only 14 ports of a 32-port switch used, 18 ports remain unused. Other assumptions, with more servers and more power supplied to the cabinet, would improve the port utilization of a ToR switch; the relative economics need to be determined from the actual design and the type of servers deployed.

When using separate dual networks with 14 servers per cabinet, where each server connects to two 32-port switches, 28 of the 64 ports are used for the 14 servers, leaving 36 unused. Applied across the actual 39-cabinet example used in Figure 3, this scenario results in 1,404 unused ports, which equates to nearly 44 unnecessary switch purchases (see Figure 4).

With structured cabling, virtually all active ports can be fully utilized because they are not confined to single cabinets. Instead, the 32 switch ports that would be dedicated to a single cabinet in the ToR configuration can be divided up, on demand, among any of several cabinets via the distribution area. Even if edge switches are used to accommodate east-west (i.e., server-to-server) traffic, structured cabling allows the switches to be centrally located in distribution areas, which decreases the number of unused ports and reduces the amount of north-south traffic to and from aggregation and core switches.
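The dual-network port arithmetic above is simple enough to verify directly; all inputs come from the paper's 39-cabinet example.

```python
# Dual-network ToR port utilization for the paper's 39-cabinet example.
cabinets = 39
servers_per_cabinet = 14
switch_ports = 32
switches_per_cabinet = 2  # dual primary/secondary networks

ports_per_cabinet = switch_ports * switches_per_cabinet    # 64
used_per_cabinet = servers_per_cabinet * 2                 # 28, one link per network
unused_per_cabinet = ports_per_cabinet - used_per_cabinet  # 36

total_unused = unused_per_cabinet * cabinets               # 1404
wasted_switch_equivalents = total_unused / switch_ports    # 43.875

print(total_unused)                      # 1404
print(round(wasted_switch_equivalents))  # 44
```

The 1,404 stranded ports correspond to nearly 44 full 32-port switches bought but never used.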
[Figure 4 table: total, used and unused port counts under ToR vs. Structured Cabling for 32-port 10G ToR switches, port aggregation switches and fiber cards for core uplinks, with total port usage.]

Figure 4: Switch port utilization for ToR vs. Structured Cabling for an actual 39-cabinet data center (assumes average 5 to 6kW per cabinet, dual network, redundant power supplies, 14 servers per cabinet and four uplinks per switch).

While the number of unused ports will be lower in high-density server environments, where the power supplied to server cabinets is now reaching upwards of 20kW to support a full complement of servers, unused ports can still add up. For example, in a 200-cabinet high-density server farm with 40 servers and a 48-port switch per cabinet, the number of unused ports reaches 1,600 (i.e., 8 unused ports per cabinet) at a cost of about $735,000. Using structured cabling can save nearly 85% even in these high-density environments. Being able to effectively manage unused switch ports is therefore a key consideration when selecting a ToR configuration.

It is worth noting that some data centers use a switch fabric architecture, which may yield a different number of unused ports and amount of equipment and cabling than a three-tier switch architecture. However, the benefits of a structured cabling system remain.

Increased Power Consumption

Port utilization and equipment have a major impact on power consumption. Power consumption is one of the top concerns among today's data center managers as energy costs continue to rise, power becomes more costly and green initiatives take center stage. As shown in Figure 5 below, the ability to use all switch ports with structured cabling lowers the number of switches and power supplies required by more than half. This helps cut equipment and energy costs while contributing to green initiatives such as LEED, BREEAM or STEP.
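The high-density example works out the same way. The per-port dollar figure at the end is derived here from the paper's $735,000 total, not quoted directly in the text.

```python
# High-density example: 200 cabinets, 40 servers and one 48-port
# switch per cabinet.
cabinets = 200
servers_per_cabinet = 40
switch_ports = 48

unused = (switch_ports - servers_per_cabinet) * cabinets  # 8 per cabinet
print(unused)  # 1600

# At the paper's quoted cost of about $735,000 for those stranded
# ports, the implied cost per unused port (derived, not quoted):
print(round(735_000 / unused))  # 459
```

Even with far better per-switch fill, the fixed switch-per-cabinet model still strands 1,600 ports across the farm.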
With structured cabling requiring fewer power supplies, future upgrades to more efficient power supplies are easier and more cost effective.

Equipment                  | ToR | Structured Cabling
32-port 10G ToR Switches   | 78  | 35
Redundant Power Supplies   | 78  | 35
SFP+ Uplinks               | 312 | 140
Port Aggregation Switches  | 10  | 5
SFP+ Modules               | 80  | 40
Redundant Power Supplies   | 10  | 5
Core Switches              | 2   | 2
Redundant Power Supplies   | 2   | 2
Fiber Cards for Uplinks    | 4   | 2
TOTAL SWITCHES             | 90  | 42
TOTAL POWER SUPPLIES       | 90  | 42

Figure 5: With structured cabling, the number of switches and power supplies required is less than half that of ToR, thereby reducing power consumption.

When switches reach full utilization in a ToR configuration, adding a new server to the cabinet requires an additional switch in the cabinet as well as additional power supplies. This can take away from potential power that
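Totaling switches and power supplies across the three tiers reproduces the "less than half" claim. The ToR counts appear in the cost comparison; the structured-cabling edge-switch count of 35 is inferred from the $525,000 line item at $15,000 per switch.

```python
# Switch counts per tier for the 39-cabinet example.
tor = {"edge": 78, "aggregation": 10, "core": 2}
structured = {"edge": 35, "aggregation": 5, "core": 2}

# Each switch carries one redundant power-supply set in this example,
# so the power-supply totals track the switch totals.
print(sum(tor.values()))         # 90 switches (and power supplies) with ToR
print(sum(structured.values()))  # 42 with structured cabling
```

Forty-two is below half of ninety, consistent with the figure's claim, and every eliminated switch also eliminates its power supplies and their draw.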
can be allocated for servers. There may not even be enough power provisioned to the cabinet to support the additional load. Even with virtualization reducing the number of servers and the related power and cooling, the increased number of power supplies required with ToR can potentially negate some of the virtualization savings.

It is important to remember that, even in an idle state, unused switch ports can consume power. Furthermore, edge switches like those used in ToR process more instructions in their central processing units, which can cause unanticipated power spikes. Regardless of the configuration and technologies deployed, power consumption varies by switch model and manufacturer, and actual port consumption or power draw should be examined across the entire data center.

Cooling and Failure Rate Considerations

Improper placement of equipment can wreak havoc on the best data center cooling and uptime plan. While a ToR switch can technically be placed in the middle or at the bottom of a cabinet, it is most often placed at the top for easier accessibility and manageability. According to the Uptime Institute, the failure rate for equipment placed in the top third of the rack is three times greater than that of equipment located in the lower two thirds. In a structured cabling configuration, the passive components (i.e., patch panels) are generally placed in the upper position, leaving the cooler space for the equipment.

ToR designs can also land-lock equipment placement because of the short cabling lengths available and data center policies that do not allow patching from cabinet to cabinet. This can prevent placing equipment where it makes the most sense for power and cooling. For example, if the networking budget does not allow for outfitting another cabinet with a ToR switch, placement of a new blade server may be limited to where network ports are available. This can lead to hot spots, which can adversely impact neighboring equipment within the same cooling zone.
Structured cabling systems avoid these problems.

Conclusion

Many factors impact the choice of which cabling configuration to deploy, and there is no single end-all solution. ToR configurations can make a lot of sense for small server rooms, when self-contained cabinets are required or where a higher number of unused ports can be managed. However, data center managers should carefully consider the pros and cons as outlined in Figure 6.

ToR Pros:
- Reduced amount of structured cabling
- Lower cabling infrastructure costs
- Rack-based cable management and changes
- Rack-based access for application segregation
- Rack-based switch upgrades

ToR Cons:
- Significantly higher equipment, maintenance and cabling costs
- Poor port utilization
- Increased power consumption
- Scalability and upgrade limitations; may not support interoperability and autonegotiation
- Time-consuming MACs in large data centers
- Limited equipment placement and potential hot spots

Figure 6: ToR Pros and Cons

When selecting a cabling configuration for the data center, an overall study that includes total management; scalability and upgrades; interoperability; equipment, maintenance and cabling costs; port utilization; and power and cooling requirements should be undertaken by facilities and all networking departments to make the best educated decision. ToR is often positioned as a replacement for structured cabling, but in many instances structured cabling must coexist with ToR to support central switching for KVM or other in-band or out-of-band management and data center monitoring.
Mission Critical, Volume 6, Number 65, November/December 2013 (www.missioncriticalmagazine.com)
White Paper Broadcom Adapters for Dell PowerEdge 12G Servers The Dell PowerEdge 12G family of rack, tower, and blade servers offer a broad range of storage and networking options designed to address the
Juniper Virtual Chassis Technology: A Short Tutorial Victor Lama Fabric Specialist LE Northeast Region August 2010 What if? What if your company could drastically minimize the complexity of managing your
Product Bulletin Cisco Introduces the Cisco Nexus 2232TM-E 10GE Fabric Extender PB715278 Cisco is introducing the Cisco Nexus 2232TM-E 10GE Fabric Extender as part of the Cisco Nexus 2000 Series Fabric
Architecting the High Performance Storage Network Jim Metzler Ashton, Metzler & Associates Table of Contents 1.0 Executive Summary...3 3.0 SAN Architectural Principals...5 4.0 The Current Best Practices
1 of 5 12/31/2008 8:41 AM SAVE THIS EMAIL THIS Close All multimode fiber is not created equal By Robert Reid Bandwidth, reach, and cost are among the key factors you ll need to weigh. Multimode fiber cabling
White Paper The Economic Benefits of a Cooperative Control Wireless LAN Architecture Aerohive Networks, Inc. 3150-C Coronado Avenue Santa Clara, California 95054 Phone: 408.988.9918 Toll Free: 1.866.918.9918
4-Fiber System 10/40/100 GbE Migration Solution HIGH DENSITY EQUIPMENT PANEL (HDEP) ORDERING GUIDE Table of Contents HDEP System Overview 3 HDEP 4-Fiber Interconnect System 10 Gigabit Interconnect Introduction...
The Smart Route To Visibility GigaVUE H Series High-density, High-scale Traffic Visibility Fabric Nodes The Traffic Visibility Fabric Overcoming the Limitations of Network Traffic Monitoring and Management
HP Converged Network Switches and Adapters Family Data sheet Realise the advantages of Converged Infrastructure with HP Converged Network Switches and Adapters Data centres are increasingly being filled
TM w w w. s i g n a m a x. c o m Power over Ethernet PoE A P P L I C A T I O N S G U I D E Your Guide to PoE Table of Contents Your Guide to PoE...1 PoE Background...1 A Brief History...2 PoE Device Types...3
Datasheet MS400 Series Cisco Meraki MS400 Series Cloud-Managed Aggregation Switches OVERVIEW The Cisco Meraki MS400 Series brings powerful cloud-managed switching to the aggregation layer. This range of
Distributed Core Architecture Using Dell Networking Z9000 and S4810 Switches THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT
1 NC Education Cloud Feasibility Report 1. Problem Definition and rationale North Carolina districts are generally ill-equipped to manage production server infrastructure. Server infrastructure is most
Increasing Data Center Efficiency by Using Improved High Density Power Distribution By Neil Rasmussen White Paper #128 Revision 1 Executive Summary A new approach to power distribution for high density
White Paper Cisco Nexus 9508 Switch Power and Performance The Cisco Nexus 9508 brings together data center switching power efficiency and forwarding performance in a high-density 40 Gigabit Ethernet form
SEL-2730M Managed 24-Port Ethernet Switch Reliably Control and Monitor Your Substation and Plant Networks Features and Benefits Tough Designed, built, and tested for trouble-free operation in extreme conditions,