Navigating the Pros and Cons of Structured Cabling vs. Top of Rack in the Data Center

Executive Summary

There is no single end-all cabling configuration for every data center, and CIOs, data center professionals and IT managers need to examine the pros and cons of each solution based on their specific needs. This paper focuses on the many factors to consider when evaluating top of rack (ToR) and structured cabling configurations. The discussion includes the impact of those configurations on total management; scalability and upgrades; interoperability; equipment, maintenance and cabling costs; port utilization; and power consumption and cooling requirements. Bandwidth demand and scalability, server virtualization, high-performance switching capabilities and higher densities necessitate a careful look at the various configurations available for cabling a data center. Today's data center cabling choices are also shaped by the need to lower power consumption and ensure efficient cooling of critical equipment, as well as by budget constraints and management structure.

Introduction

As data centers become more complex, cabling system design and topology become critical. When the first data centers were built, end user terminals were connected via point-to-point connections. This was a viable option for small computer rooms with no foreseeable need for growth or reconfiguration. As computing needs increased and new equipment was added, these point-to-point connections resulted in cabling chaos, with the associated complexity and higher cost. In response, data center standards like TIA-942-A and ISO 24764 recommended a hierarchical structured cabling infrastructure for connecting equipment. Instead of point-to-point connections, structured cabling uses distribution areas that provide flexible, standards-based connections between equipment, such as connections from switches to servers, servers to storage devices and switches to switches (see Figure 1).
Figure 1: TIA-942-A Data Center Topology (similar to ISO 24764)

With today's high-performance servers and virtualization, more applications can be delivered from a single rack of servers than ever before. In response, several switch manufacturers recommend a Top of Rack (ToR) configuration in which smaller (1RU to 2RU) edge switches are placed at the top of each server rack (or cabinet) and connect directly to the servers in the rack via short preterminated small form-factor pluggable (e.g., SFP+ and QSFP) twinaxial cable assemblies, active optical cable assemblies or RJ-45 modular patch cords. ToR significantly increases the number of switches and reduces the initial amount of structured cabling. It is often recommended for its rack-at-a-time deployment, ability to limit the use of copper cabling to within racks, support for east-west (i.e., server-to-server) traffic and rack-level management capabilities. Both TIA-942-compliant structured cabling and ToR have advantages and disadvantages. When selecting the cabling configuration that best meets the needs of the data center, it is important to examine the impact that structured cabling and ToR have on overall total cost of operations, as well as other trade-offs.

Manageability Considerations

Before choosing a cabling infrastructure, data center professionals should consider operational structure and policies. With structured cabling, patch panels that mirror switch ports and server ports connect to corresponding panels in one or more central patching areas or zones via permanent (or fixed) links. Also referred to as distribution areas, these patching areas may be located at the end or in the middle of a row of cabinets. Moves, adds and changes (MACs) are accomplished by repositioning patch cord or fiber jumper connections at the central patching area. The fixed portion of the channel remains unchanged, and switches and equipment are left untouched and secure.
This creates an any-to-all configuration where any switch port can be connected to any equipment port. Furthermore, structured cabling can be field terminated to any length to maintain a clean, slack-free appearance.
In a ToR configuration, switches at the top of each rack connect directly to the servers in the same rack, requiring all changes to be made within each individual rack (or cabinet). This eliminates the use of central patching and reduces the amount of structured cabling in the data center (see Figure 2). MACs in a ToR configuration can be more complicated and time consuming, especially in large data centers with hundreds of cabinets. Changes must be made in individual racks or cabinets rather than at one convenient central patching area, and simply identifying the specific rack or cabinet requiring the change can be a complicated process. ToR can be a solution for data centers where individual racks of servers and their corresponding switches need to be managed as their own entity or segregated by application. However, because ToR places switches in the server racks, it does not allow network administrators to keep switches separate from the servers, which can be problematic when network and server groups manage their equipment separately and when switches must remain protected for security purposes.

Figure 2: Structured Cabling vs. ToR Topology. ToR eliminates central patching in the distribution area. Note: Designs based on a three-tier switch architecture.

Scalability and Upgrade Considerations

Cabling system design can limit or enhance the ability to get the most from racks, servers and ports. A widespread switch upgrade with ToR impacts many more switches than with structured cabling and requires equipment at the network core to have the port densities and bandwidth capacity to support the increased number of switches. An individual switch upgrade with structured cabling can increase connection speeds to multiple servers across several racks in the data center; an upgrade to an individual ToR switch improves connection speed only to the servers in that rack. ToR switches using short-distance small form-factor pluggable twinaxial cable assemblies cannot support autonegotiation.
Because the twisted-pair cabling used with structured cabling is backward compatible, it supports autonegotiation, where individual ports can switch between 10 gigabit and 1 gigabit operation depending on the connected equipment. Autonegotiation enables partial switch or server upgrades on an as-needed basis, enabling a cost-effective migration over time. Without autonegotiation, a switch upgrade requires all equipment connected to that switch to be upgraded simultaneously, incurring full upgrade costs all at once.
Because ToR small form-factor pluggable twinaxial cable assemblies are typically more expensive than copper patch cords in a structured cabling system, costs can escalate even further during upgrades, as some equipment vendors require the use of their cable assemblies and force cable upgrades with equipment upgrades. The core physical layer infrastructure, or the fixed portion of the cabling channel, is typically installed once, as long as the minimum standards-recommended fiber and copper cabling are used. In a ToR configuration using small form-factor pluggable twinaxial cable assemblies, the distance between switches and servers is limited to 7 meters in passive mode. While this is not a problem if each rack will always be managed as an individual unit, these short lengths can restrict the location of equipment if needs change. Structured cabling lengths can be up to 100 meters, allowing flexible equipment placement.

Interoperability Concerns

Open systems enable more choices and a competitive marketplace. Interoperability and the open systems concept are the bedrock of cabling industry standards. Data center managers expect and value interoperability to fully leverage their existing cabling investment by ensuring performance and a competitive market regardless of which vendor's equipment and cable designs are selected. Unfortunately, some switch vendors now require their proprietary cable assemblies for connecting ToR switches to servers when using small form-factor pluggable twinaxial cable assemblies. Some ToR switches are designed to check vendor security IDs on cables and either display errors or prevent ports from functioning when connected to an unsupported vendor ID. While this helps ensure that vendor-approved cable assemblies are used with corresponding electronics, it can limit data center design options by locking data center managers into a proprietary solution.
This is a substantial change from the industry standards-based fiber and copper connectivity successfully deployed in data centers for decades. Third-party independent testing by the University of New Hampshire InterOperability Laboratory (UNH-IOL) shows that passive small form-factor pluggable twinaxial cable assemblies from cabling vendors pass interoperability testing with several vendors' ToR switches that are designed to display errors. These tests demonstrate that proprietary cables are not necessarily required. This may not be the case for switches designed to actually prevent ports from functioning altogether when connected to an unsupported vendor ID.

Maintenance, Equipment and Cabling Cost

The choice of cabling system has a major impact on cost. With a ToR switch in every cabinet (or two for dual primary and secondary networks), the total number of switch ports depends on the total number of cabinets in the data center rather than on the actual number of switch ports needed to support the equipment. This can nearly double the number of switches and power supplies required compared to structured cabling. Unlike passive structured cabling, ToR switches require power and ongoing maintenance. For example, based on an actual 39-cabinet data center using separate dual networks, the cost of equipment and maintenance for ToR is more than twice that of structured cabling (see Figure 3). The use of distribution areas with structured cabling also impacts cabling costs. ToR saves on structured cabling costs, but the added cost of cable assemblies can negate those savings, especially when switch vendors require their proprietary cable assemblies. As shown in Figure 3, the cost of the ToR cable assemblies for each used port is more than twice the cost of structured cabling.
Even if the cost of ToR cable assemblies is not a consideration, structured cabling costs represent only about five percent of the total equipment and maintenance savings realized with structured cabling. Structured cabling also typically carries a 15- to 25-year warranty (depending on the manufacturer), as opposed to the average 90-day warranty on cable assemblies offered by most electronics vendors.
Equipment (unit price)                      ToR Units   ToR Price     SC Units   SC Price      Savings
32-port 10G ToR Switches ($15,000)          78          $1,170,000    35         $525,000      $645,000
Redundant Power Supplies ($500)             78          $39,000       35         $17,500       $21,500
SFP+ Uplinks ($1,500)                       312         $468,000      140        $210,000      $258,000
Port Aggregation Switches ($25,000)         10          $250,000      5          $125,000      $125,000
SFP+ Modules ($5,000)                       80          $400,000      40         $200,000      $200,000
Redundant Power Supplies ($500)             10          $5,000        5          $2,500        $2,500
Core Switches ($80,000)                     2           $160,000      2          $160,000      $0
Redundant Power Supplies ($7,500)           2           $15,000       2          $15,000       $0
Fiber Cards for Uplinks ($70,000)           4           $280,000      2          $140,000      $140,000
Cabling Total                                           $240,000                 $110,000      $130,000
Equipment Total (not including software)                $2,787,000               $1,395,000    $1,392,000
3 Years Maintenance                                     $1,200,000               $570,000      $630,000
TOTAL                                                   $4,227,000               $2,075,000    $2,152,000

Figure 3: ToR vs. Structured Cabling Cost Comparison (based on MSRP at time of print) for an actual 39-cabinet data center (assumes average 5 to 6kW per cabinet, dual network, redundant power supplies, 14 servers per cabinet, four uplinks per switch, 2.5-meter SFP+ direct attach cable assemblies for each used ToR port, and category 6A UTP for structured cabling).

Furthermore, a ToR design can add to server costs. Servers with twisted-pair network interface cards (NICs) are typically less expensive than those with SFP+ or QSFP NICs, and many server platforms ship with twisted-pair copper NICs native to the motherboard. With ToR deployments supported by small form-factor pluggable twinaxial cable assemblies, SFP+ or QSFP NICs would need to be purchased for those connections, at roughly $400 to $800 per NIC.

Switch Port Utilization

Structured cabling maximizes port accessibility and utilization. Studies by DataCenter Dynamics and the Gartner Group show that the average power supplied to a server cabinet ranges between 5 and 6kW. Using this assumption, the number of servers in a cabinet is approximately 14.
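The totals in Figure 3 follow directly from the unit prices and quantities, so they can be reproduced with a few lines of arithmetic. In the sketch below, the structured cabling unit counts (35, 140, 40) are inferred from the figure's listed prices (e.g., $525,000 / $15,000 = 35 edge switches), not stated independently:

```python
# (unit price, ToR quantity, structured cabling quantity) per Figure 3;
# structured cabling quantities inferred from price / unit price
items = {
    "32-port 10G edge switches":          (15_000, 78, 35),
    "Edge redundant power supplies":      (500, 78, 35),
    "SFP+ uplinks":                       (1_500, 312, 140),
    "Port aggregation switches":          (25_000, 10, 5),
    "SFP+ modules":                       (5_000, 80, 40),
    "Aggregation redundant power supplies": (500, 10, 5),
    "Core switches":                      (80_000, 2, 2),
    "Core redundant power supplies":      (7_500, 2, 2),
    "Fiber cards for uplinks":            (70_000, 4, 2),
}

tor_equipment = sum(price * qty for price, qty, _ in items.values())
sc_equipment = sum(price * qty for price, _, qty in items.values())
print(tor_equipment)  # 2787000 -- matches the Figure 3 equipment total
print(sc_equipment)   # 1395000

# Adding cabling and three years of maintenance gives the grand totals
tor_total = tor_equipment + 240_000 + 1_200_000  # 4,227,000
sc_total = sc_equipment + 110_000 + 570_000      # 2,075,000
print(tor_total - sc_total)  # 2152000 -- the total savings in Figure 3
```

Running the arithmetic this way confirms that the ToR equipment and maintenance bill is roughly double that of structured cabling even before cable assembly costs are considered.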
Server switch port demand is therefore typically lower than the number of switch ports available on a ToR switch. For example, with only 14 ports of a 32-port switch used, 18 ports remain unused. Other assumptions using a higher number of servers and higher power supplied to the server cabinet would improve the switch port utilization on a ToR switch; the relative economics would need to be determined based on the actual design and type of servers deployed. When using separate dual networks with 14 servers per cabinet, where each server connects to two 32-port switches, 28 of the 64 ports are used for the 14 servers, leaving 36 unused. Applied across the actual 39-cabinet example used in Figure 3, this scenario results in 1,404 unused ports, which equates to nearly 44 unnecessary switch purchases (see Figure 4). With structured cabling, virtually all active ports can be fully utilized because they are not confined to single cabinets. Instead, the 32 switch ports that would be dedicated to a single cabinet in the ToR configuration can be divided up, on demand, among any of several cabinets via the distribution area. Even if edge switches are used to accommodate east-west (i.e., server-to-server) traffic, structured cabling allows the switches to be centrally located in distribution areas, which decreases the number of unused ports and reduces the amount of north-south traffic to and from aggregation and core switches.
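The unused-port figures above are simple to verify. A minimal sketch using the paper's stated assumptions (39 cabinets, dual networks, 14 servers per cabinet, one 32-port ToR switch per network per cabinet):

```python
CABINETS = 39
SERVERS_PER_CABINET = 14
SWITCH_PORTS = 32
NETWORKS = 2  # separate dual (primary + secondary) networks

# One ToR switch per network in every cabinet
tor_switches = CABINETS * NETWORKS                      # 78 switches
tor_ports = tor_switches * SWITCH_PORTS                 # 2,496 ports purchased
# Each server consumes one port on each network's switch
used_ports = CABINETS * SERVERS_PER_CABINET * NETWORKS  # 1,092 ports used
unused = tor_ports - used_ports

print(unused)                  # 1404 unused ports, as stated in the text
print(unused // SWITCH_PORTS)  # 43 -- nearly 44 switches' worth of stranded capacity
```

Because each ToR switch can only serve its own cabinet, the 36 unused ports per cabinet cannot be reassigned elsewhere, which is exactly the flexibility the distribution area restores under structured cabling.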
Equipment (ports)            ToR: Total / Used / Unused      Structured Cabling: Total / Used / Unused
32-port 10G Edge Switches    2,496 / 1,092 / 1,404           1,120 / 1,092 / 28

Figure 4: Switch port utilization for ToR vs. Structured Cabling for an actual 39-cabinet data center (assumes average 5 to 6kW per cabinet, dual network, redundant power supplies, 14 servers per cabinet and four uplinks per switch).

While the number of unused ports will be lower in high-density server environments, where the power supplied to server cabinets is now reaching upwards of 20kW to support a full complement of servers, unused ports can still add up. For example, in a 200-cabinet high-density server farm with 40 servers and a 48-port switch per cabinet, the number of unused ports reaches 1,600 (i.e., 8 unused ports per cabinet) at a cost of about $735,000. Using structured cabling can save nearly 85% even in these high-density environments. Being able to effectively manage unused switch ports is therefore a key consideration when selecting a ToR configuration. It is worth noting that some data centers use a switch fabric architecture, which may yield a different number of unused ports and a different amount of equipment and cabling than a three-tier switch architecture. However, the benefits of a structured cabling system remain.

Increased Power Consumption

Port utilization and equipment have a major impact on power consumption. Power consumption is one of the top concerns among today's data center managers as energy costs continue to rise, power becomes more costly and green initiatives take center stage. As shown in Figure 5 below, the ability to use all switch ports with structured cabling lowers the number of switches and power supplies required by more than half. This helps cut equipment and energy costs while contributing to green initiatives such as LEED, BREEAM or STEP.
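The same arithmetic applies to the high-density scenario. In the sketch below, the per-port cost is implied by the paper's $735,000 estimate rather than stated directly, so treat it as a back-of-envelope figure:

```python
cabinets = 200
servers_per_cabinet = 40
switch_ports = 48  # one 48-port ToR switch per cabinet

unused_per_cabinet = switch_ports - servers_per_cabinet  # 8 ports stranded per cabinet
total_unused = cabinets * unused_per_cabinet
print(total_unused)  # 1600 unused ports, matching the text

# The paper prices this stranded capacity at about $735,000,
# which works out to roughly $459 per unused port
print(round(735_000 / total_unused))  # 459
```

Even at a modest 8 stranded ports per cabinet, scale turns the waste into a six-figure line item, which is why port utilization belongs in any ToR evaluation.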
With structured cabling requiring fewer power supplies, future upgrades to more efficient power supplies are easier and more cost effective.

Equipment                    ToR    Structured Cabling
32-port 10G ToR Switches     78     35
Redundant Power Supplies     78     35
SFP+ Uplinks                 312    140
Port Aggregation Switches    10     5
SFP+ Modules                 80     40
Redundant Power Supplies     10     5
Core Switches                2      2
Redundant Power Supplies     2      2
Fiber Cards for Uplinks      4      2
TOTAL SWITCHES               90     42
TOTAL POWER SUPPLIES         90     42

Figure 5: With structured cabling, the number of switches and power supplies required is less than half that of ToR, thereby reducing power consumption.

When switches reach full utilization in a ToR configuration, adding a new server to the cabinet requires an additional switch in the cabinet as well as additional power supplies. This can take away from potential power that
can be allocated for servers. There may not even be enough power provisioned to the cabinet to support the additional load. Even with virtualization reducing the number of servers and related power and cooling, the increased number of power supplies required with ToR can potentially negate some virtualization savings. It is important to remember that even in an idle state, unused switch ports can consume power. Furthermore, edge switches like those used in ToR process more instructions in their central processing units, which can cause unanticipated power spikes. Regardless of the configuration and technologies deployed, power consumption varies by switch model and manufacturer, and actual port consumption or power draw should be examined across the entire data center.

Cooling and Failure Rate Considerations

Improper placement of equipment can wreak havoc on the best data center cooling and uptime plan. While a ToR switch can technically be placed in the middle or bottom of a cabinet, it is most often placed at the top for easier accessibility and manageability. According to the Uptime Institute, the failure rate for equipment placed in the top third of the rack is three times that of equipment located in the lower two thirds. In a structured cabling configuration, the passive components (i.e., patch panels) are generally placed in the upper position, leaving the cooler space for the equipment. ToR designs can also land-lock equipment placement due to the short cabling lengths available and data center policies that do not allow patching from cabinet to cabinet. This can prevent placing equipment where it makes the most sense for power and cooling. For example, if the networking budget does not allow for outfitting another cabinet with a ToR switch, placement of a new blade server may be limited to where network ports are available. This can lead to hot spots, which can adversely impact neighboring equipment within the same cooling zone.
Structured cabling systems avoid these problems.

Conclusion

Many factors impact the choice of which cabling configuration to deploy, and there is no single end-all solution. ToR configurations can make a lot of sense for small server rooms, when self-contained cabinets are required or where a higher number of unused ports can be managed. However, data center managers should carefully consider the pros and cons as outlined in Figure 6.

ToR Pros:
- Reduced amount of structured cabling
- Lower cabling infrastructure costs
- Rack-based cable management and changes
- Rack-based access for application segregation
- Rack-based switch upgrades

ToR Cons:
- Significantly higher equipment, maintenance and cabling costs
- Poor port utilization
- Increased power consumption
- Scalability and upgrade limitations
- May not support interoperability and autonegotiation
- Time-consuming MACs in large data centers
- Limited equipment placement and potential hot spots

Figure 6: ToR Pros and Cons

When selecting a cabling configuration for the data center, facilities and all networking departments should undertake an overall study that includes total management; scalability and upgrades; interoperability; equipment, maintenance and cabling costs; port utilization; and power and cooling requirements in order to make the best educated decision. ToR is often positioned as a replacement for structured cabling, but in many instances structured cabling must coexist with ToR to support central switching for KVM or other in-band or out-of-band management and data center monitoring.
ARISTA WHITE PAPER Arista DWDM Solution and Use Cases The introduction of the Arista 7500E Series DWDM solution expands the capabilities of the Arista 7000 Series with a new, high-density, high-performance,
InfoSphere Warehouse with Power Systems and EMC CLARiiON Storage: Reference Architecture Summary v1.0 January 8, 2010 Introduction This guide describes the highlights of a data warehouse reference architecture
Industry Standards for the Exponential Growth of Data Center Bandwidth and Management Craig W. Carlson 2 Or Finding the Fat Pipe through standards Creative Commons, Flikr User davepaker Overview Part of
Performance Evaluation of FDDI, ATM, and Gigabit Ethernet as Backbone Technologies Using Simulation Sanjay P. Ahuja, Kyle Hegeman, Cheryl Daucher Department of Computer and Information Sciences University
White Paper Why Enterprises Need to Optimize Their Data Centers Introduction IT executives have always faced challenges when it comes to delivering the IT services needed to support changing business goals
Cisco Catalyst 4500 E-Series General Q. What is being introduced with the Cisco Catalyst 4500 E-Series? A. The Cisco Catalyst 4500 E-Series is a high-performance next-generation extension to the widely
OptiDriver Application Suite INTRODUCTION MRV s OptiDriver is designed to optimize both 10 Gbps and applications with the industry s most compact, low power, and flexible product line. Offering a variety
Family Datasheet MS Series Switches Meraki MS Series Switches FAMILY DATASHEET Overview Cisco Meraki offers a broad range of switches, built from the ground up to be easy to manage without compromising
Linux Automation Using Red Hat Enterprise Linux to extract maximum value from IT infrastructure www.redhat.com Table of contents Summary statement Page 3 Background Page 4 Creating a more efficient infrastructure:
ARISTA WHITE PAPER Arista 7500E Series Interface Flexibility Introduction: Today s large-scale virtualized datacenters and cloud networks require a mix of 10Gb, 40Gb and 100Gb Ethernet interface speeds
Server System Infrastructure (SM) (SSI) Blade Specification Technical Overview May 2010 1 About SSI Established in 1998, the Server System Infrastructure (SM) (SSI) Forum is a leading server industry group