Structured Cabling Design for Large IT/Service Provider Data Centers

Author: Dave Kozischek, Marketing Applications Manager, Data Center and In-Building Networks

Introduction

Structured cabling is defined as building or campus telecommunications cabling infrastructure that consists of a number of standardized smaller elements (hence "structured") called subsystems. For it to be effective, structured cabling is organized in such a way that individual fibers are easy to locate, moves, adds, and changes are easily managed, and there is ample airflow around cabling.

Perhaps no environment requires effective structured cabling more than the data center. With no tolerance for downtime or network failure, data center owners and operators are among the main consumers of training resources devoted to structured cabling. The reason is clear: even as fewer traditional data centers are being built in favor of outsourcing to the cloud (i.e., some type of IT service provider), there are still physical structures enabling the cloud, and these structures need to be cabled. Fortunately, what constitutes effective structured cabling isn't open to interpretation; it is clearly explained in the ANSI/TIA-942-B standard, Telecommunications Infrastructure Standard for Data Centers. In this white paper, we'll explore the standard and break down key considerations for making the most of structured cabling in the data center, no matter its size.

Consider the different types of data centers in operation today:

In-house data center: Also known as enterprise data centers, these facilities are privately owned by large companies. The company designs, builds, and operates its own facility and can still provide a service for profit, such as cloud services or music streaming.

Wholesale data center: Owned by IT service providers, also known as cloud providers, these data centers are in the business of selling space. Instead of building their own facilities, enterprises buy space and deploy their data center infrastructure within the wholesale facility.

Colocation data center: These facilities are like wholesale data centers, but enterprises rent just a rack, cabinet, or cage. The IT service provider runs the infrastructure.

Dedicated and managed hosting data centers: IT service providers operate and rent server capacity in these data centers, but each enterprise customer controls its own dedicated server.

Shared hosting data center: In these facilities, enterprise customers buy space on an IT service provider's servers. These servers are shared among enterprise customers.

Today in the industry, a significant shift is underway in how these different types of data centers invest in their infrastructure. LightCounting and Forbes report that cloud/IT service provider spending is up while enterprise IT spending is down, as shown in Figure 1. Further evidence of this shift is reflected in Dell'Oro's data on server shipments, the lion's share of which are now shipping for installation in cloud-type facilities. See Figure 2.

[Figure 1: Growth in Cloud/IT Service Provider Spending. Infrastructure spending ($bn), 2010-2021, by telecom, cloud, and enterprise segments. Source: LightCounting and Forbes]

[Figure 2: Growth in Cloud/IT Service Provider Server Shipments. Percent of server units, 2008-2020, enterprise premises vs. cloud]

As enterprises increasingly decide to outsource some or all of their infrastructure to IT service providers, the result is not at all surprising: fewer data centers overall and hyperscale facilities in their place. See Figure 3.

[Figure 3: Shift from Enterprise IT Customers to IT Service Providers in Data Center Market Segment Spending]

The structured cabling requirements of these resulting hyperscale, multitenant data centers may differ from what has been installed in the past in smaller single-tenant, enterprise-owned facilities, but TIA-942 provides guidance. TIA-942 recommends a star architecture, with distinct areas for cross-connecting and interconnecting cable. The standard defines five cross-connect/interconnect areas: the main distribution area (MDA), intermediate distribution area (IDA), horizontal distribution area (HDA), zone distribution area (ZDA), and equipment distribution area (EDA). These areas represent the full network, from the racks and cabinets out to the main area where routers, switches, and other components are located.

TIA-942 also provides redundancy definitions, ranked into four tiers called ratings. Rated-1 is the lowest tier with the least redundancy; Rated-4 provides the most redundancy in a data center's structured cabling and is typically deployed in large IT/service provider data centers. The other basics covered by the standard include zone architectures and guidelines for energy efficiency. See Table 1 for a snapshot of the standard's topics.

Key Area | Insight
Architecture | Recommends a star topology architecture
Cross-Connect vs. Interconnect | MDA, IDA, HDA, ZDA, EDA
Redundancy Definitions | Rated 1-4
Zone Architectures | Reduced topologies and consolidated points
Energy Efficiency | Examples of routing cables and airflow contention

Table 1: Topics Covered by ANSI/TIA-942-B, Telecommunications Infrastructure Standard for Data Centers

When it comes to structured cabling, the standard addresses backbone and horizontal cabling as shown in Figure 4. Each of the distribution areas (the squares in the figure) is an area where there is a patch panel. How much fiber is needed in each of those areas is a function of network speed, network architecture, oversubscription, and switch configuration. Let's look at a few examples under each of these considerations to illustrate how they affect a data center's fiber count.

[Figure 4: Backbone and Horizontal Cabling Distribution Areas. Backbone cabling runs from the primary and secondary entrance rooms (carrier equipment and demarcation) to the MDA (routers, backbone LAN/SAN switches, PBX, M13 muxes), IDAs, and HDAs (LAN/SAN/KVM switches); horizontal cabling runs from the HDAs, through optional ZDAs, to the EDAs (racks/cabinets), and from the TR (office and operations center LAN switches) to work areas in offices, operations center, and support rooms.]

Table 2 shows how network speed influences fiber count as a data center moves from 10G to 100G. On the left is the physical architecture: four racks or cabinets, each with a switch on top, and a switch at the end of the row. Next is the logical architecture in TIA-942's recommended star configuration for cabling, and on the right is the network speed. 10G takes only 2 fibers to support; 40G can operate over 2 or 8 fibers; and 100G takes 2, 8, or even 20 fibers, depending on the transceiver. So, depending on the network speed, as few as 2 fibers or as many as 20 fibers are needed for just one port.

Takeaway: network speed does affect fiber count. Check the road maps (IEEE for Ethernet and, on the storage side, ANSI for Fibre Channel) for detailed information on per-port fiber counts.

Table 2: Network Speed Influences Fiber Count (physical architecture | logical architecture | network speed of 10G, 40G, or 100G)
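
To make the per-port arithmetic concrete, here is a minimal sketch, assuming representative transceiver styles for each of the fiber counts named above; the specific transceiver examples in the comments are illustrative assumptions, not taken from the standards road maps.

    # Minimal sketch: fibers required per port for the speeds discussed above.
    # The transceiver examples in the comments are illustrative assumptions.
    FIBERS_PER_PORT = {
        ("10G", "duplex"): 2,        # e.g., 10GBASE-SR over a duplex LC pair
        ("40G", "duplex"): 2,        # e.g., 40G BiDi-style duplex optics
        ("40G", "parallel"): 8,      # e.g., 40GBASE-SR4 over an 8-fiber MTP
        ("100G", "duplex"): 2,       # e.g., 100G duplex optics
        ("100G", "parallel-8"): 8,   # e.g., 100GBASE-SR4
        ("100G", "parallel-20"): 20, # e.g., 10 x 10G lanes (SR10-style)
    }

    def fibers_for_ports(speed: str, media: str, ports: int) -> int:
        """Total fibers needed for a given number of ports of one transceiver type."""
        return FIBERS_PER_PORT[(speed, media)] * ports

    # Example: 32 uplink ports of 40G parallel optics need 256 fibers.
    print(fibers_for_ports("40G", "parallel", 32))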

Now let's look at how the network's logical architecture affects a data center's fiber count. In the example provided in Table 3, the speed is held constant at 40G, with 8 fibers connecting each switch. A point-to-point architecture is the simplest, both logically a star and physically cabled as a star, with 8 fibers to each cabinet. A full mesh architecture connects each switch to every other switch; for the same five switches, that means 32 fibers per switch. That logical mesh is cabled physically at the cross-connect, and it takes 32 fibers to do so. The final architecture in this example is spine and leaf, in which every spine switch (Switches 1 and 2) connects to every leaf switch (Switches 3-5). In the same physical configuration with the same five switches, the spine-and-leaf logical architecture requires 16 fibers per leaf switch. So, depending on the data center's architecture, it can take an operator 8, 16, or 32 fibers for every cabinet.

Takeaway: architecture redundancy increases fiber count.

Table 3: Network Architecture Affects Fiber Count (physical architecture | logical architecture | speed: point to point, full mesh, and spine and leaf, each at 40G over 8-fiber links)
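
The per-cabinet numbers in Table 3 follow directly from counting links per switch. A minimal sketch, assuming the example's five switches and 8 fibers per 40G link (the function names are illustrative):

    FIBERS_PER_LINK = 8  # 40G parallel optics over an 8-fiber MTP, as in the example

    def point_to_point_fibers() -> int:
        # One uplink per cabinet switch.
        return 1 * FIBERS_PER_LINK

    def full_mesh_fibers(n_switches: int) -> int:
        # Each switch links to every other switch.
        return (n_switches - 1) * FIBERS_PER_LINK

    def spine_and_leaf_fibers(n_spines: int) -> int:
        # Each leaf links to every spine.
        return n_spines * FIBERS_PER_LINK

    print(point_to_point_fibers())   # 8 fibers per cabinet
    print(full_mesh_fibers(5))       # 32 fibers per switch
    print(spine_and_leaf_fibers(2))  # 16 fibers per leaf switch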

Next, let's consider how oversubscription impacts fiber count. Simply put, oversubscription is the ratio of circuits coming into a switch vs. circuits going out of it. In the example shown in Table 4, the star architecture is used both physically and logically, with a constant network speed of 10G. The variable is the oversubscription rate. At the top of the table is a 4:1 oversubscription, with 24 10G circuits coming in and six of them going out; in the middle, 24 10G circuits come in and 12 go out for a 2:1 rate; and at the bottom is 1:1, with 24 10G circuits both entering and exiting each switch. Depending on the oversubscription rate, with all other variables held constant, the required per-switch uplink fiber count can be 12, 24, or 48 fibers.

Takeaway: the lower the oversubscription ratio, the higher the fiber count. Ultimately, the oversubscription rate is a function of the network's ingress/egress traffic needs, meaning the fiber count is driven by this requirement as well.

Table 4: Network Oversubscription Impacts Fiber Count (physical architecture | logical architecture | speed: 10G at 4:1, 2:1, or 1:1 oversubscription)
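
The 12/24/48-fiber progression in Table 4 is simply the downlink circuit count divided by the oversubscription ratio, times two fibers per 10G circuit. A minimal sketch, assuming the example's 24 server-facing circuits:

    FIBERS_PER_10G_CIRCUIT = 2
    DOWNLINK_CIRCUITS = 24   # 10G circuits coming into each switch, as in the example

    def uplink_fibers(oversubscription: float) -> int:
        uplink_circuits = int(DOWNLINK_CIRCUITS / oversubscription)
        return uplink_circuits * FIBERS_PER_10G_CIRCUIT

    for ratio in (4, 2, 1):
        print(f"{ratio}:1 oversubscription -> {uplink_fibers(ratio)} uplink fibers per switch")
    # 4:1 -> 12, 2:1 -> 24, 1:1 -> 48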

Finally, a look at how the network's switch configuration drives fiber count. Keeping the architecture constant and running 10G to all of the servers in the racks, we reconfigure what happens on the uplink side of the switch. In the first scenario in Table 5, all of the circuits going down to the servers are 10G; two of the 40G quad small-form-factor pluggable (QSFP) ports, each an 8-fiber MTP connection, are broken out into four 10G circuits apiece to serve 16 more server ports, leaving two 40G ports going up, or 2 x 8 = 16 fibers. In the middle scenario, the same switch sends all four of its 40G ports back up to the core, equating to 4 x 8 = 32 fibers. The final scenario shows an equal distribution of 10G circuits going down and going up: the 40G ports break out into 16 x 10G ports, and additional 10G ports are turned upstream to even the split, totaling 64 uplink fibers. Takeaway: simply deciding how to configure the switch moves the fiber count in these scenarios from 16 to 32 to 64 fibers.

Table 5: Network Switch Configuration Drives Fiber Count (physical architecture | logical architecture | switch configuration: three uplink configurations of a 48-port 10G SFP leaf switch with four 40G QSFP ports)
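
Expressed as arithmetic, the three uplink counts come from how many 40G (8-fiber) and 10G (2-fiber) ports are pointed toward the core. A minimal sketch, assuming the example's 48-port 10G leaf switch with four 40G QSFP ports:

    FIBERS_PER_40G_UPLINK = 8   # 40G QSFP over an 8-fiber MTP
    FIBERS_PER_10G_UPLINK = 2   # 10G over a duplex pair

    def uplink_fibers(qsfp_uplinks: int = 0, sfp_10g_uplinks: int = 0) -> int:
        return (qsfp_uplinks * FIBERS_PER_40G_UPLINK
                + sfp_10g_uplinks * FIBERS_PER_10G_UPLINK)

    print(uplink_fibers(qsfp_uplinks=2))       # 16 fibers: two 40G up, two broken out to servers
    print(uplink_fibers(qsfp_uplinks=4))       # 32 fibers: all four 40G up
    print(uplink_fibers(sfp_10g_uplinks=32))   # 64 fibers: 32 x 10G up for an even down/up split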

Note that this switch configuration only addresses the Ethernet side of these servers. The fiber count would continue to climb if the servers also had a Fibre Channel network and/or ports for InfiniBand high-speed computing. Furthermore, we've looked at how the four variables can each independently increase the number of fibers needed in a data center, so imagine the impact that combined variables can have in driving fiber counts even higher. Changing the network's operating speed affects the fiber count, certainly, but change the speed and the architecture? Or change the speed and the oversubscription rate? Fiber counts that were already relatively high go up even more.

What remains is the question of how to cable this type of data center. Today's increasingly large data centers typically extend across separate buildings, much like an enterprise campus, as shown in Figure 5. Indoor cable is typically used within each building, connected building to building by indoor/outdoor cable and transitional optical splice enclosures. See Table 6.

[Figure 5: Large IT/Service Provider Data Center. A campus of data center buildings (DC1-DC4) interconnected by indoor/outdoor campus backbone trunks, with meet-me rooms, main distribution areas, storage, and servers/compute (equipment distribution areas) connected by indoor trunks.]

Key Area | Insight
Meet Me Room | Demarcation cross-connect
Main Distribution Area | Racks/cabinets cross-connect
Indoor Cabling | Plenum rated
Indoor/Outdoor Cabling | Plenum/riser armored cable
Optical Splice Enclosure (OSE) | Transition from indoor to outdoor cables

Table 6: Data Center Cabling Areas

When it comes to deployment methods, there are three to consider:

Preterminated cable: Typically deployed for indoor plenum-rated cabling, these trunks are factory-terminated on both ends with 8- or 12-fiber MTP connectors. They are ideal for MDA to HDA or EDA installations involving raceway or raised floor, where the entire fiber count is deployed in one run with a single location at each end of the link. See Figure 6.

Pigtailed cable: These semi-preconnectorized assemblies are factory-terminated on one end with MTP connectors for easy high-fiber-count deployment, while the other end remains unterminated to fit through small conduit or allow for on-site length changes. Often used in building-to-building installations, pigtailed cable is ideal when conduit is too small for pulling grips and/or the cable pathway can't be determined before ordering. See Figure 7.

Bulk cable: This deployment option requires field connectorization on both ends, typically with MTP splice-on connectors. Bulk cable is best for deployments requiring center-pull installation and/or extremely high fiber counts (such as 1,728 fibers and up). See Figure 8.

[Figure 6: Preterminated Cable. An MTP trunk running from the MDA to the HDA (or EDA), terminated into 12-fiber MTP-to-LC modules at each end (not to scale).]

[Figure 7: Pigtailed Cable. An MTP pigtailed trunk running between Data Center Building 1 (IDA) and Data Center Building 2 (HDAs A-D), terminated into 12-fiber MTP-to-LC modules (not to scale).]

[Figure 8: Bulk Cable, with field-terminated MTP connectors.]

Table 7 provides an overview of the three deployment methods and their corresponding fiber counts.

Method | Environment | Connectors | Fiber Counts | Trunk Type | Fiber Type
1. Preterminated Cables | Indoor | MTP to MTP | 144, 192, 216, 288, 432, 576 | Non-Armored | Multimode or Single-Mode
2. Pigtail Cable | Indoor/Outdoor | MTP Connector to Fiber | 144, 216, 288, 432, 576, 864 | Armored or Non-Armored | Multimode or Single-Mode
3. Bulk Cable | All | Fiber to Fiber | 144 to 1,728 | Armored or Non-Armored | Multimode or Single-Mode

Table 7: Deployment Methods and Cabling Choices

Putting all of this information into practice, the following example illustrates how a four-way spine is cabled and the resulting fiber count. Table 8 comes from Cisco's Massively Scalable Data Center white paper, showing the Nexus 7000 switches. Based on the manufacturer's recommendations, there are 48 leaf switches, each with 32 ports down to the servers and 32 ports going back up into the fabric. In this example, we use two Cisco Nexus 3064 switches at the top of each of the 24 cabinets to create an A fabric and a B fabric. Figure 9 shows how these recommendations translate to a logical architecture.

[Figure 9: Cisco Four-Way Spine Architecture. Four Cisco Nexus 7000 spine switches connect to Cisco Nexus 3064 leaf switches across 24 cabinets, with 32 servers per leaf switch.]

Cisco Four-Way Spine Configuration
Device | Count
Nexus 7009/7010 Spine Switches | 4
N7K-F248XP-25 Blades per 7009 Chassis | 7
N7K-F248XP-25 Blades per 7010 Chassis | 8
Ports Used for Leaf Switches per 7009 Chassis | 336
Ports Used for Leaf Switches per 7010 Chassis | 384
Nexus 3064 Leaf Switches | 48
Nexus 3064 Ports Facing Fabric | 32
Nexus 3064 Ports Facing Servers | 32

Key Design Parameters:
- All 10G Ethernet, no 40G
- Spine switches: Cisco Nexus 7000 series with 48-port blades
- Leaf switches: Cisco Nexus 3064, 32 ports facing fabric, 32 ports facing servers

Table 8: Cisco Four-Way Spine

As shown in Figure 10 and Table 9, Cisco's spine-and-leaf architecture guidance provides for a four-way spine with 48 leaf switches. Starting at the rack and working backward, 32 fabric ports go out of every leaf switch, which translates to 64 fibers required per switch. With two switches on each rack, 128 fibers are needed to support this architecture for every cabinet. This design calls for 10G and a 1:1 oversubscription, as previously covered, and we will proceed with this example using fiber counts that are divisible by 12.

[Figure 10: Cisco Spine-and-Leaf Architecture. Each leaf switch connects to the Fabric A and Fabric B Cisco Nexus 7000 spine switches with (32) 2-fiber jumpers, for 64 fibers required per switch, or one 72-fiber trunk cable per switch.]

Fiber Count Variable | Details
Cisco Spine-and-Leaf Rules | 4 spine switches, 48 leaf switches
Cisco Leaf Rules | 32 ports facing fabric, 32 ports facing servers
Architecture | Spine-and-leaf, A + B fabrics
Network Speed | 10G
Oversubscription | 1:1
Standard Fiber Counts | 12-fiber divisible

Table 9: Cisco Spine-and-Leaf Fiber Count Variables

When it comes to cabling this scenario, we have options. They may not all be good options, like the one depicted in Figure 11, which uses over 3,000 individual jumpers. Better would be consolidating those jumpers into 48 72-fiber cables, as shown in Figure 12. Better yet is Option 3: using high-fiber-count trunks of 576 fibers each, getting us down from over 3,000 jumpers to six 576-fiber trunks. See Figure 13.

[Figure 11: Cabling Option 1. The Cisco Nexus 3064 leaf switches across 24 cabinets connect to the Cisco Nexus 7000 spines with 3,072 2-fiber jumpers.]
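
The fiber and trunk counts behind Options 1 through 3 can be derived from the design rules in Table 9. A minimal sketch of that arithmetic, rounding each switch's 64 fibers up to the next 12-fiber-divisible trunk size, per the example:

    import math

    LEAF_SWITCHES = 48          # two per cabinet x 24 cabinets (A + B fabrics)
    FABRIC_PORTS_PER_LEAF = 32  # 10G ports facing the spine
    FIBERS_PER_10G_PORT = 2

    fibers_per_leaf = FABRIC_PORTS_PER_LEAF * FIBERS_PER_10G_PORT   # 64 fibers per switch
    fibers_per_cabinet = 2 * fibers_per_leaf                        # 128 fibers per cabinet
    total_fabric_fibers = LEAF_SWITCHES * fibers_per_leaf           # 3,072 fibers overall

    trunk_per_leaf = math.ceil(fibers_per_leaf / 12) * 12           # 72-fiber trunk per switch
    backbone_trunks_576 = math.ceil(total_fabric_fibers / 576)      # six 576-fiber trunks

    print(fibers_per_leaf, fibers_per_cabinet, total_fabric_fibers)  # 64 128 3072
    print(trunk_per_leaf, backbone_trunks_576)                       # 72 6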

[Figure 12: Cabling Option 2. The leaf switches in each rack row connect through HDA and MDA patch panels over 48 72-fiber backbone cables to the Cisco Nexus 7000 spines.]

[Figure 13: Cabling Option 3. The leaf switches in each rack row connect through HDA and MDA patch panels over six 576-fiber backbone cables to the Cisco Nexus 7000 spines.]

To understand why some of these options are better than others, let's explore their relative ease of use (which translates to labor spend), both in the initial installation and during the moves, adds, and changes that are inevitable in a data center. Table 10 speaks for itself.

Would You Rather... | Option 1: (3,072) 2-Fiber Jumpers | Option 2: (48) 72-Fiber Trunks | Option 3: (6) 576-Fiber Trunks
Test and Clean | 6,144 2-fiber duplex LC connectors | 576 12-fiber MTP connectors | 576 12-fiber MTP connectors
Document and Label | 3,072 jumpers and 6,144 connectors | 48 trunks and 576 connectors | Six trunks and 576 connectors
Pull and Install | 3,072 jumpers (both ends) | 48 trunks (both ends) | Six trunks (both ends)
Purchase | 3,072 jumpers | 48 72-fiber trunks | Six 576-fiber trunks
Troubleshoot | 3,072 links, >6,000 connectors | 48 links, 576 connectors | Six links, 576 connectors
Move, Add, or Change | One jumper at a time, point-to-point configuration | Create cross-connect, use short jumper | Create cross-connect, use short jumper

Table 10: Deployment Differences from Jumpers to High-Fiber-Count Trunks
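
The connector counts in Table 10 follow from the connector type on each end of each assembly: a duplex LC on each end of every 2-fiber jumper, and one 12-fiber MTP per 12 fibers on each end of every trunk. A minimal sketch of that tally:

    def jumper_connectors(jumpers: int) -> int:
        # One duplex LC connector on each end of every 2-fiber jumper.
        return jumpers * 2

    def trunk_mtp_connectors(trunks: int, fibers_per_trunk: int) -> int:
        # One 12-fiber MTP per 12 fibers, on each end of every trunk.
        return trunks * (fibers_per_trunk // 12) * 2

    print(jumper_connectors(3072))        # 6,144 duplex LC connectors (Option 1)
    print(trunk_mtp_connectors(48, 72))   # 576 MTP connectors (Option 2)
    print(trunk_mtp_connectors(6, 576))   # 576 MTP connectors (Option 3)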

Furthering the case for high-fiber-count trunks is their impact on valuable data center real estate: the pathway for cabling. TIA-569 provides calculations for determining what percentage of a tray, conduit, or raceway is taken up by cabling, along with a recommendation that the maximum fill ratio of an individual pathway not exceed 25 percent. Though it may not be intuitive, a calculated 50 percent fill ratio actually uses up an entire pathway, because the spaces between cables are part of the equation. With this in mind and referring to Figure 14, the first option, using more than 3,000 jumpers, isn't an option at all. The second cabling option (48 72-fiber trunks) does work in a 4 x 6-in tray, but not quite as well in a 4 x 4-in tray. Both tray sizes can easily accommodate the six 576-fiber trunks of Option 3.

[Figure 14: Fill Ratio Differences from Jumpers to High-Fiber-Count Trunks, comparing the three cabling options across tray sizes.]
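
As a rough illustration of the fill-ratio comparison, here is a minimal sketch that divides the summed cable cross-sectional areas by the tray cross-section and checks the result against the 25 percent recommendation. The cable outside diameters are illustrative assumptions, not catalog values, so treat the outputs as directional only.

    import math

    def fill_ratio(cable_od_mm: float, cable_count: int, tray_w_in: float, tray_h_in: float) -> float:
        """Fraction of the tray cross-section occupied by the cables' cross-sections."""
        cable_area_mm2 = math.pi * (cable_od_mm / 2) ** 2 * cable_count
        tray_area_mm2 = (tray_w_in * 25.4) * (tray_h_in * 25.4)
        return cable_area_mm2 / tray_area_mm2

    # Assumed outside diameters: ~2 mm for a 2-fiber jumper, ~9 mm for a 72-fiber
    # trunk, ~21 mm for a 576-fiber trunk (illustrative values only).
    options = [("3,072 jumpers", 2.0, 3072), ("48 x 72-F trunks", 9.0, 48), ("6 x 576-F trunks", 21.0, 6)]
    for width, height in ((4, 4), (4, 6)):
        for label, od, count in options:
            ratio = fill_ratio(od, count, width, height)
            flag = "OK" if ratio <= 0.25 else "over 25%"
            print(f"{width} x {height}-in tray, {label}: {ratio:.0%} ({flag})")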

Conclusion

High-fiber-count trunks can be the best fit in today's data centers. The days of trusty 12- and 24-fiber trunks to each rack are gone; data centers are growing in scale and in the fiber counts required to support higher speeds, lower oversubscription ratios, redundant architectures, and creative switch configurations. It's clear that large facilities are the new normal: enterprise IT customers will continue to shift away from small, single-tenant facilities toward outsourcing all or part of their data center infrastructure. Fortunately, there are proven structured cabling methods, global manufacturers with many years of experience solving data center challenges, and the assurance of TIA-942 continuing to provide guidance.

Corning Optical Communications LLC
PO Box 489, Hickory, NC 28603-0489 USA
800-743-2675 / FAX: 828-325-5060 / International: +1-828-901-5000
www.corning.com/opcomm

Corning Optical Communications reserves the right to improve, enhance, and modify the features and specifications of Corning Optical Communications products without prior notification. A complete listing of the trademarks of Corning Optical Communications is available at www.corning.com/opcomm/trademarks. All other trademarks are the properties of their respective owners. Corning Optical Communications is ISO 9001 certified. © 2017 Corning Optical Communications. All rights reserved. LAN-2184-AEN / September 2017