New Developments in Data Center Cables and Transceivers


25G is the new 10G; 50G the new 40G; and 100G is amazing! The latest wave of rapid data center growth stems from an increase in machine-to-machine traffic, driven by the expansion of server virtualization, software-defined networking (SDN) and cloud computing, which together are creating massive demand for high-capacity data center infrastructures. High-speed interconnects play an increasingly important role in these technologies.

Contents

- Key Takeaways
- Introduction
- Why So Many Types of High-speed Interconnects?
- DAC: A Cabling Industry Shake-Up
- What is DAC Cabling?
- Low-Price, Low-Power DAC Advantages Stimulate its Popularity
- Backwards Compatibility
- Mellanox LinkX Offers 18 Different DAC Options!
- QSA: QSFP-to-SFP Adapter Enables Many Options
- BER Designed for High-Performance Computing (HPC)
- Just Use FEC to Clean it up!
- More Value: Extending the Reach Past 3 Meters
- Zero Latency Delays
- Learn from DAC Disasters
- Mellanox DAC Manufacturing
- Every Cable Tested in Real Systems
- DAC Cables: Why Choose Mellanox?
- AOCs: Active Optical Cables
- How AOCs Differ from Other Interconnect Solutions
- Achieving Lower Product Cost
- AOCs Offer Lower Operational Hidden Costs
- How Are AOCs Used in Modern Data Centers?
- Short Reach Optics in Modern Data Centers
- VCSEL Multi-mode Lasers & Fibers
- Connectorized Optics: A Key Feature
- Point-to-Point Applications
- Breakout Applications
- Increased Airflow; Tighter Bend Radii; Easier Rack Maintenance
- Can 50G Cost Less Than 40G?
- Single-mode Optics: Increasing Popularity
- Single-mode Transceivers in Modern Data Centers
- Numerous PSM4 Applications and Configurations for Any Need
- Why Choose Mellanox Single-mode Transceivers?
- A Glance into the Future
- Newly Announced 200G HDR InfiniBand AOCs
- Big Changes for Ethernet & InfiniBand Going Forward
- Summary
- Why Mellanox Cables and Transceivers?

Key Takeaways

- Hyperscale and large data centers are driving prices down and bringing speed advancements to market faster, dramatically accelerating the pace of the entire industry.
- 25G offers 2.5 times the 10G bandwidth at a small price premium, making newer systems an easy upgrade choice for switches, network adapters, cables and transceivers in just about all applications. 25G is the new 10G.
- Modern data centers have standardized on two form-factors: the single-channel SFP and the quad-channel QSFP. DAC cables are used primarily inside the rack; AOCs across the row; and multi-mode and single-mode transceivers span longer reaches, up to 10km across the data center.
- Breakouts of 100G to individual 25G ends are now available for all four interconnect types: DAC and AOC cables, and both multi-mode and single-mode transceivers.
- The goal of the plethora of different interconnect technologies is to minimize the cost of every connection type while maximizing bandwidth and ROI.

For more information about Mellanox's LinkX cables and transceivers products, visit:

Introduction

The rapid growth of cellular, video and internet traffic all ends up in the modern data center, at the servers and storage where the content resides. The path to the server today is via links using DAC and AOC cables, and optics based on multi-mode and single-mode transceivers. The shift from 10G to 25G lanes is already well underway, with the world's first 25/100G Ethernet platform consisting of switches, network adapters, cables and transceivers announced by Mellanox in June of 2015. Today, the industry is transitioning from 10G to 25G line rates, as 25G offers 2.5 times the bandwidth at less than a 2X price premium, making newer systems an easy upgrade choice in just about all applications; hyperscale, enterprise, storage, Web 2.0, HPC installations and even telecom data centers are adopting it.

Hyperscale data centers are changing the game, stimulating rapid product advances in speed and form-factors and driving down prices through their massive buying practices. The modern data center is being built using 25G line rates in single-channel and four-channel (100G) versions. On the horizon are a change in signaling scheme from NRZ to PAM4, which offers two data bits per clock instead of one; transceiver packaging that includes 8 channels; and the increased use of single-mode optics that can span the data center all the way to the server. The next hop to 200G and 400G has spawned roughly 20 different types of optical transceivers in development, for every speed, reach, configuration and price point imaginable!

With CPU speed advances slowing and becoming more expensive, the only option is to scale out: build a standard rack configuration of servers, storage, GP-GPUs and switches, replicate it across thousands of racks, and build mega data centers. Then interconnect all of it using copper and optical cables and optical transceivers. This is driving the demand for cables and transceivers of all types through the roof. To simplify things, data center builders today have standardized on two main form-factors, or transceiver shells (the single-channel SFP and the quad-channel QSFP), and four types of interconnect schemes: DAC and AOC cables, and multi-mode and single-mode optical transceivers. The legacy 1G and 10GBASE-T interconnects that have been in use for the last 15 years have largely been left in the dust by the modern high-speed data center.

Why So Many Types of High-speed Interconnects?

High-speed interconnects are all about:
1. Implementing the lowest cost links
2. Achieving the highest net data throughput (i.e., the fastest data rate with the least amount of data errors, data retransmissions and minimal latencies)
3. Transmitting over various distances

To achieve these goals, various technologies are used, each of which has its own set of benefits and limitations. Data center builders would like to build all links with single-mode fiber, duplex LC connectors and single-mode transceivers: build the fiber into the data center infrastructure once and forget it, since single-mode fiber does not have reach limitations, then upgrade the transceivers with each new development. While the fibers and LC connectors are already at the lowest costs, the problem is the single-mode transceivers, which are very complex to build, requiring many different material systems, and are both hard to manufacture and expensive. A 3-meter DAC cable is priced at less than $100, but a 10km-reach single-mode transceiver can run $4,000-$5,000; AOCs and multi-mode transceivers are priced in between.

As a result, data centers often use an array of different high-speed interconnects, matching each interconnect type to specific reach requirements. DAC is the lowest cost, but after about 3-5 meters the wire acts like a radio antenna and the signal becomes unrecognizable. AOCs are used from 3 meters to about 30 meters, after which installing long cables becomes difficult. More expensive SR and SR4 multi-mode transceivers reach up to 100 meters, after which the signal degrades. Parallel single-mode transceivers (PSM4) cover 500m-2km. Past 500 meters, the cost of 8 fibers adds up with every meter, so multiplexing the four channel signals into two single fibers becomes more economical: CWDM4 for up to 2km and LR4 up to 10km. As the chart below shows, different technologies take over as the reach gets longer; and the faster the data rate, the shorter the reach of DAC and multi-mode optics (SR, SR4) becomes, while single-mode fiber is largely reach independent.
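To make the reach-to-technology mapping above concrete, here is a minimal sketch in Python that picks an interconnect type from link reach, using the rough breakpoints quoted in this section. The breakpoints and the function itself are illustrative assumptions, not a Mellanox tool.

```python
def pick_interconnect(reach_m: float) -> str:
    """Illustrative reach-based selection, using the rough
    breakpoints quoted above (25G line rate assumed)."""
    if reach_m <= 3:        # in-rack: copper is cheapest
        return "DAC (passive copper)"
    if reach_m <= 30:       # across the row: lowest-cost optics
        return "AOC (active optical cable)"
    if reach_m <= 100:      # up/down rows: VCSEL multi-mode
        return "SR/SR4 multi-mode transceiver"
    if reach_m <= 500:      # parallel single-mode, 8 fibers
        return "PSM4 single-mode transceiver"
    if reach_m <= 2000:     # mux 4 wavelengths onto 2 fibers
        return "CWDM4 single-mode transceiver"
    if reach_m <= 10000:
        return "LR4 single-mode transceiver"
    return "telecom-class optics (beyond data center reach)"

for reach in (2, 20, 80, 400, 1500, 8000):
    print(f"{reach:>5} m -> {pick_interconnect(reach)}")
```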

DAC: A CABLING INDUSTRY SHAKE-UP

CAT-5e cabling and 1GBASE-T dominated the data center interconnect scene for 15-plus years. However, the transition to 10G Ethernet proved technically challenging in both power consumption and cost. That's when Direct Attach Copper (DAC) cabling, aka Twinax, snuck in and grabbed the lead. It has since become the preferred low-cost interconnect for inside server racks, especially for high-speed links at 25G, 50G and 100G in just about all applications in hyperscale, enterprise, storage and many HPC installations, and it is likely to be used for 200G and 400G as well.

Let's look at the features, advantages and limitations of each of the four major interconnect types used in modern data centers today. First off, there are many different types of high-speed interconnects because high-speed interconnects are expensive to engineer and manufacture. Doing anything 25 billion times per second is going to be costly. Transceivers may use many different material systems and need critical component alignments on the order of the width of a bacterium! These need to be made in the millions of units and last 20 years! So, different technologies are used as the interconnect span gets longer and the line rates get faster. Since high-speed interconnects are often used in the tens or hundreds of thousands in a large data center, costs must be minimized at every connection point.

What is DAC Cabling?

DAC forms a direct electrical connection, hence the name Direct Attach Copper cabling. A DAC is simply two wires where the 1/0 electrical signal is the voltage difference between the two wires. A wire pair is used to create one directional lane, so two pairs create a single-channel, bi-directional interconnect; similarly, eight wire pairs form four channels. Wrap it all up in multiple layers of shielding foil, and solder the wires onto a tiny PCB with an EPROM chip that contains identity data about the protocol, data rate, cable length, etc. Then put it all in an industry-standard plug shell such as SFP or QSFP to create the complete cable with connector ends.

While there isn't much inside DAC cables, a lot of design engineering and manufacturing technology goes into them. At high signal rates, the wires act like radio antennas: the longer the reach and the higher the data rate, the more EMI shielding is required, and the thicker, usually shorter, and more difficult to bend the cable becomes. IEEE and IBTA set the cable standard specifications for Ethernet and InfiniBand applications. The standard for 10Gb/s signaling supports reaches of 7 meters; the maximum reach for 25Gb/s DACs is usually 3 meters, enough to span up and down server racks. New cable materials are enabling reaches of 5 meters and beyond.
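As a rough illustration of the identity data mentioned above, the sketch below decodes a few fields from a hypothetical dump of an SFP DAC's management EEPROM. The byte offsets follow our reading of the SFF-8472 memory map (byte 0 identifier, byte 18 copper link length in meters, bytes 20-35 vendor name); treat the offsets and the sample bytes as illustrative assumptions, not an authoritative decode.

```python
# Hypothetical 64-byte slice of an SFP DAC's A0h EEPROM page.
eeprom = bytearray(64)
eeprom[0] = 0x03                          # identifier: SFP/SFP+ (per SFF-8472)
eeprom[18] = 3                            # copper link length in meters (assumed offset)
eeprom[20:36] = b"ACME CABLES".ljust(16)  # vendor name: 16 ASCII chars, space padded

identifier = eeprom[0]
length_m = eeprom[18]
vendor = eeprom[20:36].decode("ascii").rstrip()

print(f"identifier=0x{identifier:02x} vendor={vendor!r} copper_length={length_m} m")
```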

[Figure: LinkX single-channel SFP DAC and LinkX 4-channel QSFP DAC]

Low-Price, Low-Power DAC Advantages Stimulate its Popularity

The popularity of DAC can be summed up in two words: low price. Copper cabling is the least expensive way to interconnect high-speed systems. Additionally, the highest volume of links is within racks, at reaches of about 2 meters, connecting servers and storage. It's hard to beat the cost of a copper wire, a solder ball and a tiny PCB, all built on automated machines. More complex technologies such as optical fibers, GaAs VCSEL lasers, SiGe control ICs, InP lasers or Silicon Photonics, which all require submicron alignment tolerances, manual labor and a vast assortment of technology piece parts to assemble, cost much more than DACs but support longer reaches.

Besides low price, the other big reason for DAC's enduring popularity is that it consumes almost zero power. Several studies show that one Watt saved at the component level (e.g., chip or cable) translates to between 3 and 5 Watts at the facility level. The Wattage multiplier factors in all the power distribution losses from 100 KV street lines down to 3 Volts, the cooling fans in every server chassis in a rack, and all the intermediate fans on the way to the rooftop A/C, just to power and cool that one extra Watt. Mellanox Active Optical Cables consume about 2.2 Watts; transceivers, 2.6 to 4.5 Watts; DACs, near zero! Now, multiply these savings by 100,000 cables, add a few dollars saved on each cable in capital acquisition expenses (Capex) and power consumption operating expenses (Opex), and the savings add up fast! Large data centers use 25MW and spend upwards of $4 million per month on electric bills!

These low-cost, low-power and high-performance capabilities have led DAC-in-the-Rack to become very popular in hyperscale, enterprise, storage and many HPC systems. DACs create the link between the network adapters in servers and storage and the top-of-rack switches. Because the number of cables needed in just one rack can be large, even a small performance or cost difference becomes very important. Server-to-storage links carry the highest value traffic in the entire data center. All this is especially true in large data centers that deploy tens or hundreds of thousands of cable links.
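A back-of-the-envelope sketch of the Opex arithmetic above, using the figures quoted in this section (about 2.2 W per AOC versus near zero for DAC, a 3-5x facility multiplier, 100,000 cables); the electricity price is our assumption for illustration.

```python
# Facility-level power savings from choosing DAC over AOC,
# using the figures quoted above.
CABLES = 100_000
WATTS_SAVED_PER_CABLE = 2.2        # AOC ~2.2 W vs DAC ~0 W
FACILITY_MULTIPLIER = 4            # midpoint of the 3-5x rule of thumb
PRICE_PER_KWH = 0.10               # assumed electricity price, $/kWh

facility_kw = CABLES * WATTS_SAVED_PER_CABLE * FACILITY_MULTIPLIER / 1000
annual_cost = facility_kw * 24 * 365 * PRICE_PER_KWH

print(f"facility load avoided: {facility_kw:,.0f} kW")
print(f"annual Opex avoided:  ${annual_cost:,.0f}")
```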

Backwards Compatibility

Mellanox DAC and systems hardware are also line-rate backwards compatible. For example, the Ethernet SN2700 100G switch or the 25G/100G ConnectX-5 network adapter card can run at older 1G and 10G line rates as well as the newer 25G rate. The same goes for InfiniBand equipment with 10G, 14G and 25G line rates. This enables connecting slower or older equipment to newer, faster systems without issues, providing a smooth upgrade path. Another reason for DAC cable popularity is the vast array of configuration options available to create links between old and new hardware in every configuration.

Mellanox LinkX Offers 18 Different DAC Options!

Why so many options? To minimize costs at every connection and embrace the different speeds of various equipment types, both new and old. When systems are fully populated with DAC cables in racks, there is so much cabling that even the LEDs on the servers and switches inside the rack are nearly invisible and blocked out! Mellanox offers six different physical cabling schemes for interconnecting switches and network adapters to subsystems using SFP and QSFP DAC cables and port adapters:

- SFP-SFP cables
- QSFP-QSFP cables
- QSFP-Quad SFP breakout cables
- QSFP-Dual QSFP breakout cables
- SFP-QSFP adapter cables
- QSA: QSFP-SFP mechanical port adapter used with SFP cables

[Figure: 10G & 25G QSA port adapters and QSFP-SFP cable; straight cables (25G SFP28 to 25G SFP28, 100G QSFP28 to 100G QSFP28); breakout/splitter cables (100G QSFP28 to dual 50G QSFP28, 100G QSFP28 to quad 25G SFP28)]

Multiply the six schemes by two for the 10G and 25G line rates, which totals 12 different Ethernet DAC options. Add to that six different InfiniBand QSFP-QSFP DAC cables in HDR (4x50G), HDR100 (2x50G breakout), EDR (4x25G), FDR (4x14G), FDR10 (4x10G) and QDR (4x10G) rates, for a total of eighteen different DAC interconnect options. There is even one more if you want to include 14G-based Ethernet, which uses 14G FDR InfiniBand ICs to transport the Ethernet protocol. Called VPI and unique to Mellanox, it enables 4x14G, or 56G, Ethernet versus the standard IEEE 40G.

At SC'16 in November, Mellanox announced the 200Gb/s HDR Quantum switches, ConnectX-6 network adapters, HDR200 QSFP28 DAC cables and a 1:2 splitter cable (HDR200-to-dual HDR100), adding still more ways to create the most cost- and performance-optimized network links available for the InfiniBand and Ethernet protocols.

The graphic below illustrates a data center rack consisting of servers and storage linked via Mellanox 25/50/100G network adapters to Mellanox Top-of-Rack switches using multiple DAC cabling options in single- and quad-channel versions.

QSA: QSFP-to-SFP Adapter Enables Many Options

Sometimes the simplest products solve big problems. The QSA is a mechanical port adapter that enables plugging a single-channel SFP device into a quad-channel QSFP port. Available in 10G and 25G versions, it passes only one channel through the QSFP port. Shown below, the QSA enables all kinds of interconnect devices from 1G and 10G to 25G, in DAC and AOC cables as well as multi-mode and single-mode SFP-based transceivers, spanning reaches from 0.5 meters to 10km. All of these options can plug into a QSFP28 port in a switch or network adapter.

[Figure: QSA QSFP-to-SFP adapter]

[Figure: The QSA adapts a multitude of interconnect types and speeds]

BER Designed for High-Performance Computing (HPC)

It's been said that nearly anyone can build a 10G DAC cable. But not everyone can build one that works perfectly at blazing-fast 25G speeds, operates for many years at the high temperatures and conditions found in modern data centers, and does not induce bit errors into the data streams. DAC cables are most frequently used to link high-value data from compute servers, so maintaining the lowest-error interconnect is critically important. All Mellanox DAC cables are designed to HPC InfiniBand supercomputer Bit Error Ratio (BER) standards (even our Ethernet DACs), which require at most one bit error in 10^15 bits transmitted (expressed as 1E-15). The IEEE Ethernet industry standard is a BER of 1E-12, one bit error in 10^12 bits transmitted, or about 1,000 times more errors than Mellanox's standard DAC cables allow. All Mellanox DAC cables are tested to a BER of 1E-15. Which cable would you choose to send your electronic paycheck over?

Too many bit errors from poor-quality DAC cables mean data packets get dropped and data must be retransmitted, making 100G more like 85G. And most operators won't even know it!
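To put those BER figures in perspective, a quick sketch of the expected error counts on a saturated link (pure arithmetic; actual links are rarely 100% utilized):

```python
# Expected bit errors per day on a saturated link: errors = rate * time * BER.
SECONDS_PER_DAY = 86_400

for label, rate_bps in (("25G", 25e9), ("100G", 100e9)):
    bits_per_day = rate_bps * SECONDS_PER_DAY
    for ber in (1e-12, 1e-15):
        print(f"{label} link at BER {ber:.0e}: "
              f"~{bits_per_day * ber:,.1f} errors/day")
```

At 100G, a 1E-12 cable can be expected to corrupt thousands of bits per day, while a 1E-15 cable stays in the single digits.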

Just Use FEC to Clean it up!

Not so fast! We've heard many data center operators say, "We'll just use FEC to clean up the errors." In the server rack, the use of Forward Error Correction (FEC) circuits is not recommended at reaches under 2 meters per the latest IEEE spec at 25G line rates, and 2 meters is the most common DAC reach for linking high-value servers! FEC adds about 120ns of delay each way. For server uplinks, where all the traffic is, this delay can really slow things down. FEC can detect and correct only so many errors before it becomes overloaded and forces a packet retransmit. Server uplinks are the most important links to keep error free, as they account for 65 percent of total hardware costs and carry most of the data traffic in a data center, since that is where all the data is processed. So, keeping these links efficient and error free is very important to maintaining high throughput and optimal ROI.

More Value: Extending the Reach Past 3 Meters

Mellanox DAC cables can typically reach significantly further than competitors' DAC cables, which often just barely achieve the IEEE standard of 3 meters at a BER of 1E-12. Without using FEC on the host, Mellanox DAC cables can reach as far as 5 meters (16 feet), which is enough to span 3-4 server racks. With competitor cables limited to 3 meters, data center operators have to resort to more expensive AOCs or optical transceivers for longer reaches, proving that going cheap is usually more expensive.

Note: The use of FEC, the types of FEC, cable thickness and lengths are currently hotly contested subjects in the 25G industry and the IEEE, with no firm decisions yet, so stay tuned!
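For a sense of scale on the FEC delay quoted above, a small sketch of how the roughly 120ns-per-direction penalty compounds across a multi-hop path; the hop count is an illustrative assumption.

```python
# Round-trip latency added by FEC across a multi-hop path,
# using the ~120 ns per direction figure quoted above.
FEC_DELAY_NS = 120          # per direction, per link
hops = 5                    # assumed server->leaf->spine->leaf->server path

round_trip_ns = 2 * hops * FEC_DELAY_NS
print(f"FEC adds ~{round_trip_ns} ns round trip over {hops} hops")
# ~1200 ns: significant when a DAC link itself adds near-zero latency.
```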

Zero Latency Delays

InfiniBand market requirements are much more stringent about signal quality than Ethernet markets, as InfiniBand systems are all about minimizing latency delays. So, InfiniBand markets avoid the use of FEC, which can cost 120ns each way to clean up data errors. Since DAC cables have no electronics or opto-electronic conversion in the data path, as optical devices do, DAC latency delays are near zero. In big data centers with complex cabling configurations, the latency delay of each hop can rapidly add up across all the interconnects that data must pass through, so minimizing it is of key importance to data center operators today.

Some things in life you don't go cheap on: eye surgeons, parachutes and DAC cables.

Learn from DAC Disasters

Many inexpensive DAC cables use shoddy manufacturing techniques, sample testing (maybe 1 in 10 cables tested) and less electrical shielding in the cable to save costs. Often the signal sits right on the edge of failure with little margin for error. Small changes in the cable can push the signal over the edge and cause loss of the link, or, worse, induce random intermittent data error bursts. The result in many installations is the dreaded "DAC Disaster." This is where going cheap becomes really expensive, once you factor in system down-time and chasing down intermittent signal losses and drops from low-quality cables. Link drops have occurred simply from moving a cable a few inches to see the port number on the switch: the signaling is near the margin limit, the shielding at the bend in the cable opens, the signal squirts out, and the link drops. Just try to diagnose that problem! Some installations have had to completely replace their DAC cabling because of going cheap. Mellanox DAC cables' BER of 1E-15 without host FEC enabled (versus the IEEE standard of 1E-12 with host FEC enabled) means there is a lot of signaling margin left to absorb signal losses and random or burst-mode noise.

The graphic below shows all the different Mellanox cabling options for linking rack systems to Top-of-Rack switches for both the InfiniBand and Ethernet protocols. InfiniBand primarily uses a 4-channel link and, only recently with HDR100, a 1:2 splitter cable. DAC cables can also link subsystems directly to other subsystems without going through a switch. Shown at 25G line rates, the switches, network adapters and DAC cables are all available at 10G line rates too.

[Figure: DAC uses in Ethernet and InfiniBand]

Key DAC cable features:
- Lowest cost high-speed interconnect solution
- Acts as a plug & play cable
- Available in multiple configurations for every application, from 0.5 to 7 meters
- Point-to-point links connect racks together; breakout cables link switches to servers and storage systems
- QSA adapter enables linking single-channel devices to quad-channel ports in switches and network adapters

Mellanox DAC Manufacturing

Most DAC manufacturers only build DAC cables. Mellanox designs and manufactures all of its own switch systems, network adapters, DAC and AOC cables and optical transceivers; not only the systems, but also the ICs that go into the switches, network adapters and transceivers, including advanced technologies such as Silicon Photonics. This vertically integrated, end-to-end approach ensures everything is designed to work optimally together, and it enables Mellanox to offer an unparalleled out-of-the-box quality experience for our customers. Mellanox has multiple facilities in multiple geographic regions that manufacture DAC cables in high volumes. At Mellanox, so-called "plug & play" means "plug in and walk away," not the usual "plug & play all day" needed to get things to work.

Every Cable Tested in Real Systems

As Mellanox is also a switch and network adapter systems company, we test every DAC cable in real switching and adapter systems, many at a time, for extended periods under the raised temperature conditions found in actual systems deployments. This is unlike most competitors, who typically test one cable at a time (or sample test) on a technician's bench for a few minutes using manual labor and expensive test equipment. To save costs, many manufacturers don't even test every cable, sample testing one in a lot of 5 or 10 and leaving the final test burden on the buyer. Mellanox tests every DAC cable to a BER of 1E-15, one thousand times better than competing Ethernet cable suppliers. So, there is a lot of spare signal margin in Mellanox cables, rather than just barely qualifying and operating at the edge as many competitor cables do.
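Testing to 1E-15 is demanding simply because of how many bits must be observed. A common statistical rule of thumb: to claim a BER below B with roughly 95% confidence after observing zero errors, about 3/B bits must be sent. A quick sketch under that assumption:

```python
# Time needed to verify a BER bound with ~95% confidence (zero
# errors observed), using the n = 3/BER rule of thumb.
def test_time_s(ber: float, rate_bps: float) -> float:
    bits_needed = 3 / ber
    return bits_needed / rate_bps

for ber in (1e-12, 1e-15):
    t = test_time_s(ber, 100e9)   # 4x25G cable, all lanes at once
    print(f"BER {ber:.0e} at 100 Gb/s: {t:,.0f} s (~{t/3600:.2f} h)")
```

Verifying 1E-12 takes about 30 seconds per cable, but 1E-15 takes over 8 hours, which is one reason testing many cables in parallel inside real systems matters.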

DAC Cables: Why Choose Mellanox?

Some buyers attempt to shave a few dollars by building Frankenstein systems from multiple vendors' equipment, but they often end up paying big time in qualification, maintenance and reliability. At 1G this was easy to do, but at 25/50/100G it is another matter. In e-commerce applications, even one minute of down time can be very costly. The combination of high-quality cable materials, Mellanox-designed and manufactured cables, real-systems testing and a minimum standard of 1E-15 BER makes Mellanox LinkX cables a preferred choice for high-speed, critical systems and blazing 25G line rates, which today includes just about all applications! If you can find another way to interconnect switches and network adapters using DAC cables, we'd like to hear about it! DAC cables are a tool in the networking tool kit, and it's important to understand their advantages and limitations. In early 2017, Mellanox shipped its 100,000th 100G DAC cable to hyperscale, enterprise and HPC customers, and has built over 2 million DAC cables in total.

Why Mellanox? More cabling options; exceptionally low bit error and latency ratings; tested in real systems; designed and manufactured by Mellanox.

Key AOC cable features:
- Lowest cost optical solution
- Acts like a plug & play cable, but offers optical long-reach capability without optical connector hassles
- Available in point-to-point configurations to link racks together, and in breakout configurations to link individual servers and storage subsystems
- 100m reach using VCSEL-based optics; 200m using Silicon Photonics technologies

AOCS: ACTIVE OPTICAL CABLES

Active Optical Cables (AOCs) are the lowest priced optical links available. AOCs are generally used at reaches of 5-30 meters but can reach up to 100-200 meters. They are widely used in HPC and more recently became popular in hyperscale, enterprise and storage systems as a high-speed, plug & play solution with longer reaches than DAC cables.

What is an AOC? Optical transceivers convert electrical data signals into blinking laser light, which is then transmitted over an optical fiber. Optical transceivers have a detachable optical connector to disconnect the fiber from the transceiver. AOCs bond the fiber connection inside the transceiver end, creating a complete cable assembly much like a DAC cable, only with a 100-200 meter reach capability. AOCs' main benefit is the very long reach of optical technology while acting like a simple, plug & play copper cable. A complete AOC, with two transceiver ends and the fiber cable assembly, is usually priced only slightly above a single connectorized transceiver, for several reasons we'll show below!

What are AOC features and advantages? Compared to less expensive DAC cables, AOCs offer:
- Longer reach capability than DAC's 3-7 meter limits: up to 100 meters with multi-mode technology, and 200 meters with Mellanox single-mode Silicon Photonics technologies
- Lower weight, thinner cable and tighter bend radius, enabling more flexible configurations, increased airflow for cooling and easier system maintenance

[Photo: University of Texas HPC supercomputer using orange-colored AOCs]

Compared to more expensive optical transceivers, AOCs offer:
- A dramatically lower priced solution than two optical transceivers plus a connectorized fiber link
- Lower power consumption, at 2.2 Watts per end, versus 2.6 to 4.5 Watts for optical transceivers (4-channel)
- Lower operational and maintenance cost, with no optical connector to clean or repair

The photo above shows the use of thousands of AOCs in an HPC supercomputer at the University of Texas. The AOCs are precisely manufactured with all eight fibers cut to exact lengths to minimize the time skew between the four channels within the AOC cable end. This enables each of the four individual signal pulses to arrive at the transceiver end at exactly the same time. Believe it or not, the speed-of-light delay, traveling at 130,000 miles per second in the fiber, is a significant issue over a 10-meter AOC in an HPC supercomputer. High-speed computing is all about minimizing latency delays between critical components.
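That skew requirement translates into simple arithmetic: light travels at roughly 2x10^8 m/s in fiber (about 130,000 miles per second, as quoted above), so every millimeter of fiber-length mismatch costs about 5 picoseconds. A sketch comparing that to a 25G symbol time; the mismatch values are illustrative:

```python
# Channel-to-channel skew from fiber length mismatch inside an AOC.
LIGHT_IN_FIBER_M_S = 2.0e8          # ~130,000 miles/s, as quoted above
UI_25G_PS = 1e12 / 25.78125e9       # one 25G symbol time: ~38.8 ps

for mismatch_mm in (1, 5, 10):      # illustrative trim tolerances
    skew_ps = (mismatch_mm / 1000) / LIGHT_IN_FIBER_M_S * 1e12
    print(f"{mismatch_mm:>2} mm mismatch -> {skew_ps:5.1f} ps skew "
          f"({skew_ps / UI_25G_PS:.2f} UI at 25G)")
```

A 10mm length error already costs more than a full symbol time at 25G, which is why the fibers are cut to exact lengths.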

How AOCs Differ from Other Interconnect Solutions

Permanently attaching the fibers is a seemingly simple change, but it yields a surprisingly large number of technical benefits and cost advantages; enough to create an entirely new category of interconnect products. Since the optics are contained inside the cable, designers do not have to comply with the IEEE or IBTA industry standards for transceiver interoperability with other vendors. This gives designers complete freedom to pick and choose the lowest cost, best performing technologies, since the cable is a closed system with a predefined length. All of this results in dramatic cost and price reductions. Here are some of the results this simple change enables:

1. Lowest priced optical interconnect available: near half the price of a single optical transceiver, a saving of much more than just the cost of the deleted optical connectors
2. Plug & play: ease-of-use cable features like DAC cables, only with optical long reaches
3. Long reach: up to 100- and 200-meter reach, depending on the technology
4. Lowest optical power consumption per end: significantly lower than connectorized transceivers, saving operating expenses in power consumption and cooling
5. No optical connectors to clean and maintain: saves operating expenses and increases reliability
6. Optical isolation: isolates electrical systems from the ground loops that can occur with copper DAC cables, a reliability advantage

Achieving Lower Product Cost

1. Testing costs: Optical testing accounts for a large percentage of the total cost of manufacturing a transceiver. AOCs can be tested in a switch system as an electrical test: plug the cable in, run test patterns and data, and come back later to look at the results. If good, ship! If not, scrap. Optical transceivers, on the other hand, are much more complex, requiring $500,000 of optical test equipment per station, a very experienced (i.e., expensive) test technician, and a lot of manual time on the test bench. AOCs do away with all of this, since the testing is only in the electrical domain. Mellanox uses its scratch & dent switches to test AOCs, which is one way we achieve a bit error ratio (BER) of 1E-15 versus the 1E-12 IEEE standard.
2. Design freedom: Since the optics are contained inside the AOC cable, designers can utilize the lowest cost materials and transceiver designs. Besides deleting four MPO optical connectors (two per end), for example, Mellanox's Silicon Photonics AOC uses only one laser versus four lasers for a single-mode AOC. VCSELs that don't qualify for transceivers can be reused, and low-cost, short reach (orange colored) OM2 fiber can be used for sub-20 meter reaches, saving more expensive OM3 and OM4 fiber for longer reaches.
3. Freedom from industry standards: AOCs must comply with the IEEE, IBTA and SFF industry standards for electrical, mechanical and thermal requirements, but the hardest part is the optical requirements. Since the optics are contained inside the cable, they do not have to meet any optical standards, which allows a lot more design and material freedom and eliminates the costly optical testing.

AOCs Offer Lower Operational Hidden Costs

1. AOCs do not have optical connectors to manually clean every time they are removed; a single speck of dust inside the connector can completely block the 50-um or 9-um diameter fiber light transmission area. In a transceiver link, there are two fiber ends and two transceiver ends to clean. Besides the personnel cost, the connector cleaners can cost upwards of $250 each.
2. AOCs don't use MPO or LC optical connectors, which, in crowded racks, can be dropped and the fiber end scratched, rendering them useless.
3. Optical connectors can channel an electrical static charge that builds up on a long plastic cable and can destroy the sensitive optical transceiver electronics when connected.
4. An AOC is a plug & play cable solution, rather than a plug, assemble and clean solution as with optical transceivers. Optical transceivers, fibers and connectors also have many different and complicated product variants, which must all be exactly matched to the specific transceiver used, with spares kept and a technician trained in the specifications.
5. AOC cables have a short bend radius and much thinner cable thickness than most DAC cables. This makes them easier to deploy and frees up a lot of space for increased airflow cooling in crowded systems.
6. MPO optical connectors are known to insert half way into the transceiver and look fine to the technician, later creating problems and maintenance issues.
7. Lastly, there are big operational savings in power consumption costs: one Watt saved at the component level translates to 3-5 Watts at the data center facility level.

This is when all the chassis, row, room and facility fans and air conditioning equipment are included, along with the electrical power to drive them, not counting the repair and maintenance! AOCs are less complex than optical transceivers and offer lower power consumption. Mellanox designs its own AOC ICs, so it can offer incredibly low power consumption ratings of 2.2 Watts per end, compared to competitors at 2.5-3W.

Why Choose Mellanox AOCs? Mellanox not only designs its own AOC control ICs, such as the TIA/laser driver, but also designs, manufactures and tests every AOC cable. This vertical integration of components enables maximizing the signal margin at every point. Mellanox was one of the first adopters of AOCs in high-speed HPC supercomputers and has a long history and experience in AOC deployment and manufacturing.

How Are AOCs Used in Modern Data Centers?

While AOC reaches can extend to the limits of the optical technology used (100-200 meters), installing a long 100-meter (328 foot) cable, complete with an expensive transceiver end, is difficult in crowded data center racks, so the average reach typically used is between 3-30 meters. Only one "oops" per cable is allowed: damaging the cable means replacing it, as it cannot be repaired in the field. AOCs are typically deployed in open access areas, such as within racks or in open cable trays, for this reason.

Mellanox's InfiniBand AOCs started out in about 2005 with DDR (4x5Gb/s) for use in the Top10 HPCs and quickly became the preferred solution for the large InfiniBand HPCs in the Top100, where Mellanox is the market leader. Today, AOC use is the norm for QDR, FDR and EDR, joined by the LinkX 200G HDR line announced at the SC'16 event in November. The power and cost savings caught the eye of the Ethernet hyperscale and enterprise data center builders, and AOCs have since become a popular way to link Top-of-Rack switches upwards to aggregation layer switches such as End-of-Row and leaf switches. Several hyperscale companies have publicly stated their preference for AOCs for linking Top-of-Rack switches. Additionally, single-channel (SFP) AOCs have become very popular in high-speed NVMe storage subsystems. Some hyperscale builders run 10Gb/s or 25Gb/s AOCs from a Top-of-Rack switch to subsystems tens of meters away, much further than the 3-7 meter reach of DAC cables.

On the down side, AOCs have expensive transceiver ends and are difficult to install in crowded areas over long reaches, hence the typical deployment of 3-30 meters in places with easy access to cable trays or open areas. When an AOC fails, most operators simply abandon it in place and run another AOC.

Above is an example of how AOCs are typically used inside system racks to link subsystems together, between switches, and across systems in rows. Below is a more detailed view of Ethernet configurations showing Mellanox's LinkX 10Gb/s and 25Gb/s based AOCs, Spectrum switches, and ConnectX-3, -4 and -5 QSFP and SFP network adapters. Additionally, Mellanox recently announced two new AOCs that break out to 25G and 50G ends: a 100G QSFP28 AOC that splits out into four 25G SFP28 ends, and another cable with a dual 50G QSFP28 breakout.

SHORT REACH OPTICS IN MODERN DATA CENTERS

Short Reach (SR) multi-mode optics are the lowest priced optical interconnects available today that use detachable optical connectors to separate the transceiver from the optical fibers. Although AOCs and SR transceivers both support 100m reaches, AOCs are much less expensive to manufacture but, due to installation difficulties, are generally used at reaches under about 30 meters, whereas SR optics are frequently used at reaches up to 100 meters. The key advantage of SR transceivers is that the transceiver can be separated from the optical fiber infrastructure, which may be permanently installed in structured cabling pipes, under computer room floors, or even between multiple floors.

Short reach optics are not new, and they have a long history of different fibers, connectors and transceiver types at different data rates, creating a blizzard of different parts with complex features specific to each. But modern data centers have simplified things and zeroed in on the single-channel SFP with the duplex 2-fiber LC optical connector, and the four-channel QSFP with the 8-fiber MPO connector. Mellanox offers both of these configurations at 10G and 25G line rates, with quad versions at 40G and 100G.

Multi-mode fibers have a large 50-um diameter light-carrying fiber core. This makes SR transceivers easier and less expensive to manufacture compared to single-mode optics, whose tiny 9-um fiber cores are difficult and expensive to build with. For this low-cost reason, multi-mode short reach optics are a very popular optical solution in modern hyperscale, enterprise and storage data center applications. A large percentage of the links in a data center are at reaches less than 60 meters, traveling up and down rows of racks and well within the 100-meter reach of short reach transceivers.

[Figure: Mellanox SR4 and SR short reach transceivers and connectors]

VCSEL Multi-mode Lasers & Fibers

Short reach transceivers use a laser that is built on a gallium arsenide (GaAs) semiconductor wafer and constructed perpendicular to the surface of the wafer. When excited by electrons, GaAs emits light at the 850nm wavelength, which is channeled into a vertical cavity on the wafer surface where it resonates and becomes laser light. Hence the name: Vertical Cavity, Surface Emitting Laser, or VCSEL.

Multi-mode optics employ a large core diameter (50-um) optical fiber that is easy to interface to VCSEL lasers and detectors, so costs are much lower than for single-mode optics, whose tiny 9-um core diameter fiber is difficult to align. But the SR laser pulse tends to scatter into multiple transmission paths, or modes (hence the name multi-mode). The scattered pulse in large diameter fibers becomes unrecognizable after about 100 meters, so the IEEE standards body sets the limit at 100 meters, assuming four connectors in the run. Multi-mode can reach 400m, but this requires specialized lasers, fibers and connectors priced near those of single-mode transceivers. For reaches longer than 100m, single-mode optics are generally used.

Key SR/SR4 transceiver features:
- Connectorized optics: fibers can be disconnected from the optical transceiver
- Point-to-point and breakouts: an SR4 transceiver can operate as a single four-channel link, or as four separate channels with links to individual subsystems
- Available in 10G & 25G line rates, and 40G & 100G

Connectorized Optics: A Key Feature

Many data centers have structured cabling where the fiber infrastructure is fixed, installed in cabling pipes and under raised floors, and integrated into optical patch panels used to manually reconfigure the fiber run end points. Sometimes fibers run to other system rows, rooms, floors, or even other buildings, necessitating the ability to disconnect the fibers from the transceivers installed in the systems. This is something that DAC and AOC integrated cables cannot do, as the wires or fibers are integrated into the plug or transceiver end. Multi-mode optics use the 2-fiber LC and 8-fiber MPO optical connectors.

Point-to-Point Applications: ToR-to-Leaf/Spine/EOR Switches

One of the main applications for SR and SR4 transceivers is linking Top-of-Rack (ToR) switches to other parts of the network, such as aggregation switches, middle- and end-of-row switches, and leafs in a leaf-spine network. These are typically used as high bandwidth busses of four-channel SR4s at 40G or 100Gb/s bandwidths. Single-channel SR 10G and 25G transceivers can be used to link server and storage systems to a ToR switch within a rack or adjacent racks. Since the number of links is very high the closer one gets to the server, low cost optics are important, and multi-mode optics are well suited to these applications, where the reaches are relatively short, spanning within a rack or along a single row.

While several enormous hyperscale operators have made a lot of noise in the press about moving to single-mode fiber, many big hyperscale and enterprise installations still operate as groups of small system clusters where all the systems are well within the reach of 100m multi-mode fiber. Interestingly, while multi-mode fiber is about three times more expensive than single-mode fiber, single-mode transceivers are 50 percent to 10X more expensive than multi-mode transceivers. Single-mode transceivers are difficult to build, but offer reaches up to 10km versus only 100m for multi-mode.
Single-mode fiber is the mainstay of the telecom industry, linking cities and countries together, and hence is manufactured by the thousands of miles on spools.

[Figure: 100G QSFP28 breakouts to dual 50G QSFP28 and quad 25G SFP28s, using 8-fiber MPO and 2-fiber LC optical connectors]

Breakout Applications: ToR QSFP Breakouts to SFP Servers & Storage

Linking Top-of-Rack switches down to servers and storage subsystems within the same rack is another popular use for SR and SR4 optics. In the past, SR4 transceivers only transferred four channels at a time to another SR4 in a switch-to-switch application. Newer transceiver models can split the four channels into individual single channels that connect to different systems and operate independently. This is important when the link reach needed is greater than the 3-meter capability of DAC copper cables, perhaps spanning more than one rack. The passive fiber breakout cable has a single 4-channel MPO on one end, connecting to the SR4 transceiver, and four duplex LC optical connectors on the other end, connecting to four separate SFP transceivers, each with its own 100m fiber run.

Increased Airflow; Tighter Bend Radii; Easier Rack Maintenance

Breakout optics also provide a linking alternative in the rack that is much smaller than DAC cables, thanks to the tiny fiber diameters compared to DAC cabling. All the optical cables supporting a 32-port switch together have a diameter of less than 2 cm (3/4 inch), compared to about 10-13 cm (4-5 inches) with copper DAC cables. Mellanox sells short reach multi-mode optics in 10G and 25G line rates and single- and four-channel configurations, enabling 10G to 100G of link bandwidth. These are available in SFP and QSFP form-factors that use LC and MPO optical connectors, respectively. Thirty-two optical fiber cables would blow off the table with a sneeze, but 32 DAC cables could qualify as exercise equipment!

Similarly, two 50Gb/s links can be created from one 100Gb/s port using an MPO breakout cable with two MPOs connected to 50Gb/s SR2 or SR4 transceivers using only two channels each (2x25G), as shown in the following figure.

[Figure: 64 ports of 50G using Mellanox Ethernet switches with breakout DAC and AOC cables and transceivers. Options shown: 100G SR4 1:2 breakout using a multi-mode fiber breakout cable; DAC 1:2 breakout cable (100G-to-dual 50G); 100G PSM4 1:2 breakout using a single-mode fiber breakout cable; AOC 1:2 breakout cable (100G-to-dual 50G). A 32-port 100G switch yields 64 ports of 50G; a 16-port 100G switch yields 32 ports of 50G.]

Can 50G Cost Less Than 40G?

Answer: yes. Use breakout cables, or transceivers with splitter fibers, and split the ports on a 100G Mellanox SN2700 or SN2100 ToR switch. Top-of-Rack switches such as the Mellanox SN2700 support 32 QSFP28 ports of 100G each; the SN2100 is half the width and offers 16 ports of 100G. By using DAC splitters, AOC splitters, or transceivers with splitter fiber cables, the switch ports can be split into a total of 64 (or 32) 50G ports. In this way, one 32-port or 16-port Mellanox SN2700/2100 switch makes 50Gb/s less expensive overall than a 32-port 40Gb/s switch. With this approach, you end up with 64 ports instead of 32, each running 25% faster! Additionally, it provides an upgrade path to 100G by simply changing the network adapters and the 50G cables or transceivers to 100Gb/s. 32 switch ports at 40Gb/s with no upgrade path, or 64 ports at 50Gb/s using breakout cables with an upgrade path to 100G: 2.5X the bandwidth at less than a 30%-50% price premium. You do the math!
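Doing that math explicitly: splitting a 32-port 100G switch yields 64 ports of 50G, and the aggregate bandwidth versus a 32-port 40G switch works out as follows (simple arithmetic on the port counts quoted above):

```python
# Aggregate bandwidth: 32x100G split to 64x50G vs a 32x40G switch.
ports_40g, rate_40g = 32, 40
ports_50g, rate_50g = 64, 50           # 32 QSFP28 ports, each split 1:2

bw_40g = ports_40g * rate_40g          # 1,280 Gb/s
bw_50g = ports_50g * rate_50g          # 3,200 Gb/s

print(f"40G fabric: {bw_40g} Gb/s; 50G fabric: {bw_50g} Gb/s")
print(f"bandwidth ratio: {bw_50g / bw_40g:.1f}x")         # 2.5x
print(f"per-port speedup: {rate_50g / rate_40g - 1:.0%}")  # 25%
```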

SINGLE-MODE OPTICS: INCREASING POPULARITY

What is single-mode optics? Single-mode optics is all about long reaches, enabled by using single-mode fiber, the same fiber used by the telecom industry to send Internet data between cities and across oceans. Traditionally, single-mode transceivers have been exceptionally expensive, but hyperscale builders began ordering them in huge volumes, driving down the cost.

Single-mode transceivers use various technologies to achieve price and reach targets. Lasers are made using Indium Phosphide (InP) and emit at the 1310nm and 1550nm wavelengths. Lasers can be directly modulated by turning electrical currents on and off (called DMLs), or continuous-wave lasers can be used with the light modulated externally (called EAs or EMLs). Lasers, modulators, waveguides, multiplexers, detectors and other elements all have to be integrated or precision aligned to bacteria-scale tolerances of hundreds of nanometers. Difficult to do at all; harder still to do a million times reliably at an attractive price.

Silicon Photonics Solves Costly Alignment Manufacturing Problems

Since silicon is transparent at both the 1310 and 1550nm wavelengths, silicon wafers can be built that channel light into waveguides in the wafer surface and on to various optical elements embedded in the wafer's material system. Lasers are built using InP and attached to the Silicon Photonics chip, and the chip is then attached to optical fibers. Called Silicon Photonics, it is primarily a manufacturing technique that uses semiconductor processes to precisely align optical elements, thereby dramatically reducing manufacturing costs for single-mode transceivers, where alignment can account for upwards of 50% of the total manufacturing cost. Mellanox is a leading supplier of Silicon Photonics technologies, in both transceivers and Silicon Photonics engine components. Mellanox currently offers 1550nm-based PSM4 transceivers in a QSFP28 package with a 2km reach capability. Mellanox builds PSM4s using its internally developed Silicon Photonics technologies in southern California, where it has been building and shipping Silicon Photonics transceiver and Variable Optical Attenuator (VOA) products for over a decade.

[Figure: Mellanox-designed PSM4 Silicon Photonics and control ASICs]

Single-mode transceiver advantages:
- Enables using low-cost fiber at long reaches, up to 10km
- Makes the type of fiber and the reach used anywhere in the data center a non-issue
- Can support hundreds of wavelengths in a single fiber
- Is largely data-rate agnostic
- 1310/1550nm wavelengths enable using Silicon Photonics manufacturing technologies

The main advantage of single-mode transceivers is that they enable using inexpensive single-mode fiber, which has a very long reach capability. Recently, as transceiver costs drop, single-mode is also being used for in-rack breakout applications like the DAC, AOC and multi-mode transceiver configurations mentioned above. The long reach capability makes it one of the most flexible interconnect types, as it makes reaches in a data center a non-issue. By employing a tiny 9-um core optical fiber, the data signal pulse is recognizable at the receiver over an astounding 100km reach, which is why the telecom industry uses single-mode fiber to connect cities and countries together. No kidding: single-mode fiber is less expensive than dental floss, yet it regularly carries the entire internet traffic across oceans.

Large data centers are moving to deploy single-mode fiber as fast as they can, to get rid of the numerous multi-mode fiber short reach hops and the fiber/connector/switch complexities required to send data past 100m. These are costly, expensive to maintain, and must be constantly upgraded with each line rate speed advance. With speed advances happening every 2-3 years instead of every 5-8, data center operators are looking to eliminate these costs. Multi-mode fiber is tuned to specific data rates and bandwidths: every increase in data rates means the multi-mode fiber reach becomes shorter or the fiber needs to be upgraded (OM2, OM3, OM4 and now OM5). Add to this the complexity of many different types of MPO connectors with different quality ratings and key-up/key-down, polarity, male/female and crossover issues. Large data centers have very complex networks inside that create optical losses, enormous physical footprints as long as 1km, and a need to connect to the metro area network using 10km transceivers. Lower costs and long reach are what is driving the popularity of single-mode optics in data centers. To go 200m, four SR4 transceivers would be needed, with a network switch in between; with single-mode optics, just two transceivers, and they could send data 10km. While single-mode transceivers are more expensive than multi-mode transceivers, when considering a long link, the total cost of fiber, connectors, transceivers and an extra network switch is less with single-mode optics.

Hyperscale data centers want to build a huge data center infrastructure once (some costing $1 billion) and connect everything with inexpensive single-mode fiber, from the core router across the data center all the way to each individual server inside the racks; then never touch it again, instead of, as with multi-mode fiber, changing the infrastructure every few years. Transceiver pricing is holding them back, but Silicon Photonics may provide a solution.

What Data Centers are 10 km, or 6.25 Miles, Long? While most data centers are not 2km or 10km long, the km spec is another way of stating the optical power of the laser. Measured on a logarithmic scale in decibels (dB), the Mellanox PSM4 offers about 3.3dB of optical power budget (LR4 offers 6dB), which is enough to push through hundreds of meters of lossy fiber infrastructure consisting of dirty and/or misaligned optical connectors, jumpers, optical patch panels and other interferences in the light path. This is like needing a very powerful flashlight to shine through a dense forest of twigs, branches and leaves in the way, even though the actual distance is relatively short.
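A rough sketch of that link budget arithmetic: losses in dB simply add along the path, so the budget is spent on fiber attenuation plus every connector and patch panel in the way. The per-element losses below are typical textbook values we are assuming for illustration, not Mellanox specifications.

```python
# Spending a single-mode optical power budget, in dB. Losses add.
BUDGET_DB = 3.3                 # PSM4 budget quoted above (LR4: 6 dB)
FIBER_LOSS_DB_PER_KM = 0.4      # assumed typical SMF loss at 1310 nm
CONNECTOR_LOSS_DB = 0.5         # assumed per mated connector/patch

link_km = 0.5                   # a 500 m run across the data center
connectors = 4                  # patch panels and jumpers in the path

spent = link_km * FIBER_LOSS_DB_PER_KM + connectors * CONNECTOR_LOSS_DB
print(f"loss: {spent:.2f} dB of {BUDGET_DB} dB budget "
      f"-> margin {BUDGET_DB - spent:.2f} dB")
```

Under these assumptions, the fiber itself costs almost nothing; it is the connectors and patch panels (the "twigs and branches") that eat the budget.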
Multiplexing: Hundreds of different colored laser pulses can be sent simultaneously over a single strand of single-mode fiber. Telecom does 1,000; in data centers today, four is common, soon headed for 8 wavelengths. Additionally, single-mode fiber is data-rate agnostic, unlike multi-mode fiber: the same fiber can be used for 1G, 10G, 25G, 50G or 100G individual line rates with little impact on the reach. Using Silicon Photonics to reduce manufacturing costs and sending multiple 100G data streams with different colored lasers down a single strand of low-cost single-mode fiber is the technology goal of the future. Signal multiplexing is not limited to single-mode: some companies have recently introduced multiplexed multi-mode transceivers called SWDM, for Short Wavelength Division Multiplexing. These use four different colored VCSEL lasers multiplexed into a single multi-mode fiber, with duplex LC optical connectors.

Single-mode Transceivers in Modern Data Centers

The main single-mode transceiver types used in data centers today are:

- LR: 10G, 25G; SFP; 1 channel; 2 fibers; LC connector; 10km; 1310nm
- PSM4: 40G, 100G; QSFP; 4 channels; 8 fibers; MPO/APC connector; 500m-2km; 1550/1310nm
- CWDM4: 40G, 100G; QSFP; 4 channels; 2 fibers; LC connector; 2km; 1310nm
- LR4: 40G, 100G; QSFP; 4 channels; 2 fibers; LC connector; 10km; 1310nm

PSM4 sends each channel into separate parallel fibers, one per channel. CWDM4 and LR4 multiplex 4 channels, each with a different wavelength laser, into a single fiber and de-multiplex them at the receiver end, essentially sending a rainbow down the fiber. PSM4s are generally used at less than 500m, CWDM4 up to 2km and LR4 up to 10km.

[Figure: 100G PSM4, LR4, CWDM4 and 25G LR transceivers]

The PSM4 MSA specifies a 500m reach, but Mellanox's PSM4 can reach 2km, providing four times the industry standard reach and much more network design flexibility.

Numerous PSM4 Applications and Configurations for Any Need

The PSM4 has many different configuration and application uses and is one of the hottest selling transceivers in the hyperscale segment. It can bus 100G point-to-point over 2km, or it can be broken out using passive fiber splitter cables or a half-AOC hybrid (called a "pigtail") into dual 50G or quad 25G links for linking to servers, storage and other subsystems within a rack.

1550nm PSM4 Interoperability: While Mellanox's PSM4 uses the 1550nm wavelength, most PSM4 transceivers use a wide bandwidth detector, so transceivers with either wavelength can interoperate with ease.

PSM4 Breakouts to Servers & Storage: Besides long reach 2km point-to-point links, PSM4 channels can also be split out individually. The diagram below shows a 100G PSM4 transceiver split using a passive breakout splitter cable with an MPO on one end and either dual MPOs (50G) or quad LC connectors (25G) on the other ends. CWDM4s and LR4s cannot do this and can only bus 100Gb/s point-to-point.
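The PSM4-vs-CWDM4 trade-off described earlier (8 parallel fibers with a cheaper transceiver, versus 2 fibers with a pricier multiplexed transceiver) reduces to a break-even length calculation. All the prices below are illustrative assumptions, purely to show the shape of the math:

```python
# Break-even reach between PSM4 (cheap transceiver, 8 fibers) and
# CWDM4 (pricier transceiver, 2 fibers). All prices are assumptions.
PSM4_XCVR, CWDM4_XCVR = 300.0, 500.0   # $ per transceiver (illustrative)
FIBER_PER_M = 0.10                      # $ per fiber-meter (illustrative)

def link_cost(xcvr: float, fibers: int, meters: float) -> float:
    return 2 * xcvr + fibers * meters * FIBER_PER_M   # two ends + fiber run

for m in (100, 500, 1000, 2000):
    psm4, cwdm4 = link_cost(PSM4_XCVR, 8, m), link_cost(CWDM4_XCVR, 2, m)
    cheaper = "PSM4" if psm4 < cwdm4 else "CWDM4"
    print(f"{m:>4} m: PSM4 ${psm4:,.0f} vs CWDM4 ${cwdm4:,.0f} -> {cheaper}")
```

Under these assumed prices the crossover lands between 500m and 1km, consistent with the rule of thumb quoted earlier: PSM4 below about 500m, CWDM4 beyond.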

[Figure: Passive fiber breakout configurations]

The following diagram illustrates Mellanox's end-to-end system solutions, consisting of switches and network adapters with cables and transceivers, showing all the different uses of single-mode optics in modern data centers: from core routers as far away as 10km, all the way down to individual servers, using various single-mode transceiver and fiber combinations.

[Figure: 25G, 50G, 100G PSM4 transceiver applications]

Key single-mode transceiver features:
- Enables long reaches up to 10km, making reach a non-issue
- Employs inexpensive single-mode fiber
- Line-rate agnostic, supporting 100G-per-channel futures
- 1310/1550nm wavelengths enable employing Silicon Photonics to reduce manufacturing costs
- Enables multiplexing multiple channels into a single fiber

Why Choose Mellanox Single-mode Transceivers?

Mellanox designs its own PSM4 TIA and laser driver transceiver control ICs. In addition, the company has over 20 years' experience designing and building Silicon Photonics transceivers and VOAs in an in-house Silicon Photonics fab and an outside 12-inch fab. Mellanox designs, manufactures and tests every transceiver in live switching systems, unlike competitors who test with benchtop equipment. BER ratings are 1E-15, one thousand times better than the 1E-12 IEEE industry standard that competitors work to. Mellanox's Silicon Photonics 100G PSM4 offers a 2km reach, four times longer than the PSM4 MSA specification of 500m. The vertical integration of components, with everything manufactured by one company, enables maximizing the signal margin at every point, along with power consumption and transceiver reliability. Mellanox also offers the PSM4 transceiver components as individual parts, so OEMs can integrate 100G PSM4 transceivers into their designs at the chip level.

A GLANCE INTO THE FUTURE

There is an old saying in New England: "If you don't like the weather, just wait a minute." This saying is starting to apply to the high-speed interconnect space as well. Cloud computing, HPC and now converged Cloud/HPC are driving link speed improvements faster than at any other time in history. What used to take 5-8 years is now done in 2-3. Traditional enterprise data centers are just now moving from 1G/10G/40G to 25/50/100G, while Cloud and HPC are well into deploying 25/50/100Gb/s and soon 200G and 400G. Now, that's life in the fast lane!

Newly Announced 200G HDR InfiniBand AOCs

Mellanox announced a new line of LinkX DAC and AOC cables capable of running at a whopping 200G in a QSFP28, 4-channel x 50G configuration, available in 2H-2017. These cables support the newly announced 200G InfiniBand HDR200 and 100G HDR100 speed Quantum switches and ConnectX-6 host bus adapter cards (network adapters). For the first time in InfiniBand's history, we have a 1:2 splitter cable that enables a single 200G HDR port in a switch to be split into two HDR100 ports in host bus adapter cards in servers, storage and other subsystems, at 100G each. Using HDR-to-dual-HDR100 splitter cables, a 40-port Quantum InfiniBand switch can support 80 100Gb/s HDR100 links. The next speed hop will be called NDR, for 400G.

Big Changes for Ethernet & InfiniBand Going Forward

Going faster and faster encounters increasing complexities in electronics and optical physics. So, the transceiver industry is introducing several changes: sending 2 data bits per clock, increasing the number of channels, and changing the transceiver packaging and optical connectors. These changes, combined with future line rate advances to 100G PAM4, will enable 400G and 800G per transceiver in the future.

1. New signaling scheme: For the last 35-plus years, digital signaling has been based on digital ones and zeros, with one bit per defined clock pulse (called NRZ). Now the industry is moving to 2 bits per clock pulse by varying the amplitude of the pulse across four levels of 00, 01, 10, 11 (instead of just 1, 0), called PAM4 for 4-level Pulse Amplitude Modulation. The entire infrastructure of switches, network adapters, cables and transceivers must change to adopt this new technology. 50G PAM4 allows keeping the same low-cost 25GHz electrical infrastructure while transferring data at twice the rate, with 2 bits per clock pulse, or 2x25G=50G.
2. Move to 8 channels: Increasing the number of channels from four to eight provides more aggregate bandwidth, but makes everything larger and requires more electrical paths and thermal dissipation.
3. New transceiver packaging: QSFP28 supports only a few Watts with four channels, whereas the new packages offer 8 channels and 12-15W support. In July 2017, the SFP-DD MSA was announced to advance the SFP design from one channel to two and from 2W to 3.5W, enabling up to 100G and 200G in an SFP-DD form-factor using 50G or 100G PAM4 signaling. Several industry groups are battling for leadership over the next transceiver package type: Cisco is leading the QSFP-DD charge and Arista is leading the OSFP group. Each offers different backward compatibility, cooling and connector options, and more cryptic buzzwords than ever before!

[Figure: New transceiver packages (MSAs)]
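The arithmetic behind those roadmap numbers is simple: aggregate bandwidth = lanes x baud rate x bits per symbol. A sketch using nominal rates and ignoring FEC overhead:

```python
# Aggregate bandwidth = lanes x baud (Gbaud) x bits per symbol.
def aggregate_gbps(lanes: int, gbaud: float, bits_per_symbol: int) -> float:
    return lanes * gbaud * bits_per_symbol

configs = [
    ("100G today: 4 x 25G NRZ", 4, 25, 1),   # 1 bit per symbol
    ("200G: 4 x 50G PAM4",      4, 25, 2),   # same 25 Gbaud clock, 2 bits
    ("400G: 8 x 50G PAM4",      8, 25, 2),   # move to 8 channels
    ("800G: 8 x 100G PAM4",     8, 50, 2),   # faster 50 Gbaud electrical lanes
]
for name, lanes, gbaud, bps in configs:
    print(f"{name}: {aggregate_gbps(lanes, gbaud, bps):.0f} Gb/s")
```

Note how 50G PAM4 doubles the rate without touching the 25GHz electrical clock, which is exactly why the industry chose it as the first step.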

32 page 32 SUMMARY
The graphic below shows only part of Mellanox's complete, end-to-end portfolio of switches, adapters, DAC and AOC cables, and optical transceivers. Mellanox is one of the few companies in the data center business that designs switch and network adapter ICs, transceiver control and Silicon Photonics ICs, and sells complete system solutions of switches, adapters, cables and transceivers. The figure shows 10G/25G SR and 40G/100G SR4 multi-mode transceivers with breakout fibers used within server/storage racks, between system racks within rows, and in switch-to-switch networking infrastructures at reaches up to 100 meters. Also shown is the 100G PSM4 single-mode transceiver breaking out to two 50G transceivers, at a longer reach of 2km over single-mode fiber.

The modern data center has focused on DAC and AOC cables plus multi-mode and single-mode transceivers at 10G and 25G line rates, in single-channel SFP and quad-channel QSFP form-factors. Large and hyperscale data center builders are driving prices down and the rate of change up to unprecedented levels. The next phase of the industry, starting in 2018, will bring many changes in line rate, modulation techniques, number of channels and packaging schemes. The jump from 10G/40G to 25G/100G was fairly smooth because it required a minimal number of changes. However, the jump to 50G/200G and 100/200/400G entails very significant changes to nearly every aspect of switching, network adapters, DAC and AOC cables, optical transceivers and optical connectors.

Why Mellanox Cables and Transceivers?
Mellanox is vertically integrated and at the cutting edge of these technical developments, building switch and network adapter ICs, transceiver control ICs and Silicon Photonics as well as complete switching and adapter systems, including DAC and AOC cables and multi-mode and single-mode transceivers. Mellanox is a leader in DAC and AOC cable design and in both multi-mode and single-mode transceiver technologies, including Silicon Photonics. Mellanox is also a founding member of the new QSFP-DD, OSFP and SFP-DD MSA groups. Contact your Mellanox sales representative for availability and pricing of any of the 200 different cable and transceiver products on the Mellanox.com LinkX website. The site is simple to use: 3-4 clicks, start to finish, take you to the product brief PDF downloads, and there is also a link to buy online and have it delivered to your door. Visit the new Mellanox site at:
