Dell Networking S6000
High-performance 10/40 GbE Top-of-Rack Switch
Miercom Lab Testing Report
8 October 2013
Report 130815

Contents

1.0 Executive Summary
2.0 About the Dell Networking S6000 10/40 GbE Switch
3.0 Methodology
    3.1 Test Bed Diagram
    3.2 Hardware and Software Featured in Testing
4.0 Performance Testing
    4.1 RFC 2544 Throughput
    4.2 RFC 2544 Latency
    4.3 RFC 2889 Fully Meshed Throughput
    4.4 RFC 2889 Fully Meshed Latency
    4.5 RFC 3918 Layer 3 Multicast Throughput
    4.6 RFC 3918 Layer 3 Multicast Latency
    4.7 RFC 3918 Layer 3 Group Join Delay and Group Leave Delay
5.0 Scalability Test
6.0 Power Consumption and Efficiency Test
7.0 VDI Scalability Testing
8.0 Features
    8.1 Tool-less and Hot-swappable Maintenance
    8.2 Default Configuration

1.0 Executive Summary

Dell engaged Miercom to evaluate its Dell Networking S6000 high-performance 10/40 GbE Top-of-Rack/End-of-Row switch for high-bandwidth, low-latency deployments, such as virtualized data centers. Comprehensive, hands-on testing assessed performance and energy efficiency as well as scalability to 10,000 virtual desktop users in a Virtual Desktop Infrastructure (VDI) environment. Performance testing, which focused on throughput and latency, used the RFC 2544, 2889 and 3918 benchmarking methodologies.

Key findings include:

- Easily transmits frame sizes of 128 to 12,000 bytes at full line rate with low latency and zero loss in RFC 2544, 2889 and 3918 testing
- Verified table capacities support high port density and high performance: 16,384 IPv4 routes, 163,836 MAC addresses and 52,251 ARP entries
- RFC 2544 Layer 2 performance testing validated full line-rate throughput of 2.56 Tbps with all ports fully loaded and a forwarding rate of 1,464,007,507 frames per second (fps)
- Energy consumption ranged from 0.25 watt per Gbps for the smallest frame size tested (64 bytes) to 0.12 watt per Gbps for the largest frame size tested (12,000 bytes)
- Redundant power supplies and hot-swappable cooling fans and hard drives simplify ongoing maintenance

With a base configuration of 32 ports of 40 GbE QSFP+, the switch can serve as a spine switch in the leaf-spine architecture that underpins cloud-based environments. It can also connect physical hardware and virtual machines in a virtualized environment. The S6000 can alternatively be configured with 96 ports of 10 GbE and eight additional ports of 40 GbE, providing a migration path as speeds in the network core approach 40 Gbps.

Miercom was impressed with the S6000, which exhibited high performance and low latency in performance testing while operating in store-and-forward mode and running two different versions of the FTOS firmware, pre-release 9-0 (2-28) and production 9.0 (2.0). It also exhibited a high level of scalability in a VDI environment. The Dell S6000 Top-of-Rack/End-of-Row switch operating in store-and-forward mode has earned the Miercom Performance Verified certification.

Rob Smithers
CEO
Miercom

2.0 About the Dell Networking S6000 10/40 GbE Switch

The Dell Networking S6000 is a Layer 2 and Layer 3 Top-of-Rack/End-of-Row 10/40 GbE switch designed for deployments that require a combination of high bandwidth and low latency. A key deployment scenario is as a switch in traditional Ethernet and Layer 2 fabrics for virtual data centers. Other scenarios include:

- Aggregation switch for an enterprise LAN serving mid-sized and large customers or handling high-frequency financial trading, Web 2.0, big data and other heavy workloads
- Traditional Ethernet switch with redundant connections to 10 GbE rack and blade servers

The S6000 delivers high performance, 2.56 Tbps of switching I/O bandwidth in full duplex mode, from a compact 1U form factor that conserves rack space. The MTU verified in testing is 12,000 bytes, a super jumbo frame size. The primary configuration is 32 ports of 40 GbE QSFP+. An alternate configuration, 96 ports of 10 GbE plus eight additional ports of 40 GbE, creates a pathway for migrating the network core to 40 Gbps. The FTOS switch firmware is configured via the CLI. The default forwarding mode is store-and-forward.

Large tables support the high port density and high performance of the S6000. Testing verified the following capacities: IPv4 routing table, 16,384 routes; MAC address table, 163,836 addresses; and ARP address table, 52,251 entries. All are beyond the vendor-stated capacity.

Priority-based Flow Control (PFC), Data Center Bridging Exchange (DCBX) and Enhanced Transmission Selection (ETS) make the S6000 a good fit for Data Center Bridging (DCB) environments and iSCSI storage networking.

Layer 2 multi-path support via Virtual Link Trunking (VLT) is a key feature. A proprietary Layer 2 link aggregation protocol, VLT offers servers connected to different access switches a redundant, load-balancing connection to the network core in a loop-free environment, with benefits beyond those of Spanning Tree Protocol. The S6000 also supports multi-domain Virtual Link Trunking (mVLT), a proprietary Dell design that allows multiple VLT domains to be linked with a VLT LAG. Together, VLT and mVLT let the S6000 be positioned at the core-aggregation layer and serve as a Layer 2 top-of-rack, core or aggregation switch. The combination also provides a robust multi-chassis LAG capability that keeps the switch infrastructure highly available even during chassis upgrades.

Tool-less mounting kits, redundant power supplies and hot-swappable hard drives and cooling fans reduce the time and labor needed to install and maintain the S6000.

3.0 Methodology

The Dell Networking S6000 switch was evaluated running a pre-release version of the FTOS firmware, 9-0 (2-28), as well as a production version, 9.0 (2.0). In performance testing, throughput and latency were measured in accordance with the RFC 2544, 2889 and 3918 benchmarking methodologies. Layer 2 and/or Layer 3 traffic was used in all performance tests, and each reported result is the average of three test runs. The RFC 2889 tests verified the throughput and latency of fully meshed traffic; the RFC 3918 tests verified the throughput and latency of Layer 3 IPv4 multicast traffic. RFC 2544, 2889 and 3918 latency values were verified on both the 10 GbE and 40 GbE ports of the Dell S6000, which was configured in store-and-forward mode.

In addition, scalability testing verified the IPv4 route capacity and the capacity of the MAC and ARP tables. Power consumption was monitored during booting and idling as well as at full line rate with different frame sizes. The S6000 switch supports fiber optic cabling, but does not support the Energy-Efficient Ethernet (EEE) standard, IEEE 802.3az.

Ixia XM12 Chassis

Ixia (www.ixiacom.com) is an industry leader in performance testing of networking equipment. Traffic routed through the Dell S6000 switch was generated by the Ixia XM12 test platform, which ran the following Ixia test applications: IxNetwork for RFC 2544 and 2889 testing, IxAutomate for RFC 3918 and table size testing, and IxExplorer for power consumption testing. The XM12 running IxNetwork drove network traffic through the S6000 in the RFC 2544 and 2889 tests; the XM12 with IxAutomate was used for the RFC 3918 tests. Ixia's approach coordinates energy measurements with network traffic load, allowing energy consumption to be charted against network traffic volume. The Ixia XM12 also was used to determine the capacity of the MAC address table and the IPv4 routing table.

The BreakingPoint FireStorm chassis was used to determine the capacity of the ARP address table. The BreakingPoint FireStorm can saturate Class B subnets by injecting traffic that simulates thousands of servers and clients with unique IP and MAC addresses. It allows configuration of the ports used for traffic injection as well as total and per-port bandwidth, session counts and end-device counts. The BreakingPoint FireStorm is managed via multiple interfaces, including a Web-based graphical user interface and an RS-232C or SSL CLI interface.

The peer mapping function of the WildPackets OmniPeek network analyzer was used to analyze the characteristics of the VDI traffic.

3.1 Test Bed Diagram

[Diagram: Ixia XM12 and BreakingPoint FireStorm traffic generators connected to the Dell Networking S6000 under test]
Source: Miercom, October 2013

3.2 Hardware and Software Featured in Testing

Hardware                 | Function          | Software Version
Dell S6000               | Spine switch      | FTOS pre-release 9-0 (2-28), production 9.0 (2.0)
Ixia XM12                | Traffic generator | IxOS 6.40.900.6 EA (Chassis: 7.00.395.6)
BreakingPoint FireStorm  | Traffic generator | System 3.1.0, Product Build 116072

Software                 | Function                                        | Version
Ixia IxNetwork           | RFC 2544 and 2889 testing tool                  | 7.0.801.25 EA
Ixia IxAutomate          | RFC 3918 multicast and table size testing tool  | 7.40.123.5 GA-Patch1
Ixia IxExplorer          | Power consumption testing tool                  | 6.40.900 Build 6
WildPackets OmniPeek     | Network analyzer                                | 7.5

4.0 Performance Testing

Performance tests focused on throughput and latency and were conducted in accordance with the RFC 2544, 2889 and 3918 benchmark methodologies. Configuring the S6000 switch was straightforward. Testing verified that the MTU was 12,000 bytes, a super jumbo frame size. Layer 2 traffic passed through a single VLAN. For Layer 3 testing, each 10 GbE and 40 GbE port was untagged and assigned to a unique VLAN. VLAN and IP address configuration also was straightforward.

4.1 RFC 2544 Throughput

This test determines the maximum rate at which the S6000 receives and forwards Layer 2 and Layer 3 frames without loss. The Ixia XM12, acting as the test load generator, forwarded traffic to and received it from each directly connected port. Results from two throughput tests are included in this report: one used 10 GbE ports, the other 40 GbE ports. Frames generally were sent at the maximum theoretical rate for the supported port speed. The test is configured with one-to-one traffic mapping, as shown in the figure below.

[Figure: RFC 2544 Throughput Configuration, one-to-one traffic mapping between Ixia test ports and S6000 ports]

The results show the maximum throughput the switch can achieve without any frame loss. In addition, a latency value is captured for each frame size tested.

Dell Networking S6000 10/40 GbE Switch
RFC 2544 Layer 2 and Layer 3 Throughput Test, 96 x 10 GbE and 8 x 40 GbE ports

[Chart: Line Rate (%) vs. Frame Size (Bytes), 128 to 12,000 bytes, Layer 2 and Layer 3]
Source: Miercom Switch Industry Assessment, October 2013

The Dell S6000 exhibited line-rate throughput for Layer 2 and Layer 3 traffic using the RFC 2544 benchmarking methodology. The minimum frame size at which the switch handled 100% line-rate throughput was 90 bytes; the maximum was a super jumbo frame size, 12,000 bytes. Testing verified a forwarding rate of 1,464,007,507 frames per second (fps) for 64-byte packets. 96 x 10 GbE and 8 x 40 GbE ports were used in testing.
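For context on how these figures relate, the short sketch below (illustrative only, not part of Miercom's or Ixia's tooling; the function name line_rate_fps and the standard 20-byte Ethernet per-frame overhead of preamble, start-of-frame delimiter and inter-frame gap are assumptions of the sketch) shows how aggregate line rate translates into a theoretical maximum frame rate for the 96 x 10 GbE plus 8 x 40 GbE port mix used here.

```python
# Theoretical line-rate frame rate for Ethernet (illustrative sketch).
PER_FRAME_OVERHEAD_BYTES = 20  # preamble (7) + SFD (1) + inter-frame gap (12)

def line_rate_fps(link_gbps: float, frame_bytes: int) -> float:
    """Maximum frames per second a link can carry at 100% line rate."""
    bits_per_frame = (frame_bytes + PER_FRAME_OVERHEAD_BYTES) * 8
    return link_gbps * 1e9 / bits_per_frame

# One-direction aggregate for 96 x 10 GbE + 8 x 40 GbE ports = 1,280 Gbps
aggregate_gbps = 96 * 10 + 8 * 40

for size in (64, 90, 128, 12000):
    print(f"{size:>6}-byte frames: {line_rate_fps(aggregate_gbps, size):,.0f} fps")
```

At 64 bytes the theoretical aggregate maximum works out to roughly 1.9 billion fps, so the measured 1,464,007,507 fps sits below full line rate, consistent with the report's observation that 100% line rate was first reached at 90-byte frames.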

4.2 RFC 2544 Latency

A latency value was captured for each frame size used in the two RFC 2544 Layer 2 and Layer 3 throughput tests, one utilizing 10 GbE ports and the other 40 GbE ports. The S6000 switch exhibited low latency for all Layer 2 and Layer 3 frame sizes tested. For the 10 GbE ports, there were only minor differences between the Layer 2 and Layer 3 maximum, average and minimum latency values across frame sizes; the largest was 0.03 µs, between the maximum Layer 2 (0.88 µs) and Layer 3 (0.91 µs) values for 1024-byte frames.

Dell Networking S6000 10/40 GbE Switch
RFC 2544 Layer 2 Latency Test, 96 x 10 GbE ports

[Chart: Maximum, Average and Minimum Latency (µs) by Frame Size, 64 to 12,000 bytes]
Source: Miercom Switch Industry Assessment, October 2013

The Dell S6000 Switch exhibited consistently low latency values in the RFC 2544 Layer 2 Latency Test that utilized 96 x 10 GbE ports. Average latency ranged from a low of 0.70 µs for 128-byte frames to a high of 0.81 µs for 1024-byte frames. The switch was configured in store-and-forward mode and was tested with an Ixia XM12 using RFC standard benchmark test suites.

Dell Networking S6000 10/40 GbE Switch
RFC 2544 Layer 3 Latency Test, 96 x 10 GbE ports

[Chart: Maximum, Average and Minimum Latency (µs) by Frame Size, 74 to 12,000 bytes]
Source: Miercom Switch Industry Assessment, October 2013

The Dell S6000 Switch exhibited consistently low latency values in the RFC 2544 Layer 3 Latency Test that utilized 96 x 10 GbE ports. Average latency ranged from a low of 0.69 µs for 128-byte frames to a high of 0.80 µs for 1024-byte frames. The switch was configured in store-and-forward mode and was tested with an Ixia XM12 using RFC standard benchmark test suites.

Dell Networking S6000 10/40 GbE Switch
RFC 2544 Layer 2 Latency Test, 32 x 40 GbE ports

[Chart: Maximum, Average and Minimum Latency (µs) by Frame Size, 128 to 11,982 bytes]
Source: Miercom Switch Industry Assessment, October 2013

The Dell S6000 Switch exhibited consistently low latency values in the RFC 2544 Layer 2 Latency Test that utilized 32 x 40 GbE ports. Average latency ranged from a low of 0.565 µs for 128-byte frames to a high of 0.593 µs for 9216-byte frames. The switch was configured in store-and-forward mode and was tested with an Ixia XM12 using RFC standard benchmark test suites.

Dell Networking S6000 10/40 GbE Switch
RFC 2544 Layer 3 Latency Test, 32 x 40 GbE ports

[Chart: Maximum, Average and Minimum Latency (µs) by Frame Size, 128 to 11,982 bytes]
Source: Miercom Switch Industry Assessment, October 2013

The Dell S6000 Switch exhibited consistently low latency values in the RFC 2544 Layer 3 Latency Test that utilized its 32 x 40 GbE ports. Average latency ranged from a low of 0.560 µs for 128-byte frames to a high of 0.594 µs for 9216-byte frames. The switch was configured in store-and-forward mode and was tested with an Ixia XM12 using RFC standard benchmark test suites.

4.3 RFC 2889 Fully Meshed Throughput

The objective of this test is to determine the maximum rate of fully meshed traffic the S6000 can handle, which verifies the cross-processor performance of the switch. With the Ixia XM12 generating test traffic and IxNetwork software conducting traffic analysis, many-to-many traffic mapping is used: each S6000 10 GbE port transmits frames to all other switch ports in an evenly distributed, round-robin manner and receives frames from all other ports.

[Figure: RFC 2889 Fully Meshed Throughput Configuration, many-to-many traffic mapping across the switch ports]

The maximum throughput the switch can achieve without any frame loss is verified. In addition, a latency value is captured for each frame size tested.
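A minimal sketch of the many-to-many mapping described above, assuming hypothetical port names and a simple round-robin rotation of destinations (illustrative only, not the IxNetwork implementation):

```python
# Fully meshed (many-to-many) traffic schedule: every port sends to every other
# port, rotating destinations round-robin so the offered load is evenly spread.
from itertools import cycle

def fully_meshed_schedule(ports, frames_per_port):
    """Return (source, destination) pairs for a fully meshed traffic pattern."""
    schedule = []
    for src in ports:
        destinations = cycle(p for p in ports if p != src)
        schedule.extend((src, next(destinations)) for _ in range(frames_per_port))
    return schedule

ports = [f"port-{i}" for i in range(96)]   # 96 x 10 GbE test ports (hypothetical names)
pairs = fully_meshed_schedule(ports, frames_per_port=4)
print(len(pairs), pairs[:3])
```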

Dell Networking S6000 10/40 GbE Switch
RFC 2889 Layer 2 and Layer 3 Fully Meshed Throughput Test, 96 x 10 GbE ports

[Chart: Line Rate (%) vs. Frame Size (Bytes), 128 to 12,000 bytes, Layer 2 and Layer 3]
Source: Miercom Switch Industry Assessment, October 2013

The Dell S6000 switch exhibited line-rate throughput for Layer 2 and Layer 3 traffic using the RFC 2889 benchmarking methodology. The S6000 achieved 100% line-rate throughput for Layer 2 and Layer 3 fully meshed traffic for all frame sizes from 128 bytes to 12,000 bytes. An Ixia XM12 using RFC standard benchmark suites conducted the tests, which utilized 96 x 10 GbE ports on the switch.

4.4 RFC 2889 Fully Meshed Latency

A latency value was captured for each frame size in the RFC 2889 Layer 2 and Layer 3 throughput testing. The S6000 switch exhibited low latency for all Layer 2 and Layer 3 frame sizes tested. There was little difference between the Layer 2 and Layer 3 latency values for each frame size.

Dell Networking S6000 10/40 GbE Switch
RFC 2889 Fully Meshed Layer 2 Latency Test, 96 x 10 GbE ports

[Chart: Maximum, Average and Minimum Latency (µs) by Frame Size, 64 to 12,000 bytes]
Source: Miercom Switch Industry Assessment, October 2013

The Dell S6000 Switch exhibited consistently low latency values in the RFC 2889 Fully Meshed Layer 2 Latency Test. Average latency ranged from a low of 0.70 µs for 128-byte frames to a high of 0.81 µs for 1024- and 9216-byte frames. The S6000 was configured in store-and-forward mode. An Ixia XM12 using RFC standard benchmark suites conducted the test, which utilized 96 x 10 GbE ports on the switch.

Dell Networking S6000 10/40 GbE Switch
RFC 2889 Fully Meshed Layer 3 Latency Test, 96 x 10 GbE ports

[Chart: Maximum, Average and Minimum Latency (µs) by Frame Size, 74 to 12,000 bytes]
Source: Miercom Switch Industry Assessment, October 2013

The Dell S6000 Switch exhibited consistently low latency values in the RFC 2889 Fully Meshed Layer 3 Latency Test. Average latency ranged from a low of 0.70 µs for 128-byte frames to a high of 0.81 µs for 1024-byte frames. The S6000 was configured in store-and-forward mode. An Ixia XM12 using RFC standard benchmark suites conducted the test, which utilized 96 x 10 GbE ports on the switch.

4.5 RFC 3918 Layer 3 Multicast Throughput

The objective of the RFC 3918 Layer 3 Multicast Throughput Test is to validate the maximum rate of Layer 3 IPv4 multicast traffic that can be handled by the S6000. A binary or linear search with one-to-many traffic mapping (a minimum of two ports) was used, as shown in the figure below. IGMP snooping, based on the IGMPv2 protocol, was enabled on the S6000 so the switch could learn the multicast groups and their members. Traffic was transmitted from one port on the S6000 and received on the other ports. The IxAutomate application running on the Ixia XM12 injected the IGMPv2 multicast traffic.

[Figure: RFC 3918 Multicast Throughput Configuration, one-to-many traffic mapping from a single transmit port to the multicast member ports]

The S6000 successfully transmitted traffic to all multicast member ports at 100% line rate for frame sizes ranging from 68 to 12,000 bytes. Testing verified that the switch was capable of snooping the multicast groups and then properly transmitting multicast traffic at 100% line rate with zero loss to each multicast group member.
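The throughput search itself can be pictured with the hedged sketch below: a binary search over offered load that converges on the highest rate with zero frame loss. The offer_load callable is a stand-in for a traffic-generator run (the report used Ixia IxAutomate for the actual search).

```python
# Binary search for the zero-loss throughput, RFC 2544/3918 style (illustrative).
def zero_loss_throughput(offer_load, lo=0.0, hi=100.0, resolution=0.1):
    """Find the highest offered load (% of line rate) at which no frames are lost."""
    best = lo
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if offer_load(mid) == 0:   # no loss observed: try a higher rate
            best, lo = mid, mid
        else:                      # loss observed: back off
            hi = mid
    return best

# Example with a fake device that starts dropping frames above 99.5% of line rate;
# the search converges just under 99.5, bounded by the 0.1% resolution.
print(zero_loss_throughput(lambda pct: 0 if pct <= 99.5 else 1))
```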

Dell Networking S6000 10/40 GbE Switch
RFC 3918 Layer 3 Multicast Throughput Test, 96 x 10 GbE ports

[Chart: Line Rate (%) vs. Frame Size (Bytes), 68 to 12,000 bytes]
Source: Miercom Switch Industry Assessment, October 2013

The Dell S6000 switch exhibited line-rate throughput for Layer 3 IPv4 multicast traffic using RFC 3918 standard tests. The S6000 achieved 100% line-rate throughput for Layer 3 IPv4 multicast traffic for all frame sizes from 68 bytes to 12,000 bytes. An Ixia XM12 conducted the test, which utilized 96 x 10 GbE ports on the S6000.

4.6 RFC 3918 Layer 3 Multicast Latency

A latency value was captured for each frame size utilized in RFC 3918 Layer 3 throughput testing. The S6000 switch exhibited low latency for all frame sizes tested.

Dell Networking S6000 10/40 GbE Switch
RFC 3918 Multicast Layer 3 Latency Test, 96 x 10 GbE ports

[Chart: Maximum, Average and Minimum Latency (µs) by Frame Size, 68 to 12,000 bytes]
Source: Miercom Switch Industry Assessment, October 2013

The Dell S6000 Switch exhibited consistently low latency values in the RFC 3918 Multicast Latency Test. Average latency ranged from a low of 0.72 µs for 68- and 128-byte frames to a high of 0.83 µs for 1024-byte frames. The S6000 was configured in store-and-forward mode. An Ixia XM12 using RFC standard benchmark suites conducted the test, which utilized 96 x 10 GbE ports of the switch.

4.7 RFC 3918 Layer 3 Group Join Delay and Group Leave Delay

The Group Join Delay Test determines how long it takes a switch to register multicast clients to a new or existing group in its forwarding table. It measures the interval between the time the switch receives a group of IGMP/MLD Join requests and the time the multicast clients begin receiving traffic for the groups they joined, and records the impact of different frame sizes on that interval.

The Group Leave Delay Test determines how long it takes a switch to remove a client from its multicast table. It measures the interval between the time the switch receives a group of IGMP/MLD Leave requests and the time the multicast clients stop receiving traffic for the groups they left, and records the impact of different frame sizes on that interval.

[Figure: RFC 3918 Multicast Group Join Delay and Group Leave Delay Configuration]
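A hedged sketch of the measurement itself (timestamps and values below are hypothetical; the actual measurements were taken by the Ixia test applications): join delay is the gap between sending the Join and receiving the first frame for that group, and leave delay is the gap between sending the Leave and seeing the last frame for that group.

```python
# Group join/leave delay derivation from event timestamps (illustrative sketch).
from dataclasses import dataclass

@dataclass
class GroupEvents:
    join_sent_ns: int        # time the IGMP/MLD Join was transmitted
    first_frame_rx_ns: int   # time the first multicast frame for the group arrived
    leave_sent_ns: int       # time the IGMP/MLD Leave was transmitted
    last_frame_rx_ns: int    # time the last multicast frame for the group was seen

def join_delay_ns(e: GroupEvents) -> int:
    return e.first_frame_rx_ns - e.join_sent_ns

def leave_delay_s(e: GroupEvents) -> float:
    return (e.last_frame_rx_ns - e.leave_sent_ns) / 1e9

# Hypothetical example: a 220 ns join delay and a 30-second leave delay.
e = GroupEvents(join_sent_ns=0, first_frame_rx_ns=220,
                leave_sent_ns=1_000_000, last_frame_rx_ns=30_000_000_000 + 1_000_000)
print(join_delay_ns(e), "ns join delay;", leave_delay_s(e), "s leave delay")
```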

In the Group Join Delay Test, the S6000 exhibited a gradual increase in delay that corresponded with the increase in frame size.

Dell Networking S6000 10/40 GbE Switch
RFC 3918 Multicast Group Join Delay Test, 10 GbE

Frame Size (Bytes) | Group Join Delay (ns)
68   | 220.00
128  | 223.89
256  | 237.22
512  | 261.11
1024 | 306.11
1280 | 329.44
1518 | 352.67

In the Group Leave Delay Test, the S6000 exhibited a fractional decrease in delay as the frame size increased.

Dell Networking S6000 10/40 GbE Switch
RFC 3918 Multicast Group Leave Delay Test, 10 GbE

Frame Size (Bytes) | Group Leave Delay (seconds)
68   | 30.13
128  | 30.06
256  | 30.01
512  | 29.99
1024 | 29.97
1280 | 29.97
1518 | 29.96

With all receivers subscribed to nine multicast groups, the average Group Join Delay of the S6000 was 275.63 nanoseconds, compared to an average Group Leave Delay of 30.01 seconds. Testing verified the maximum multicast group capacity of the S6000 to be the vendor-stated figure, 8,000.

5.0 Scalability Test

The Dell Networking S6000 switch has high port density and 2.56 Tbps of bandwidth in full duplex mode, which requires large Layer 2 and Layer 3 tables. Scalability testing validated the IPv4 route capacity and the address capacity of the MAC and ARP tables.

IPv4 Route Capacity: To verify the route capacity the S6000 can sustain, an OSPF route capacity test was conducted. The switch learned 16,384 IPv4 routes.

MAC Address Table Size: This test utilized two 10 GbE ports on the S6000, one configured to transmit and the other to receive. Frames with random source MAC addresses were transmitted toward the receive port until the MAC table was filled. Capacity was verified to be 163,836 addresses.

ARP Address Table Size: This test assessed the scalability of the S6000 when it interconnects subnets containing a large number of nodes. The capacity of the ARP table was validated to be 52,251 entries.

Dell Networking S6000 10/40 GbE Switch
Address and Routing Table Verification

Table        | Verified Capacity
IPv4 Routing | 16,384
MAC Address  | 163,836
ARP Address  | 52,251
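The MAC-table test logic can be sketched as follows (illustrative only; send_frame and read_mac_table_size are hypothetical stand-ins for the traffic generator and for polling the switch, and the batch and target values are arbitrary):

```python
# Fill-the-MAC-table sketch: keep offering frames with randomized source MACs on
# the transmit port until the number of learned entries stops growing.
import random

def random_mac() -> str:
    return ":".join(f"{random.randint(0, 255):02x}" for _ in range(6))

def fill_mac_table(send_frame, read_mac_table_size, batch=1000, target=200_000):
    """Return the learned-entry count at which the table stops growing."""
    learned_prev = -1
    sent = 0
    while sent < target:
        for _ in range(batch):
            send_frame(src_mac=random_mac())   # randomized source MAC per frame
        sent += batch
        learned = read_mac_table_size()        # e.g. polled from the switch CLI or SNMP
        if learned == learned_prev:            # no new entries learned: table is full
            return learned
        learned_prev = learned
    return learned_prev
```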

6.0 Power Consumption and Efficiency Test

Layer 2 traffic was generated for five minutes at 100% line rate using various frame sizes, 64 through 12,000 bytes, to determine the power consumption of the S6000. The power efficiency of the switch is based on the watts required to transmit the measured traffic throughput in Gbps, so the resulting efficiency value is expressed in watts/Gbps.

Dell Networking S6000 10/40 GbE Switch
Power Consumption and Efficiency Test, 96 x 10 GbE and 8 x 40 GbE ports

[Chart: Consumption (Watts) and Efficiency (Watts/Gbps) by frame size]

A switch requires more energy per Gbps to transmit smaller frames. In testing, the S6000 consumed the most energy per Gbps at the smallest frame size, 0.25 watt/Gbps for 64-byte frames, and the least at the largest frame size, 0.12 watt/Gbps for 12,000-byte frames.
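The efficiency metric reduces to a simple ratio, sketched below with hypothetical inputs chosen only to reproduce the reported 0.25 W/Gbps figure for 64-byte frames (the report's absolute wattage readings are not restated here):

```python
# Power efficiency as watts per Gbps of traffic forwarded (illustrative sketch).
def efficiency_w_per_gbps(watts: float, throughput_gbps: float) -> float:
    return watts / throughput_gbps

# Hypothetical example: 320 W drawn while forwarding 1,280 Gbps -> 0.25 W/Gbps
print(round(efficiency_w_per_gbps(320.0, 1280.0), 2), "W/Gbps")
```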

7.0 VDI Scalability Testing

Virtual Desktop Infrastructure (VDI) adoption in IT infrastructure is accelerating rapidly. Because virtual user desktops are hosted on physical servers inside the network core, the virtualization technology provides efficient resource utilization for end-user computing along with data security. It also facilitates a BYOD environment by offering access from any client device, such as a laptop, smartphone, tablet or thin client. Since the network handles all input to and output from virtual user desktops, network scalability is crucial in a VDI environment.

The objective of this test was to confirm whether the S6000 can support 10,000 users in a VDI environment. (10,000 is the maximum number of users supported by the desktop virtualization software used in testing, Horizon View 5.2 from VMware.) The simulated traffic was injected using the Ixia XM12 and the BreakingPoint FireStorm. The scenarios consisted of traffic distribution data, frame-size distribution and traffic capture (payload) emulated by both traffic generators.

Dell Networking S6000 10/40 GbE Switch
VDI Traffic Pattern (Peer and Mesh)

[Figure: VDI traffic distribution model with a point-to-multipoint appearance; the traffic distribution used in scalability testing was based on this model.]

The VMware designation of Power User (standard) was selected for the Horizon View clients. It is the third of four user types, in ascending order, in the VMware Horizon View Architecture Planning Guide. Its characteristics include a compute-intensive usage level and a virtual machine configuration of 1 vCPU and 2 GB RAM. Miercom projected prior to testing that, to accommodate a virtual desktop environment of 10,000 Power Users (standard), the S6000 would have to sustain at least 20 Gbps of traffic.
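The sizing behind that 20 Gbps projection can be expressed as a simple product, sketched below; the 2 Mbps-per-desktop figure is inferred from the report's totals (20 Gbps across 10,000 users) rather than stated directly, so treat it as an assumption.

```python
# Bandwidth needed for a VDI population (illustrative sketch).
def required_gbps(users: int, mbps_per_user: float) -> float:
    return users * mbps_per_user / 1000.0

print(required_gbps(10_000, 2.0), "Gbps required")   # 20.0 Gbps
print(required_gbps(10_000, 2.0) <= 63.2)            # True: within the 63.2 Gbps verified later
```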

To analyze the characteristics of the VDI traffic, the peer mapping function of the WildPackets OmniPeek network analyzer was used. The Ixia XM12 injected fully meshed, custom IMIX traffic equal to or greater than that needed to support 10,000 VDI sessions.

Fully Meshed Traffic Generated by Ixia XM12

[Figure: Traffic-flow configuration of fully meshed traffic between the Ixia XM12 traffic generator and a generic Device Under Test (DUT)]

The distribution of frame sizes in the VDI traffic generated by the Ixia XM12 and handled by the S6000 is shown below. Because the VMware default frame size is 1300 bytes, frames of 1024-1518 bytes account for the largest share of the traffic distribution.

VDI Traffic, Percentage Distribution by Frame Size

[Chart: Frame-size distribution across the 64-127, 128-255, 256-511, 512-1023 and 1024-1518 byte ranges; percentages shown: 15.7, 48.3, 25.7, 5.7 and 4.7]

This is the VDI traffic frame-size distribution used in scalability testing for 10,000 users. Note that large packets, 1024-1518 bytes, make up nearly half of the distribution.

Using just seven 10 GbE ports, the capacity of the S6000 was verified to be 63.2 Gbps of fully meshed traffic at 99.5% line rate with low latency. There was no frame loss and there were no network anomalies. See the table below.

The near-theoretical maximum of traffic throughput for these seven ports was achieved, far exceeding the amount needed to support a 10,000-user VDI environment. The slight difference from 100% line rate is attributable to the inter-frame gap (IFG) for the custom IMIX traffic distribution used in testing.

Dell Networking S6000 10/40 GbE Switch
Verified Throughput of Seven 10 GbE Ports, Fully Meshed Traffic

Tested Ports | Traffic Type | Line Rate (%) | Throughput (Gbps)
7 x 10 GbE   | Fully Meshed | 99.5          | 63.20
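To illustrate why fully loaded ports carry slightly less frame data than their raw wire rate, the sketch below applies the standard 20-byte per-frame overhead (preamble, start-of-frame delimiter and inter-frame gap) to a placeholder average frame size; the 900-byte figure is an assumption, not the report's actual IMIX average.

```python
# Layer 2 throughput available after per-frame overhead (illustrative sketch).
PER_FRAME_OVERHEAD_BYTES = 20

def l2_throughput_gbps(ports: int, port_gbps: float, avg_frame_bytes: float) -> float:
    wire_gbps = ports * port_gbps
    return wire_gbps * avg_frame_bytes / (avg_frame_bytes + PER_FRAME_OVERHEAD_BYTES)

# Seven 10 GbE ports offer 70 Gbps of wire rate; with a 900-byte average frame
# the frame data carried is about 68.5 Gbps.
print(round(l2_throughput_gbps(7, 10, avg_frame_bytes=900), 1), "Gbps")
```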

8.0 Features

8.1 Tool-less and Hot-swappable Maintenance

Once a spine switch is installed in a data center, shutting it down or performing emergency maintenance on it is not easy. Many switches and other pieces of network equipment have removable, redundant power supplies and hot-swappable hard drives and cooling fans. The S6000 has six cooling fans that reside on a hot-swappable tray with a quick-remove tab, an advantage that shortens the mean time to repair (MTTR).

8.2 Default Configuration

In the Dell Networking S6000, store-and-forward is the default operating mode; Spanning Tree Protocol and cut-through mode are disabled by default. The minimum queue size of the optional cut-through mode is one. As noted above, the MTU was 12,000 bytes, a super jumbo frame size, for both Layer 2 and Layer 3. These characteristics, along with low latency and support for Data Center Bridging (DCB), make the S6000 well suited for one of its key applications: iSCSI storage deployments, including DCB-converged lossless transactions.

About Miercom

Miercom has hundreds of product comparison analyses published in leading network trade periodicals including Network World, Business Communications Review - NoJitter, Communications News, xchange, Internet Telephony and other leading publications. Miercom's reputation as the leading, independent product test center is unquestioned. Miercom's private test services include competitive product analyses as well as individual product evaluations. Miercom features comprehensive certification and test programs including: Certified Interoperable, Certified Reliable, Certified Secure and Certified Green. Products may also be evaluated under the NetWORKS As Advertised program, the industry's most thorough and trusted assessment for product usability and performance.

Other Notes and Comments

Product names or services mentioned in this report are registered trademarks of their respective owners. Miercom makes every effort to ensure that information contained within our reports is accurate and complete, but is not liable for any errors, inaccuracies or omissions. Miercom is not liable for damages arising out of or related to the information contained within this report. Consult with professional services such as Miercom Consulting for specific customer needs analysis.

The tests in this report are intended to be reproducible for customers who wish to recreate them with the appropriate test and measurement equipment. Current or prospective customers interested in repeating these results may contact reviews@miercom.com for details on the configurations applied to the Device Under Test and the test tools used in this evaluation. Miercom recommends customers conduct their own needs analysis study and test specifically for the expected environment for product deployment before making a product selection.