H3C S12516X-AF Sets New World Records for Data Center Performance. June 2017
Table of Contents

Executive Summary
About This Test
Performance Test Results
    IPv4 Unicast Throughput With BGP Routing
    Routing Scalability and Throughput
    IPv4 Unicast Latency and Jitter With BGP Routing
    Routing Scalability Latency and Jitter With BGP Routing
    IPv6 Unicast Throughput With MP-BGP Routing
    IPv6 Unicast Latency and Jitter With MP-BGP Routing
    EVPN Scalability With VXLAN Tunnels
    ISSU/ISSD Failover Times
Conclusion
Appendix A: About Network Test
Appendix B: Hardware and Software Releases Tested
Appendix C: Disclaimer
H3C S12516X-AF: A New World Record for Data Center Performance

Executive Summary

Data centers keep growing in two ways: They're larger, with more servers connected than ever before, and they're faster, with server NIC speeds of 10 gigabits and higher now common. For the core switches that tie everything together, one requirement stands out above all else: Scale to unprecedented heights.

That's exactly what H3C has done with its S12516X-AF data center core switch, setting a new world record by demonstrating the highest bandwidth capacity for a single switch with 768 100G Ethernet interfaces. H3C commissioned independent test lab Network Test to validate the performance and scalability of its data center core switch. This is, by far, the largest assessment of 100G Ethernet switch performance yet conducted.

In addition to performing rigorous stress tests on the switch fabric, Network Test assessed the H3C switch in terms of control-plane metrics such as BGP and MP-BGP routing with IPv4 and IPv6 traffic; Ethernet VPN (EVPN) scalability using VXLAN tunnels; and the impact of in-service software upgrades (ISSU). In all these areas, the H3C switch delivered stellar performance:

- The highest throughput ever recorded from a single switch chassis (76.8 terabits per second)
- Throughput of more than 100 million frames per second per port, on each of 768 100G Ethernet ports
- Support for nearly 1 million unique routes learned via BGP
- Identical throughput when routing to 768 routes and to nearly 1 million routes
- The highest EVPN scalability ever recorded with a single chassis (768 concurrent VXLAN tunnels)
- ISSU failover times measured in tens of microseconds

This report is organized as follows. This section provides an overview of the test results. The About This Test section describes the test cases, covers the importance of each metric used, and describes issues common to all test cases.
The Performance Test Results section provides full results from individual test cases. Appendix B provides software versions used in testing.

Figure 1: The H3C S12516X-AF test bed, with Spirent TestCenter and 768 100G Ethernet ports
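The headline bandwidth figure is easy to sanity-check: 768 ports at 100 Gbit/s each yield 76.8 Tbit/s of aggregate capacity. A minimal sketch of the arithmetic:

```python
# Sanity check of the headline aggregate capacity: 768 ports x 100 Gbit/s.
PORTS = 768
PORT_SPEED_GBPS = 100

aggregate_tbps = PORTS * PORT_SPEED_GBPS / 1000  # Gbit/s -> Tbit/s
print(aggregate_tbps)  # 76.8
```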
About This Test

The device under test for this project was an H3C S12516X-AF data center core switch fully loaded with two supervisor modules and 16 line-card modules, each supporting 48 100-gigabit Ethernet interfaces.[1] This project assessed the H3C S12516X-AF using eight test cases, most involving 768 100G Ethernet interfaces:

- IPv4 unicast throughput with BGP routing (1 route per port)
- IPv4 route scalability throughput (nearly 1 million routes total)
- IPv4 unicast latency and jitter with BGP routing
- IPv4 routing scalability latency and jitter
- IPv6 unicast throughput with MP-BGP routing
- IPv6 unicast latency and jitter with MP-BGP routing
- EVPN scalability with VXLAN tunnels
- ISSU/ISSD failover times

The test bed, as seen in Figure 1, also included the Spirent TestCenter traffic generator/analyzer with dx3 10/25/40/50/100G hardware modules and direct-attach copper (DAC) cables. The Spirent test instrument can offer traffic at wire speed concurrently on all ports with timestamp resolution of 2.5 nanoseconds.

The primary metrics in performance tests were throughput, latency, and jitter. EVPN tests examined VXLAN tunnel scalability as well as throughput, latency, and jitter. ISSU tests used frame loss to derive failover time during software image upgrades and downgrades.

RFC 2544, the industry-standard methodology for network device performance testing, determines throughput as the limit of system performance. In the context of lab benchmarking, throughput describes the maximum rate at which a device forwards all traffic with zero frame loss. Describing real-world performance is explicitly a non-goal of RFC 2544 throughput testing; indeed, production network loads are typically far lower than the throughput rate.

Latency and jitter respectively describe the delay and delay variation introduced by a switch. Both are vital, and arguably even more important than throughput, especially for delay-sensitive applications such as video and voice.
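RFC 2544 finds the throughput rate by iteratively offering traffic and checking for loss. The sketch below shows one common form of that search, a binary search over offered load; it is an illustration, not the exact procedure used in these tests, and `trial_has_zero_loss` is a hypothetical stand-in for a real timed trial that offers traffic at a given percentage of line rate and reports whether every frame arrived.

```python
# Sketch of an RFC 2544-style throughput search. trial_has_zero_loss(rate)
# is a hypothetical stand-in for one timed trial at `rate` % of line rate.
def find_throughput(trial_has_zero_loss, lo=0.0, hi=100.0, resolution=0.1):
    """Binary-search the highest zero-loss rate, as a percent of line rate."""
    best = lo
    while hi - lo > resolution:
        rate = (lo + hi) / 2
        if trial_has_zero_loss(rate):  # no frames dropped at this rate
            best, lo = rate, rate      # raise the floor
        else:
            hi = rate                  # back off
    return best

# Toy device model: forwards losslessly up to 71% of line rate.
throughput = find_throughput(lambda r: r <= 71.0)
```

The search converges on the device's zero-loss ceiling to within the chosen resolution; real test instruments automate exactly this kind of trial loop.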
In all tests, engineers configured the H3C data center core switch in store-and-forward mode, its default setting. This is a common industrywide practice for data center core switches.

Each test described in this document uses a three-part structure. Why It Matters explains the significance of the metric or feature being tested in the context of data center networking. How We Tested describes the test procedure. And What We Found covers test results.

[1] The supervisor modules were LSXM1SUPB1 main processing units (MPUs). The line cards were LSXM1CGQ48HB1 interface modules.
Performance Test Results

IPv4 Unicast Throughput With BGP Routing

Why It Matters: No task is more important for a data center core switch than moving traffic at maximum speed with zero frame loss. None of a switch's other attributes (its feature list, its supported protocols, its high-reliability mechanisms) will matter if the switch cannot efficiently move traffic under even the heaviest loads.

How We Tested: We measured throughput by stressing both the data-plane fabric and control-plane routing capabilities of the H3C switch. RFCs 2544 and 2889 describe a data-plane stress test, and have long been the industry-standard methodologies for router and switch performance testing with unicast traffic. To load up the control plane, engineers configured Spirent TestCenter to emulate 768 Border Gateway Protocol (BGP) routing peers, each using a unique Autonomous System Number (ASN). Each Spirent BGP router brought up a peering session with the H3C S12516X-AF switch, then advertised a total of 768 unique routes.

The Spirent test tool then offered fully meshed traffic among all networks learned using BGP. A fully meshed pattern means each port exchanges traffic with all other ports. This is the most stressful traffic pattern for a switch fabric. Test engineers determined the throughput rate: the highest speed at which the switch correctly forwarded all traffic, in sequence and without frame loss. Frame sizes ranged from the Ethernet minimum of 64 bytes to the maximum of 1,518, and beyond to 9,216-byte jumbo frames. Engineers used a duration of 60 seconds for each test.

Note that the choice of 768 routes is due to a limit in the number of trackable receive streams supported by the Spirent dx3 test modules, and not of the H3C switch's routing capacity. As demonstrated in the Routing Scalability and Throughput test in this report, the H3C S12516X-AF offers the same high performance to nearly 1 million routes using a different traffic pattern.
Other Spirent test modules also support higher trackable stream counts. With the dx3 module, a higher route count also would have been possible using fewer than 768 ports.

What We Found: In nearly all test cases, the H3C switch moved traffic at the theoretical maximum rate to all 768 100G Ethernet ports. These tests involved traffic in a fully meshed pattern, the most stressful possible load. In some of these tests, traffic moved so fast that the switch processed more than 100 million frames per second per port simultaneously, on each of 768 ports. Aggregate layer-1 throughput was nearly 77 terabits per second.
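The per-port frame rates follow directly from line rate and frame size: on the wire, every Ethernet frame also carries an 8-byte preamble and a 12-byte inter-frame gap. A short sketch of the arithmetic behind the 100-million-frames-per-second figure:

```python
# Theoretical maximum frame rate on a 100 Gbit/s Ethernet port.
LINE_RATE_BPS = 100e9
WIRE_OVERHEAD_BYTES = 20  # 8-byte preamble + 12-byte inter-frame gap

def max_fps(frame_bytes):
    return LINE_RATE_BPS / ((frame_bytes + WIRE_OVERHEAD_BYTES) * 8)

print(round(max_fps(64)))   # 148809524 (~148.8 million frames/s)
print(round(max_fps(102)))  # 102459016 (~102.5 million frames/s)
```

At 102-byte frames, line rate still works out to more than 100 million frames per second per port, which matches the per-port figure reported above.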
Figure 2 presents results from the IPv4-with-BGP throughput tests. Although the industry-standard methodology calls for seven frame lengths to be tested, H3C also included two extra frame sizes. Engineers included 102-byte frames to demonstrate that theoretical maximum rates are possible with this relatively short frame size, not far up from the 64-byte minimum. Engineers also tested with 9,216-byte jumbo frames to demonstrate that the H3C switch can handle the large payloads commonly found in data-center applications.

Figure 2: IPv4 throughput with BGP routing, 768 100G Ethernet ports
Routing Scalability and Throughput

Why It Matters: It's not just the global Internet that continues to expand at a rapid pace, requiring ever-larger routing tables. Inside the data center, huge numbers of routes are now increasingly common. Both for external and internal connectivity, data center core switches must be able to route traffic among huge numbers of unique networks.

This test involved nearly 1 million routes learned via BGP. To put this number in context, consider that the entire global Internet consists, at this writing, of fewer than 700,000 BGP routes. For internal routing within even very large data centers, a core switch capable of routing to 1 million networks has a very high capacity.

How We Tested: This test is conceptually similar to the previous IPv4-plus-BGP test, with the Spirent test instrument emulating 768 BGP peers attached to the H3C switch. But instead of advertising just 1 route per port, test engineers this time advertised nearly 1 million routes, all emulated by the Spirent test instrument. The advertised routes included nearly 750,000 networks with a /24 prefix length. The remaining networks, approximately 250,000 in number, used a pseudorandom distribution of prefix lengths between /8 and /32.

After verifying that the H3C switch had learned all routes, engineers again configured Spirent TestCenter to offer traffic at the throughput rate using a variety of frame sizes. One difference from previous tests is that engineers configured a port-pair traffic pattern, in which 384 pairs of ports exchanged bidirectional traffic, instead of using a fully meshed pattern. Engineers made this change in traffic pattern to accommodate the maximum number of trackable receive streams supported on the test instrument's dx3 modules; the change does not reflect a limitation on the part of the H3C switch.
What We Found: Throughput for all frame sizes was identical when routing to nearly 1 million networks as when routing to just 1 network per port. The H3C data center core switch exhibited wire-speed throughput for all frame sizes of 102 bytes and larger, regardless of the number of routes and the prefix lengths involved.
Figure 3 presents throughput results from the routing scalability tests. Although the industry-standard methodology calls for seven frame lengths to be tested, H3C also included two extra frame sizes. Engineers included 102-byte frames to demonstrate that theoretical maximum rates are possible with this relatively short frame size, not far up from the 64-byte minimum. Engineers also tested with 9,216-byte jumbo frames to demonstrate that the H3C switch can handle the large payloads commonly found in data-center applications.

Figure 3: BGP routing scalability throughput, nearly 1 million routes and 768 100G Ethernet ports
IPv4 Unicast Latency and Jitter With BGP Routing

Why It Matters: For some applications, latency and jitter are even more important than throughput. Spikes in latency and/or jitter can degrade performance not only for voice and video, but also for the mission-critical applications used in some vertical markets. For example, in the financial-services sector, many trading applications require the lowest possible latency to ensure rapid order fulfillment. Further, while throughput is a measure of switch performance at its maximum speed, latency and jitter affect application performance at every speed, regardless of load.

Network device architecture is yet another factor that may affect latency and jitter. Some top-of-rack switches may reduce latency and jitter by using cut-through designs that begin forwarding a frame as soon as they receive the very beginning of the frame (the Ethernet header). In contrast, virtually all data center core switches, including the H3C S12516X-AF, use a store-and-forward design that caches the entire incoming frame before deciding where to forward it. A store-and-forward design is mandatory in situations involving routing, as was the case in these tests, since the switch must look beyond the Ethernet header before making a forwarding decision.

How We Tested: As required by RFC 2544, we measured latency at the throughput rate in all tests. All network devices have a load-vs.-delay curve that spikes up sharply as offered loads approach the throughput rate. In production networks, where utilization rates seldom hit 100 percent, both average and maximum delay will be lower than results obtained from RFC 2544 tests. To illustrate this, we also ran an additional test to measure delay using 64-byte frames at 65 percent of line rate, slightly below the throughput rate of 71 percent of line rate. This additional test served to show how delay is lower just below the throughput rate.
Because we measured latency and jitter at the same time as throughput, the same test conditions applied: 768 100G Ethernet interfaces, with Spirent TestCenter offering traffic in a fully meshed pattern.

What We Found: Latency and jitter are very consistent across test cases for most frame sizes, both for average and maximum measurements. In addition, variations in latency and jitter are relatively small across most frame sizes. Delay for 64-byte frames, when measured just below the throughput rate, is significantly lower than at the throughput rate.
Table 1 presents average and maximum latency and jitter measurements for the H3C data center core switch when handling IPv4 routed traffic. Again, the switch ran BGP routing and learned 1 unique route on each of its 768 100G Ethernet ports.

Table 1: IPv4 with BGP routing latency and jitter (minimum, average, and maximum latency and average and maximum jitter, in microseconds, per frame size)

Table 2 shows the differences in delay for 64-byte frames between traffic offered at and just below the throughput rate. As the table shows, there are significant reductions in delay and jitter at the slightly lower rate, especially for maximum delay and jitter measurements.

Table 2: Delay for 64-byte frames at 65% load and 71% load for IPv4 with BGP routing (minimum, average, and maximum delay and average and maximum jitter, in microseconds, per intended load)
Routing Scalability Latency and Jitter With BGP Routing

Why It Matters: In a network, frames don't know whether they are being switched or routed; latency and jitter are still significant factors in application performance. If anything, the extra overhead involved with lookups in large routing tables may increase latency and jitter, making measurement of these metrics even more critical than in situations with little or no routing.

How We Tested: We measured latency and jitter at the same time as throughput, using the same test conditions: 768 100G Ethernet interfaces, with Spirent TestCenter offering traffic in a port-pair pattern. The test instrument advertised nearly 1 million routes to the H3C switch, including nearly 750,000 networks with a /24 prefix length. The remaining networks, approximately 250,000 in number, consisted of a pseudorandom distribution of prefix lengths between /8 and /32. As in the scenario with 1 route, test engineers also compared delay and jitter for 64-byte frames at the throughput rate and just below the throughput rate.

What We Found: Even when routing traffic to nearly 1 million unique routes, latency and jitter for the H3C data center core switch remained very similar to tests with just 1 route per port. In addition, variations in latency and jitter are relatively small across most frame sizes. Delay for 64-byte frames, when measured just below the throughput rate, is significantly lower than at the throughput rate.

Table 3 presents average and maximum latency and jitter measurements for the H3C data center core switch.

Table 3: BGP routing scalability latency and jitter (minimum, average, and maximum latency and average and maximum jitter, in microseconds, per frame size)
Table 4 shows the differences in delay for 64-byte frames between traffic offered at and just below the throughput rate. As the table shows, there are significant reductions in delay and jitter at the slightly lower rate, especially for maximum delay and jitter measurements.

Table 4: Delay for 64-byte frames at 65% load and 71% load for BGP routing scalability (minimum, average, and maximum delay and average and maximum jitter, in microseconds, per intended load)

IPv6 Unicast Throughput With MP-BGP Routing

Why It Matters: Now that the pool of routable IPv4 allocations has been exhausted, enterprises and service providers are turning in record numbers to IPv6. This naturally raises the question of whether throughput and routing performance will be the same as with IPv4 traffic.

How We Tested: We measured throughput by stressing both the data-plane fabric and control-plane routing capabilities of the H3C switch using IPv6 traffic. RFC 5180 builds upon the foundation in RFCs 2544 and 2889 to describe an IPv6-specific data-plane stress test. Conceptually, RFC 5180 is very similar to the previous work for IPv4, especially in its use of maximally stressful IPv6 traffic patterns to measure throughput. To load up the control plane, engineers configured Spirent TestCenter to emulate 768 Multiprotocol Border Gateway Protocol (MP-BGP) routing peers, each using a unique Autonomous System Number (ASN). Each Spirent MP-BGP router brought up a peering session with the H3C S12516X-AF switch, then advertised a total of 768 unique routes.

The Spirent test tool then offered fully meshed IPv6 traffic between all networks learned using MP-BGP. A fully meshed pattern means each port exchanges traffic with all other ports. This is the most stressful traffic pattern for a switch fabric. Test engineers determined the throughput rate: the highest speed at which the switch correctly forwarded all traffic, in sequence and without frame loss.
Frame sizes ranged from 86 bytes to the Ethernet maximum of 1,518, and beyond to 9,216-byte jumbo frames. Engineers used a duration of 60 seconds for each test. For the IPv6 tests, test engineers used a minimum frame size of 86 bytes rather than 64 bytes to accommodate IPv6's larger header size and the signature field added to each test frame by the Spirent test instrument.

Note that the choice of 768 routes is due to a limit in the number of trackable receive streams supported by the Spirent dx3 test modules, and not of the H3C switch's routing capacity. Other Spirent test modules also support higher trackable stream counts. With the dx3 module, a higher route count also would have been possible using fewer than 768 ports.
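The 86-byte minimum can be accounted for as follows. This is a plausible breakdown, assuming a UDP test payload and a 20-byte instrument signature field; neither assumption is stated in the test description above.

```python
# Plausible accounting for the 86-byte IPv6 minimum frame size.
# Assumes a UDP payload carrying a 20-byte instrument signature (assumption).
ETH_HEADER = 14   # destination MAC + source MAC + EtherType
IPV6_HEADER = 40  # fixed IPv6 header (vs. 20 bytes for a basic IPv4 header)
UDP_HEADER = 8
SIGNATURE = 20    # per-frame signature added by the test instrument (assumed)
ETH_FCS = 4       # frame check sequence

min_ipv6_frame = ETH_HEADER + IPV6_HEADER + UDP_HEADER + SIGNATURE + ETH_FCS
print(min_ipv6_frame)  # 86
```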
What We Found: In every test case involving frames of 102 bytes and larger, the H3C switch moved IPv6 traffic at the theoretical maximum rate to all 768 100G Ethernet ports, with zero frame loss. These tests involved traffic in a fully meshed pattern, the most stressful possible load, and MP-BGP routing on all ports. In some IPv6 tests, traffic moved so fast that the switch processed more than 100 million frames per second per port simultaneously, on each of 768 ports. Aggregate layer-1 throughput was nearly 77 terabits per second.

Figure 4 presents results from the IPv6 with MP-BGP throughput tests. Although the industry-standard methodology calls for seven frame lengths to be tested, H3C also included two extra frame sizes. Engineers included 102-byte frames to demonstrate that theoretical maximum rates are possible with this relatively short frame size, not far up from the 86-byte minimum. Engineers also tested with 9,216-byte jumbo frames to demonstrate that the H3C switch can handle the large payloads commonly found in data-center applications.

Figure 4: IPv6 throughput with MP-BGP routing, 768 100G Ethernet ports
Figure 5 compares throughput from the IPv4 and IPv6 tests with 102-byte frames and larger (so results are directly comparable). For both address families, throughput is line rate, with zero frame loss. Thus, there is no throughput penalty in moving from IPv4 to IPv6.

Figure 5: IPv4 and IPv6 throughput compared, 768 100G Ethernet ports

IPv6 Unicast Latency and Jitter With MP-BGP Routing

Why It Matters: Latency and jitter can be even more significant considerations than throughput with IPv6 traffic, given the larger header size of the newer address family. Longer frames potentially mean more time being cached and de-cached by the switch, with every extra bit of delay holding the potential to degrade application performance. Thus, a key question in migrating to IPv6 in the data center is whether core switches can deliver the same latency and jitter as for IPv4 traffic.

How We Tested: Engineers measured latency and jitter at the same time as throughput, using the same test conditions: 768 100G Ethernet interfaces, with Spirent TestCenter offering traffic to all ports in a fully meshed pattern. Each Spirent test port emulated one MP-BGP peer and advertised 1 unique route. As required by RFCs 2544 and 5180, we measured latency and jitter at the throughput rate in all tests.
What We Found: Average and maximum latency and jitter for IPv6 traffic are very slightly higher than for IPv4 traffic, typically by less than 1 microsecond for average latency with most frame sizes. The additional size of the IPv6 packet may help explain the slight increase. In addition, variations in latency and jitter are relatively small across most frame sizes.

Table 5 presents results from the IPv6 with MP-BGP latency and jitter tests. With 128-byte and larger frames, latency and jitter are fairly consistent as payload size increases.

Table 5: IPv6 with MP-BGP routing latency and jitter (minimum, average, and maximum latency and average and maximum jitter, in microseconds, per frame size)

EVPN Scalability With VXLAN Tunnels

Why It Matters: Ethernet virtual private networks (EVPNs) offer a powerful new method for creating Layer-2 overlay networks across MPLS and other types of backbones. Through use of VXLAN tunnels, EVPN simplifies network designs and operations for data center interconnect (DCI). Scalability is a key question in assessing EVPN implementations: specifically, the number of concurrent VXLAN tunnels a device can support, and the performance of traffic flowing through EVPN-capable data center core switches. As far as Network Test is aware, this is the largest single-switch EVPN demonstration ever conducted.
How We Tested: The test bed modeled a scenario in which the H3C S12516X-AF connected 768 different sites, each using different EVPN instances and VXLAN tunnels. Figure 6 shows the EVPN/VXLAN test bed topology.

Figure 6: The EVPN with VXLAN test bed

In this configuration, 384 pairs of hosts communicated across EVPN tunnels set up using VXLAN and BGP routing. Although each host resided in a different Layer-3 IP subnet, the hosts reached one another across transports set up with VXLAN tunneling. Engineers configured a loopback interface on the H3C switch whose address served as the VXLAN tunnel endpoint (VTEP) for all VXLAN tunnels. The switch also ran BGP, which specified the loopback address as the source for routing updates and brought up a BGP peering session with each VTEP advertised by the Spirent test instrument.

After bringing up tunnels and BGP peers and advertising networks across the EVPN tunnels, engineers then configured the Spirent test instrument to offer bidirectional traffic streams between all hosts. Engineers measured throughput, latency, and jitter for small, medium, and large frame sizes, but omitted 64-byte frames due to the tunneling encapsulation overhead added by UDP and VXLAN headers.
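The decision to omit 64-byte frames follows from the size of the VXLAN encapsulation. Each original frame is wrapped in an outer Ethernet/IP/UDP/VXLAN stack, a fixed overhead that can be tallied as follows (assuming an IPv4 underlay, which is the common case for VXLAN):

```python
# VXLAN encapsulation overhead per frame (IPv4 underlay assumed).
OUTER_ETH = 14   # outer Ethernet header
OUTER_IPV4 = 20  # outer IPv4 header
OUTER_UDP = 8    # UDP header (VXLAN uses destination port 4789)
VXLAN_HDR = 8    # VXLAN header carrying the 24-bit VNI

overhead = OUTER_ETH + OUTER_IPV4 + OUTER_UDP + VXLAN_HDR
print(overhead)       # 50 bytes added to every tunneled frame
print(64 + overhead)  # a 64-byte frame grows to 114 bytes on the wire
```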
What We Found: The H3C data center core switch seamlessly set up EVPN connectivity among 768 different sites, and moved traffic at wire speed in all test cases with zero frame loss. Latency and jitter were low and consistent across frame sizes.

Figure 7 presents throughput results from the EVPN with VXLAN scalability tests. Although EVPN and VXLAN are control-plane technologies, the test results show no impact on data-plane forwarding capabilities.

Figure 7: EVPN with VXLAN throughput

Table 6 presents latency and jitter measurements from the EVPN tests. In fact, latency and jitter with EVPNs and VXLAN are virtually identical to tests that used BGP routing and 1 million routes, as described in the Routing Scalability Latency and Jitter With BGP Routing section. Since both tests used port-pair traffic patterns, results are directly comparable. Table 7 illustrates the negligible differences in latency and jitter between the BGP routing and EVPN/VXLAN test cases. This table presents differences between the two sets of measurements, in the form of EVPN measurements minus routing measurements.
Table 6: EVPN with VXLAN latency and jitter (minimum, average, and maximum latency and average and maximum jitter, in microseconds, per frame size)

Table 7: EVPN with VXLAN and route scalability latency and jitter compared (minimum, average, and maximum latency deltas and average and maximum jitter deltas, in microseconds, per frame size)

There are two items of note in comparing the two sets of results. First, the differences with the routing scalability tests are minuscule: less than 100 nanoseconds in most cases. Second, the EVPN and VXLAN results show significantly lower maximum jitter than BGP routing, likely because routing control-plane messages add extra overhead.

ISSU/ISSD Failover Times

Why It Matters: Downtime is not an option in modern data centers, especially for core switches handling traffic that may represent vast sums and life-saving data. It's also important to keep switch software up-to-date as security patches and new features become available. To balance these requirements, equipment makers offer in-service software upgrades (ISSU), updating software via redundant modules with little or no change visible to users.

Equally important, however, is the ability to downgrade software versions via in-service software downgrade (ISSD). Downgrades are necessary for a variety of reasons: Orchestration software might need to put all units on the same version of software. Bugs and/or security flaws in software may occur. And network engineers may try an upgrade temporarily to see if a new feature works as expected. For both upgrades and downgrades, the same question applies: What impact will ISSU or ISSD have on user traffic?
How We Tested: This test bed used two H3C S12516X-AF data center core switches, each running an alpha version of Comware, to be upgraded to a released version of Comware. This test involved two core switches; a future software release will support ISSU and ISSD within a single switch chassis with redundant supervisor modules.

Figure 9 illustrates the test bed topology. H3C's Intelligent Resilient Framework (IRF) technology connected the two core switches and maintained control-plane state between them; with IRF, the two core switches appeared to the rest of the network as a single logical switch. Two other H3C switches redundantly connected to each core switch. And the Spirent test instrument attached to each of these switches, emulating hosts sending traffic across the data center backbone. Engineers configured the switches with two sets of VLANs and configured traffic flows within and across VLAN boundaries. The use of separate Layer-2 and Layer-3 flows would determine whether ISSU and ISSD had any impact when switching and routing traffic.

Figure 9: The ISSU/ISSD test bed
Test engineers configured Spirent TestCenter to offer test traffic continuously throughout the test, and then initiated a software upgrade on one of the two core switches. The switch to be upgraded, previously designated as the master, went into backup mode, with the other switch taking over as master. After the upgrade completed, engineers stopped the test instrument and noted any frame loss. Since engineers offered traffic at 1 million frames per second in each direction, it was possible to derive failover time from frame loss, with one lost frame equal to 1 microsecond. Using the display version command, engineers also verified that the software upgrade was complete. Engineers then repeated the same test with a software downgrade back to the original alpha image. Again, engineers derived failover time from frame loss, and verified that software versions had changed.

What We Found: For both ISSU and ISSD, there was zero frame loss on at least two out of four links used in this test. Frame loss did exist on the remaining links, but in minuscule amounts, equivalent to 20 and 28 microseconds for the ISSU test and 1.88 milliseconds for the ISSD test. Thus, in both ISSU and ISSD tests, failover was less than 2 milliseconds, and only on some links, with no service interruption for users on other links.

Table 8 presents results from the ISSU and ISSD test cases.

Test case    Traffic direction    Switched (L2) or routed (L3)    Failover time (usec)
ISSU         North -> South       L2                              20
ISSU         North -> South       L3                              0
ISSU         South -> North       L2                              28
ISSU         South -> North       L3                              0
ISSD         North -> South       L2                              0
ISSD         North -> South       L3                              0
ISSD         South -> North       L2                              1,880
ISSD         South -> North       L3                              0

Table 8: ISSU and ISSD failover times
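The failover-time derivation described above is simple: with traffic offered at a constant 1 million frames per second, each lost frame represents 1 microsecond of interrupted forwarding. A minimal sketch:

```python
# Derive failover time from frame loss at a constant offered rate.
OFFERED_FPS = 1_000_000  # frames per second in each direction

def failover_us(lost_frames):
    return lost_frames * 1e6 / OFFERED_FPS  # microseconds of outage

print(failover_us(20))    # 20.0 us (ISSU, north -> south, L2)
print(failover_us(1880))  # 1880.0 us, i.e. 1.88 ms (ISSD, south -> north, L2)
```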
Conclusion

With this largest-ever test of 100G Ethernet networking, the H3C S12516X-AF sets a new high-water mark for data-center core networking. In an extensive set of stressful benchmark tests, the S12516X-AF pumped traffic through 768 100G Ethernet interfaces and set several records along the way:

- Line-rate, lossless performance for IPv4 and IPv6 traffic at nearly all frame sizes using BGP routing and fully meshed traffic (the most stressful possible test pattern)
- Line-rate, lossless performance using BGP routing to nearly 1 million unique routes
- Virtually no difference in IPv4 and IPv6 performance, so no cost for IPv6 migration
- Record-high scalability for EVPN, with 768 concurrent tunnels established using VXLAN and line-rate, zero-loss performance for every frame size
- Virtually no difference in latency and jitter between BGP routing and EVPN/VXLAN test cases
- Minimal to no impact on user traffic during ISSU and ISSD software upgrades and downgrades

As these test results demonstrate, the H3C S12516X-AF is a highly capable performer even under the most demanding conditions. Such high performance on such an unprecedented scale offers a measure of future-proofing for tomorrow's data center networks. Data centers will continue to grow ever larger, and the H3C S12516X-AF is well positioned to serve as the engine of that growth.
Appendix A: About Network Test

Network Test is an independent third-party test lab and engineering services consultancy. Our core competencies are performance, security, and conformance assessment of networking equipment and live networks. Our clients include equipment manufacturers, large enterprises, service providers, industry consortia, and trade publications.

Appendix B: Hardware and Software Releases Tested

This appendix describes the software versions used on the test bed. Network Test conducted all benchmarks in May 2017 at H3C's labs in Beijing, China.

Component            Version
H3C S12516X-AF       Comware , Feature 2702; Comware , Alpha 0718 (ISSU/ISSD tests only)
Spirent TestCenter

Appendix C: Disclaimer

Network Test Inc. has made every attempt to ensure that all test procedures were conducted with the utmost precision and accuracy, but acknowledges that errors do occur. Network Test Inc. shall not be held liable for damages which may result from the use of information contained in this document. All trademarks mentioned in this document are property of their respective owners.

Copyright 2017 Network Test Inc. All rights reserved.

Network Test Inc.
Via Colinas, Suite 113
Westlake Village, CA USA
info@networktest.com