Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

1 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture June 2017 A DELL EMC Reference Architecture

Table of contents
1 vCloud NFV and ScaleIO Overview
  1.1 Converged Infrastructure
  1.2 ScaleIO Storage System Overview
    1.2.1 ScaleIO Features
    1.2.2 ScaleIO Components
    1.2.3 ScaleIO Traffic
    Network Infrastructure
2 Versa SD-WAN NFV
  2.1 Overview
  2.2 Use Cases
  2.3 vCloud and Versa SD-WAN High Level Deployment Description
    VMware VDS (vSphere Distributed Switch) Deployment and Configuration
  2.4 Snapshot for Deployment
  2.5 Versa Director Web UI
    2.5.1 Director Context
    2.5.2 Appliance Context
  2.6 Deploy Versa Flex NFV on Dell EPC-5000 for Branch Office
3 Performance Tuning and Hardware Acceleration
  3.1 Performance Tunings
  3.2 BIOS Settings
  3.3 ESXi Hypervisor Settings
  3.4 ESXi Guest VM Settings
4 Versa SD-WAN Performance Index and Test Cases
  4.1 Abstract
  4.2 Pre-Requisites / Assumptions
  4.3 Versa VNF Settings
  4.4 Test Cases Guidelines
  4.5 Use Iperf to test the link bandwidth between client and server VMs
  Routing Protocol Interop Testing - OSPF, BGP, and BFD interop
  DHCP Server and Relay
  NAPT - Network Address Port Translation and other NAT flavors
  QoS - Objective
  Application ID
  IPSec VPN (LAN to LAN) - Objective

  4.12 Performance Tests - UDP Throughput, Latency, and PDV
  TCP Throughput, Latency, and PDV
  Concurrent TCP connections
  TCP Session Setup Rate
  Security Tests - Security Zones & DDoS protection
  Service based security policies
  Logging Tests - SNMP, IPFIX, Syslog
  Rate Limiting and application control
  URL and Application Filtering
  Redundancy Tests - Control Link Failure (Optional)
  Network Element Configurations
  Test Equipment configuration and software versions
  SR-IOV/DPDK/NUMA requirement for Versa
  SR-IOV in vSphere
  NUMA ALIGNMENT BEST PRACTICES

4 Copyright 2017 Dell Inc. or its subsidiaries All Rights Reserved. Except as stated below, no part of this document may be reproduced, distributed, or transmitted in any form or by any means, without express permission of Dell. You may distribute this document within your company or organization only, without alteration of its contents. THIS DOCUMENT IS PROVIDED AS-IS, AND WITHOUT ANY WARRANTY, EXPRESS OR IMPLIED. IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE SPECIFICALLY DISCLAIMED. PRODUCT WARRANTIES APPLICABLE TO THE DELL PRODUCTS DESCRIBED IN THIS DOCUMENT MAY BE FOUND AT: Performance of network reference architectures discussed in this document may vary with differing deployment conditions, network loads, and the like. Third party products may be included in reference architectures for the convenience of the reader. Inclusion of such third party products does not necessarily constitute Dell s recommendation of those products. Please consult your Dell representative for additional information. Trademarks used in this text: Dell, the Dell logo, PowerEdge, PowerVault, PowerConnect, OpenManage, EqualLogic, Compellent, KACE, FlexAddress, Force10 and Vostro are trademarks of Dell Inc. EMC VNX, and EMC Unisphere are registered trademarks of Dell Inc Other Dell trademarks may be used in this document. VMware, VMware vcloud, Virtual SMP, vmotion, vcenter, vsphere, VMware vsphere Distributed Switch ' VMware vcloud NFV, VMware ESX, VMware NSX Virtual Switch, and VMware ESXi are registered trademarks or trademarks of VMware, Inc. in the United States or other countries. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and/or names or their products and are the property of their respective owners. Dell disclaims proprietary interest in the marks and names of others. 4 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

1 vCloud NFV and ScaleIO Overview
1.1 Converged Infrastructure
A converged infrastructure refers to a rack that contains the following components: compute servers, network switches, and storage. These components are connected through ToR (Top of Rack) switches to provide a unified management and communication layer for the underlying services and software, such as NFV (Network Function Virtualization). This Reference Architecture (RA) is built with VMware vCloud NFV as the Virtual Infrastructure Management (VIM) layer, ScaleIO as the host-attached, hyper-converged storage system, and Versa SD-WAN NFV as the NFV application that provides software-defined wide area network (SD-WAN) services. The rack itself has nine Dell R730 servers, one Dell management switch, and two Dell ToR switches (see Figure 1 for details).

VMware block level diagram
The interconnections between the Dell switches comprise the following sections.
LACP port channel between spine and leaf switches (fo 0/8 - leaf1-s6000 fortyGigE 0/8):
interface fortyGigE 0/8
 no ip address
 !
 port-channel-protocol LACP
  port-channel 1 mode active
 no shutdown

Ethernet port channel between the two leaf switches (fo 0/0 - spine2-s6000 fortyGigE 0/0):
interface Port-channel 127
 no ip address
 channel-member fortyGigE 0/0,4
 no shutdown
The interconnections between the Dell servers and leaf switches are illustrated in Figure 2.
Physical Topology and Interconnection between Dell server and switch

Table 1 shows the port map configuration between the Dell servers and the two leaf switches.
Port map for Dell Server and Switch Interconnection
server6 4 p1p1 leaf1 Te0/75 SIO-1
idrac: p1p2 leaf2 Te0/75 SIO-2
6 p2p1 leaf1 Te0/74 po74
7 p2p2 leaf2 Te0/74 po74 MGMT
10 p4p1 leaf1 Te0/91 po90
11 p4p2 leaf2 Te0/91 po90
12 p5p1 leaf1 Te0/90 po90 HostIO
13 p5p2 leaf2 Te0/90 po90
server7 4 p1p1 sw1 Te0/77 SIO-1
idrac: p1p2 sw2 Te0/77 SIO-2
6 p2p1 sw1 Te0/76 po76
7 p2p2 sw2 Te0/76 po76 MGMT
10 p4p1 sw1 Te0/93 po92
11 p4p2 sw2 Te0/93 po92
12 p5p1 sw1 Te0/92 po92 HostIO
13 p5p2 sw2 Te0/92 po92
server8 4 p1p1 sw1 Te0/79 SIO-1
idrac: p1p2 sw2 Te0/79 SIO-2
6 p2p1 sw1 Te0/78 po78
7 p2p2 sw2 Te0/78 po78 MGMT
10 p4p1 sw1 Te0/95 po94
11 p4p2 sw2 Te0/95 po94
12 p5p1 sw1 Te0/94 po94 HostIO
13 p5p2 sw2 Te0/94 po94
The solution in this reference architecture consists of a rack comprised of one Dell Management Switch, a pair (2) of Dell Leaf/ToR Switches (virtualized to behave as a single switch), and nine Dell R730 Servers. Each server has four dual-port 10GbE NICs configured to support bonded 40GbE HostIO, 20GbE Management, and fault-tolerant ScaleIO networks.
ScaleIO interfaces
The two ScaleIO interfaces from each Dell server are not combined into one Ethernet port channel. They are configured as tagged or trunk VLAN interfaces. The ScaleIO software service module running on the Dell servers manages the interfaces as two independent IP interfaces and balances the traffic at the IP layer.
interface Vlan 30
 description SIO Data x
 no ip address

 tagged TenGigabitEthernet 0/16-63,65,67,69,71,73,75,77,79,97,
interface TenGigabitEthernet 0/65
 no ip address
 mtu
 switchport
 no shutdown
These two interfaces from each Dell server are connected to the two Dell leaf switches for high availability (HA). If one interface or one Dell switch is shut off, the communication between the server and the external network continues with half the bandwidth.
Management I/O interfaces
Two 10GbE interfaces from each Dell server are combined as one LACP Ethernet port channel.
interface TenGigabitEthernet 0/74
 no ip address
 mtu 12000
 !
 port-channel-protocol LACP
  port-channel 74 mode active
 no shutdown
interface Port-channel 74
 no ip address
 mtu
 switchport
 spanning-tree rstp edge-port
 rate-interval 5
 vlt-peer-lag port-channel 74
 no shutdown
These two interfaces from the Dell server are connected to the two Dell leaf switches for HA. If one interface or one Dell switch is shut off, the communication between the server and the external network continues with half the bandwidth.
Host IO interfaces
Four 10GbE interfaces from each Dell server are configured as one combined LACP port channel. Two interfaces are connected to each Dell leaf switch for HA.
interface TenGigabitEthernet 0/88
 no ip address
 mtu 12000
 !
 port-channel-protocol LACP
  port-channel 88 mode active
 no shutdown

leaf1-s6000#sh run int po 88
!
interface Port-channel 88
 no ip address
 mtu
 switchport
 spanning-tree rstp edge-port
 rate-interval 5
 vlt-peer-lag port-channel 88
 no shutdown
The Versa NFV applications run on the host IO interfaces.
1.2 ScaleIO Storage System Overview
ScaleIO is a Dell EMC storage product that provides a unified and hyper-converged storage system directly attached to servers/hypervisors. Each server has direct-attached storage disk arrays such as ATA or SATA disks. These disks can be configured for ScaleIO Data Server (SDS), ScaleIO Data Client (SDC), Meta Data Manager (MDM), and Tie Breaker (TB) roles through the ScaleIO software. ScaleIO can provide a shared, unified, auto-distributed, and replicated storage system to the entire vCloud cluster.
ScaleIO Block Level Diagram
ScaleIO creates a server-based SAN from direct-attached server storage to deliver flexible and scalable performance and capacity on demand. As an alternative to a traditional SAN infrastructure, ScaleIO combines HDDs, SSDs, and PCIe flash cards to create a virtual pool of block storage with varying performance tiers. ScaleIO provides enterprise-grade data protection, multi-tenant capabilities, and add-on enterprise features such as QoS, thin provisioning, and snapshots. ScaleIO is hardware-agnostic, supports physical and virtual application servers, and has been proven to deliver significant TCO savings vs. traditional SAN.

11 1.2.1 ScaleIO Features Massive Scale - ScaleIO can scale from three to 1024 nodes. The scalability of performance is linear with regard to the growth of the deployment. As devices or nodes are added, ScaleIO rebalances data evenly, which results in a balanced and fully utilized pool of distributed storage. Extreme Performance - Every device in a ScaleIO storage pool is used to process I/O operations. This massive I/O parallelism eliminates bottlenecks. Throughput and IOPS scale in direct proportion to the number of storage devices added to the storage pool. Performance and data protection optimization is automatic. Component loss triggers a rebuild operation to preserve data protection. Addition of a component triggers a rebalance to increase available performance and capacity. Both operations occur in the background with no downtime to applications and users. Compelling Economics - ScaleIO does not require a Fibre Channel fabric or dedicated components like HBAs. There are no forklift upgrades for outdated hardware. Failed and outdated components are simply removed from the system. ScaleIO can reduce the cost and complexity of the solution resulting in greater than 60 percent TCO savings vs. traditional SAN. Unparalleled Flexibility - ScaleIO provides flexible deployment options. In a two-layer deployment, the applications and storage are installed in separate servers. A two-layer deployment allows compute and storage teams to maintain operational autonomy. In a hyper-converged deployment, the applications and storage are installed on the same set of servers. This provides the lowest footprint and cost profile. The deployment model can also be mixed to provide independent scaling of compute and storage resources. ScaleIO is infrastructure agnostic. It can be used with mixed server brands, virtualized and bare metal operating systems, and mixed storage media types (HDDs, SSDs, and PCIe flash cards). Supreme Elasticity - Storage and compute resources can be increased or decreased whenever the need arises. The system automatically rebalances data on the fly. Additions and removals can be done in small or large increments. No capacity planning or complex reconfiguration is required. Rebuild and rebalance operations happen automatically without operator intervention. Essential Features for Enterprises and Service Providers - With ScaleIO, you can limit the amount of performance (IOPS or bandwidth) that selected customers can consume. The limiter allows resource usage to be imposed and regulated, preventing application hogging scenarios. ScaleIO offers instantaneous, writeable snapshots for data backups and cloning. DRAM caching enables you to improve read performance by using server RAM. Any group of servers hosting storage that may go down together (such as SDSs residing on nodes in the same physical enclosure) can be grouped together in a fault set. A fault set can be defined to ensure data mirroring occurs outside the group, improving business continuity. Volumes can be thin provisioned, providing on-demand storage as well as faster setup and startup times. ScaleIO also provides multi-tenant capabilities via protection domains and storage pools. Protection Domains allow you to isolate specific servers and data sets. Storage Pools can be used for further data segregation, tiering, and performance management. 
For example, data that is accessed very frequently can be stored in a flash-only storage pool for the lowest latency, while less frequently accessed data can be stored in a low-cost, high-capacity pool of spinning disks. 11 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture
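The volume, snapshot, and QoS features described above are normally driven from the ScaleIO GUI or the scli command-line tool. The snippet below is only an illustrative sketch of that workflow; the protection domain, pool, and volume names are hypothetical, and the exact scli option names vary by ScaleIO release, so confirm them with scli --help before use.
# Illustrative only -- object names are placeholders and flag spellings should be verified
scli --login --username admin
# Carve a thin-provisioned volume out of an existing protection domain / storage pool
scli --add_volume --protection_domain_name pd1 --storage_pool_name ssd_pool --size_gb 100 --volume_name vol_nfv01 --thin_provisioned
# Take a writeable snapshot of that volume for backup or cloning
scli --snapshot_volume --volume_name vol_nfv01 --snapshot_name vol_nfv01_snap1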

12 1.2.2 ScaleIO Components Hyper-converged ScaleIO Infrastructure Figure 4 is a logical illustration of a two-layer ScaleIO deployment. Systems running the ScaleIO Data Clients (SDCs) reside on different physical servers than those running the ScaleIO Data Servers (SDSs). Each volume available to an SDC is distributed across many systems running the SDS. The Meta Data Managers (MDMs) reside outside the data path, and are only consulted by SDCs when an SDS fails or when the data layout changes. Hyper-converged deployments, rebuild, and rebalance operations would be represented as a complete graph, where all nodes are logically connected to all other nodes (not shown). ScaleIO Data Servers The ScaleIO Data Server (SDS) serves raw local storage in a server as part of a ScaleIO cluster. The SDS is the server-side software component. A server that takes part in serving data to other nodes has an SDS installed on it. A collection of SDSs forms the ScaleIO persistence layer. SDSs maintain redundant copies of the user data, protect each other from hardware loss, and reconstruct data protection when hardware components fail. SDSs may leverage SSDs, PCIe based flash, spinning media, RAID controller write caches, available RAM, or a combination of the above. SDSs may run natively on Windows or Linux, or as a virtual appliance on ESX. A ScaleIO cluster may have 1024 nodes, each running an SDS. Each SDS requires only 500 megabytes of RAM. SDS components can communicate directly with each other. They are fully meshed and optimized for rebuild, rebalance, and I/O parallelism. Data layout between SDS components is managed through storage pools, protection domains, and fault sets. 12 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

13 Client volumes used by the SDCs are placed inside a storage pool. Storage pools are used to logically aggregate types of storage media at drive-level granularity. Storage pools provide varying levels of storage service priced by capacity and performance. Protection from node, device, and network connectivity failure is managed at node-level granularity through protection domains. Protection domains are groups of SDSs where replicas are maintained. Fault sets allow large systems to tolerate multiple simultaneous failures by preventing redundant copies from residing in a single rack or chassis. ScaleIO Data Clients The ScaleIO Data Client (SDC) allows an operating system or hypervisor to access data served by ScaleIO clusters. The SDC is a client-side software component that can run natively on Windows, Linux, or ESX. It is analogous to a software initiator, but is optimized to use networks and endpoints in parallel. The SDC provides the operating system or hypervisor running it access to logical block devices called volumes. A volume is analogous to a LUN in a traditional SAN. Each logical block device provides raw storage for a database or a file system. The SDC knows which SDS endpoint to contact based on block locations in a volume. The SDC consumes distributed storage resources directly from other systems running ScaleIO. SDCs do not share a single protocol target or network end-point with other SDCs. SDCs distribute load evenly and autonomously. The SDC is extremely lightweight. SDC to SDS communication is inherently multi-pathed across SDS storage servers, in contrast to approaches like iscsi, where multiple clients target a single protocol endpoint. The SDC allows for shared volume access for uses such as clustering. The SDC does not require an iscsi initiator, a fibre channel initiator, or a FCoE initiator. Each SDC requires only 50 megabytes of RAM. The SDC is optimized for simplicity, speed, and efficiency. META Data Managers Meta Data Managers (MDMs) control the behavior of the ScaleIO system. They determine and provide the mapping between clients and their data, keep track of the state of the system, and issue reconstruct directives to SDS components. MDMs establish the notion of quorum in ScaleIO. They are the only tightly clustered component of ScaleIO. They are authoritative, redundant, and highly available. They are not consulted during I/O operations or during SDS to SDS operations like rebuild and rebalance. When a hardware component fails, the MDM cluster will begin an auto-healing operation within seconds. 13 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture
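On a Linux server running the SDC described above, mapped ScaleIO volumes surface as local block devices. A quick connectivity check is sketched below; the drv_cfg path and scini device naming follow common ScaleIO Linux installs and should be treated as assumptions to verify on your build.
# Query the MDMs this SDC is registered with and the volumes mapped to it
/opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms
/opt/emc/scaleio/sdc/bin/drv_cfg --query_vols
# Mapped volumes appear as scini block devices usable by a file system or database
lsblk | grep -i scini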

14 1.2.3 ScaleIO Traffic Traffic Types ScaleIO performance, scalability, and security can benefit when the network architecture reflects ScaleIO traffic patterns. This is particularly true in large ScaleIO deployments. The software components that make up ScaleIO (SDCs, SDSs, and MDMs) converse with each other in a predictable way. Architects designing a ScaleIO deployment should be aware of these traffic patterns in order to make informed choices about the network layout. i. Figure 5 is an illustration of how the ScaleIO software components communicate. A ScaleIO system has many SDCs, SDSs, and MDMs, for simplification, this illustration groups these components together. The arrows from the SDSs and MDMs that point back to themselves represent communication to other SDSs and MDMs. The traffic patterns are the same regardless of the physical location of an SDC, SDS, or MDM ScaleIO Traffics Between SDS, SDC, and MDM ScaleIO Data Client (SDC) to ScaleIO Data Server (SDS) Traffic between the SDCs and the SDSs forms the bulk of front-end storage traffic. Front-end storage traffic includes all read and write traffic arriving at or originating from a client. This network requires high throughput. ScaleIO Data Server (SDS) to ScaleIO Data Server (SDS) Traffic between SDSs forms the bulk of the back-end storage traffic. Back-end storage traffic includes writes that are mirrored between SDSs, rebalance traffic, and rebuild traffic. This network requires high throughput. Although not required, there may be situations where isolating front-end and back-end traffic for the storage network may be ideal. This is required in two-layer deployments where the storage and server teams act independently. Meta Data Manager (MDM) to Meta Data Manager (MDM) MDMs coordinate operations inside the cluster. They issue directives to ScaleIO to rebalance, 14 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

15 rebuild, and redirect traffic. MDMs are redundant, and must communicate with each other to maintain a shared understanding of data layout. MDMs also establish the notion of quorum in ScaleIO. MDMs do not carry or directly interfere with I/O traffic and do not require the same level of throughput required for SDS or SDC traffic. MDM to MDM traffic requires a stable, reliable, low latency network. MDM to MDM traffic is considered back-end storage traffic. ScaleIO supports the use of one or more networks dedicated to traffic between MDMs. At least two 10-gigabit links should be used for each network connection. Meta Data Manager (MDM) to ScaleIO Data Client (SDC) The master MDM must communicate with SDCs when the data layout changes. This occurs when the SDSs that host storage for the SDCs are added, removed, placed in maintenance mode, or go offline. Communication between the Master MDM and the SDCs is asynchronous. MDM to SDC traffic requires a reliable, low latency network. MDM to SDC traffic is considered frontend storage traffic. Meta Data Manager (MDM) to ScaleIO Data Server (SDS) The master MDM must communicate with SDSs to issue rebalance and rebuild directives. MDM to SDS traffic requires a reliable, low latency network. MDM to SDS traffic is considered back-end storage traffic Network Infrastructure Leaf-spine (also called Clos) is the most common network infrastructure in use today. In modern datacenters, leaf-spine topologies are preferred over legacy hierarchical topologies. Dell EMC recommends the use of a non-blocking network design. Non-blocking network designs allow the use of all switch ports concurrently, without blocking some of the network ports to prevent message loops. Therefore, Dell EMC strongly recommends against the use of Spanning Tree Protocol (STP) on a network hosting ScaleIO. In order to achieve maximum performance and predictable quality of service, do not oversubscribe the network. Leaf-Spine Network Topologies A two-tier leaf-spine topology provides a single hop between leaf switches and provides a large amount of bandwidth between end-points. A properly sized leaf-spine topology eliminates oversubscription of uplink ports. While very large datacenters may use a three-tier leaf-spine topology, for simplicity, this paper focuses on two tier leaf-spine deployments. In a leaf-spine topology, each leaf switch is attached to all spine switches. Leaf switches do not need to be directly connected to other leaf switches. Spine switches do not need to be directly connected to other spine switches. 15 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

16 In most instances, Dell EMC recommends using a leaf-spine network topology. This is because: ScaleIO can scale out to 1024 nodes. Leaf-spine architectures are future proof. They facilitate scale-out deployments without having to re-architect the network. A leaf-spine topology allows the use of all network links concurrently. Legacy hierarchical topologies must employ technologies like Spanning Tree Protocol (STP), which blocks some ports to prevent loops. Properly sized leaf-spine topologies make latency more predictable by eliminating oversubscription of uplinks. A two-tier leaf-spine network topology. Each leaf switch has multiple paths to every other leaf switch and all links are active. This provides increased throughput between devices on the network. Leaf switches may be connected to each other for use with MLAG (not shown). Ethernet Considerations Jumbo Frames While ScaleIO supports jumbo frames, enabling jumbo frames can be challenging in some network infrastructures. Inconsistent implementation of jumbo frames by various network components can lead to performance problems that are difficult to troubleshoot. When jumbo frames are in use, they must be enabled on every network component used by ScaleIO infrastructure, including the hosts and switches. Enabling jumbo frames allows more data to be passed in a single Ethernet frame. This decreases the total number of Ethernet frames and the number of interrupts that must be processed by each node. If jumbo frames are enabled on every component in your ScaleIO infrastructure, there may be a performance benefit of approximately 10%, depending on your workload. Because of the relatively small performance gains and potential for performance problems, Dell EMC recommends leaving jumbo frames disabled initially. Enable jumbo frames only after you have a working and stable setup and have confirmed your infrastructure can support their use. Ensure that 16 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

17 jumbo frames are configured on all nodes along each path. Utilities like the Linux tracepath command can be used to discover MTU sizes along a path. VLAN Tagging ScaleIO supports native VLANs and VLAN tagging on the connection between the server and the access or leaf switch. When measured by ScaleIO engineering, both options provided the same level of performance. Link Aggregation Groups Link Aggregation Groups (LAGs) and Multi-Chassis Link Aggregation Groups (MLAGs) combine ports between end-points. The end-points can be a switch and a host with LAG or two switches and a host with MLAG. Link aggregation terminology and implementation varies by switch vendor. MLAG functionality on Cisco Nexus switches is called Virtual Port Channels (vpc). LAGs use the Link Aggregation Control Protocol (LACP) for setup, tear down, and error handling. LACP is a standard, but there are many proprietary variants. Regardless of the switch vendor or the operating system hosting ScaleIO, LACP is recommended when link aggregation groups are used. The use of static link aggregation is not recommended. Link aggregation can be used as an alternative to IP-level redundancy, where each physical port has its own IP address. Link aggregation can be simpler to configure, and useful in situations where IP address exhaustion is an issue. Link aggregation must be configured on both the node running ScaleIO and the attached network equipment. IP-level redundancy is slightly preferred over link aggregation, but ScaleIO is resilient and high performance regardless of the choice of IP-level redundancy or link aggregation. Performance of SDSs and SDCs when MLAG is in use is close to the performance of IP-level redundancy. The choice of MLAG or IP-level redundancy for SDSs and SDCs should therefore be considered an operational decision. LACP LACP sends a message across each physical network link in the aggregated group of network links on a periodic basis. This message is part of the logic that determines if each physical link is still active. The frequency of these messages can be controlled by the network administrator using LACP timers. LACP timers can typically be configured to detect link failures at a fast rate (one message per second) or a normal rate (one message every 30 seconds). When an LACP timer is configured to operate at a fast rate, corrective action is taken quickly. Additionally, the relative overhead of sending a message every second is small with modern network technology. LACP timers should be configured to operate at a fast rate when link aggregation is used between a ScaleIO SDS and a switch. To establish an LACP connection, one or both of the LACP peers must be configured to use active mode. It is therefore recommended that the switch connected to the ScaleIO node be configured to use active mode across the link. 17 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture
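Following the jumbo-frame guidance above, a simple way to confirm that a 9000-byte MTU is honored end to end between two Linux ScaleIO nodes is a don't-fragment ping plus tracepath. The 10.0.30.11 address below is only a placeholder for a peer's ScaleIO data IP.
# 8972 = 9000 MTU - 20 (IP header) - 8 (ICMP header); -M do sets the DF bit,
# so the ping fails instead of silently fragmenting if any hop has a smaller MTU
ping -M do -s 8972 -c 3 10.0.30.11
# Report the path MTU discovered at each hop toward the peer
tracepath 10.0.30.11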

18 Load Balancing When multiple network links are active in a link aggregation group, the end-points must choose how to distribute traffic between the links. Network administrators control this behavior by configuring a load balancing method on the end-points. Load balancing methods typically choose which network link to use based on some combination of the source or destination IP address, MAC address, or TCP/UDP port. This load-balancing method is referred to as a hash mode. Hash mode load balancing aims to keep traffic to and from a certain pair of source and destination addresses or transport ports on the same physical link, provided that link remains active. The recommended configuration of hash mode load balancing depends on the operating system in use. If a node running an SDS has aggregated links to the switch and is running Windows, the hash mode should be configured to use Transport Ports. This mechanism uses the source and destination TCP/UDP ports and IP addresses to load balance between physical network links. If a node running an SDS has aggregated links to the switch and is running VMware ESX, the hash mode should be configured to use Source and destination IP address or Source and destination IP address and TCP/UDP port. The MDM Network Although MDMs do not reside in the data path between hosts (SDCs) and their distributed storage (SDSs), they are responsible for maintaining relationships between themselves to keep track of the state of the cluster. MDM to MDM traffic is therefore sensitive to network events that impact latency, such as the loss of a physical network link in an MLAG. It is recommended that MDMs use IP-level redundancy on two or more network segments rather than MLAG. The MDMs may share one or more dedicated MDM cluster networks. MDMs are redundant. So, ScaleIO can survive not just an increase in latency, but loss of MDMs. The use of MLAG to a node hosting an MDM will work. However, if you require the use of MLAG on a network that carries MDM to MDM traffic, please work with a Dell EMC ScaleIO representative to ensure you have chosen a robust design. ScaleIO deployed on vcloud NFV using VDS (vsphere Distributed Switch) A vcenter is configured with the following clusters, MDM Cluster, SDS Cluster, and SDC Cluster. These clusters are connected through VDS switches. 18 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

ScaleIO components running on the hypervisor as VMs and kernel modules
VDS connection to VMs
To deploy ScaleIO as hyper-converged storage, every hypervisor server should have both SDS and SDC installed. The SDC is a kernel module, while the SDS/MDM/TB components run as a VM. With this deployment, ScaleIO can distribute data storage on every hypervisor in the vCenter.
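A quick sanity check of this layout from the ESXi shell is sketched below; the VIB and kernel module names are assumptions based on typical ScaleIO-on-ESX installs and should be confirmed against your build.
# Confirm the ScaleIO SDC VIB is installed on the hypervisor
esxcli software vib list | grep -i scaleio
# Confirm the scini kernel module (the SDC driver) is loaded
vmkload_mod -l | grep -i scini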

20 Figure 9 shows the Reference Architecture compute resources in vcenter deployment. Versa SD-WAN NFV deployed on vcenter resource cluster 20 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

This vCenter has six hypervisor servers distributed into two clusters, the resource cluster and the edge cluster. ScaleIO is deployed on these two clusters in a hyper-converged fashion. All six servers have SDS and SDC running.
ScaleIO VM IP address scheme: SDS, SDS, SDS, ScaleIO gateway, Primary MDM, Secondary MDM, Tie Breaker MDM
The MDM and TB VMs can play both SDS and MDM/TB roles. The ScaleIO gateway is a ScaleIO installation and UI management application that can run on any virtual machine.

2 Versa SD-WAN NFV
2.1 Overview
Traditional wide-area network (WAN) deployments always comprise some form of physical circuit layout such as MPLS, X.25, ATM, etc. They are rigid and involve long deployment cycles that can easily take years to complete. Today's applications transported across the WAN are very dynamic, bursty, and eventful. Overcoming the challenges of modern applications and making the network more elastic, agile, and easy to scale is a daunting task that every service provider strives to resolve. Hosting NFV SD-WAN on a converged infrastructure provides a very attractive solution to this problem.
The Benefits of SD-WAN with Integrated Branch Security
Branch or remote office network architectures have barely changed for 15+ years. However, the requirements for branch WANs have changed significantly. Most branch offices connect to the rest of the business through an MPLS circuit or VPN. This approach worked well for many years, but the significant increase in traffic volumes due to video, cloud storage/collaboration, and other high-bandwidth applications has caused the required WAN bandwidth to increase. Options include adding more capacity to the existing WAN circuit or introducing an Internet connection to the branch WAN architecture. The Internet approach can help mitigate the overall congestion of the WAN, but it also increases the complexity, security requirements, and cost of designing and managing the branch network, requiring additional infrastructure, policies, and management/oversight. From a bandwidth management and allocation standpoint, traffic engineering to ensure available bandwidth for given applications requires time-consuming manual mapping of specific traffic to specific circuits. From a security perspective, adding Internet connectivity requires additional security infrastructure, policy creation, and management. Finally, when Internet connectivity is added, the ability to effectively monitor and obtain an overall view of the branch WAN becomes increasingly complex, and ongoing issues are often difficult to mitigate.
Versa SD-WAN Deployment Diagram

2.2 Use Cases
SD-WAN is a virtualized WAN that uses IPSec tunnels across any type of WAN circuit, including low-cost broadband connections from service providers. It is a zero-touch provisioning, over-the-top (OTT) secure IP VPN solution. Versa Networks offers two options for deploying overlay VPN solutions:
- Static: using a standard site-to-site IPSec VPN setup (which is covered as part of this test plan)
- Dynamic: using a zero-touch provisioning framework, which is the Versa Networks SD-WAN solution.
Current IPSec VPN solution challenges include:
- Heavy manual provisioning processes
- Convenient tools are not provided to implement full-mesh and/or partial-mesh connectivity
- Complexity of implementing intranet and extranet topologies
- Increased complexity of implementing global security policies for full/partial-mesh topologies
- Lack of application-based, SLA-based smart routing
The goals behind Versa Networks SD-WAN include:
- Cut circuit cost - leverage low-cost broadband connections - reduce or eliminate dependence on MPLS
- Increase application assurance - intelligent route selection - guarantee bandwidth or latency/jitter for business-critical applications
- Simplify the branch office environment - zero-touch provisioning & management - no full mesh of signaling channels - eliminate appliances & consolidate services
- Strengthen security - Private, Public Cloud & SaaS - direct, cloud-based or backhaul Internet access
The Versa FlexVNFs use zero-touch provisioning to auto-configure themselves (DHCP, VPN setup and route distribution leveraging BGP RR technology, on-the-fly service activation, etc.) and connect the remote branch sites to the rest of the enterprise sites. Note that the Versa SD-WAN solution allows the user to define the logical topology of the IPSec overlay networks; users can configure a hub-and-spoke topology, an any-to-any topology, or anything in between. The SD-WAN architecture concepts are illustrated below. The first diagram highlights the initial installation/deployment process, while the second shows how the user/data plane IPSec tunnels are instantiated, creating the required overlay logical topology between the different remote offices and HQ.

24 The Versa FlexVNF appliance can seamlessly deploy across multiple network environments, tightly integrating with cloud management tools. Using vcloud Director deployment, enterprises and service providers deploy Versa services across a set of compute resources that meet their needs with respect to capacity, scalability, and flexibility. Versa Director acts as the Virtual Network Function (VNF) manager, while individual FlexVNF instances are deployed across the available compute capacity. vcloud Director (VCD). In this scenario, Versa Director interfaces with VCD APIs to dynamically deploy a FlexVNF instance. Versa Director uses pre-deployed images and CMS (Component Management System) networks, and leverages CMS APIs to instantiate the FlexVNF dynamically. Software-defined WAN (SD-WAN). Using a fully integrated set of FlexVNF-based services, enterprises and service providers can deploy a centrally managed set of WAN routing and service nodes. SD-WAN nodes are deployed and integrated using a specially configured FlexVNF node acting as an SD-WAN controller. For SD-WAN, the Versa Director communicates with each FlexVNF instance via the encrypted IPSec tunnels terminated at the controller, and configuration templates are deployed dynamically to enable zero-touch provisioning of the FlexVNF appliance. Versa SD-WAN NFV Architecture Diagram (1) 24 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

Versa SD-WAN NFV Architecture Diagram (2)
2.3 vCloud and Versa SD-WAN High Level Deployment Description
In this RA, we have deployed Versa SD-WAN NFV on a resource cluster in vCloud NFV. The cluster has three Dell servers with a six TB ScaleIO storage volume. We deployed Versa Analytics (VA), Versa Director (VD), Versa FlexVNF (controller), branch offices, and testing servers (client and server).

SD-WAN Reference Architecture Network Topology
Table 2 provides the VLAN and IP address for the networks shown in Figure 14.
SD-WAN Reference Architecture Network (Network / VLAN / IP Address)
1. Mgmt and Internet network - VLAN - x/24
2. North bound interface network - VLAN - x/24
3. NetCONF network - VLAN - x/24
4. Client Network - VLAN - x/24
5. Server Network - VLAN - x/24
For demonstration purposes, we have created test VMs and branch VMs on the same rack. In a real deployment scenario, the red line labeled Internet (Figure 14) would be a network island representing the Internet. Versa SD-WAN NFV establishes an IPSec tunnel between the Versa controller and the branch offices through the Internet (Figure 15).

SD-WAN Connections in the real world
VMware VDS (vSphere Distributed Switch) Deployment and Configuration
The four 10GbE ports on each Dell server are connected to the two Dell switches as an LACP port channel. The VDS uses this port channel and its underlying four NIC ports as its uplink and creates its own LACP bond. This builds an end-to-end NIC bonding channel from switch to server to achieve HA and load balancing across these four Ethernet links.
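Once the LAG is built, the negotiation state can be checked from the ESXi side as well as from the switch. A minimal sketch is below; the command names follow the esxcli LACP namespace introduced with vSphere 5.5, so confirm them with esxcli network vswitch dvs vmware lacp --help on your host.
# Show the LACP configuration and per-uplink negotiation state for the VDS LAGs
esxcli network vswitch dvs vmware lacp config get
esxcli network vswitch dvs vmware lacp status get
# On the Dell leaf switch, the matching port channel can be checked with, for example,
# sh int po 92 (as shown later in this document)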

28 VDS Configuration Launch vsphere web client and bring up the Networking inventory. vsphere web client home page screen shot DS Creation screen shot 28 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

29 Create a VDS switch and configure LACP for the uplink. LACP configure screen shot Click the + sign to bring up the new LACP configuration wizard. LACP uplink configuration screen shot 29 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

Provide a name for the new LACP group. The number of ports is the number of host NIC ports bonded to the switches as an LACP port channel; here we are using the HostIO port channel, which has four NIC ports (Figure 19). The mode should be Active to get HA and load balancing from the LACP bonding channel. Leave the default selection for the load balancing mode. Most switches have these load balance modes enabled by default. On the Dell switches, the following load balance modes are enabled by default.
show load-balance
Load-Balancing Configuration For LAG & ECMP:
IPV4 Load Balancing Enabled
IPV4 FIELDS : source-ipv4 dest-ipv4 vlan protocol L4-source-port L4-dest-port
IPV6 Load Balancing Enabled
IPV6 FIELDS : source-ipv6 dest-ipv6 vlan protocol L4-source-port L4-dest-port
Mac Load Balancing Enabled
MAC FIELDS : source-mac dest-mac vlan ethertype
Load Balancing Configuration for tunnels
ipv4-over-ipv4 Payload header
ipv4-over-ipv6 Payload header
ipv6-over-ipv6 Payload header
ipv6-over-ipv4 Payload header
ipv4-over-gre-ipv4 Payload header
ipv6-over-gre-ipv4 Payload header
ipv4-over-gre-ipv6 Payload header
ipv6-over-gre-ipv6 Payload header
mac-in-mac header based hashing is disabled
TcpUdp Load Balancing Enabled
Notice the VLAN configuration is greyed out in this LACP configuration page. The VLAN on an LACP uplink always uses VLAN trunking; access (untagged) VLANs are not supported.

Clicking the Migrating network traffic to LAGs hyperlink (see Figure 18) brings up the migration wizard.
Screen shot of Migrating VDS to LACP
If some of the port groups underneath this VDS switch were earlier configured for non-LACP traffic, you will need to run this wizard to migrate them to LACP port groups.
Configure LACP port groups
Bring up the VDS port group configuration page.
Screen shot for VDS port group configuration
Fill in the name field or use the default name, and click Next.
VDS port group creation wizard

32 Fill out the VLAN configuration based on the topology (Figure 22) and leave everything else as default. The VLAN information should match the configuration on the switch side. In the Dell switches leaf1 and leaf2, we have configured the following VLANs, leaf1-s6000#sh vlan NUM Status Description Q Ports 50 Active x/24 T Po92(Te 0/92-93) T Po94(Te 0/94-95) T Po98(Te 0/98-99) V Po127(Fo 0/0,4) 60 Active x/24 T Po92(Te 0/92-93) T Po94(Te 0/94-95) T Po98(Te 0/98-99) V Po127(Fo 0/0,4) 70 Active x/24 T Po92(Te 0/92-93) T Po94(Te 0/94-95) T Po98(Te 0/98-99) V Po127(Fo 0/0,4) 80 Active x/24 T Po92(Te 0/92-93) T Po94(Te 0/94-95) T Po98(Te 0/98-99) leaf1-s6000#sh run int vlan 50! interface Vlan 50 description x/24 ip address /24 tagged Port-channel 92,94,98 no shutdown interface Port-channel 92 no ip address mtu 9000 switchport spanning-tree rstp edge-port rate-interval 5 vlt-peer-lag port-channel 92 no shutdown sh int po 92 Port-channel 92 is up, line protocol is up Created by LACP protocol Hardware address is ec:f4:bb:fc:54:4b, Current address is ec:f4:bb:fc:54:4b Interface index is Minimum number of links to bring Port-channel up is 1 Internet address is not set Mode of IPv4 Address Assignment : NONE DHCP Client-ID :ecf4bbfc544b MTU 9000 bytes, IP MTU 8982 bytes LineSpeed Mbit Members in this channel: Te 0/92(U) Te 0/93(U) leaf2-s6000#sh vlan NUM Status Description Q Ports 50 Active x/24 T Po92(Te 0/92-93) T Po93() T Po94(Te 0/94-95) T Po98(Te 0/98-99) V Po127(Fo 0/0,4) 60 Active x/24 T Po92(Te 0/92-93) 32 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

33 T Po94(Te 0/94-95) T Po98(Te 0/98-99) V Po127(Fo 0/0,4) 70 Active x/24 T Po92(Te 0/92-93) T Po94(Te 0/94-95) T Po98(Te 0/98-99) V Po127(Fo 0/0,4) 80 Active x/24 T Po92(Te 0/92-93) T Po94(Te 0/94-95) T Po98(Te 0/98-99) V Po127(Fo 0/0,4) #sh run int vlan 50! interface Vlan 50 description x/24 ip address /24 tagged Port-channel 92-94,98 no shutdown leaf2-s6000#sh run int po 92! interface Port-channel 92 no ip address mtu 9000 switchport spanning-tree rstp edge-port rate-interval 5 vlt-peer-lag port-channel 92 no shutdown sh int po 92 Port-channel 92 is up, line protocol is up Created by LACP protocol Hardware address is ec:f4:bb:fc:54:c9, Current address is ec:f4:bb:fc:54:c9 Interface index is Minimum number of links to bring Port-channel up is 1 Internet address is not set Mode of IPv4 Address Assignment : NONE DHCP Client-ID :ecf4bbfc54c9 MTU 9000 bytes, IP MTU 8982 bytes LineSpeed Mbit Members in this channel: Te 0/92(U) Te 0/93(U) 33 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

Port map for server and switch connection
server7 4 p1p1 leaf1 Te0/77 SIO-1
idrac: p1p2 leaf2 Te0/77
6 p2p1 leaf1 Te0/76 po76
7 p2p2 leaf2 Te0/76 po76 SIO-2 MGMT
10 p4p1 leaf1 Te0/93 po92
11 p4p2 leaf2 Te0/93 po92
12 p5p1 leaf1 Te0/92 po92 HostIO
13 p5p2 leaf2 Te0/92 po92
The configurations on the switches for VLAN 50 match the port map table (Table 3). These four 10GbE ports on the two Dell switches connect directly to the Dell server and serve as LACP port channel uplinks. Bring up the port group edit page by clicking the Edit button in the upper left corner.
VDS Port Group Configuration Screen Shot

35 Port Group Property Edit Screen Shot Click the Teaming and failover on the left panel. Port Group NIC Teaming and Failover Edit Page 35 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

Move the lag1 uplinks to the top and move the generic uplinks to the bottom. Click OK to finish.
Port Group LAG uplink Edit Page
Add hosts and their NICs to the VDS
Bring up the Add/Manage host configuration wizard from the VDS.
Port Group Host Management Wizard (1)

37 Click the Add hosts or Manage host networking radio button and click next. Port Group Host Management Wizard (2) Click next to traverse to the following wizard page. Port Group Host Management Wizard (3) 37 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

38 Attach the lag interfaces to the right VDS switches. Port Group Host Management Wizard (4) Port Group Host Management Wizard (5) Click the finish button to run the wizard. If everything goes well, it should display the following VDS topology on vsphere client application. Port Group Property Screen Shot 38 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

VLAN 50 is attached to a VM, and the uplink is an LACP port channel with four NIC ports. Click the blue icon with the letter i to bring up an interface information page for the switch side.
Port Group uplink interface information page through LLDP
This switch-side information is learned through the LLDP protocol running between the switches and servers; LLDP is enabled on the VDS.
VDS Edit page for Link Layer Discovery Protocol
Also, make sure all the server and switch interfaces are configured with MTU 9000 to support jumbo frames and ScaleIO traffic.
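A quick way to confirm the MTU 9000 requirement from each ESXi host is sketched below; vmk1 and the 10.0.30.12 peer address are placeholders for your ScaleIO vmkernel interface and a neighboring host.
# Show the MTU configured on each vSwitch/DVS and on each vmkernel (vmk) interface
esxcfg-vswitch -l
esxcli network ip interface list
# Raise a vmkernel interface MTU if needed (placeholder interface name)
esxcli network ip interface set -i vmk1 -m 9000
# Send a don't-fragment jumbo vmkping to a peer host to prove the path end to end
vmkping -d -s 8972 10.0.30.12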

Versa Software Deployment
Launch the Deploy OVF Template menu from the vSphere Windows client application.
Versa VM creation using OVF Wizard
Select the Versa Director, Versa Analytics, or Versa FlexVNF ova files to create VMs for each Versa software component. Use a ScaleIO volume for Versa VM storage so that any data from the Versa VMs is automatically backed up and replicated by ScaleIO.
OVF Wizard for VM Storage

Each VM has 2 vCPUs, a 200 GB disk, and 32 GB of memory. The network interfaces for these VMs should be configured according to the deployment topology diagram (Figure 14). All the interfaces are created as E1000.
VM Edit Screen Shot for Network Interfaces
On the following Ready to complete page, verify the OVF file deployment settings and click Finish.
OVF Wizard Finish Page
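The same OVA deployment can be scripted instead of driven through the wizard. A hedged example using VMware's ovftool is shown below; the datastore, port group, cluster path, and vCenter address are hypothetical and must be replaced with the names used in your environment.
# Deploy the Versa Director OVA from the command line (illustrative values)
ovftool --acceptAllEulas --name=versa-director --datastore=ScaleIO-DS1 --network='VLAN50-Mgmt-PG' --diskMode=thin versa-director.ova 'vi://administrator@vsphere.local@vcenter.example.com/Datacenter/host/ResourceCluster/'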

In the Recent Tasks pane of the vSphere client, you can track the deployment process. After about five minutes, the deployment is complete and a green icon is displayed for the Deploy OVF template and Initialize OVF deployment tasks.
VM Creation status screen shot
You can now view the FlexVNF virtual machine in the host.
Add CMS (Cloud Management System) Connectors in Versa Director Web UI
You can create third-party CMS connectors on Versa Director. This enables you to connect the CMS instance (vCloud Director) with organizations (tenants). Steps:
1. Under the Director Context, go to Administration. Select Connectors > CMS.
Versa Director CMS Configuration Screen Shot (1)

2. Click the + to open the Add CMS Connector page.
Versa Director CMS Configuration Screen Shot (2)
a. Add the CMS details (name, IP address, CMS flavor). Select OpenStack as the CMS Flavor for a FlexVNF on OpenStack or bare metal. Versa Director supports KeyStone v2.0 and v3 versions with domains.
b. Click the Authentication tab.
i. For User Name, enter the user name to connect to the CMS connector.
ii. For Password, enter the password to connect to the CMS connector.
iii. From the Type list, select the type of authentication.
c. If required, click the Notifications tab to enter the notification details.
Versa Director CMS Configuration Screen Shot (3)

i. Select the Configure Notifications check box to configure notifications for the CMS connector.
ii. In AMQP Host IP Address, specify the IP address of AMQP.
iii. In AMQP Port, specify the port number of AMQP.
iv. Specify the Exchange name and vhost address.
v. Specify the Prefix and select the Use SSL check box to use SSL for the connector.
vi. To enable notifications, select the Enable Notifications check box.
3. Click OK. This adds the CMS connector.
Versa Director CMS Configuration Screen Shot (4)
Deploy Two Test VMs
We can now create two test VMs, simulating sample branch office compute, and attach them to the Versa branch office networks. These two VMs are created using an Ubuntu guest OS. Each has two Ethernet interfaces: one for management access and one to connect to the Versa branch office network.
Test VMs Attached to Branch Office networks
The branch office interface is configured using DHCP to obtain an IP address directly from the Versa branch office.

# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
 address
 netmask
 network
 broadcast
 gateway
# dns-* options are implemented by the resolvconf package, if installed
 dns-nameservers
auto eth1
iface eth1 inet dhcp
The gateway on these two test VMs should point to the branch office network gateway.
route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
UG eth
U eth
U eth1
If everything has been configured appropriately, we should be able to ping from one test VM to the other.

46 ping PING ( ) 56(84) bytes of data. 64 bytes from : icmp_seq=1 ttl=62 time=1.86 ms 64 bytes from : icmp_seq=2 ttl=62 time=1.04 ms 64 bytes from : icmp_seq=3 ttl=62 time=1.22 ms The traceroute should reveal that the traffic is going through Versa branch office IPSec tunnel. traceroute traceroute to ( ), 30 hops max, 60 byte packets ( ) ms ms ms ( ) ms ms ms 3 * ( ) ms ms admin@sanfrancisco-cli> show interfaces bri NAME IP MAC OPER ADMIN TNT VRF ptvi2 [ /32 ] n/a up up 2 Test1-Control-VR tvi-0/2 n/a up up tvi-0/2.0 [ /18 ] n/a up up 2 Test1-Control-VR tvi-0/3 n/a up up tvi-0/3.0 [ /18 ] n/a up up 2 Test1-Control-VR tvi-0/624 n/a up up tvi-0/624.0 [ /24 ] n/a up up 2 WAN-Transport-VR tvi-0/625 n/a up up tvi-0/625.0 [ /24 ] n/a up up 2 Test1-LAN-VR vni-0/0 00:50:56:b0:18:ba up up vni-0/0.0 [ /24 ] 00:50:56:b0:18:ba up up 2 WAN-Transport-VR vni-0/1 00:50:56:b0:a7:41 up up vni-0/1.0 [ /24 ] 00:50:56:b0:a7:41 up up 2 Test1-LAN-VR 46 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

2.4 Snapshot for Deployment
All the VMs deployed in vSphere can have a snapshot taken for backup and restore. From the VM, right-click to bring up the VM pop-up menu and select Snapshot > Take Snapshot to bring up the following menu.
VM Snapshot
Click OK to finish the snapshot. The Snapshot Manager shows you all the VM snapshots you have taken so far.
Snapshot Manager
You can revert to any snapshot you have taken by selecting the snapshot and clicking the Go to button.
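The same snapshot operations can also be performed from the ESXi shell when the web client is not available. A short sketch using vim-cmd is below; the VM ID, names, and argument order should be confirmed with vim-cmd vmsvc on your host.
# Find the VM ID of the Versa VM
vim-cmd vmsvc/getallvms | grep -i versa
# Take a snapshot (vmid, snapshot name, description, includeMemory, quiesce)
vim-cmd vmsvc/snapshot.create 12 pre-change 'before config change' 0 0
# List snapshots and revert to one by its snapshot ID
vim-cmd vmsvc/snapshot.get 12
vim-cmd vmsvc/snapshot.revert 12 1 0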

48 2.5 Versa Director Web UI Versa Director Web UI is a general web portal to manage all Versa components, Versa Analytics, Versa Controller, and Versa Branches Director Context Click Test1. Versa Director Organization Tab (Director Context) Test1 Organization Edit page Ethernet tab under Configurations. Configurations Ethernet Tab 48 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

49 Tunnel tab under Configurations. Workflows tab. Configurations Tunnel Tab Click controller1. Workflow Tab controller1 inside Workflow 49 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

50 Click Control Network tab. Control Network inside controller1 Click WAN Interfaces tab. Appliances tab. WAN Interfaces inside controller1 Appliance Tab Check NewYork checkbox. NewYork branch inside Appliances Tab 50 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

Click the CLI icon to bring up the NewYork branch office shell. Administration tab. Shell for NewYork Branch VM Analytics tab. Administration Tab Analytics Tab

52 Monitor tab. Monitor Tab controller1 inside Monitor Tab 52 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

53 2.5.2 Appliance Context Organization tab. Configurations tab. Organization tab under Appliance Context Administration tab. Configuration Tab Administrator Tab 53 Dell EMC + VMware + Versa Networks SD-WAN NFV Reference Architecture

2.6 Deploy Versa Flex NFV on Dell EPC-5000 for Branch Office
The Dell Embedded Box PC 5000 is optimized for wall, DIN-rail, or VESA mounting. With plug-and-play deployment capabilities, all the Embedded Box PC 5000 needs is an Internet connection to quickly establish a secure network path and provide complete, advanced functionality for SD-WAN. Here is the specification of the Dell EPC-5000 used in this RA:
WWAN Antenna cable
LTE Verizon 5812 Mobile Broadband
16GB (2x8G) 2133MHz DDR4 Memory
500GB 5400 RPM SATA Hard Drive
Intel Core i7-6820EQ Processor
128GB Solid State Drive M.2 SATA, 2nd
1xPCI Slot + 1xPCIe FH/HL Slot
WLAN DW1901 Card, WW
Intel I350 QP 1Gb Svr Adp, FH, CK
Versa FlexVNF can be remotely installed on this bare-metal server through an Internet connection and managed by Versa Director in the head-end data center.
Dell EPC-5000 deployed as Versa SD-WAN branch office

3 Performance Tuning and Hardware Acceleration
3.1 Performance Tunings
BIOS Settings
ESXi Hypervisor Settings
ESXi Guest VM Settings
3.2 BIOS Settings:
The keystroke sequence to get to the System BIOS varies across different systems; covering the different options is not in the scope of this document. (In most cases, the Del or F10 key can be tried.) Once the BIOS is reached:
1. Enable Hyperthreading. It will usually be under the Advanced CPU Settings or Chipset Settings section of the BIOS.
2. Set Power Management to High, or if the High option is not available, there should be an option to Disable Power Management. This option should be under the Advanced CPU Settings or Chipset Settings section of the BIOS.
3. Save the BIOS settings and restart the system.
Note: Both of the above settings are persistent across reboots.
3.3 ESXi Hypervisor Settings:
1. Set Rx and Tx to run in separate contexts. Get to the ESXi ssh shell and issue the command:
esxcli system settings advanced set -o /Net/NetNetqRxQueueFeatPairEnable -i 0
2. Set the ring sizes and queue sizes for the interfaces to 4K, using the following:
ethtool -G vmnic2 rx 4096 tx 4096
ethtool -G vmnic3 rx 4096 tx 4096
vsish -e set /config/net/intopts/maxnetiftxqueuelen 4096
In the above commands, vmnic3 and vmnic2 are the vmnic interfaces that are used. For the hypervisor to take the above settings, the network driver needs to be reloaded. One way to accomplish this is to flap the standard vSwitch / DVS MTU:
esxcli network vswitch [ standard | dvs vmware ] list
esxcfg-vswitch -m 9000 dvs-left
esxcfg-vswitch -m 1500 dvs-left
esxcfg-vswitch -m 9000 dvs-right
esxcfg-vswitch -m 1500 dvs-right
where dvs-left and dvs-right are the two DVS switches that are mapped to the two networks of the VM.

Another way is to reload the driver used by the NIC (ixgbe in our case):
vmkload_mod -u ixgbe
vmkload_mod ixgbe
Note: The above settings are not persistent across reboots, so they must be configured every time the ESXi host is rebooted. A simpler way is to create a shell script with the above commands and run it as part of the boot-up procedure by adding it to the following script: /etc/rc.local.d/local.sh (a sample script is sketched at the end of this chapter).
3.4 ESXi Guest VM Settings:
The following settings are required to manage the VM using the vSphere web client.
1. Upgrade the VM Hardware version
If the VM was created using an earlier version of the vSphere Client or using older hardware settings, it needs to be upgraded to the latest version (Version 10). On the vSphere Web Client, navigate to the VM that requires upgrading and right-click on it (when it is powered off), click Compatibility, and select Upgrade VM Compatibility.
2. Latency Sensitivity set to High
On the vSphere Web Client, right-click on the VM and select Edit Settings. On the VM settings screen, select the VM Options tab, click on Advanced, navigate to Latency Sensitivity, and set it to High from the drop-down menu. When the latency sensitivity is set to high, it is necessary to set sched.mem.min to the total available memory. Add the following lines to the VM's vmx file present on the hypervisor. The location of the VMX file can be found by issuing the following command on the ESXi ssh shell: esxcli vm process list. Once the location of the file is found, open that file (using vi) and add / edit the following lines:
sched.mem.min = "16384"
sched.mem.minsize = "16384"
Note: The above settings will pre-allocate and reserve the required memory resources for the VM, so other VMs on the same host will not be able to share them. This setting will impact oversubscription.
3. TX thread per VNIC
By default, the Tx thread is created per VM, not per VNIC. As the bottleneck is on the hypervisor tx-side as well, change the tx-thread to per VNIC to improve performance. Edit the vmx file of the VM and add the following entry for each port (<port_id>) connected to a PNIC:
ethernet<port_id>.ctxPerDev = 1 (for example: ethernet0.ctxPerDev = 1)

4. Disable CPU affinity. Expand CPU under the Virtual Hardware option and remove any CPU IDs under Scheduling Affinity, if previously entered. Keep the HT Sharing setting at Any.
5. SCSI controller. Under Virtual Hardware, navigate to SCSI controller 0 and set the Change Type to VMware Paravirtual.
6. Disable some periodic operations. From the ESXi SSH shell, execute the following commands:
esxcfg-advcfg --set 0 /Mem/SamplePeriod
esxcfg-advcfg --set /Misc/MCEMonitorInterval
Also edit the .vmx file (the same file described in step 2) and add the following entries:
monitor_control.disable_gphys_tree = "TRUE"
isolation.tools.setinfo.disable = "TRUE"
Note: Settings 1 and 2 on the VM are persistent across reboots; once configured, they remain in place until they are modified.
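Because the hypervisor-side settings in section 3.3 do not survive a reboot, they can be reapplied automatically at boot. Below is a minimal sketch of /etc/rc.local.d/local.sh using only the commands and example names (vmnic2/vmnic3, dvs-left/dvs-right) already shown in section 3.3; adjust the names to the actual host. On ESXi the file already exists, and the commands are typically placed before its closing exit 0.

#!/bin/sh
# /etc/rc.local.d/local.sh -- reapply the non-persistent tunings from section 3.3 at boot.

# Run Rx and Tx in separate contexts
esxcli system settings advanced set -o /Net/NetNetqRxQueueFeatPairEnable -i 0

# 4K ring and queue sizes on the data-plane uplinks
ethtool -G vmnic2 rx 4096 tx 4096
ethtool -G vmnic3 rx 4096 tx 4096
vsish -e set /config/Net/intOpts/MaxNetifTxQueueLen 4096

# Flap the DVS MTU so the NIC driver reloads and picks up the new ring sizes
esxcfg-vswitch -m 9000 dvs-left
esxcfg-vswitch -m 1500 dvs-left
esxcfg-vswitch -m 9000 dvs-right
esxcfg-vswitch -m 1500 dvs-right

exit 0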

4 Versa SD-WAN Performance Index and Test Cases

4.1 Abstract
This application note provides suggestions and best-practice configurations to minimize packet loss under high network load in a topology that uses VMware ESXi hosts. This document will be updated as additional tuning parameters are researched.

4.2 Pre-Requisites / Assumptions
Some of the performance tunings in this document apply only to VMware ESXi 5.5 hypervisors and require a web-based vSphere Client to manage them. The Versa VNF settings apply to the 14.1R2 release unless noted otherwise, and the network adapters are assumed to be Intel NICs presented to the guest by VMware as vmxnet3 adapters.

4.3 Versa VNF Settings
These settings are recommended in the /opt/versa/etc/vsboot.conf file of the Versa VNF. In the 14.1R2 release they default to other values and should be modified; in future releases they will be set to the required values automatically, without any user intervention.
1. Rx/Tx descriptor size: Set the rx descriptor (nic_num_rx_desc) and tx descriptor (nic_num_tx_desc) sizes in vsboot.conf to 4096.
2. Rx/Tx bulking size: Set the rx bulking (max_rx_bulk) and tx bulking (max_tx_bulk) values to 64.
For items 1 and 2, add the following key-value pairs to the vsboot.conf file:
nic_num_rx_desc = 4096,
nic_num_tx_desc = 4096,
max_rx_bulk = 64,
max_tx_bulk = 64,
3. Pin kernel tasks/threads to a dedicated core (core 0): Update GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub to
GRUB_CMDLINE_LINUX_DEFAULT="nohz=off nmi_watchdog=0 isolcpus=1-23"
and run update-grub. Then update the /etc/init/versa-service.conf file so the service is launched with taskset 0x3fff /opt/versa/bin/versa-vsmd, and reboot the VM.
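After the VM reboots, the core isolation and pinning can be spot-checked with standard Linux tools. This is only a sketch; the versa-vsmd process name is taken from the versa-service.conf entry above and may differ by release.

# Confirm the kernel booted with the new parameters
cat /proc/cmdline          # should include nohz=off nmi_watchdog=0 isolcpus=1-23

# Confirm the CPU affinity applied to the Versa data-plane process via taskset
taskset -cp "$(pidof versa-vsmd)"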

4.4 Test Cases Guidelines
Among the different functions provided by the Versa FlexVNF, the list below covers the functions most often requested by customers and prospects for vCPE and vSecGW use cases. Should America Movil require custom test items or more in-depth testing of particular capabilities, Versa Networks will tailor the suggested test plan accordingly.

4.5 Use Iperf to test the link bandwidth between client and server VMs
The client and server test VMs are deployed for testing purposes (see Figure 15 for details). Install iperf3 on these two VMs and send IP traffic between them.
On the server VM, run:
iperf3 -s -p 5001
This makes the server VM listen for traffic from the client VM.
On the client VM, run:
iperf3 -c <server VM IP> -p 5001 -n 1G
This sends 1 GB of data to the server VM.
Once finished, bring up the Versa Director GUI.

Click controller1 > Analytics > SD-WAN to display the traffic for the appliances and the traffic for the controller.

Click Applications to display traffic categorized by application. The ICMP portion of the pie chart comes from the iperf test traffic.

4.6 Routing Protocol Interop Testing - OSPF, BGP, and BFD interop
BGP, static, and default routes will be configured on the FlexVNF and the peer router.
Objective
The goal of this test is to verify Versa FlexVNF routing interoperability with CPE and Internet routers running BGP, OSPF, and potentially BFD on certain interfaces (PE-facing interfaces or upstream core routers, for example).
Configuration Notes
See the configuration section for details.
Methodology
OSPF: Configure the proper area and interface types on the Versa FlexVNF and neighbor router(s).
BGP: Configure the proper AS numbers and peer settings on the Versa FlexVNF, CPE, and Internet router.
If applicable, set BFD timers on select interfaces.
Verify reachability to prefixes announced by each router using ICMP echoes. Monitor KPIs including CPU utilization, process monitoring, memory consumption, and manageability responsiveness during the test from both a CLI session and a Director session.
Expected Results
OSPF neighbors should be in FULL state and the LSDB should match on routers configured with the same areas. BGP sessions should be in ESTABLISHED state, and the prefixes exchanged among peers should be populated in the appropriate RIB/FIB. BFD sessions should be in UP state. Verify data-plane connectivity with IMIX traffic generated by the Ixia to and from hosts in the prefixes announced by OSPF and BGP.
Actual Results
Show output of the three protocols configured above on all peer/neighbor routers:
versa@vmwareflexvnf-cli> show ospf neighbor brief
State codes: atmpt - attempt, exchg - exchange, exst - exchange start, load - loading, 2-way - two-way, full - full
Op codes: gdown - going down, gup - going up
Intf address   Interface     State   Neighbor ID   Pri   Op
               vni-0/1.151   full                        up
versa@vmwareflexvnf-cli> show bgp neighbor brief
routing-instance: Customer7
Neighbor   V   AS   MsgRcvd   MsgSent   Uptime   State/PfxRcd   PfxSent
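As a quick data-plane spot check alongside the Ixia IMIX traffic, reachability to the announced prefixes can also be verified with ICMP from a host behind the FlexVNF. This is only a sketch; the addresses below are placeholders for the prefixes actually announced in the test topology.

# Ping a representative host in each prefix learned via OSPF and BGP
for dst in 192.0.2.1 198.51.100.1 203.0.113.1; do
    ping -c 5 -W 2 "$dst"
done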

4.7 DHCP Server and Relay
Objective
The goal of this test is to verify that the FlexVNF DHCP server function distributes individual IP addresses with the proper client parameters in the DHCP offer packet, such as subnet mask, default gateway IP address, and DNS servers, to host clients such as Windows, Linux, or Mac OS requesting dynamic host IP address allocation. The FlexVNF can also relay DHCP discover packets from the requesting host to a third-party DHCP server by performing the DHCP relay function.
Configuration Notes
See the configuration section for details.
Methodology
Test equipment functioning as a DHCP client, or actual host clients running various operating systems, will be configured to request IP address allocation and will be physically connected to the same broadcast domain as the FlexVNF IP interface.
Expected Results
DHCP DISCOVER messages from the client or test equipment should be processed by the FlexVNF DHCP server, and an appropriate OFFER should be sent towards the client with the parameters configured in the configuration section. The DHCP client should respond to the OFFER packet from the FlexVNF server with a REQUEST packet confirming the parameters sent by the server. The FlexVNF DHCP server should transmit an ACKNOWLEDGE packet upon receipt of the properly formatted REQUEST packet from the client. The client should then hold an IP address allocation and should be able to verify connectivity via ICMP echo to the default gateway IP interface of the FlexVNF. The FlexVNF should display the IP address(es) allocated to the client, identified by the client's MAC address and hostname, as shown in the sample show command below:
admin@flexvnf-cli> show orgs org-services Customer5 dhcp active-leases
SERIAL NUMBER  CLIENT IP ADDRESS  HW ADDRESS  VALID LIFETIME  EXPIRES  SUBNET ID  DYNAMIC INTERFACE  POOL  SERVICE PROFILE  FQDN FORWARD  FQDN REVERSE  HOSTNAME
:10:78:d2:c7:4b:4f  /06/05 15:12:09  3  vni-0/  false  -  WIN-FLJKT62D1NS.Customer5.com.
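On a Linux host client, the DHCP exchange and the resulting lease can be exercised and checked with standard tools. This is only a sketch; eth1 is an assumed client interface name and the gateway address is a placeholder.

# Release any existing lease, then request a new one
# (verbose output shows the DISCOVER/OFFER/REQUEST/ACK exchange)
sudo dhclient -r eth1
sudo dhclient -v eth1

# Confirm the allocated address, default gateway, and DNS servers
ip addr show eth1
ip route show
cat /etc/resolv.conf

# Verify connectivity to the FlexVNF default-gateway interface
ping -c 3 <default gateway IP>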

4.8 NAPT - Network Address Port Translation and other NAT flavors
Objective
The goal of this test is to verify that a single globally routable IPv4 source address can be shared among multiple customers/tenants using identical private RFC1918 IP space by means of the NAPT function on the FlexVNF. Each customer has its own private VRF; to reach external networks, customer source IP addresses are translated to the public IP announced in the GRT of the FlexVNF. ALGs such as FTP will be verified for successful traversal across the NAT.
Configuration Notes
See the configuration section for details.
Methodology
Configure two customers/tenants with identical RFC1918 space for the LAN and a NAT policy that translates the source IP to a publicly routable address. Use the Ixia to generate various TCP and UDP traffic flows with source IP addresses local to the LAN, destined for a prefix in the GRT. Using a host with an FTP client, initiate an FTP session to an FTP server host in the GRT to verify ALG functionality.
Expected Results
NAPT sessions should be created by the FlexVNF for each traffic flow, verified by the command:
show orgs org Customer3 sessions nat
VSN ID  SESS ID  SOURCE IP  DESTINATION IP  SOURCE PORT  DEST PORT  PROTOCOL  NAT SOURCE IP  NAT DEST IP  NAT SOURCE PORT  NAT DEST PORT
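The FTP ALG traversal described in the methodology can be exercised from a LAN host with curl. This is only a sketch: the FTP server address (203.0.113.10), the credentials, and the file name are placeholders.

# Active-mode FTP (--ftp-port) forces the server to open the data connection back
# toward the client, which exercises the FTP ALG across the NAT.
curl -v --ftp-port - ftp://user:password@203.0.113.10/testfile.bin -o /dev/null

# While the transfer runs, check the NAPT bindings on the FlexVNF with the
# show command from the Expected Results above:
#   show orgs org Customer3 sessions nat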

4.9 QoS
Objective
The goal of this test is to verify that the FlexVNF QoS mechanisms can shape selected traffic at the bit rates and burst sizes configured for the entity under test.
Configuration Notes
See the configuration section for details.
Methodology
Configure a customer/tenant with a QoS policy that shapes the selected high-priority traffic to a rate and drop profile superior to another traffic class. Transmit both traffic types simultaneously at the same bit rate using the Ixia test set or two hosts with applications such as FTP and HTTP (wget).
Expected Results
Traffic should be shaped to the configured rate, measured by the application itself (FTP) or the Ixia test set.

4.10 Application ID
Objective
The goal of this test is to verify that the FlexVNF Application ID mechanisms can recognize selected application traffic (Layer 7) and perform actions such as ALLOW, DENY, REJECT, and/or LOG upon proper identification of the application configured in the policy. Complementary testing, as suggested in the Rate Limiting and Application Control section, allows rate-limit control to be applied to a particular set of applications or URL types on top of the regular actions listed in that section.
Configuration Notes
See the configuration section for details.
Methodology
Configure a customer/tenant with an Application ID policy to recognize the unique application traffic and perform an action such as DENY and LOG. Transmit the unique application traffic with the Ixia test set, or use an authentic host that has the application installed and verified functional before applying the application DENY policy.
Expected Results
The unique application traffic should be dropped and logged to the Analytics node; this can be verified with the corresponding CLI show command.

4.11 IPSec VPN (LAN to LAN)
Objective
The goal of this test is to verify that the FlexVNF IPSec protocols create a private VPN among other FlexVNFs as well as third-party IPSec devices. IP reachability to all member sites of the VPN and verification of the encryption of the data are the pass/fail criteria.
Configuration Notes
See the configuration section for details.
Methodology
Configure a customer/tenant with IPSec policies to encrypt traffic and forward it to other member sites of the VPN based on routing-table information.
Expected Results
Traffic between member sites should be encrypted, and IP reachability should be verified to all member sites of the VPN.

4.12 Performance Tests - UDP Throughput, Latency, and PDV
Objective
The goal of this test is to determine the theoretical maximum raw throughput, latency, and PDV (packet delay variation) of the Device Under Test (DUT) using semi-stateful UDP traffic.
Configuration Notes
See the configuration section for details.
Methodology
Test equipment functioning as a traffic generator is used to send semi-stateful UDP traffic bidirectionally through the DUT. The test iterations include various IMIX packet sizes. All security is typically disabled (allow-all rule) in the first iterations and incrementally added as required. The equipment used for this test is the Ixia chassis. Other KPIs include CPU utilization, process monitoring, memory consumption, and manageability responsiveness during the test from both a CLI session and a Director session. (A supplementary iperf3-based sketch is provided after section 4.13.)
Expected Results
The throughput should be approximately 1 Gbps. The latency should be approximately (customer SLA value here) µsec. The PDV value should be approximately (customer SLA value here) µsec.
FlexVNF show commands:
admin@flexvnf1> show processes
admin@flexvnf1> show system load-stats
admin@flexvnf1> show system status
Actual Results
Throughput =
Latency =          Average Latency =
PDV =              Average PDV =

4.13 TCP Throughput, Latency, and PDV
Objective
The goal of this test is to determine the theoretical maximum throughput of the Device Under Test (DUT) using TCP applications of HTTP and FTP traffic.
Configuration Notes
See the configuration section for details.
Methodology
All security configuration is typically disabled in the first iterations and incrementally added as required. The equipment used for this test is the Ixia chassis. Other KPIs include CPU, process monitoring, memory utilization, and manageability responsiveness during the test from both CLI and Director.
Expected Results
Approximately 1 Gbps of throughput is expected.
Actual Results
HTTP RESULTS
admin@flexvnf1> show interfaces port statistics
Throughput = approximately    Gbps
FTP RESULTS
admin@flexvnf1> show interfaces port statistics
Throughput = approximately    Gbps
Additional Comments/Notes
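Where an Ixia chassis is not available, a rough first-pass view of the UDP and TCP behaviour described in sections 4.12 and 4.13 can be taken with iperf3 between the client and server test VMs from section 4.5. This is only a sketch: the server address is a placeholder, port 5001 is reused from section 4.5, and iperf3 results will not match a hardware traffic generator.

# UDP: offered load of 1 Gbps for 60 seconds with a 1400-byte payload.
# The server-side report includes jitter, which approximates PDV.
iperf3 -c <server VM IP> -p 5001 -u -b 1G -l 1400 -t 60

# TCP: 8 parallel streams for 60 seconds to approximate aggregate TCP throughput.
iperf3 -c <server VM IP> -p 5001 -P 8 -t 60

# Reverse direction (-R) exercises the opposite path through the DUT.
iperf3 -c <server VM IP> -p 5001 -P 8 -t 60 -R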

4.14 Concurrent TCP connections
Objective
The goal of this test is to identify the maximum number of concurrent TCP connections that the DUT can maintain.
Configuration Notes
See the configuration section for details.
Methodology
This stateful test is conducted using the Ixia XYZ chassis and IxLoad software. Once the goal is reached, the sessions are held open for a period of time (approximately 300 seconds) to demonstrate that the DUT can handle the session load. Iterations include incremental security features. Other KPIs include accurate session counting, CPU utilization, process monitoring, memory consumption, and manageability responsiveness during the test from both a CLI session and a Director session.
Expected Results
1 million concurrent connections.
Actual Results
admin@flexvnf1> show orgs org Customer1 sessions summary
VSN ID  SESSION COUNT  SESSION CREATED  SESSION CLOSED  NAT SESSION COUNT  NAT SESSION CREATED  NAT SESSION CLOSED  SESSION FAILED  SESSION COUNT MAX

4.15 TCP Session Setup Rate
Objective
The goal of this test is to determine how quickly the DUT can build new TCP sessions without suffering any failed attempts.
Configuration Notes
See the configuration section for details.
Methodology
This stateful test is primarily conducted with the Ixia XYZ and IxLoad software, although other possible iterations test the ramp rate while background traffic is generated with the Ixia XYZ and IxExplorer. Iterations include with and without background traffic, with and without existing sessions, with and without closing the sessions, and incrementally adding security features.

Expected Results
Approximately 190 K connections per second.
Actual Results
Approximately     connections per second.

4.16 Security Tests - Security Zones & DDoS protection
Objective
The goal of this test is to verify the effectiveness of enabling the zone-protection security feature, not only for protecting the FlexVNF itself (control-plane protection) but also the data/service plane. Security zone-based protection mechanisms are configured per tenant (organization) for the latter; for the former, a dedicated configuration section is provided below.
Configuration Notes
The IxLoad application with DDoS clients was used for this test.
admin@flexvnf1> (config-org-services-customer1)% show
id 3;
security {
    profiles {
        zone-protection {
            basic-zone-protection {
                flood {
                    icmp {
                        enable yes;
                        red {
                            activate-rate 20;
                            alarm-rate 20;
                            maximal-rate 30;
                            drop-period 300;
                        }
                    }
                    other-ip {
                        enable yes;
                        red {
                            alarm-rate 20;
                        }
                    }
                    tcp {
                        enable yes;
                        action random-early-drop;
                        red {
                            alarm-rate 20;
                        }
                    }
                    udp {
                        enable yes;
                        red {
                            alarm-rate 20;
                        }
                    }
                }
                scan {
                    tcp {
                        enable yes;
                        action alert;
                        interval 10;
                        threshold 25;
                    }
                    udp {
                        enable yes;
                        action alert;
                        interval 10;
                    }
                    hostsweep {
                        enable yes;
                        action alert;
                        interval 10;
                        threshold 25;
                    }
                }
                tcp {
                    reject-non-syn yes;
                }
                ip {
                    discard-ip-spoof yes;
                    discard-ip-frag yes;
                    discard-strict-source-routing yes;
                    discard-loose-source-routing yes;
                    discard-timestamp yes;
                    discard-record-route yes;
                    discard-stream yes;
                    discard-unknown yes;
                    discard-malformed yes;
                }
                icmp {
                    discard-frag yes;
                    discard-error-messages yes;
                    discard-large-packet yes;
                    discard-ping-zero-id yes;
                }
            }
        }
    }
}
In order to define control-plane rate-limiting protection, three main configuration objects have to be defined:
- Services objects: identify the type of traffic/packets to be rate limited
- Rate-limit profile: applied against the traffic identified by the services object(s)

- QoS policies: tie together the services objects and the rate-limit profile, and provide additional match criteria (source/destination IP address, for example).
Let's define an ICMP rate-limit protection:
- First, create an object under org -> org-services -> <customer org-name> -> objects with the following command:
set services icmp protocol ICMP
- Then, create a rate-limit profile under org -> org-services -> <customer org-name> -> class-of-service -> qos-profiles with the following commands:
set low-bw peak-kbps-rate 3000
set low-bw peak-burst-size 25000
Note that low-bw is the name given to this rate-limit profile.
- Last, create a policy under org -> org-services -> <customer org-name> -> qos-policies with the following set commands:
set class-of-service qos-policies ICMP_FLOOD rules r1 match services services-list icmp
set class-of-service qos-policies ICMP_FLOOD rules r1 match destination address address-list 20_1_3_2
set class-of-service qos-policies ICMP_FLOOD rules r1 set action allow
set class-of-service qos-policies ICMP_FLOOD rules r1 set qos-profile low-bw
Note that address-list is an object that needs to be created separately under the objects hierarchy; it should identify the IP address to protect on the VCSN.
For TCP_SYN protection the same recipe applies, except that TCP is defined as the services object along with the set of ports known to be open on the platform (22 for SSH, for example) and the appropriate destination IP address(es). There is no need to specify TCP flags: by default the Versa FlexVNF only answers to TCP SYN, so the rate limit automatically applies to TCP SYN packets for the configured destination IP and TCP ports.

show objects
services {
    ah {
        protocol AH;
    }
    esp {
        protocol ESP;
    }
    icmp {
        protocol ICMP;
    }
    tcp {
        protocol TCP;
        destination-port 80,8080;
    }
    tcp-src-4000-dest-4000 {
        protocol TCP;
        source-port 4000;
        destination-port 4000;
    }
    udp-dest-5000 {
        protocol UDP;
        destination-port 5000;
    }
}
Additionally, a sample configuration protecting the Versa FlexVNF instance's control and management plane functions is shown below. The configuration highlights protection against:
1. ARP floods
2. Broadcast packet floods
3. Multicast packet floods
4. BGP packet floods
5. OSPF packet floods
Methodology
Configure the following security sets in the tenant's target zone (the zone to which the connected interface is bound) and on the control plane, and launch the DDoS attacks. Verify that the proper firewall thresholds have been configured compared to the packet rates being sent by the test-equipment attack script.
org-services Customer1 {
    id 1;
    class-of-service {
        qos-profiles {
            qos-prof-arp {
                peak-pps-rate 16000;
            }
            qos-prof-bc-mc {
                peak-pps-rate 16000;
            }
            qos-prof-bgp {
                peak-pps-rate 16000;
            }
            qos-prof-ospf {
                peak-pps-rate 16000;
            }
        }
        qos-policies {
            qos-pol-1 {
                rules {
                    qos-rule-arp {
                        match {
                            ether-type ARP;
                        }
                        set {
                            action allow;
                            qos-profile qos-prof-arp;
                        }
                    }
                }
            }
        }
    }
}
Ixia & IxLoad

UDP Flood
Transmit 10,000 pps of UDP traffic with the UDP flood threshold configured at 5,000 pps.
SYN Flood
Transmit 10,000 pps of SYN packets with the SYN flood threshold configured at 5,000 pps.
Ping Sweep
Transmit ICMP echo requests to multiple IP addresses and configure the ping sweep threshold to 10 packets in 1000 µsec.
Ping of Death
Transmit ping-of-death packets at a host IP interface.
TCP Port Scan
Execute TCP port scans and configure the threshold to 10 packets in 1000 µsec.
Tear Drop
Transmit Tear Drop packets.
Land
Transmit packets with the same source IP and destination IP address.
Expected Results
UDP Flood
The packets will be limited to the threshold configured for UDP packets.
SYN Flood
The packets will be limited to the threshold configured for SYN packets.
Ping Sweep
The ping sweep will be detected and limited.
Ping of Death
The ping-of-death packets will be dropped.
TCP Port Scan
The port scans will be detected and limited.
Tear Drop
The packets will be dropped.

Land
The packets will be dropped.
Actual Results
UDP Flood
admin@flexvnf1> show orgs org-services Customer1 security profiles zone-protection zone-protection-statistics basic-zone-protection
flood-sessions-dropped                 0
scans-detected                         0
ip-spoof-dropped                       0
ip-fragment-dropped                    0
ip-option-strict-source-route-dropped  0
ip-option-loose-source-route-dropped   0
ip-option-timestamp-dropped            0
ip-option-record-route-dropped         0
ip-option-security-id-dropped          0
ip-option-stream-id-dropped            0
ip-option-unknown-class-dropped        0
ip-malform-dropped                     0
tcp-non-syn-dropped                    0
icmp-error-dropped                     0
icmp-fragment-dropped                  0
icmp-large-packet-dropped              0
icmp-ping-zero-id-dropped              0
log-alerts-sent                        4
SYN Flood
admin@flexvnf1> show orgs org-services Customer1 security profiles zone-protection zone-protection-statistics basic-zone-protection
Ping Sweep

show orgs org-services Customer1 security profiles zone-protection zone-protection-statistics basic-zone-protection
Ping of Death
show orgs org-services Customer1 security profiles zone-protection zone-protection-statistics basic-zone-protection
TCP Port Scan
show orgs org-services Customer1 security profiles zone-protection zone-protection-statistics basic-zone-protection
Tear Drop
show orgs org-services Customer1 security profiles zone-protection zone-protection-statistics basic-zone-protection
Land
show orgs org-services Customer1 security profiles zone-protection zone-protection-statistics basic-zone-protection
Additional Comments/Notes

4.17 Service based security policies
Objective
Add additional security policies for specific services to the configuration. As with all other settings, policies may be configured through the Director or a CLI session.
Configuration Notes
See the configuration section for details.
Methodology
There are a myriad of combinations and options in the security policies, so testing every permutation is not practical. The best practice is to identify the specific policies most likely to be implemented by the typical end customer.
Expected Results
All relevant policies are created and examined via the show commands.
Measured Results
Additional Comments/Notes

4.18 Logging Tests - SNMP, IPFIX, Syslog
Objective
The goal of this section is to show the logging capabilities of the Versa FlexVNF to send SNMP traps, syslog messages, or IPFIX flow data. SNMP traps and syslog event messages should be sent when an alarm is raised (interface DOWN event) or a threshold has been crossed. IPFIX flow information can optionally be configured to account for per-flow activity and sent to a collector server such as Analytics for reporting.
Configuration Notes
See the configuration section for details.
Methodology
Generate SNMP trap conditions (e.g., downing an interface) causing the FlexVNF to send traps or syslog messages to appropriately configured IP destinations. Using the test equipment, generate typical background traffic consisting of UDP and TCP traversing the firewall for collection by IPFIX. Record the bytes in each flow of the matching 5-tuple conversations to compare with the reports generated in Analytics.
Expected Results
Appropriate SNMP traps or syslog messages should be received and recorded with proper timestamps by the EMS/trap server or syslog server configured to receive them. IPFIX records should be transmitted to the Analytics node and the appropriate dashboard report generated.
Actual Results

4.19 Rate Limiting and application control
Objective
The goal is to demonstrate rate-limiting capabilities, either for the whole end-customer site or for a specific application or URL type. The main idea is to attach a qos-profile to an identified application or URL category. This mechanism can also be enhanced with a time-of-day scheduler; one typical example would be rate-limiting the streaming-related URL category down to 64 Kbit/s during working hours.
Configuration Notes
Site rate limit:
qos-policy {
    peak-kbps-rate 500;
    peak-burst-size 25000;
}

Application rate limit:
class-of-service {
    app-qos-policies {
        slow-app {
            rules {
                r1 {
                    match {
                        url-category {
                            predefined [ streaming_media ];
                        }
                    }
                    set {
                        qos-profile low-bw;
                    }
                }
            }
        }
    }
    qos-profiles {
        low-bw {
            peak-kbps-rate 3000;
            peak-burst-size 25000;
        }
    }
}

Methodology
Configure the appropriate rate-limiting capabilities and then generate traffic to verify that traffic is enforced according to the defined configuration.
Expected Results
The appropriate amount of traffic, according to the defined rate-limit configuration, is processed by the system.

4.20 URL and Application Filtering
Objective
The goal is to demonstrate URL and application ID filtering capabilities.
Configuration Notes
security {
    access-policies {
        policy1 {
            rules {
                deny-high-risk-apps {
                    match {
                        application {
                            filter-list [ high-risk-apps ];
                        }
                    }
                    set {
                        action deny;
                        lef {
                            profile VAN_Log_Profile;
                            event end;
                        }
                    }
                }
                control-low-productivity-apps {
                    match {
                        application {
                            filter-list [ low-productivity-apps ];
                        }
                    }
                    set {
                        action deny;
                        lef {
                            profile VAN_Log_Profile;
                            event end;
                        }
                    }
                }
                deny-gambling {
                    match {
                        url-category {
                            predefined [ gambling ];
                        }
                    }
                    set {
                        action deny;
                        lef {
                            profile VAN_Log_Profile;
                            event end;
                        }
                    }
                }
                Allow_All {
                    set {
                        action allow;
                        lef {
                            profile VAN_Log_Profile;
                            event start;
                        }
                    }
                }
            }
        }
    }
}
Methodology
Configure the appropriate URL filtering. Try to access a specific URL using the curl or wget tool from a simulated remote site.
Expected Results
The targeted URL should not be accessible, while non-targeted URL queries should go through the FlexVNF.
Actual Results

4.21 Redundancy Tests - Control Link Failure (Optional)
Objective
The goal of this section is to show the failover time when a control link in the server cluster fails. This testing may not be possible in some environments, as a network overlay may prevent Versa FlexVNF HA/redundancy from being implemented. The specifics will be discussed with America Movil.
Configuration Notes
See the configuration section for details.

Methodology
Configure the FlexVNFs in a cluster. Fail the control link and determine the failover time. UDP traffic is typically used to perform failover tests, and the time is calculated by taking the number of packets transmitted, subtracting the number of packets received, and dividing by the rate of the traffic being sent.
Expected Results
No packet loss.
Actual Results
admin@rcn> show redundancy intra-chassis control nodes
VCN INSTANCE  VCN SLOT  ROLE     RED IP
VCN0          0         Active*
Packets TX = X
Packets RX = Y
Rate =
(TX - RX) / Rate = XYZ
Additional Comments/Notes
A typical result for failover time is illustrated below:
Packets TX = X
Packets RX = Y
Rate = 1000 pps
(X - Y) / Rate = 0.5 sec, or 500 msec

4.22 Network Element Configurations
This section includes some CLI configurations of the FlexVNF. The section will be completed and documented throughout the testing process.
Display the FlexVNF CLI configuration:
show configuration interfaces vni-0/1
description "Public Configured from VOAE";
unit 151 {
    vlan-id 151;
    enable true;
    family {
        inet {
            address /30;
        }
    }
}
unit 153 {
    vlan-id 153;
    enable true;
    family {
        inet {
            address /30;
            address /24;
        }
    }
}
versa@flexvnf-cli> show configuration protocols (OSPF option)
ospf {
    1 {
        router-id ;
        area  {
            network  {
                authentication {
                    type none;
                }
                dead-interval 40;
                hello-interval 10;
                priority 1;
                retransmit-interval 5;
                transit-delay 1;
            }
        }
    }
}
admin@flexvnf1> show configuration protocols (BGP option)
bgp {
    2 {
        multihop {
            ttl 1;
        }
        route-flap {
            free-max-time 180;
            reuse-max-time 60;
            reuse-size 256;
            reuse-array-size 1024;
        }
        router-id ;
        cluster-id ;
        local-as {
            as-number 65534;
        }
        group group {
            type external;
            local-address ;
            peer-as 65535;
            neighbor  {
                peer-as 65535;
            }
        }
        policy-options {
            redistribution-policy redis_direct_to_bgp {
                term net13 {
                    match {
                        protocol direct;
                        address /24;
                    }
                    action {
                        accept;
                    }
                }
            }
        }
        redistribute-to-bgp redis_direct_to_bgp;
    }
}

show configuration orgs org-services Customer5 dhcp
dhcp {
    dhcp4-dynamic-pools {
        Customer5-pool {
            address-pools {
                addr-pool1 {
                    ipv4-range {
                        begin-address ;
                        end-address ;
                        subnet-mask ;
                    }
                }
            }
        }
    }
    dhcp4-lease-profiles {
        Customer5-lease-profile {
            valid-lifetime 21600;
            renew-timer 10000;
            rebind-timer 20001;
        }
    }
    dhcp4-options-profiles {
        Customer5-options {
            domain-name Customer5.com;
            dns-server [ ];
            default-router [ ];
        }
    }
    dhcp4-server-and-relay {
        server-identifier ;
        default-lease-profile Customer5-lease-profile;
        default-options-profile Customer5-options;
        service-profiles {
            Customer5-service-1 {
                dhcp-request-match {
                    interfaces [ vni-0/0.501 ];
                }
                dhcp-service-type {
                    allocate-address {
                        dynamic Customer5-pool;
                        server-identifier ;
                        lease-profile Customer5-lease-profile;
                        options-profile Customer5-options;
                    }
                }
            }
        }
    }
}

4.23 Test Equipment configuration and software versions
The Ixia/Spirent test equipment is equipped with 10GbE interfaces. The traffic types used are UDP, FTP, HTTP, SMTP, and POP3. Typical IMIX packet sizes used for UDP are 64, 128, 256, 512, 1024, and 1518 bytes.

5 SR-IOV/DPDK/NUMA requirement for Versa

5.1 SR-IOV in vSphere
Log on to vCenter using the vSphere Web Client, click Hosts, then the Manage and Networking tabs, and click the edit icon for the physical network adapter. Enable SR-IOV on the physical adapters.
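SR-IOV virtual functions can also be enabled from the ESXi command line rather than the Web Client. This is only a sketch, assuming the ixgbe driver used elsewhere in this document; the choice of eight VFs per port is an assumption and should match the deployment plan.

# Enable 8 virtual functions on each of the two ixgbe ports (one value per port)
esxcli system module parameters set -m ixgbe -p "max_vfs=8,8"

# A host reboot is required for the VFs to be created
reboot

# After the reboot, list the SR-IOV capable NICs and their virtual functions
esxcli network sriovnic list
esxcli network sriovnic vf list -n vmnic2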


Virtualized Network Services SDN solution for service providers Virtualized Network Services SDN solution for service providers Nuage Networks Virtualized Network Services (VNS) is a fresh approach to business networking that seamlessly links your enterprise customers

More information

vrealize Operations Management Pack for NSX for vsphere 2.0

vrealize Operations Management Pack for NSX for vsphere 2.0 vrealize Operations Management Pack for NSX for vsphere 2.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition.

More information

The Cisco HyperFlex Dynamic Data Fabric Advantage

The Cisco HyperFlex Dynamic Data Fabric Advantage Solution Brief May 2017 The Benefits of Co-Engineering the Data Platform with the Network Highlights Cisco HyperFlex Dynamic Data Fabric Simplicity with less cabling and no decisions to make The quality

More information

Architecture and Design. Modified on 21 AUG 2018 VMware Validated Design 4.3 VMware Validated Design for Software-Defined Data Center 4.

Architecture and Design. Modified on 21 AUG 2018 VMware Validated Design 4.3 VMware Validated Design for Software-Defined Data Center 4. Modified on 21 AUG 2018 VMware Validated Design 4.3 VMware Validated Design for Software-Defined Data Center 4.3 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/

More information

Installation and Cluster Deployment Guide for VMware

Installation and Cluster Deployment Guide for VMware ONTAP Select 9 Installation and Cluster Deployment Guide for VMware Using ONTAP Select Deploy 2.8 June 2018 215-13347_B0 doccomments@netapp.com Updated for ONTAP Select 9.4 Table of Contents 3 Contents

More information

Emulex Universal Multichannel

Emulex Universal Multichannel Emulex Universal Multichannel Reference Manual Versions 11.2 UMC-OCA-RM112 Emulex Universal Multichannel Reference Manual Corporate Headquarters San Jose, CA Website www.broadcom.com Broadcom, the pulse

More information

Dell Networking MXL / PowerEdge I/O Aggregator with Cisco Nexus 5000 series NPV mode and Cisco MDS 9100 fabric switch Config Sheets

Dell Networking MXL / PowerEdge I/O Aggregator with Cisco Nexus 5000 series NPV mode and Cisco MDS 9100 fabric switch Config Sheets Dell Networking MXL / PowerEdge I/O Aggregator with Cisco Nexus 5000 series NPV mode and Cisco MDS 9100 fabric switch Config Sheets CLI Config Sheets Dell Networking Engineering November 2013 A Dell Deployment

More information

Dell Networking MXL and PowerEdge I/O Aggregator with Cisco Nexus 5000 series fabric mode Config Sheets

Dell Networking MXL and PowerEdge I/O Aggregator with Cisco Nexus 5000 series fabric mode Config Sheets Dell Networking MXL and PowerEdge I/O Aggregator with Cisco Nexus 5000 series fabric mode Config Sheets CLI Config Sheets Dell Networking Engineering November 2013 A Dell Deployment and Configuration Guide

More information

Virtualized SQL Server Performance and Scaling on Dell EMC XC Series Web-Scale Hyper-converged Appliances Powered by Nutanix Software

Virtualized SQL Server Performance and Scaling on Dell EMC XC Series Web-Scale Hyper-converged Appliances Powered by Nutanix Software Virtualized SQL Server Performance and Scaling on Dell EMC XC Series Web-Scale Hyper-converged Appliances Powered by Nutanix Software Dell EMC Engineering January 2017 A Dell EMC Technical White Paper

More information

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results Dell Fluid Data solutions Powerful self-optimized enterprise storage Dell Compellent Storage Center: Designed for business results The Dell difference: Efficiency designed to drive down your total cost

More information

TITLE. the IT Landscape

TITLE. the IT Landscape The Impact of Hyperconverged Infrastructure on the IT Landscape 1 TITLE Drivers for adoption Lower TCO Speed and Agility Scale Easily Operational Simplicity Hyper-converged Integrated storage & compute

More information

VxRack SDDC Deep Dive: Inside VxRack SDDC Powered by VMware Cloud Foundation. Harry Meier GLOBAL SPONSORS

VxRack SDDC Deep Dive: Inside VxRack SDDC Powered by VMware Cloud Foundation. Harry Meier GLOBAL SPONSORS VxRack SDDC Deep Dive: Inside VxRack SDDC Powered by VMware Cloud Foundation Harry Meier GLOBAL SPONSORS Dell EMC VxRack SDDC Integrated compute, storage, and networking powered by VMware Cloud Foundation

More information

Unify Virtual and Physical Networking with Cisco Virtual Interface Card

Unify Virtual and Physical Networking with Cisco Virtual Interface Card White Paper Unify Virtual and Physical Networking with Cisco Virtual Interface Card Simplicity of Cisco VM-FEX technology and Power of VMware VMDirectPath What You Will Learn Server virtualization has

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V EMC VSPEX Abstract This describes the steps required to deploy a Microsoft Exchange Server 2013 solution on

More information

EMC Integrated Infrastructure for VMware. Business Continuity

EMC Integrated Infrastructure for VMware. Business Continuity EMC Integrated Infrastructure for VMware Business Continuity Enabled by EMC Celerra and VMware vcenter Site Recovery Manager Reference Architecture Copyright 2009 EMC Corporation. All rights reserved.

More information

HCI: Hyper-Converged Infrastructure

HCI: Hyper-Converged Infrastructure Key Benefits: Innovative IT solution for high performance, simplicity and low cost Complete solution for IT workloads: compute, storage and networking in a single appliance High performance enabled by

More information

DATA PROTECTION IN A ROBO ENVIRONMENT

DATA PROTECTION IN A ROBO ENVIRONMENT Reference Architecture DATA PROTECTION IN A ROBO ENVIRONMENT EMC VNX Series EMC VNXe Series EMC Solutions Group April 2012 Copyright 2012 EMC Corporation. All Rights Reserved. EMC believes the information

More information

Dell EMC SAN Storage with Video Management Systems

Dell EMC SAN Storage with Video Management Systems Dell EMC SAN Storage with Video Management Systems Surveillance October 2018 H14824.3 Configuration Best Practices Guide Abstract The purpose of this guide is to provide configuration instructions for

More information

Data center requirements

Data center requirements Prerequisites, page 1 Data center workflow, page 2 Determine data center requirements, page 2 Gather data for initial data center planning, page 2 Determine the data center deployment model, page 3 Determine

More information

SOLUTION BRIEF Enterprise WAN Agility, Simplicity and Performance with Software-Defined WAN

SOLUTION BRIEF Enterprise WAN Agility, Simplicity and Performance with Software-Defined WAN S O L U T I O N O V E R V I E W SOLUTION BRIEF Enterprise WAN Agility, Simplicity and Performance with Software-Defined WAN Today s branch office users are consuming more wide area network (WAN) bandwidth

More information

Exam Name: VMware Certified Associate Network Virtualization

Exam Name: VMware Certified Associate Network Virtualization Vendor: VMware Exam Code: VCAN610 Exam Name: VMware Certified Associate Network Virtualization Version: DEMO QUESTION 1 What is determined when an NSX Administrator creates a Segment ID Pool? A. The range

More information

VxRack SDDC Deep Dive:

VxRack SDDC Deep Dive: VxRack SDDC Deep Dive: Inside VxRack SDDC Powered by VMware Cloud Foundation GLOBAL SPONSORS What is HCI? Systems design shift Hyper-converged HYPER-CONVERGED SERVERS SAN STORAGE THEN NOW 2 What is HCI?

More information