VMware vSAN Network Design-OLD November 03, 2017



Table of Contents
1. Introduction
   1.1 Overview
2. Network
   2.1 vSAN Network
3. Physical Network Infrastructure
   3.1 Data Center Network
   3.2 Oversubscription Considerations
   3.3 Host Network Adapter
   3.4 Virtual Network Infrastructure (VMkernel)
   3.5 Virtual Switch
   3.6 NIC Teaming
   3.7 Multicast
   3.8 Network I/O Control
   3.9 Jumbo Frames
4. Switch
   4.1 Switch Discovery Protocol
5. Network
   5.1 Network Availability
6. iSCSI Target
   6.1 iSCSI Networking Design Considerations
   6.2 Quality of Service (QoS)
   6.3 VMkernel Port Guidance
7. Conclusion
   7.1 Conclusion
8. About the Author
   8.1 About the Author
9. Vendor Specific Guidance
   9.1 Cisco ACI

1. Introduction

This document is targeted toward virtualization, network, and storage architects interested in deploying VMware vSAN solutions.

1.1 Overview

vSAN is a hypervisor-converged, software-defined storage solution for the software-defined data center. It is the first policy-driven storage product designed for VMware vSphere environments that simplifies and streamlines storage provisioning and management. vSAN is a distributed, shared storage solution that enables the rapid provisioning of storage within VMware vCenter Server as part of virtual machine creation and deployment operations.

vSAN uses the concept of disk groups to pool locally attached flash devices and magnetic disks into management constructs. Disk groups are composed of one cache device and several magnetic or flash capacity devices. In hybrid architectures, the flash device acts as a read cache and write buffer in front of the magnetic disks to optimize virtual machine and application performance. In all-flash architectures, the endurance of the cache device is leveraged to allow lower-cost capacity devices. The vSAN datastore aggregates the disk groups across all hosts in the vSAN cluster to form a single shared datastore for all hosts in the cluster.

vSAN requires a correctly configured network for virtual machine I/O as well as communication among cluster nodes. Because the distributed storage architecture causes the majority of virtual machine I/O to travel the network, a high-performance, highly available network configuration is critical to a successful vSAN deployment. This paper gives a technology overview of vSAN network requirements and provides vSAN network design and configuration best practices for deploying a highly available and scalable vSAN solution.

2. Network

vSAN is an integral part of the overall VMware vSphere network configuration and therefore cannot work in isolation from other vSphere network services.

2.1 vSAN Network

The hosts in a vSAN cluster must be part of the vSAN network and must be on the same subnet, regardless of whether they contribute storage. vSAN requires a dedicated VMkernel port type and uses a proprietary transport protocol for vSAN traffic between the hosts. The vSAN network is an integral part of the overall vSphere network configuration and therefore cannot work in isolation from other vSphere network services. vSAN utilizes either the VMware vSphere Standard Switch (VSS) or the VMware vSphere Distributed Switch (VDS) to construct a dedicated storage network. However, vSAN and other vSphere workloads commonly share the underlying virtual and physical network infrastructure. Therefore, the vSAN network must be carefully designed following general vSphere networking best practices in addition to its own. The following sections review general guidelines that should be followed when designing a vSAN network. These recommendations do not conflict with vSphere networking best practices.

3. Physical Network Infrastructure

This section discusses the physical network infrastructure recommendations for a successful vSAN deployment.

3.1 Data Center Network

The traditional access-aggregation-core, three-tier network topology was built to serve north-south traffic in and out of a data center. Convergence and virtualization have changed data center traffic patterns to include an east-west flow. While the three-tier network offers great redundancy and resiliency, it limits overall bandwidth by as much as 50% due to critical network links being oversubscribed, and the Spanning Tree Protocol (STP) is implemented to prevent network loops. As virtualization and cloud computing evolve, more data centers have adopted the leaf-spine topology for data center fabric simplicity, scalability, bandwidth, fault tolerance, and quality of service (QoS). vSAN is compatible with both topologies, regardless of how the core switch layer is constructed.

3.2 Oversubscription Considerations

East-West and Throughput Concerns

VMware vSAN requires low latency and ample throughput between the hosts, as reads may come from any host in the cluster and writes must be acknowledged by two hosts. For simple configurations utilizing modern, wire-speed, top-of-rack switches, this is a relatively simple consideration, as all ports can speak to all other ports at wire speed. As clusters are stretched across data centers (perhaps using the vSAN fault domains feature), the potential for oversubscription becomes a concern. Typically, the largest demand for throughput occurs during a host rebuild or host evacuation, as potentially all hosts may be sending and receiving traffic at wire speed to reduce the duration of the operation. The larger the capacity consumed on each host, the more important the oversubscription ratio becomes. A host with only 1Gbps of bandwidth and 12TB of capacity would take over 24 hours to refill with data.

Leaf-Spine

In a traditional leaf-spine architecture, due to the full-mesh topology and port density constraints, leaf switches are normally oversubscribed for bandwidth. For example, a fully utilized 10GbE uplink carrying the vSAN network may in reality achieve only 2.5Gbps of throughput per node when the leaf switches are oversubscribed at a 4:1 ratio and vSAN traffic needs to cross the spine, as illustrated in Figure 1.

The impact of network topology on available bandwidth should be considered when designing your vSAN cluster. The leaf switches are fully meshed to the spine switches with links that can be either switched or routed; these are referred to as Layer 2 and Layer 3 leaf-spine architectures, respectively. vSAN over Layer 3 networks is currently supported.

VMware Recommends: Consider using Layer 2 multicast for simplicity of configuration and operations.

Here is an example of how overcommitment can impact rebuild times. Assume the above design is used with three fault domains, and data is mirrored between cabinets. In this example each host has 10TB of raw capacity, with 6TB of it used for virtual machines protected by FTT=1. Also assume that three quarters of the available bandwidth (30Gbps) is available for rebuild. Assuming no disk contention bottlenecks, it would take approximately 26 minutes to rebuild over the oversubscribed link. If the capacity needing to be rebuilt increased to 12TB of data and the bandwidth were reduced to only 10Gbps, the rebuild would take at a minimum 156 minutes. Any time capacity increases or bandwidth between hosts decreases, rebuild times get longer.

VMware Recommends: Minimize oversubscription to reduce opportunities for congestion during host rebuilds or other high-throughput operations.
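To make the arithmetic above easy to reproduce, the following Python sketch estimates per-node bandwidth under a given oversubscription ratio and a lower-bound rebuild time for a given amount of data. It is illustrative only; the figures mirror the example above, decimal units are assumed (1TB = 8,000Gb), and disk and protocol overhead are ignored.

def effective_bandwidth_gbps(link_gbps, oversubscription_ratio):
    # A 10GbE uplink behind a 4:1 oversubscribed leaf yields roughly 2.5Gbps across the spine.
    return link_gbps / oversubscription_ratio

def rebuild_minutes(data_tb, bandwidth_gbps):
    gigabits = data_tb * 8 * 1000            # capacity to resync, expressed in gigabits
    return gigabits / bandwidth_gbps / 60.0  # seconds at line rate, converted to minutes

print(effective_bandwidth_gbps(10, 4))   # ~2.5 Gbps per node across the spine
print(rebuild_minutes(6, 30))            # ~27 minutes, in line with the ~26-minute example
print(rebuild_minutes(12, 10))           # ~160 minutes as a lower bound for 12TB at 10Gbps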

Equal-Cost Multi-Path (ECMP)

A number of vendors have implemented Ethernet fabrics that eliminate the need for Spanning Tree to prevent loops and instead employ Layer 2 routing mechanisms to make best use of the shortest paths as well as supplemental paths for added throughput. SPB (Shortest Path Bridging) or TRILL (Transparent Interconnection of Lots of Links) are commonly used, though often with proprietary extensions. vSAN is compatible with these topologies, but be sure to design for adequate east-west bandwidth within each vSAN cluster.

Cisco FEX/Nexus 2000

Fabric-extending devices such as the Cisco Nexus 2000 product line have unique considerations. These devices cannot switch traffic directly from port to port on the same device; all traffic must travel through the uplink to the parent Nexus 5000 or 7000 series switch and back down. While this increases port-to-port latency, the larger concern is that high-throughput operations (such as a host rebuild) can put pressure on the oversubscribed uplinks back to the parent switch.

Non-Stacked Top-of-Rack Switches and Cisco Fabric Interconnects

VMware Recommends: Deploy all hosts within a fault domain to a low-latency, wire-speed switch or switch stack. When multiple switches are used, pay attention to the throughput of the links between switches. Deployments with limited or heavily oversubscribed inter-switch throughput should be carefully considered.

Flow Control

Pause frames are part of Ethernet flow control and are used to manage the pacing of data transmission on a network segment. Sometimes a sending node (ESXi/ESX host, switch, etc.) may transmit data faster than another node can accept it. In this case, the overwhelmed network node can send pause frames back to the sender, pausing the transmission of traffic for a brief period of time. vSAN manages congestion by introducing artificial latency to prevent cache/buffer exhaustion. Since vSAN has built-in congestion management, disabling flow control on VMkernel interfaces tagged for vSAN traffic is recommended. Note that flow control is enabled by default on all physical uplinks. For further information on flow control, see the related VMware KB article.

VMware Recommends: Disable flow control for vSAN traffic.

Security Considerations

VMware vSAN traffic, like other IP storage traffic, is not encrypted and should be deployed on isolated networks. VLANs can be leveraged to securely separate vSAN traffic from virtual machine and other networks. Security can also be added at a higher layer by encrypting data in the guest in order to meet security and compliance requirements.

3.3 Host Network Adapter

VMware Recommends: Each vSAN cluster node should follow these practices:
- At least one physical NIC must be used for the vSAN network. One or more additional physical NICs are recommended to provide failover capability.
- The physical NIC(s) can be shared with other vSphere networks such as the virtual machine network and the vMotion network. Logical Layer 2 separation of vSAN VMkernel traffic (VLANs) is recommended when physical NIC(s) carry multiple traffic types. QoS can be provided for traffic types via Network I/O Control (NIOC).
- A 10GbE or larger NIC is strongly recommended for vSAN and is a requirement for all-flash vSAN. If a 1GbE NIC is used for hybrid configurations, VMware recommends it be dedicated to vSAN. NICs faster than 10Gbps, such as 25/40/100Gbps, are supported as long as your edition of vSphere supports them.

3.4 Virtual Network Infrastructure (VMkernel)

To enable the exchange of data in the vSAN cluster, there must be a VMkernel network adapter for vSAN traffic on each ESXi host. This is true even for hosts that do not contribute storage to vSAN. For each vSAN cluster, a VMkernel port group for vSAN should be created on the VSS or VDS, and the same port group network label should be used so that labels are consistent across all hosts. Unlike multiple-NIC vMotion, vSAN does not support multiple VMkernel adapters.

3.5 Virtual Switch

VMware vSAN supports both the VSS and the VDS. Note that VDS licensing is included with vSAN, so licensing should not be a consideration when choosing a virtual switch type. The VDS is required for dynamic LACP (Link Aggregation Control Protocol), LBT (Load Based Teaming), LLDP (Link Layer Discovery Protocol), bidirectional CDP (Cisco Discovery Protocol), and Network I/O Control (NIOC). The VDS is preferred for its superior performance, operational visibility, and management capabilities.

VMware Recommends: Deploy a VDS for use with VMware vSAN.

vCenter and VDS Considerations

VMware fully supports deploying a vCenter Server that manages a cluster on top of that cluster's storage. Starting with vSphere 5.x, static port binding became the default port group type for the VDS and persists a virtual machine's port assignment through a reboot. In the event vCenter is unable to bind to the VDS, a pre-created ephemeral port group or a VSS can be leveraged to restore access to the vCenter Server.

3.6 NIC Teaming

The vSAN network can use teaming and failover policies to determine how traffic is distributed between physical adapters and how traffic is rerouted in the event of an adapter failure. NIC teaming is used mainly for high availability rather than load balancing when the team is dedicated to vSAN. However, additional vSphere traffic types sharing the same team can still leverage the aggregated bandwidth by distributing different types of traffic across different adapters within the team. vSAN supports all NIC teaming options supported by the VSS and VDS.

Load Based Teaming

Route based on physical NIC load, also known as Load Based Teaming (LBT), allows vSphere to balance load across multiple NICs without a custom switch configuration. It begins balancing in the same way as Route Based on Originating Virtual Port ID, but dynamically reassesses physical-to-virtual NIC bindings every 30 seconds based on congestion thresholds. To prevent impact when port bindings change, settings such as Cisco's portfast or HP's admin-edge-port should be configured on the ESXi host-facing physical switch ports. With this setting, network convergence on these switch ports happens quickly after a failure because the port enters the Spanning Tree forwarding state immediately, bypassing the listening and learning states. Additional information on the different teaming policies can be found in the vSphere Networking documentation.
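As a rough illustration of the LBT behavior described above, the following Python sketch models the kind of decision LBT makes every 30 seconds: if an uplink's mean utilization over the window crosses a saturation threshold, a virtual NIC binding is moved to the least-loaded uplink. This is a simplified model, not the actual ESXi implementation; the 75% threshold and the data structures are assumptions for illustration only.

SATURATION_THRESHOLD = 0.75  # assumed congestion threshold for this sketch

def rebalance(uplink_utilization, vnic_bindings):
    # uplink_utilization: dict uplink -> mean utilization (0.0-1.0) over the 30-second window.
    # vnic_bindings: dict virtual NIC -> uplink. Returns the updated bindings.
    least_loaded = min(uplink_utilization, key=uplink_utilization.get)
    for vnic, uplink in vnic_bindings.items():
        if uplink_utilization[uplink] > SATURATION_THRESHOLD and uplink != least_loaded:
            vnic_bindings[vnic] = least_loaded  # move one binding off the congested uplink
            break                               # rebind conservatively, one port per pass
    return vnic_bindings

# Example: vmnic0 is congested, so one port is remapped to the idle vmnic1.
print(rebalance({"vmnic0": 0.9, "vmnic1": 0.2}, {"vsan-vmk": "vmnic0", "vm-net": "vmnic0"}))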

IP Hash/LACP

An additional failover path option is the IP hash-based policy. Under this policy, vSAN, either alone or together with other vSphere workloads, is capable of balancing load between adapters within a team, although there is no guarantee of a performance improvement for all configurations. While vSAN does initiate multiple connections, there is no deterministic balancing of traffic. This policy requires the physical switch ports to be configured for a link aggregation or port-channel technology such as Link Aggregation Control Protocol (LACP) or EtherChannel. Only static-mode EtherChannel is supported with the vSphere Standard Switch. Dynamic LACP, along with additional hash algorithms, is supported with the vSphere Distributed Switch. LAGs using the VDS and dynamic LACP enable advanced hashes, such as source and destination port, that can potentially balance traffic split across multiple connection sessions between the same two hosts. Note that you will need to check what your switch supports; even within the same vendor or product family, different ASICs may support only specific options.

VMware Recommends: Use Load Based Teaming or LACP for load balancing.

3.7 Multicast

IP multicast sends source packets to multiple receivers as a group transmission. Packets are replicated in the network only at points of path divergence, normally switches or routers, resulting in the most efficient delivery of data to a number of destinations with minimum network bandwidth consumption. For specifics on multicast, see VMware vSAN Layer 2/Layer 3 Network Topologies.

vSAN uses multicast to deliver metadata traffic among cluster nodes for efficiency and bandwidth conservation. Multicast is required for the VMkernel ports utilized by vSAN. While Layer 3 is supported, Layer 2 is recommended to reduce complexity. All VMkernel ports on the vSAN network subscribe to a multicast group using the Internet Group Management Protocol (IGMP). IGMP snooping configured with an IGMP snooping querier can be used to limit the physical switch ports participating in the multicast group to only the vSAN VMkernel port uplinks. The need to configure an IGMP snooping querier to support IGMP snooping varies by switch vendor; consult your specific switch vendor/model best practices for IGMP snooping configuration. If deploying a vSAN cluster across multiple subnets, be sure to review best practices and limitations for scaling Protocol Independent Multicast (PIM) in dense or sparse mode.

A default multicast address is assigned to each vSAN cluster at the time of creation. When multiple vSAN clusters reside on the same Layer 2 network, the default multicast address should be changed within the additional vSAN clusters to prevent multiple clusters from receiving all multicast streams. Similarly, multicast address ranges must be carefully planned in environments where other network services, such as VXLAN, also utilize multicast.

The VMware KB article on changing the default vSAN multicast address can be consulted for the detailed procedure. Isolating each cluster's traffic to its own VLAN also removes the possibility of conflict.

VMware Recommends: Isolate each vSAN cluster's traffic to its own VLAN when using multiple clusters.

3.8 Network I/O Control

vSphere Network I/O Control (NIOC) can be used to set quality of service (QoS) for vSAN traffic when it shares a NIC uplink in a VDS with other vSphere traffic types, including iSCSI traffic, vMotion traffic, management traffic, vSphere Replication (VR) traffic, NFS traffic, Fault Tolerance (FT) traffic, and virtual machine traffic. General NIOC best practices apply with vSAN traffic in the mix:
- For bandwidth allocation, use shares instead of limits, as shares offer greater flexibility for redistributing unused capacity.
- Always assign a reasonably high relative share to the Fault Tolerance resource pool because FT is a very latency-sensitive traffic type.
- Use NIOC together with NIC teaming to maximize network capacity utilization.
- Leverage the VDS port group and traffic shaping policy features for additional bandwidth control on different resource pools.

VMware Recommends: Follow these recommendations for vSAN:
- Do not set a limit on vSAN traffic; by default, it is unlimited.
- Set a relative share for the vSAN resource pool based on application performance requirements on storage, while also taking into account other workloads such as bursty vMotion traffic that is required for business mobility and availability. (A sketch of how shares translate into bandwidth under contention follows at the end of this section.)
- Avoid reservations, as reserved but unused bandwidth is shared only with other system traffic types (vMotion, storage, and so on) and not with virtual machine networking.

3.9 Jumbo Frames

vSAN supports jumbo frames but does not require them. VMware testing finds that jumbo frames can reduce CPU utilization and improve throughput; however, both gains are minimal because vSphere already uses TCP Segmentation Offload (TSO) and Large Receive Offload (LRO) to deliver similar benefits. In data centers where jumbo frames are already enabled in the network infrastructure, jumbo frames are recommended for the vSAN deployment. If jumbo frames are not currently in use, vSAN alone should not be the justification for deploying them.

VMware Recommends: Use the existing MTU/frame size you would otherwise be using in your environment.
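As referenced above, here is a minimal Python sketch of how NIOC shares behave during contention: each traffic type active on an uplink receives bandwidth in proportion to its share of the total active shares. The share values and the 10Gbps uplink are hypothetical, and this is a simplified model of the behavior, not actual NIOC code.

def allocate_bandwidth(uplink_gbps, shares):
    # shares: dict of traffic type -> share value for traffic active on a contended uplink
    total = sum(shares.values())
    return {traffic: uplink_gbps * share / total for traffic, share in shares.items()}

# Hypothetical shares competing on a saturated 10Gbps uplink.
print(allocate_bandwidth(10, {"vsan": 100, "vmotion": 50, "virtual-machine": 50}))
# -> {'vsan': 5.0, 'vmotion': 2.5, 'virtual-machine': 2.5}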

4. Switch

Switch discovery protocols allow vSphere administrators to determine which switch port is connected to a given VSS or VDS.

4.1 Switch Discovery Protocol

Switch discovery protocols allow vSphere administrators to determine which switch port is connected to a given VSS or VDS. vSphere supports Cisco Discovery Protocol (CDP) and Link Layer Discovery Protocol (LLDP). CDP is available for vSphere Standard Switches and vSphere Distributed Switches connected to Cisco physical switches. When CDP or LLDP is enabled for a particular vSphere Distributed Switch or vSphere Standard Switch, you can view properties of the peer physical switch, such as device ID, software version, and timeout, from the vSphere Client.

VMware Recommends: Enable LLDP or CDP in both send and receive mode.

5. Network

The vSAN network should have redundancy in both physical and virtual network paths and components to avoid single points of failure.

5.1 Network Availability

For high availability, the vSAN network should have redundancy in both physical and virtual network paths and components to avoid single points of failure. The architecture should configure all port groups or distributed port groups with at least two uplink paths on different NICs using NIC teaming, set a failover policy specifying the appropriate active-active or active-standby mode, and connect each NIC to a different physical switch for an additional level of redundancy.

VMware Recommends: Use redundant uplinks for vSAN and all other traffic.

6. iSCSI Target

iSCSI networking design with vSAN.

6.1 iSCSI Networking Design Considerations

iSCSI best practices generally mirror those for vSAN and the existing Best Practices for Running VMware vSphere on iSCSI. Specific vSAN guidance is available in the iSCSI Target Usage Guide.

Configuration Considerations

Like vSAN, iSCSI does not require jumbo frames but will support them if used. IPv6 as well as IPv4 is supported. When configuring a target you will select a VMkernel port, and the same-numbered port will be used on all hosts. For this reason the cluster should have a uniform configuration, with all VMkernel ports configured with the same MTU. Unlike iSCSI initiator ports, you do not need to tag a port for iSCSI; the target service manages adding the service to all ports in the cluster.

VMware Recommends: Use isolated, unique VLANs and VMkernel ports for performance as well as security.

Security

CHAP with bidirectional handshakes is fully supported by the iSCSI target service.

Performance and Availability

Performance in general will improve with more targets, as this increases access to VMkernel ports as well as target queue depth. Note that all connections for a target will connect to the same target VMkernel port on a single host. Initial connections can be made to any host, and an iSCSI redirect will be used to send the connection to the host that owns I/O access for a given target. As additional targets are created, they will be balanced across the cluster. Any host can take over access in the event of the failure of a given target.

6.2 Quality of Service (QoS)

Class of Service (CoS) and DSCP tags can be used to prioritize iSCSI traffic. Consult your switching vendor for best practices on configuring and tagging VLANs.

6.3 VMkernel Port Guidance

The VMware iSCSI target service is designed around several assumptions and design best practices:
- All hosts will contribute a VMkernel port for the target.
- All VMkernel ports for a target will use the same number (i.e., a target will use vmk2 on all hosts).
- Different VMkernel ports can be used for different targets, or they can be shared.
- All initiators can see all VMkernel ports used in the cluster. A/B network separation (with different non-routed subnets for different VMkernel ports for the same target) is not supported.
- The VMkernel ports used for iSCSI should ideally carry only iSCSI traffic.

7. Conclusion

vSAN network design should be approached in a holistic fashion, taking into account other traffic types utilized in the vSphere cluster in addition to the vSAN network.

7.1 Conclusion

vSAN network design should be approached in a holistic fashion, taking into account other traffic types utilized in the vSphere cluster in addition to the vSAN network. Other factors to consider are the physical network topology and the oversubscription posture of your physical switch infrastructure.

vSAN requires at minimum a 1GbE network for hybrid clusters and 10GbE for all-flash clusters. As a best practice, VMware strongly recommends a 10GbE network for vSAN to avoid the possibility of network congestion leading to degraded performance. A 1GbE network can easily be saturated by vSAN traffic, and teaming multiple NICs can provide only availability benefits in limited cases. If a 1GbE network is used, VMware recommends it be used only for smaller clusters and be dedicated to vSAN traffic.

To implement a highly available network infrastructure for vSAN, redundant hardware components and network paths are recommended. Switches can be configured either in uplink or stack mode, depending on switch capability and your physical switch configuration.

vSAN supports both vSphere Standard Switches and vSphere Distributed Switches. However, VMware recommends the use of vSphere Distributed Switches in order to realize the network QoS benefits offered by vSphere NIOC. When various vSphere network traffic types must share the same NICs as vSAN, separate them onto different VLANs and use shares as a quality-of-service mechanism to guarantee the level of performance expected for vSAN in possible contention scenarios.

8. About the Author

John Nicholson is a Senior Technical Marketing Manager in the Storage and Availability Business Unit.

8.1 About the Author

John Nicholson is a Senior Technical Marketing Manager in the Storage and Availability Business Unit. He focuses on delivering technical guidance around VMware vSAN solutions. John previously worked on architecting and implementing enterprise storage and VMware solutions. Follow John on Twitter.

Appendix: Multicast Configuration Examples

The multicast configuration examples below should be used only as a reference. Consult your switch vendor, as configuration commands may change between platforms and versions.

Cisco IOS (default is IGMP snooping on)

switch# configure terminal
switch(config)# vlan 500
switch(config-vlan)# no ip igmp snooping
switch(config-vlan)# do write memory

Brocade ICX (default is IGMP snooping off)

Switch# configure
Switch(config)# VLAN 500
Switch(config-vlan-500)# multicast disable igmp snoop
Switch(config-vlan-500)# do write memory

Brocade VDX (see the Brocade VDX guide for vSAN VDX configuration)

HP ProCurve (default is IGMP snooping on)

switch# configure terminal
switch(config)# VLAN 500 ip IGMP
switch(config)# no VLAN 500 ip IGMP querier
switch(config)# write memory

References

1. vSAN Product Page
2. VMware vSAN Hardware Guidance (Hardware-Guidance.pdf)
3. VMware NSX Network Virtualization Design Guide
4. Understanding IP Hash Load Balancing, VMware KB
5. Sample configuration of EtherChannel / Link Aggregation Control Protocol (LACP) with ESXi/ESX and Cisco/HP switches, VMware KB
6. Changing the multicast address used for a VMware vSAN Cluster, VMware KB
7. Understanding TCP Segmentation Offload (TSO) and Large Receive Offload (LRO) in a VMware environment, VMware KB
8. IP Multicast Technology Overview (White_papers/mcst_ovr.pdf)
9. Essential vSAN: Administrator's Guide to VMware vSAN, by Cormac Hogan and Duncan Epping
10. VMware Network I/O Control: Architecture, Performance and Best Practices

9. Vendor Specific Guidance

This section includes links and resources for specific networking vendors.

9.1 Cisco ACI

Cisco and Cisco Application Centric Infrastructure (ACI) fabric guidance.

General Cisco Guidance

VMware vSAN with Cisco UCS architecture documents can be found here.

Cisco ACI Version Guidance

2.1(1h) is the Cisco ACI release that introduced IGMP snoop static group support and IGMP snoop access group support. VMware has identified this as the minimum release version for a successful vSAN experience. You should always consult Cisco's documentation for the minimum and recommended Cisco ACI and APIC releases.

By default, IGMP snooping is enabled on the bridge domain. Note that Layer 3 IPv6 multicast routing is not supported by APIC IGMP snooping; at Layer 2, IPv6 multicast is flooded to the entire bridge domain. For these reasons, it is currently not recommended to use IPv6 with Cisco ACI for vSAN multicast traffic. For more information on multicast with Cisco ACI, see Cisco APIC and IGMP Snoop Layer 2 Multicast Configuration.

Below are sample screenshots taken from a Cisco ACI fabric configured for VMware vSAN:
- Sample EPG for VMware vSAN
- Sample IGMP snooping configuration
- Sample IGMP querier
- Sample verification of the IGMP snooping configuration


More information

Installation and Cluster Deployment Guide

Installation and Cluster Deployment Guide ONTAP Select 9 Installation and Cluster Deployment Guide Using ONTAP Select Deploy 2.3 March 2017 215-12086_B0 doccomments@netapp.com Updated for ONTAP Select 9.1 Table of Contents 3 Contents Deciding

More information

The Next Opportunity in the Data Centre

The Next Opportunity in the Data Centre The Next Opportunity in the Data Centre Application Centric Infrastructure Soni Jiandani Senior Vice President, Cisco THE NETWORK IS THE INFORMATION BROKER FOR ALL APPLICATIONS Applications Are Changing

More information

Configuring iscsi in a VMware ESX Server 3 Environment B E S T P R A C T I C E S

Configuring iscsi in a VMware ESX Server 3 Environment B E S T P R A C T I C E S Configuring iscsi in a VMware ESX Server 3 Environment B E S T P R A C T I C E S Contents Introduction...1 iscsi Explained...1 Initiators...1 Discovery and Logging On...2 Authentication...2 Designing the

More information

Reference Architecture. DataStream. UCS Direct Connect. For DataStream OS 2.6 or later Document # NA Version 1.08 April

Reference Architecture. DataStream. UCS Direct Connect. For DataStream OS 2.6 or later Document # NA Version 1.08 April DataStream For DataStream OS 2.6 or later Document # 310-0026NA Version 1.08 April 2017 www.cohodata.com Abstract This reference architecture describes how to connect DataStream switches to Cisco Unified

More information

Verified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k)

Verified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k) Verified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k) Overview 2 General Scalability Limits 2 Fabric Topology, SPAN, Tenants, Contexts

More information

VMware Cloud Foundation Planning and Preparation Guide. VMware Cloud Foundation 3.0

VMware Cloud Foundation Planning and Preparation Guide. VMware Cloud Foundation 3.0 VMware Cloud Foundation Planning and Preparation Guide VMware Cloud Foundation 3.0 You can find the most up-to-date techni documentation on the VMware website at: https://docs.vmware.com/ If you have comments

More information

vsan Stretched Cluster & 2 Node Guide January 26, 2018

vsan Stretched Cluster & 2 Node Guide January 26, 2018 vsan Stretched Cluster & 2 Node Guide January 26, 2018 1 Table of Contents 1. Overview 1.1.Introduction 2. Support Statements 2.1.vSphere Versions 2.2.vSphere & vsan 2.3.Hybrid and All-Flash Support 2.4.On-disk

More information

Cisco Virtualized Workload Mobility Introduction

Cisco Virtualized Workload Mobility Introduction CHAPTER 1 The ability to move workloads between physical locations within the virtualized Data Center (one or more physical Data Centers used to share IT assets and resources) has been a goal of progressive

More information

Disclaimer This presentation may contain product features that are currently under development. This overview of new technology represents no commitme

Disclaimer This presentation may contain product features that are currently under development. This overview of new technology represents no commitme STO1926BU A Day in the Life of a VSAN I/O Diving in to the I/O Flow of vsan John Nicholson (@lost_signal) Pete Koehler (@vmpete) VMworld 2017 Content: Not for publication #VMworld #STO1926BU Disclaimer

More information

Customer Onboarding with VMware NSX L2VPN Service for VMware Cloud Providers

Customer Onboarding with VMware NSX L2VPN Service for VMware Cloud Providers VMware vcloud Network VMware vcloud Architecture Toolkit for Service Providers Customer Onboarding with VMware NSX L2VPN Service for VMware Cloud Providers Version 2.8 August 2017 Harold Simon 2017 VMware,

More information

SAN Design Best Practices for the Dell PowerEdge M1000e Blade Enclosure and EqualLogic PS Series Storage (1GbE) A Dell Technical Whitepaper

SAN Design Best Practices for the Dell PowerEdge M1000e Blade Enclosure and EqualLogic PS Series Storage (1GbE) A Dell Technical Whitepaper Dell EqualLogic Best Practices Series SAN Design Best Practices for the Dell PowerEdge M1000e Blade Enclosure and EqualLogic PS Series Storage (1GbE) A Dell Technical Whitepaper Storage Infrastructure

More information