Datrium DVX Networking Best Practices


Datrium DVX Networking Best Practices

Abstract

This technical report presents recommendations and best practices for configuring Datrium DVX networking for enterprise-level use in VMware vSphere environments. It covers basic DVX concepts and setup for individual DVX sites as well as connectivity between DVX systems.

Date: November 6, 2018
Report: TR
Datrium, Moffett Park Dr., Sunnyvale, CA
Technical Report

Table of Contents

Executive Summary
1 Introduction
  1.1 Audience
  1.2 Purpose and Assumptions
  1.3 Version History
2 Topic Overview
  2.1 Terminology
  2.2 Datrium DVX
    Compute Node
    Data Node
  2.3 DVX Networking Overview
    DVX Network Traffic Types
      Data Network
      Management Network
      Replication Network
      Telemetry/Support Traffic
      Out of Band Management Traffic
      Other Networks
    Compute Node Connectivity
      Physical Network Interface Cards (pNICs)
    Data Node Connectivity
      Physical Network Connectivity
      Data Node IP Addressing
      Data Network Link Redundancy and Scalability
      Management Network Bonded Pair
3 DVX Networking Best Practices
  3.1 Recommended DVX Network Design
  3.2 DVX Networks
    3.2.1 Management Network
    3.2.2 Data Network
    3.2.3 Jumbo Frames
    3.2.4 Replication Network
      Replication Traffic Configuration
      DVX to DVX Replication Network Ports Requirements
      DVX to Cloud DVX Replication Network Ports Requirements
      Replication Throttling

    Snapshot Metadata Replication
    Telemetry/Support Traffic
      Telemetry/Support Port Requirements
    Out of Band Management Traffic
    Other Network Types
  3.3 Compute Nodes Best Practices
    Physical Network Interface Cards (pNICs)
    Recommended Compute Node Hardware Configuration Options
      Compute Node Networking Configuration Option 1
      Compute Node Networking Configuration Option 2
    Highly-Available Networking Configuration
  3.4 Virtual Networking
    3.4.1 Virtual Switches
    3.4.2 Virtual Switch Uplinks
      vSwitch Uplink Configuration
      Distributed Virtual Switch (DVS) Configuration with Uplink NIC Adapters
      Distributed Virtual Switch (DVS) Configuration with Uplink LAGs
    3.4.3 vSphere Port Groups
    3.4.4 VMkernel Ports
      Data Network VMkernel Port
      VMkernel Port Uplinks
    3.4.5 Load-Balancing, Teaming & Failover
      vSphere Load-Balancing & Failover
    3.4.6 Link Redundancy and Scalability
    3.4.7 Network IO Control (NIOC)
      DVX Networking Traffic Priorities
    3.4.8 Configuring DVX and iSCSI Storage
  3.5 Data Node Best Practices
    Uplink Connectivity
  3.6 Physical Network Best Practices
    Redundant Switch Topology
4 Conclusion
5 About the Authors

List of Figures

Figure 1 DVX Split Provisioning
Figure 2 DVX Logical Networking Diagram
Figure 3 Data Node Floating IP Addressing
Figure 4 DVX Data Network
Figure 5 Jumbo Frame Configuration Setting
Figure 6 VMkernel Ports
Figure 7 Restrict Replication Traffic to Use the Replication Network
Figure 8 DVX to DVX Replication Traffic Flow
Table 1 DVX to DVX Replication Data Ports
Table 2 DVX to Cloud DVX Replication Data Ports
Figure 9 Replication Throttling
Table 3 Telemetry/Support Port Requirements
Figure 10 Compute Node Networking Configuration, Option 1
Figure 11 Compute Node Networking Configuration, Option 2
Figure 12 Highly-Available Networking Configuration
Figure 13 Management vSwitch
Figure 14 Data, Replication & Other Network vSwitch Portgroups
Figure 15 Management Network Portgroup NIC Configuration
Figure 16 Data Network Portgroup NIC Configuration
Figure 17 Replication & Other Network Portgroups NIC Configuration
Figure 18 Management Network Portgroup LAG Configuration
Figure 19 Data Network Portgroup NIC Configuration
Figure 20 Replication & Other Network Portgroups NIC Configuration
Figure 21 Example DVX Port Group Configuration
Figure 22 VMkernel Port Uplinks
Figure 23 vSphere Teaming and Failover Configuration
Table 4 vSphere Teaming and Failover Configuration
Figure 24 Port Group and LAG Configuration
Figure 25 Physical LAG Connectivity
Table 5 Recommended NIOC Priorities
Figure 26 Recommended NIOC Priorities
Figure 27 Data Node Network Connectivity
Figure 28 Redundant Switch Topology

Executive Summary

The Datrium DVX system is a simple setup of servers (Compute Nodes) attached to backing storage (Datrium Data Nodes). The Compute Nodes run the Datrium DVX software in conjunction with the VMware vSphere hypervisor, and the Data Nodes provide resilient persistent storage and protection of the primary data running on the ESXi hosts. Connecting the DVX system components together (Compute Nodes and Data Nodes) and presenting them to the VMware management infrastructure with vCenter is done through basic Ethernet networking. Multiple Datrium DVX sites can be connected together, providing the ability to replicate data between DVX systems. This guide addresses the networking practices and recommendations for building an enterprise-level solution on the DVX virtualization platform.

1 Introduction

The Datrium DVX system provides an enterprise-level platform for running VMware virtualization, KVM virtualization and some bare-metal Linux applications. This Technical Report covers the basics of setting up the networking for a DVX system, and between DVX systems, constructed with the VMware virtualization configuration.

1.1 Audience

This Technical Report is intended for solution designers, system architects, or systems administrators looking to deploy Datrium DVX systems following best practices drawn from experience with VMware configurations, user administration, enterprise-level considerations and common networking options.

1.2 Purpose and Assumptions

The purpose of this Technical Report is to cover enough detail about the features, functionality and configurations to better understand deployment considerations and achieve the ideal system implementation for today's private cloud data center needs.

IMPORTANT: VMware vSphere terminology is used extensively throughout this report. It is highly recommended to have a good understanding of VMware's networking concepts in order to fully comprehend Datrium's best practices. The latest VMware vSphere networking documentation can be found on the VMware documentation website.

1.3 Version History

Version | Date | Authors | Notes
 | /9/2018 | Simon Long, Mike McLaughlin | Initial release

2 Topic Overview

This Technical Report covers the basics of the networking concepts and setup for building an enterprise-ready DVX system. This section covers the basic components, terminology and logical networking framework for the DVX system solution.

2.1 Terminology

Some of the terminology used in this document that is specific to the Datrium solution is covered here and in more detail throughout the Technical Report.

DVX - the combination of at least one Compute Node and at least one Data Node configured together to provide an on-premises platform for running the VMware vSphere hypervisor.

Compute Node - servers that run the VMware ESXi hypervisor and the Datrium DVX software. Compute Nodes are equipped with local flash storage to run the primary workloads. More information can be found here: dvx-compute-node-specifications/

Data Node - a Datrium-supplied storage appliance that maintains the persistent (final) copy of all data accessed on the Compute Nodes, as well as backup copies (snapshots) of local data or data that has been replicated in from another DVX system. More information can be found on the Datrium website.

Storage Pool - a collection of Data Nodes (up to 10) that form a larger single pool of data that can be referenced by the VMware environment at the Datastore, Virtual Machine, or VMDK level.

Hyperdriver software - the Datrium host-based software that runs most of the DVX functionality and data services. This software runs in user space, so it does not impact VMkernel-level operations or present itself as a separate VM entity to be managed.

Adaptive Pathing - Datrium-specific network path management capabilities between Data Nodes and Compute Nodes that simplify scalability and availability in the overall DVX configuration.

Split Provisioning - the approach taken within the DVX architecture to separate compute and data components to allow for simpler management and scalability of the total configuration. More information can be found here: resources/datrium-split-provisioning/

2.2 Datrium DVX

The Datrium DVX system is a combination of Compute Nodes and Data Nodes networked together to provide a scalable environment for independent processing (compute) and data (storage) management. The figure below shows the separation of the compute and storage nodes enabled by the DVX architecture. This split provisioning deployment (Figure 1) is then connected back together through the appropriate networks as described in this technical note.

Figure 1 DVX Split Provisioning

Compute Node

The Compute Node is a physical server running the VMware vSphere hypervisor software and installed with the Datrium hyperdriver software. The Compute Node runs the application VMs from local flash on the host. Compute Node network configurations can vary, as it is possible to use Datrium Compute Nodes or customer-supplied 3rd-party Compute Nodes.

Data Node

The Datrium Data Node comes preconfigured with fully redundant, hot-swappable components. This includes mirrored NVRAM for fast writes, and redundant 10Gb or 25Gb network ports with load balancing and path failover for high-speed data traffic. The Data Node provides the resilient backing storage for the Compute Node flash-based data copy as well as the built-in backup storage for self-protecting the enterprise data sets.

2.3 DVX Networking Overview

This section provides an overview of DVX networking. As you can see from Figure 2, the networking design for the DVX system is fairly simple. However, it is important to understand the fundamentals before reading Datrium's recommendations in Section 3.

Figure 2 DVX Logical Networking Diagram

DVX Network Traffic Types

There are multiple logical networks used within a DVX system. It's important to understand the different traffic types, as each traffic type has its own configuration recommendations, which are documented in Section 3.2.

Data Network

The Data Network is a non-routed, Layer 2 network used to send/receive storage IO between the Compute Nodes and Data Nodes.

Management Network

The Management Network is used to connect the Data Nodes to the Compute Nodes (ESXi hosts) and vCenter servers. During initial configuration and ongoing management of the DVX system, the Management Network provides the primary administration connection for vCenter, Compute Nodes and Data Nodes. The Management Network is also used to transfer metadata between Data Nodes if replication is required between multiple DVX systems. The Management Network should not be confused with the Out-of-Band management network, which is discussed below.

Replication Network

The Replication Network is used for DVX to DVX replication. Replication uses both Compute Node and Data Node connections between the source and destination DVX systems.

Replication between DVX systems is largely performed by the Compute Nodes and not, as you might expect, the Data Nodes. For example, if a virtual machine is replicated from Site A to Site B, the replication data is transferred from the Compute Nodes in Site A to the Compute Nodes in Site B via the Replication Network. DVX has been carefully designed in this way to enable better scalability of data services through adding Compute Nodes and to reduce bottlenecks on the Data Nodes and the Data Network.

DVX replication also requires the transfer of metadata between Data Nodes. Metadata transfer, although required for replication, is performed over the Management Network and not the Replication Network.

Telemetry/Support Traffic

The DVX system includes support software that monitors the storage system and sends log data to Datrium Support. The system sends two kinds of log data to Datrium, and this support capability is automatic:

- The DVX system sends a small amount of heartbeat data to Datrium every five minutes.
- The DVX system sends accumulated statistics and log data daily.

In the context of a support call, it may be useful for Datrium Support personnel to have network access to the Data Node. When remote support is enabled, Datrium Support personnel can log into the Data Node for the purpose of running diagnostics and collecting data. Support mode must be explicitly enabled to allow remote access.

Out of Band Management Traffic

Out of Band (OOB) Management is used to directly administer physical servers when their vSphere Management interface becomes unavailable. At the connection level, this traffic is typically tied to console, serial or IPMI capabilities.

Other Networks

The networks documented above are required for DVX. However, there will often be additional network types within a DVX environment, such as:

- Virtual Machine
- vMotion
- Provisioning

Section 3.2 covers Datrium's best practices for the DVX networks.

Compute Node Connectivity

Physical Network Interface Cards (pNICs)

Each Compute Node in a DVX has physical network connectivity. Physical network connectivity is required for the Compute Nodes to communicate with Data Nodes and other Compute Nodes within the environment. The number and speed of the Physical Network Interface Cards (pNICs) will vary, as not all Compute Nodes are created equal.

DVX supports heterogeneous clusters, meaning Compute Nodes do not have to be the same configuration, model or brand. This flexible approach allows customers to use both old and new hardware within the same DVX without issue. This does, however, mean that your networking configuration may vary between Compute Nodes. Compute Nodes may each have multiple pNICs and different pNIC configurations of 1GbE, 10GbE or 25GbE components. Section 3.3 documents our recommendations for the physical networking connectivity of Compute Nodes.

Data Node Connectivity

Physical Network Connectivity

Data Nodes require physical network connectivity in order to communicate with Compute Nodes and other Data Nodes within the environment. However, unlike Compute Nodes, Data Node configurations are more uniform. Data Node hardware is standardized, which helps keep the networking configuration simple.

Data Node IP Addressing

Data Nodes use floating IP addresses to offer continued availability in the event of a controller failover. The Data Node has one floating IP address for the data ports and a second floating IP address for the management ports. For a DVX with multiple Data Nodes in a single Storage Pool, there is one floating IP for the Data Network and one floating IP for the Management Network.

The DVX Hyperdriver software on the Compute Nodes uses the data floating IP address to communicate with the Storage Pool. Once connectivity has been established, the data traffic flows over the connected ports of each active controller. The DVX GUI and CLI use the management floating IP address to communicate with the Storage Pool. The DVX system maintains the floating IP addresses for continued access, regardless of any controller failover that might occur in a Storage Pool.
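Because all client access goes through the two floating IP addresses, a quick reachability check against both of them is a useful sanity test before and after a controller failover. The sketch below is illustrative only: the IP addresses are placeholders, and the assumption that the management floating IP answers HTTPS on TCP 443 for the DVX GUI is ours, not taken from this report.

```python
import socket
import subprocess

# Placeholder floating IPs -- replace with the values configured on your Data Node.
DATA_FLOATING_IP = "192.168.10.10"   # Data Network floating IP (hypothetical)
MGMT_FLOATING_IP = "10.0.0.10"       # Management Network floating IP (hypothetical)

def ping(ip: str, count: int = 2) -> bool:
    """Return True if the address answers ICMP echo (uses the system ping utility)."""
    result = subprocess.run(["ping", "-c", str(count), ip],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def tcp_open(ip: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to ip:port succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(f"Data floating IP answers ping:      {ping(DATA_FLOATING_IP)}")
    print(f"Mgmt floating IP answers ping:      {ping(MGMT_FLOATING_IP)}")
    # Assumption: the DVX GUI is served over HTTPS on the management floating IP.
    print(f"Mgmt floating IP HTTPS (443) open:  {tcp_open(MGMT_FLOATING_IP, 443)}")
```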

Figure 3 Data Node Floating IP Addressing

Data Network Link Redundancy and Scalability

Networking performance is critical when storage I/O is being transferred across a physical network. The smallest amount of contention or latency on a network can lead to poorly performing applications. In order to prevent this from happening within a DVX system, Datrium has created its own proprietary link algorithm that dynamically monitors and changes the flow of data between the physical Data Node links and the Compute Node data interfaces. This ensures that the load is evenly spread across all available links within the Data Node(s). This is important to understand, as this capability affects how vSphere virtual networking and load-balancing should be configured with a DVX system. Our recommendations regarding link redundancy management can be found in Section 3.4.

Management Network Bonded Pair

Management Network connectivity on the Data Nodes uses a different configuration from the Data Network redundancy solution documented above. The two physical links used for the Management Network are configured as an Active/Passive bonded pair.

Management traffic will only be sent and received on the Active port. In the event of a failure on the Active interface, DVX will fail over, promoting the Passive interface to become the Active interface. There is a MAC address assigned to the floating IP address, and the MAC addresses of the physical bonded interfaces share the identity of the currently active physical link.

3 DVX Networking Best Practices

3.1 Recommended DVX Network Design

Section 3 documents networking best practices to be followed when using Datrium DVX within your environment.

IMPORTANT: VMware vSphere terminology is used extensively throughout this section. It is highly recommended to have a good understanding of VMware's networking concepts in order to fully comprehend Datrium's best practices. The latest VMware vSphere networking documentation can be found on the VMware documentation website.

3.2 DVX Networks

Section 2.3 documents the different networks required within a DVX environment. Each of the networks is different and has a different set of recommendations. The following sections document Datrium's best practices for each network.

3.2.1 Management Network

The Management Network contains browsable interfaces used by administrators for access to Compute Nodes / ESXi hosts, vCenters and the Datrium Data Node administration UI (GUI & CLI). Management Networks support routable interfaces, VLAN tagging and trunking.

The bandwidth requirement for DVX/vSphere management traffic is low and can be serviced by 1Gbit network connectivity.

Management traffic should be logically separated from all other traffic types. Logical separation can be achieved by using a separate VLAN, a dedicated subnet or physical separation.

Multiple redundant uplinks/paths should be used at all times to ensure high availability. Configuration recommendations for network uplinks can be found in Section 3.3.

If an environment contains multiple sites and multiple replicating DVX systems, the Management Network is used to transfer metadata between the Data Nodes. In order for this to happen, the Management IP address of each DVX system should be routable between sites.

3.2.2 Data Network

Data traffic should be logically separated from all other traffic types. Logical separation can be achieved by using a separate VLAN and/or a dedicated subnet. The Data Network can be trunked with other traffic on the Compute Node side; however, it should use access mode (non-trunk) ports on the Data Node side of the switches, as illustrated in Figure 4.

Figure 4 DVX Data Network

The bandwidth requirement on the Data Network can be very heavy. At a minimum, 10Gbit connectivity should be used between the Compute and Data Nodes. Where available, 25Gbit connectivity can be exploited for better performance in IO-intensive environments. 25Gbit connectivity requires compatible network switches and cables in order to support the higher bandwidth connections.

Multiple redundant uplinks/paths should be used at all times to ensure high availability. Configuration recommendations for network uplinks can be found in Section 3.3.

3.2.3 Jumbo Frames

The simplest and recommended setup of the DVX Data Network can use the default MTU of 1500 between Compute Nodes and Data Nodes with no significant performance considerations. Using Jumbo Frames configured with an MTU of 9000 is not a requirement with DVX. If your network infrastructure, server and storage connectivity are using Jumbo Frames, the DVX Data Node supports this with a simple setting available in the CLI network setup or in the UI, as shown in Figure 5.

Figure 5 Jumbo Frame Configuration Setting

In some situations, using Jumbo Frames can help reduce bandwidth usage by using fewer frames to transfer the same amount of data. They can also improve efficiency on existing networking equipment by reducing the number of packets each device receives and processes.

In order for Jumbo Frames to function correctly, all of the networking equipment used within the environment needs to support and be configured for Jumbo Frames, as indicated in the IMPORTANT note in Figure 5. If a single device within the traffic flow isn't configured correctly for Jumbo Frames, this can cause performance problems. It is imperative to check and configure each device within the network before enabling Jumbo Frames (see the verification sketch below). This includes:

- Data Node data interface
- Switches between the Data Node and the Compute Nodes
- Compute Node pNICs
- Distributed Virtual Switch (DVS)
- Standard vSwitch
- VMkernel Ports

NOTE: For details on how to configure Jumbo Frames within your physical switches, follow your switch vendor's documentation.
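One practical way to verify end-to-end jumbo frame support is to send a do-not-fragment ping sized to the largest payload an MTU of 9000 can carry: 9000 bytes minus 20 bytes of IPv4 header and 8 bytes of ICMP header, or 8972 bytes. The sketch below builds the ESXi vmkping command for that test; the VMkernel interface name and target IP are placeholders, and this is offered as an illustration rather than a Datrium-documented procedure.

```python
import subprocess

def jumbo_ping_cmd(target_ip: str, vmk: str = "vmk1", mtu: int = 9000) -> list:
    """Build an ESXi vmkping command that fails if any hop has to fragment the frame."""
    payload = mtu - 20 - 8   # subtract IPv4 header (20 bytes) and ICMP header (8 bytes)
    return ["vmkping",
            "-I", vmk,       # VMkernel interface carrying Data Network traffic (placeholder)
            "-d",            # set the do-not-fragment bit
            "-s", str(payload),
            target_ip]

if __name__ == "__main__":
    # Hypothetical Data Network floating IP of the Data Node.
    cmd = jumbo_ping_cmd("192.168.10.10", vmk="vmk1")
    print("Run on the ESXi shell:", " ".join(cmd))
    # Uncomment to execute directly on an ESXi host (Python is available in the ESXi shell):
    # subprocess.run(cmd, check=True)
```

If the ping fails at 8972 bytes but succeeds at 1472 bytes, at least one device in the path is still using the default MTU.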

3.2.4 Replication Network

Replication traffic is typically inter-datacenter, meaning data will usually flow over a Wide-Area Network (WAN) or over the Internet via a VPN. For this reason, replication traffic should be logically separated from all other traffic types. Logical separation can be achieved by using a separate VLAN and/or a dedicated subnet.

The bandwidth requirement on the Replication Network can be heavy in bursts. At a minimum, 1Gbit connectivity should be used. For situations where the DVX systems are within the same physical location, 10Gbit connectivity can also be exploited for potentially faster replication performance.

Replication Traffic Configuration

By default, replication traffic will flow over the Compute Node's Management interface. Additional configuration is required to enable the replication traffic to use the Replication Network.

Figure 6 VMkernel Ports

In the configuration illustrated in Figure 6, there are three DVX VMkernel ports configured as recommended in Section 3.4. Based on this configuration, vmk2 should be used for replication traffic. If not explicitly configured, replication traffic will flow over the Management Network by default. A manual configuration is required to push the replication traffic via the Replication VMkernel port; see Figure 7. This configuration is made within the DVX interface, and detailed steps can be found in the DVX System Management Guide.

Figure 7 Restrict Replication Traffic to Use the Replication Network

As mentioned in Section 2.3, replication between DVX systems is performed by the Compute Nodes. Replication data is sent from the source Compute Nodes to the destination Compute Nodes. Replication traffic between sites will typically be transferred over a WAN, or a VPN over the Internet. Figure 8 below shows the traffic flows required to replicate data from one DVX site to another.

Figure 8 DVX to DVX Replication Traffic Flow

In order for replication to take place, each source site Compute Node should have network access to each Compute Node at each destination site. When possible, a stretched Layer 2 network should be used for the Replication Network, allowing all Compute Nodes to logically exist in the same Layer 2 network. If a stretched Layer 2 network is not available, the local subnets used for the Replication Network at both sites should be routable in order for replication to flow between sites. Customers should follow their organizational standards to determine whether a WAN, VPN or similar solution should be used to transfer replication data from one site to another.

IMPORTANT: Do not use Network Address Translation (NAT) for DVX replication traffic.

DVX to DVX Replication Network Ports Requirements

The following ports are required to be open to allow replication traffic to flow between DVX systems.

Purpose | Source Interface | Destination Interface | Port Number | Direction
Replication Data | Compute Node - Replication VMkernel Adapter | Compute Node - Replication VMkernel Adapter | 1525* | Bi-Directional
Snapshot Metadata | Data Node - Management interface: floating IP address and both controller IP addresses | Data Node - Management interface: floating IP address and both controller IP addresses | 4105 | Bi-Directional

*The port number (1525) for replication traffic can be customized if needed.

Table 1 DVX to DVX Replication Data Ports
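A simple way to confirm that firewalls between sites allow the ports in Table 1 is to attempt a TCP connection from the source site to each destination Compute Node on port 1525 and to the destination Data Node management addresses on port 4105. The sketch below is illustrative only: the host lists are placeholders, and a successful connect proves only that the path is open, not that replication is configured.

```python
import socket

# Placeholder addresses for the destination site.
DEST_COMPUTE_NODES = ["10.20.0.11", "10.20.0.12"]                  # Replication VMkernel IPs (hypothetical)
DEST_DATA_NODE_MGMT = ["10.20.1.10", "10.20.1.11", "10.20.1.12"]   # floating + controller IPs (hypothetical)

REPLICATION_DATA_PORT = 1525    # customizable, per Table 1
SNAPSHOT_METADATA_PORT = 4105

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in DEST_COMPUTE_NODES:
        print(f"{host}:{REPLICATION_DATA_PORT} open -> {port_open(host, REPLICATION_DATA_PORT)}")
    for host in DEST_DATA_NODE_MGMT:
        print(f"{host}:{SNAPSHOT_METADATA_PORT} open -> {port_open(host, SNAPSHOT_METADATA_PORT)}")
```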

DVX to Cloud DVX Replication Network Ports Requirements

The following ports are required to be open to allow replication traffic to flow between on-premises DVX systems and Cloud DVX.

Purpose | Source Interface | Destination Interface | Port Number | Direction
Replication Data | Data Node - Management interface: floating IP address and both controller IP addresses | AWS | 1758, 4105 | Bi-Directional
AWS Service | Data Node - Management interface: floating IP address and both controller IP addresses | AWS | 443 | Outbound (DVX to AWS)

Table 2 DVX to Cloud DVX Replication Data Ports

Replication Throttling

Replication Throttling is a feature built into DVX that allows administrators to restrict the amount of bandwidth used for replication traffic; it should be used when bandwidth is limited between DVX systems and Cloud DVX. Throttling can be enabled and configured on a per-site basis, which is beneficial when bandwidth connections vary.

Figure 9 Replication Throttling

Customers should work with their Networking/WAN teams to understand their bandwidth availability and throttle replication traffic accordingly.
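When deciding on a throttle value, it helps to sanity-check how long a replication cycle will take at the chosen rate. The short calculation below is a planning sketch only; the daily change rate, link efficiency and throttle value are made-up numbers for illustration.

```python
def replication_hours(changed_gib: float, throttle_mbps: float, efficiency: float = 0.9) -> float:
    """Estimate hours to transfer `changed_gib` of data at `throttle_mbps`,
    assuming `efficiency` of the throttled rate is actually achieved."""
    bits_to_send = changed_gib * 1024**3 * 8
    effective_bps = throttle_mbps * 1_000_000 * efficiency
    return bits_to_send / effective_bps / 3600

if __name__ == "__main__":
    # Example: 500 GiB of daily changed data over a 200 Mbit/s throttle.
    hours = replication_hours(changed_gib=500, throttle_mbps=200)
    print(f"Estimated transfer time: {hours:.1f} hours")   # roughly 6.6 hours
```

If the estimated time approaches the interval between snapshots, the throttle is too aggressive for the change rate and replication will fall behind, as the NOTE below warns.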

NOTE: If the throttling rate is set too low, replication jobs may begin to fall behind. In this case you will get alerts from the DVX systems and can adjust the throttling or Protection Group details accordingly.

Snapshot Metadata Replication

When replication is configured between DVX systems, snapshot metadata is transferred over the Management Network between the Data Nodes. The volume of snapshot metadata that travels over the Management Network is very low, roughly 1% of the data that goes over the Data Network. In order for the snapshot traffic to be sent between Data Nodes, the Management Network needs to be either on a stretched L2 network or a routed network.

Telemetry/Support Traffic

The DVX system requires an Internet gateway for access to Datrium Support servers. The Internet gateway should be on the same subnet as the management port that is configured on the Data Node.

Telemetry/Support Port Requirements

The following ports are required to be open to allow DVX telemetry data to be sent to Datrium Support for proactive monitoring and for Datrium remote support services.

Purpose | Network Configuration | Data Node Interface | Datrium Server, Port & Protocol
DVX Autosupport | Combined data and management | Data interface: floating IP address and both controller IP addresses | Server: autosupport.datrium.com, Port: 443, Protocol: HTTPS
DVX Autosupport | Separate data and management | Management interface: floating IP address and both controller IP addresses | Server: autosupport.datrium.com, Port: 443, Protocol: HTTPS
Remote Support | Combined data and management | Data interface: floating IP address and both controller IP addresses | Servers: autosupport-tunnel.datrium.com and autosupport-tunnel-https.datrium.com, Port: 443, Protocol: HTTPS
Remote Support | Separate data and management | Management interface: floating IP address and both controller IP addresses | Servers: autosupport-tunnel.datrium.com and autosupport-tunnel-https.datrium.com, Port: 443, Protocol: HTTPS
Software Upgrade | Combined data and management | Data interface: floating IP address and both controller IP addresses | Server: upgrade-center-01.datrium.com, Port: 443, Protocol: HTTPS
Software Upgrade | Separate data and management | Management interface: floating IP address and both controller IP addresses | Server: upgrade-center-01.datrium.com, Port: 443, Protocol: HTTPS

Table 3 Telemetry/Support Port Requirements

Out of Band Management Traffic

Out of Band (OOB) management in a DVX system should be configured following your organization's best practices. All Compute Nodes should be configured with an OOB management interface to ensure that the Compute Nodes can still be administered in the event of a misconfiguration or network failure.

Other Network Types

Follow your organization's practices for enabling and configuring these networks:

- Virtual Machine Networks
- vMotion Network
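To confirm that the Internet gateway and any perimeter firewalls permit the outbound HTTPS connections listed in Table 3, a check along the following lines can be run from a machine on the Data Node's management (or data) subnet. This is a sketch only; it tests TCP 443 reachability and TLS negotiation to the listed servers, not DVX autosupport itself.

```python
import socket
import ssl

# Servers taken from Table 3.
DATRIUM_SUPPORT_HOSTS = [
    "autosupport.datrium.com",
    "autosupport-tunnel.datrium.com",
    "autosupport-tunnel-https.datrium.com",
    "upgrade-center-01.datrium.com",
]

def https_reachable(host: str, timeout: float = 5.0) -> bool:
    """Return True if a TLS session can be established to host:443."""
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, 443), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (OSError, ssl.SSLError):
        return False

if __name__ == "__main__":
    for host in DATRIUM_SUPPORT_HOSTS:
        status = "reachable" if https_reachable(host) else "BLOCKED or unreachable"
        print(f"{host}: {status}")
```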

3.3 Compute Nodes Best Practices

Physical Network Interface Cards (pNICs)

Physical Network Interface Cards (pNICs) are what connect the Compute Nodes to the physical network infrastructure. Physical hardware configuration will often vary between Compute Nodes for each customer site.

Recommended Compute Node Hardware Configuration Options

In instances where customers have the option to purchase new Compute Nodes for their DVX environment, we recommend the following two configurations.

Compute Node Networking Configuration Option 1

Figure 10 Compute Node Networking Configuration, Option 1

Option 1 is recommended when running IO-intensive workloads or when Distributed Virtual Switches (DVS) are not available. By physically separating the Management Network traffic from all the other network traffic, the Compute Node Management Network will always be available in the event of high network contention on the 10Gbit uplinks.

With this pNIC configuration, the virtual networking should be divided up as follows:

- 1x OOB: dedicated to Out of Band (OOB) management.
- 2x 1Gbit: the two 1Gbit pNICs should be used for Management and Replication metadata traffic only. Management and metadata traffic within a DVX environment is minimal, and 1Gbit is sufficient bandwidth.
- 2x 10Gbit: the two 10Gbit pNICs will be used for all other network traffic: Data, Replication, Virtual Machine and vMotion traffic.

Compute Node Networking Configuration Option 2

Figure 11 Compute Node Networking Configuration, Option 2

Option 2 is recommended when running non-IO-intensive workloads and/or when there is a limitation in physical network switch ports. Multiple traffic types will share the same pNICs. To ensure higher priority traffic is prioritized over lower priority traffic, it's recommended to configure NIOC; see Section 3.4.7 for our recommended networking traffic priorities. If NIOC is not available, Option 1 is recommended; without the ability to prioritize traffic during times of contention, access to Compute Nodes could be limited.

With this pNIC configuration, the virtual networking should be divided up as follows:

- 1x OOB: dedicated to Out of Band (OOB) management.
- 2x 10Gbit: the two 10Gbit pNICs will be used for ALL network traffic: Management, Data, Replication, Virtual Machine and vMotion traffic.

Highly-Available Networking Configuration

Regardless of whether Compute Nodes have two or ten pNICs, it is always a recommended best practice to connect Compute Nodes to at least two physical switches to reduce single points of failure (SPOF). This will allow the Compute Node to continue to communicate in the event of a physical switch or cabling failure.

NOTE: For some environments, it may be desirable to physically separate the Virtual Machine (user/application facing) or Replication (leaving the site) traffic on dedicated pNICs. This works fine within the context of setting up the basic DVX connectivity.

For example, if a Compute Node has 2x 1Gbit and 2x 10Gbit pNICs, the pNIC-to-switch connectivity should be configured similarly to Figure 12.

Figure 12 Highly-Available Networking Configuration

3.4 Virtual Networking

3.4.1 Virtual Switches

Both standard Virtual Switches (vSwitch) and Distributed Virtual Switches (DVS) can be used to manage the virtual networking within the Compute Nodes of the DVX environment. All of our networking recommendations can be applied to either virtual switch type, with the exception of Network IO Control, which requires a DVS.

If Network IO Control (NIOC) is required due to a limited number of pNICs, a DVS must be used, as NIOC is not supported with a standard vSwitch. See Section 3.4.7 for NIOC recommendations.

3.4.2 Virtual Switch Uplinks

Uplink ports are logical ports associated with physical network adapters (pNICs) installed on the Compute Node, providing a connection between the virtual network and the physical network. pNICs are assigned to uplink ports when they are initialized by a device driver or when the teaming policies for virtual switches are reconfigured.

Uplinks are assigned to vSwitches and DVSs during their creation. The number of Uplinks available will vary depending on the server hardware used and how many pNICs are installed on each Compute Node. Section 3.3 documents the recommended number of pNICs for a DVX Compute Node.

If the Compute Nodes are configured based on our recommended hardware configuration in Section 3.3, the vSwitch/DVS-to-Uplink configuration should be set up as illustrated in the following sections.

vSwitch Uplink Configuration

If only standard vSwitches are used within the vSphere environment, we recommend the following configuration:

- The Management vSwitch is configured with 2x 1Gbit Uplinks (e.g., vmnic0 & vmnic1)
- The Data, Replication & Other Networks vSwitch is configured with 2x 10Gbit Uplinks (e.g., vmnic4 & vmnic5)

By physically separating the Management Network traffic from all the other network traffic, we can guarantee that the Compute Node Management Network will have higher availability in the event of high network contention on the 10Gbit uplinks.

NOTE: Link Aggregation (LAG) is not supported with standard vSwitches.

Figure 13 Management vSwitch
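For reference, a standard vSwitch layout like the one above can also be scripted from the ESXi shell. The sketch below wraps the relevant esxcli commands in Python (an interpreter is available in the ESXi shell); the vSwitch name, port group names, VLAN IDs and vmnic names are placeholders to adapt to your environment, and this is an illustration rather than a Datrium-prescribed procedure.

```python
import subprocess

def esxcli(*args: str) -> None:
    """Run an esxcli command on the local ESXi host and raise if it fails."""
    subprocess.run(["esxcli", *args], check=True)

def build_data_vswitch(name: str = "vSwitch1",
                       uplinks: tuple = ("vmnic4", "vmnic5"),
                       portgroups: dict = None) -> None:
    """Create a standard vSwitch with two 10Gbit uplinks and VLAN-tagged port groups."""
    portgroups = portgroups or {"Data": 100, "Replication": 200, "VM-Network": 300}
    esxcli("network", "vswitch", "standard", "add", "--vswitch-name", name)
    for nic in uplinks:
        esxcli("network", "vswitch", "standard", "uplink", "add",
               "--vswitch-name", name, "--uplink-name", nic)
    # Make both uplinks active so either physical switch can carry the traffic.
    esxcli("network", "vswitch", "standard", "policy", "failover", "set",
           "--vswitch-name", name, "--active-uplinks", ",".join(uplinks))
    for pg, vlan in portgroups.items():
        esxcli("network", "vswitch", "standard", "portgroup", "add",
               "--vswitch-name", name, "--portgroup-name", pg)
        esxcli("network", "vswitch", "standard", "portgroup", "set",
               "--portgroup-name", pg, "--vlan-id", str(vlan))

if __name__ == "__main__":
    build_data_vswitch()
```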

Figure 14 Data, Replication & Other Network vSwitch Portgroups

Both Uplinks assigned to the Port Group should be associated with vmnics that are connected to different physical switches, as shown in Figure 12. Following this configuration allows traffic to continue to flow in the event of a physical switch failure.

Distributed Virtual Switch (DVS) Configuration with Uplink NIC Adapters

If Distributed Virtual Switches (DVS) are used within the vSphere environment, for simplicity, Datrium recommends using basic NIC Adapter uplinks in the following configuration:

- The Management Portgroup is configured with 2x 1Gbit Uplinks (e.g., NICs 1 & 2)
- The Data, Replication and other Portgroups are configured with 2x 10Gbit Uplinks (e.g., NICs 3 & 4)

Figure 15 Management Network Portgroup NIC Configuration

Figure 16 Data Network Portgroup NIC Configuration

Figure 17 Replication & Other Network Portgroups NIC Configuration

Both Uplinks assigned to the Portgroups should be associated with vmnics that are connected to different physical switches, as shown in Figure 12. Following this configuration allows traffic to continue to flow in the event of a physical switch failure.

Distributed Virtual Switch (DVS) Configuration with Uplink LAGs

If Distributed Virtual Switches (DVS) are used within the vSphere environment and the underlying network is already configured for LAGs, Datrium recommends using LAG uplinks in the following configuration:

- The Management Portgroup is configured with a 2x 1Gbit LAG (e.g., lag2)
- The Data, Replication and other Portgroups are configured with a 2x 10Gbit LAG (e.g., lag1)

For more information on Link Aggregation, see Section 3.4.6.

Figure 18 Management Network Portgroup LAG Configuration

Figure 19 Data Network Portgroup NIC Configuration

Figure 20 Replication & Other Network Portgroups NIC Configuration

3.4.3 vSphere Port Groups

Whether you are using a single vSwitch, multiple vSwitches or a DVS, individual vSphere Port Groups should be created for the different networks documented in Section 3.2:

- Management Network
- Data Network
- Replication Network
- Virtual Machine Network

Section 3.4.2 shows a couple of Port Group configuration examples, as does Figure 21.

Figure 21 Example DVX Port Group Configuration

The only two settings that are recommended to be changed during the configuration of the Port Groups are:

- The number of Uplinks (see Section 3.4.2)
- The VLAN ID, if VLANs are being used

All of the other Port Group settings should be configured according to your organizational specifications.

It is not uncommon for there to be multiple Virtual Machine network Port Groups within your environment, depending on what workloads are running. These Virtual Machine Port Groups should be configured according to your organizational specifications.

3.4.4 VMkernel Ports

A VMkernel port (also called a VMkernel NIC or interface) is used by VMkernel services when they need to connect to the physical network. VMkernel ports are used for management interfaces, IP-based storage (such as NFS or iSCSI), vMotion traffic, etc. Each of the networks discussed in Section 3.2 requires its own VMkernel adapter in order to communicate with its individual network.

Data Network VMkernel Port

IMPORTANT: Only a single VMkernel port is required on each Compute Node for Data Node traffic. Adding multiple VMkernel ports will not improve performance and could have a serious impact on reliability.

VMkernel Port Uplinks

VMkernel ports rely on the virtual switch's Uplinks for connectivity between VMkernel ports and the physical network. The number of Uplinks available for the VMkernel port will vary based on the number of virtual switch Uplinks. It is always recommended to have a minimum of two Uplinks configured for each VMkernel port, even if the same Uplinks are used for multiple VMkernel ports. Figure 22 below illustrates how Uplinks can be configured if pNICs are limited.

Figure 22 VMkernel Port Uplinks

Based on our recommended Compute Node configuration in Section 3.3.1, Datrium recommends following the virtual switch configurations provided in Section 3.4.2.
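Because a common misconfiguration is adding more than one Data Network VMkernel port per host, a simple inventory check like the sketch below can be useful. It operates on a plain data structure describing each host's VMkernel ports; collecting that inventory (for example with pyVmomi or PowerCLI) is left out, and the host and port names shown are placeholders.

```python
REQUIRED_NETWORKS = {"Management", "Data", "Replication", "vMotion"}

# Example inventory: host -> {vmk name: network label}.  Values are illustrative only.
hosts = {
    "esx01": {"vmk0": "Management", "vmk1": "Data", "vmk2": "Replication", "vmk3": "vMotion"},
    "esx02": {"vmk0": "Management", "vmk1": "Data", "vmk2": "Data", "vmk3": "vMotion"},
}

def check_host(name: str, vmks: dict) -> list:
    """Return a list of problems found on a single host."""
    problems = []
    networks = list(vmks.values())
    missing = REQUIRED_NETWORKS - set(networks)
    if missing:
        problems.append(f"{name}: missing VMkernel port(s) for {sorted(missing)}")
    if networks.count("Data") > 1:
        problems.append(f"{name}: more than one Data Network VMkernel port (only one is recommended)")
    return problems

if __name__ == "__main__":
    for host, vmks in hosts.items():
        for problem in check_host(host, vmks):
            print(problem)
    # In this example, esx02 is flagged for a duplicate Data VMkernel port and a missing Replication port.
```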

3.4.5 Load-Balancing, Teaming & Failover

Load-balancing, teaming and failover options are particularly important when it comes to configuring an environment that is both performant and highly available. For the simplest network configuration, Datrium recommends basic NIC teaming with load balancing and failover. This can be implemented with standard vSphere switches or vSphere Distributed Virtual Switch (DVS) configurations.

vSphere Load-Balancing & Failover

When using the DVS, configure each Port Group to use the configuration illustrated in Figure 23 and documented in Table 4.

Figure 23 vSphere Teaming and Failover Configuration

Setting | Recommendation
Load balancing | Route based on physical NIC load
Network failure detection | Link status only
Notify switches | Yes
Failback | Yes
Active Uplinks | At least two Uplinks

Table 4 vSphere Teaming and Failover Configuration

IMPORTANT: Do not use Route Based on IP Hash if you are not using EtherChannel. This will cause you to lose network connectivity.

If your physical network is configured to use Beacon Probing, this can be used as the Network failure detection option instead of Link Status Only.

3.4.6 Link Redundancy and Scalability

Link aggregation enables Ethernet interfaces to be grouped together to form a logical Ethernet link for the purpose of providing fault tolerance and high-speed links between switches, routers and servers. Link aggregation balances traffic across the member links within an aggregated Ethernet bundle and effectively increases the available uplink bandwidth. Another advantage of link aggregation is increased availability, because the logical Ethernet link is composed of multiple member links; if one member link fails, traffic continues over the remaining links.

The two most common standards for link aggregation are EtherChannel (Cisco proprietary) and IEEE 802.3ad, which is an open standard. Either standard can be used within a Datrium environment.

NOTE: For details on how to configure link aggregation within your physical switches, follow your switch vendor's documentation.

Link aggregation is not required, but if your organization's current network infrastructure is configured for link aggregation, it is recommended to continue to follow this approach for the Compute Node connections to the Data Network in order to get the added benefit of increased bandwidth and availability. Where possible, it should also be used for the Management and Replication Networks, but this isn't as critical.

IMPORTANT: LAG configuration should not be used on the Data Node uplinks.

Figure 24 Port Group and LAG Configuration

Uplinks assigned to the LAG should be associated with vmnics that are connected to different physical switches, as shown in Figure 25. Following this configuration allows traffic to continue to flow in the event of a physical switch failure.

Figure 25 Physical LAG Connectivity

If Link Aggregation is not an option within your networking environment, our recommendation is to use vSphere load-balancing. See Section 3.4.5 for our load-balancing recommendations.

3.4.7 Network IO Control (NIOC)

Network IO Control (NIOC), a feature of the vSphere Distributed Virtual Switch (DVS), can be used to prioritize network traffic in environments where Compute Nodes have a limited number of pNICs. This section documents when NIOC might be applicable within your DVX environment and Datrium's network traffic priority recommendations.

Below is a list of the network traffic types that will typically be present within your DVX environment. These traffic types are described in Section 2.3.

- Management Traffic
- Data Traffic
- Replication Traffic
- vMotion Traffic
- Virtual Machine Traffic

The hardware configuration of each customer's Compute Nodes will ultimately determine if and how NIOC should be configured within the environment. As every customer's environment is likely to be unique, instead of documenting every configuration possibility, Datrium has documented its high-level traffic priority recommendations, which enable all customers to configure NIOC regardless of their hardware configuration.

DVX Networking Traffic Priorities

In the event of network contention, it is particularly important to ensure certain traffic types get priority over other traffic. Table 5 below documents Datrium's recommended networking traffic priority levels.

Priority | Network Traffic Type
Highest | Data Network (NFS)
High | Virtual Machine Traffic
Medium | Management Traffic
Low | vMotion Traffic

Table 5 Recommended NIOC Priorities

Figure 26 illustrates how this configuration may look when configuring NIOC following Datrium's priority recommendations.

Figure 26 Recommended NIOC Priorities
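NIOC shares only matter during contention, where each traffic type receives bandwidth in proportion to its share value. The short sketch below shows how a fully saturated 10Gbit uplink would be divided for one possible share assignment that follows the priority order in Table 5; the share numbers themselves are illustrative, not Datrium-mandated values.

```python
# Illustrative NIOC share values following Table 5 (Highest > High > Medium > Low).
shares = {
    "Data (NFS)": 100,
    "Virtual Machine": 50,
    "Management": 25,
    "vMotion": 10,
}

def contention_split(link_gbit: float, shares: dict) -> dict:
    """Bandwidth each traffic type receives when the uplink is fully contended."""
    total = sum(shares.values())
    return {name: link_gbit * value / total for name, value in shares.items()}

if __name__ == "__main__":
    for name, gbit in contention_split(10.0, shares).items():
        print(f"{name:16s} {gbit:4.1f} Gbit/s")
    # Data (NFS) gets ~5.4 Gbit/s, Virtual Machine ~2.7, Management ~1.4, vMotion ~0.5.
```

Note that when the link is not contended, any traffic type can still burst up to the full uplink speed; shares only cap relative allocation under load.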

3.4.8 Configuring DVX and iSCSI Storage

In environments that have both iSCSI storage and DVX storage presented to the same Compute Node, the two storage solutions should be logically separated from each other. Logical separation can be achieved by using a separate VLAN and/or a dedicated subnet.

3.5 Data Node Best Practices

Uplink Connectivity

As discussed in Section 3.2.2, the bandwidth requirements on the Data Network can be very heavy. At the time of publishing, the current Data Node configuration has 2x 10GBase-T and 2x SFP+ networking interface options; however, only two interfaces can be used (either 10GBase-T or SFP+, not both). These high-speed interfaces are dedicated to Data traffic.

At a minimum, 10Gbit connectivity should be used between the Compute and Data Nodes. Where available, 25Gbit connectivity can be exploited for better performance in IO-intensive environments. 25Gbit connectivity requires compatible network switches and cables in order to support the higher bandwidth connections.

The physical switch ports that connect to the Data Nodes must be configured in access mode (non-trunk), as illustrated in Figure 12.

Data Node Network Interface Cards (NICs) should be connected to different physical switches, as shown in Figure 27. Following this configuration allows traffic to continue to flow in the event of a physical switch failure.

Figure 27 Data Node Network Connectivity

3.6 Physical Network Best Practices

Redundant Switch Topology

In order to prevent service outages in the event of a physical switch failure, Datrium recommends using a networking topology that can provide multiple redundant paths between the Compute and Data Nodes, eliminating single points of failure (SPOF).

Redundant network switches, where possible, should be connected using Inter-Switch Links (ISL) or a similar technology (e.g., vPC) that allows two switches to remain logically separated but enables the flow of information and traffic between the two switches. A simple example of how this can be achieved is illustrated in Figure 28.

When connecting switches in this type of topology, traffic will flow over the links between the switches. These links should be sized appropriately to carry half of the aggregate traffic between the Compute Node and Data Node layers. The physical switch ports that connect to the Data Nodes must be configured in access mode (non-trunk).
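The ISL sizing guidance above (half of the aggregate Compute Node to Data Node traffic) can be turned into a quick estimate. The numbers in the sketch below are placeholders for illustration, not measured values.

```python
def isl_size_gbit(compute_nodes: int, node_peak_gbit: float) -> float:
    """Rough ISL sizing: half of the aggregate Compute-to-Data-Node traffic,
    following the guidance in this section."""
    aggregate = compute_nodes * node_peak_gbit
    return aggregate / 2

if __name__ == "__main__":
    # Example: 8 Compute Nodes, each expected to push ~5 Gbit/s of storage I/O at peak.
    print(f"Size the inter-switch links for ~{isl_size_gbit(8, 5.0):.0f} Gbit/s")  # ~20 Gbit/s
```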

Figure 28 Redundant Switch Topology

4 Conclusion

This technical note covered several aspects of DVX networking setup and practices. The primary focus is on connecting the Compute Nodes and Data Nodes together to form the DVX system. Connecting DVX systems to each other and to the customer's management/administration environment is also covered. Once the DVX networking is addressed, the VMware environment can be connected to the rest of the IT infrastructure as best fits individual customer needs and practices.

In a world of complexity, Datrium seeks to offer simplicity. Datrium's networking best practices follow many of the practices the networking industry has refined over decades. Alongside simplicity, Datrium also prides itself on high performance. Following the networking best practices documented throughout this technical report will help ensure your applications get the most out of the DVX system.

5 About the Authors

Simon Long (Double VCDX #105) is a Senior Solutions Architect in the Office of the CTO at Datrium. In his role, he creates technical solutions and architecture guidance for current and future Datrium customers. Prior to Datrium, Simon worked at VMware for close to eight years, during which he had multiple roles ranging from an enterprise-level consultant to the lead architect and service owner for VMware's internal Horizon deployments. In a career spanning 17+ years, Simon has worked in a wide range of IT environments, from cloud service providers and software vendors to start-ups.

Mike McLaughlin is the Director, Solutions and Technical Marketing in the Office of the CTO at Datrium. Prior to Datrium, Mike was the Sr. Manager of Technical Marketing at Nimble Storage (now part of HPE). Mike has been involved in VMware solutions for the past several years, working with customers and partners to help define, test and deploy enterprise virtualization solutions.


More information

Architecture and Design of VMware NSX-T for Workload Domains. Modified on 20 NOV 2018 VMware Validated Design 4.3 VMware NSX-T 2.3

Architecture and Design of VMware NSX-T for Workload Domains. Modified on 20 NOV 2018 VMware Validated Design 4.3 VMware NSX-T 2.3 Architecture and Design of VMware NSX-T for Workload Domains Modified on 20 NOV 2018 VMware Validated Design 4.3 VMware NSX-T 2.3 You can find the most up-to-date technical documentation on the VMware

More information

Virtualization with VMware ESX and VirtualCenter SMB to Enterprise

Virtualization with VMware ESX and VirtualCenter SMB to Enterprise Virtualization with VMware ESX and VirtualCenter SMB to Enterprise This class is an intense, five-day introduction to virtualization using VMware s immensely popular Virtual Infrastructure suite including

More information

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND ISCSI INFRASTRUCTURE

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND ISCSI INFRASTRUCTURE DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND ISCSI INFRASTRUCTURE Design Guide APRIL 2017 1 The information in this publication is provided as is. Dell Inc. makes no representations or warranties

More information

Cisco HyperFlex Systems

Cisco HyperFlex Systems White Paper Cisco HyperFlex Systems Install and Manage Cisco HyperFlex Systems in a Cisco ACI Environment Original Update: January 2017 Updated: March 2018 Note: This document contains material and data

More information

Nimble Storage SmartStack Getting Started Guide Cisco UCS and VMware ESXi5

Nimble Storage SmartStack Getting Started Guide Cisco UCS and VMware ESXi5 Technical Marketing Solutions Guide Nimble Storage SmartStack Getting Started Guide Cisco UCS and VMware ESXi5 Document Revision Date Revision Description (author) 5/16/2014 1. 0 Draft release (mmclaughlin)

More information

Architecture and Design. 17 JUL 2018 VMware Validated Design 4.3 VMware Validated Design for Management and Workload Consolidation 4.

Architecture and Design. 17 JUL 2018 VMware Validated Design 4.3 VMware Validated Design for Management and Workload Consolidation 4. 17 JUL 2018 VMware Validated Design 4.3 VMware Validated Design for Management and Workload Consolidation 4.3 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/

More information

2V0-602.exam. Number: 2V0-602 Passing Score: 800 Time Limit: 120 min File Version: Vmware 2V0-602

2V0-602.exam. Number: 2V0-602 Passing Score: 800 Time Limit: 120 min File Version: Vmware 2V0-602 2V0-602.exam Number: 2V0-602 Passing Score: 800 Time Limit: 120 min File Version: 1.0 Vmware 2V0-602 VMware vsphere 6.5 Foundations Version 1.0 Exam A QUESTION 1 A vsphere Administrator recently introduced

More information

Administering VMware vsphere and vcenter 5

Administering VMware vsphere and vcenter 5 Administering VMware vsphere and vcenter 5 Course VM-05 5 Days Instructor-led, Hands-on Course Description This 5-day class will teach you how to master your VMware virtual environment. From installation,

More information

VMware vsphere: Install, Configure, Manage (vsphere ICM 6.7)

VMware vsphere: Install, Configure, Manage (vsphere ICM 6.7) VMware vsphere: Install, Configure, Manage (vsphere ICM 6.7) COURSE OVERVIEW: This five-day course features intensive hands-on training that focuses on installing, configuring, and managing VMware vsphere

More information

Why Datrium DVX is Best for VDI

Why Datrium DVX is Best for VDI Why Datrium DVX is Best for VDI 385 Moffett Park Dr. Sunnyvale, CA 94089 844-478-8349 www.datrium.com Technical Report Introduction Managing a robust and growing virtual desktop infrastructure in current

More information

Configuration Maximums

Configuration Maximums Configuration s vsphere 6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions

More information

IBM Cloud for VMware Solutions NSX Edge Services Gateway Solution Architecture

IBM Cloud for VMware Solutions NSX Edge Services Gateway Solution Architecture IBM Cloud for VMware Solutions NSX Edge Services Gateway Solution Architecture Date: 2017-03-29 Version: 1.0 Copyright IBM Corporation 2017 Page 1 of 16 Table of Contents 1 Introduction... 4 1.1 About

More information

Cisco ACI and Cisco AVS

Cisco ACI and Cisco AVS This chapter includes the following sections: Cisco AVS Overview, page 1 Installing the Cisco AVS, page 5 Key Post-Installation Configuration Tasks for the Cisco AVS, page 14 Distributed Firewall, page

More information

DEPLOYING A VMWARE VCLOUD DIRECTOR INFRASTRUCTURE-AS-A-SERVICE (IAAS) SOLUTION WITH VMWARE CLOUD FOUNDATION : ARCHITECTURAL GUIDELINES

DEPLOYING A VMWARE VCLOUD DIRECTOR INFRASTRUCTURE-AS-A-SERVICE (IAAS) SOLUTION WITH VMWARE CLOUD FOUNDATION : ARCHITECTURAL GUIDELINES DEPLOYING A VMWARE VCLOUD DIRECTOR INFRASTRUCTURE-AS-A-SERVICE (IAAS) SOLUTION WITH VMWARE CLOUD FOUNDATION : ARCHITECTURAL GUIDELINES WHITE PAPER JULY 2017 Table of Contents 1. Executive Summary 4 2.

More information

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE Design Guide APRIL 0 The information in this publication is provided as is. Dell Inc. makes no representations or warranties

More information

Cisco ACI with Cisco AVS

Cisco ACI with Cisco AVS This chapter includes the following sections: Cisco AVS Overview, page 1 Cisco AVS Installation, page 6 Key Post-Installation Configuration Tasks for the Cisco AVS, page 43 Distributed Firewall, page 62

More information

Emulex Universal Multichannel

Emulex Universal Multichannel Emulex Universal Multichannel Reference Manual Versions 11.2 UMC-OCA-RM112 Emulex Universal Multichannel Reference Manual Corporate Headquarters San Jose, CA Website www.broadcom.com Broadcom, the pulse

More information

Introduction to Virtualization. From NDG In partnership with VMware IT Academy

Introduction to Virtualization. From NDG In partnership with VMware IT Academy Introduction to Virtualization From NDG In partnership with VMware IT Academy www.vmware.com/go/academy Why learn virtualization? Modern computing is more efficient due to virtualization Virtualization

More information

VMware vsphere 6.5/6.0 Ultimate Bootcamp

VMware vsphere 6.5/6.0 Ultimate Bootcamp VMware vsphere 6.5/6.0 Ultimate Bootcamp Class Duration 5 Days Introduction This fast paced, high energy, hands-on course provides not only the foundation needed for a top performing software defined datacenter

More information

vsan Stretched Cluster & 2 Node Guide January 26, 2018

vsan Stretched Cluster & 2 Node Guide January 26, 2018 vsan Stretched Cluster & 2 Node Guide January 26, 2018 1 Table of Contents 1. Overview 1.1.Introduction 2. Support Statements 2.1.vSphere Versions 2.2.vSphere & vsan 2.3.Hybrid and All-Flash Support 2.4.On-disk

More information

vcenter Server Installation and Setup Modified on 11 MAY 2018 VMware vsphere 6.7 vcenter Server 6.7

vcenter Server Installation and Setup Modified on 11 MAY 2018 VMware vsphere 6.7 vcenter Server 6.7 vcenter Server Installation and Setup Modified on 11 MAY 2018 VMware vsphere 6.7 vcenter Server 6.7 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/

More information

21CTL Disaster Recovery, Workload Mobility and Infrastructure as a Service Proposal. By Adeyemi Ademola E. Cloud Engineer

21CTL Disaster Recovery, Workload Mobility and Infrastructure as a Service Proposal. By Adeyemi Ademola E. Cloud Engineer 21CTL Disaster Recovery, Workload Mobility and Infrastructure as a Service Proposal By Adeyemi Ademola E. Cloud Engineer 1 Contents Introduction... 5 1.2 Document Purpose and Scope...5 Service Definition...

More information

VMware Cloud Foundation Overview and Bring-Up Guide. Modified on 27 SEP 2017 VMware Cloud Foundation 2.2

VMware Cloud Foundation Overview and Bring-Up Guide. Modified on 27 SEP 2017 VMware Cloud Foundation 2.2 VMware Cloud Foundation Overview and Bring-Up Guide Modified on 27 SEP 2017 VMware Cloud Foundation 2.2 VMware Cloud Foundation Overview and Bring-Up Guide You can find the most up-to-date technical documentation

More information

OpenNebula on VMware: Cloud Reference Architecture

OpenNebula on VMware: Cloud Reference Architecture OpenNebula on VMware: Cloud Reference Architecture Version 1.2, October 2016 Abstract The OpenNebula Cloud Reference Architecture is a blueprint to guide IT architects, consultants, administrators and

More information

Configuring and Managing Virtual Storage

Configuring and Managing Virtual Storage Configuring and Managing Virtual Storage Module 6 You Are Here Course Introduction Introduction to Virtualization Creating Virtual Machines VMware vcenter Server Configuring and Managing Virtual Networks

More information

Logical Operations Certified Virtualization Professional (CVP) VMware vsphere 6.0 Level 2 Exam CVP2-110

Logical Operations Certified Virtualization Professional (CVP) VMware vsphere 6.0 Level 2 Exam CVP2-110 Logical Operations Certified Virtualization Professional (CVP) VMware vsphere 6.0 Level 2 Exam CVP2-110 Exam Information Candidate Eligibility: The Logical Operations Certified Virtualization Professional

More information

Dell EMC. VxRack System FLEX Architecture Overview

Dell EMC. VxRack System FLEX Architecture Overview Dell EMC VxRack System FLEX Architecture Overview Document revision 1.6 October 2017 Revision history Date Document revision Description of changes October 2017 1.6 Editorial updates Updated Cisco Nexus

More information

vcenter Server Installation and Setup Update 1 Modified on 30 OCT 2018 VMware vsphere 6.7 vcenter Server 6.7

vcenter Server Installation and Setup Update 1 Modified on 30 OCT 2018 VMware vsphere 6.7 vcenter Server 6.7 vcenter Server Installation and Setup Update 1 Modified on 30 OCT 2018 VMware vsphere 6.7 vcenter Server 6.7 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/

More information

Exam Name: VMware Certified Associate Network Virtualization

Exam Name: VMware Certified Associate Network Virtualization Vendor: VMware Exam Code: VCAN610 Exam Name: VMware Certified Associate Network Virtualization Version: DEMO QUESTION 1 What is determined when an NSX Administrator creates a Segment ID Pool? A. The range

More information

VMware vsphere: Fast Track [V6.7] (VWVSFT)

VMware vsphere: Fast Track [V6.7] (VWVSFT) VMware vsphere: Fast Track [V6.7] (VWVSFT) Formato do curso: Presencial Preço: 3950 Nível: Avançado Duração: 50 horas This five-day, intensive course takes you from introductory to advanced VMware vsphere

More information

HP BladeSystem Networking Reference Architecture

HP BladeSystem Networking Reference Architecture HP BladeSystem Networking Reference Architecture HP Virtual Connect FlexFabric Module and VMware vsphere 5 Technical white paper Table of contents Executive Summary... 2 Virtual Connect FlexFabric Module

More information

Virtualization with VMware ESX and VirtualCenter SMB to Enterprise

Virtualization with VMware ESX and VirtualCenter SMB to Enterprise Virtualization with VMware ESX and VirtualCenter SMB to Enterprise This class is an intense, four-day introduction to virtualization using VMware s immensely popular Virtual Infrastructure suite including

More information

Cisco Nexus 1000V InterCloud

Cisco Nexus 1000V InterCloud Deployment Guide Cisco Nexus 1000V InterCloud Deployment Guide (Draft) June 2013 2013 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 49 Contents

More information

VMware Cloud on AWS. A Closer Look. Frank Denneman Senior Staff Architect Cloud Platform BU

VMware Cloud on AWS. A Closer Look. Frank Denneman Senior Staff Architect Cloud Platform BU VMware Cloud on AWS A Closer Look Frank Denneman Senior Staff Architect Cloud Platform BU Speed is the New Currency Cloud Computing We are in the 3 rd fundamental structural transition in the history of

More information

VMware vsphere 6.5: Install, Configure, Manage (5 Days)

VMware vsphere 6.5: Install, Configure, Manage (5 Days) www.peaklearningllc.com VMware vsphere 6.5: Install, Configure, Manage (5 Days) Introduction This five-day course features intensive hands-on training that focuses on installing, configuring, and managing

More information

1V Number: 1V0-621 Passing Score: 800 Time Limit: 120 min. 1V0-621

1V Number: 1V0-621 Passing Score: 800 Time Limit: 120 min.  1V0-621 1V0-621 Number: 1V0-621 Passing Score: 800 Time Limit: 120 min 1V0-621 VMware Certified Associate 6 - Data Center Virtualization Fundamentals Exam Exam A QUESTION 1 Which tab in the vsphere Web Client

More information

Configuring the Software Using the GUI

Configuring the Software Using the GUI CHAPTER 3 This chapter describes how to use the GUI application to complete the Cisco Nexus 1000V configuration, and includes the following sections. GUI Software Configuration Process, page 3-2 Guidelines

More information

"Charting the Course... VMware vsphere 6.7 Boot Camp. Course Summary

Charting the Course... VMware vsphere 6.7 Boot Camp. Course Summary Description Course Summary This powerful 5-day, 10 hour per day extended hours class is an intensive introduction to VMware vsphere including VMware ESXi 6.7 and vcenter 6.7. This course has been completely

More information

VMware vsphere 5.5 Professional Bootcamp

VMware vsphere 5.5 Professional Bootcamp VMware vsphere 5.5 Professional Bootcamp Course Overview Course Objectives Cont. VMware vsphere 5.5 Professional Bootcamp is our most popular proprietary 5 Day course with more hands-on labs (100+) and

More information

Architecture and Design. Modified on 21 AUG 2018 VMware Validated Design 4.3 VMware Validated Design for Software-Defined Data Center 4.

Architecture and Design. Modified on 21 AUG 2018 VMware Validated Design 4.3 VMware Validated Design for Software-Defined Data Center 4. Modified on 21 AUG 2018 VMware Validated Design 4.3 VMware Validated Design for Software-Defined Data Center 4.3 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/

More information

IMPLEMENTING VIRTUALIZATION IN A SMALL DATA CENTER

IMPLEMENTING VIRTUALIZATION IN A SMALL DATA CENTER International scientific conference - ERAZ 2016: Knowledge based sustainable economic development IMPLEMENTING VIRTUALIZATION IN A SMALL DATA CENTER Ionka Gancheva, PhD student 213 Abstract: The article

More information

VMware Cloud on AWS Operations Guide. 18 July 2018 VMware Cloud on AWS

VMware Cloud on AWS Operations Guide. 18 July 2018 VMware Cloud on AWS VMware Cloud on AWS Operations Guide 18 July 2018 VMware Cloud on AWS You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/ If you have comments about

More information

EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA

EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA Version 4.0 Configuring Hosts to Access VMware Datastores P/N 302-002-569 REV 01 Copyright 2016 EMC Corporation. All rights reserved.

More information

Reference Architecture for Dell VIS Self-Service Creator and VMware vsphere 4

Reference Architecture for Dell VIS Self-Service Creator and VMware vsphere 4 Reference Architecture for Dell VIS Self-Service Creator and VMware vsphere 4 Solutions for Small & Medium Environments Virtualization Solutions Engineering Ryan Weldon and Tom Harrington THIS WHITE PAPER

More information

Storage Considerations for VMware vcloud Director. VMware vcloud Director Version 1.0

Storage Considerations for VMware vcloud Director. VMware vcloud Director Version 1.0 Storage Considerations for VMware vcloud Director Version 1.0 T e c h n i c a l W H I T E P A P E R Introduction VMware vcloud Director is a new solution that addresses the challenge of rapidly provisioning

More information

Dell EMC UnityVSA Cloud Edition with VMware Cloud on AWS

Dell EMC UnityVSA Cloud Edition with VMware Cloud on AWS Dell EMC UnityVSA Cloud Edition with VMware Cloud on AWS Abstract This white paper discusses Dell EMC UnityVSA Cloud Edition and Cloud Tiering Appliance running within VMware Cloud on Amazon Web Services

More information

Best Practices for Sharing an iscsi SAN Infrastructure with Dell PS Series and SC Series Storage using VMware vsphere Hosts

Best Practices for Sharing an iscsi SAN Infrastructure with Dell PS Series and SC Series Storage using VMware vsphere Hosts Best Practices for Sharing an iscsi SAN Infrastructure with Dell PS Series and SC Series Storage using VMware vsphere Hosts Dell Storage Engineering January 2017 Dell EMC Best Practices Revisions Date

More information

Deploying the Cisco Tetration Analytics Virtual

Deploying the Cisco Tetration Analytics Virtual Deploying the Cisco Tetration Analytics Virtual Appliance in the VMware ESXi Environment About, on page 1 Prerequisites for Deploying the Cisco Tetration Analytics Virtual Appliance in the VMware ESXi

More information

VMware vsphere 6.5 Boot Camp

VMware vsphere 6.5 Boot Camp Course Name Format Course Books 5-day, 10 hour/day instructor led training 724 pg Study Guide fully annotated with slide notes 243 pg Lab Guide with detailed steps for completing all labs 145 pg Boot Camp

More information

vstart 50 VMware vsphere Solution Overview

vstart 50 VMware vsphere Solution Overview vstart 50 VMware vsphere Solution Overview Release 1.3 for 12 th Generation Servers Dell Virtualization Solutions Engineering Revision: A00 March 2012 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY,

More information

AccelStor All-Flash Array VMWare ESXi 6.0 iscsi Multipath Configuration Guide

AccelStor All-Flash Array VMWare ESXi 6.0 iscsi Multipath Configuration Guide AccelStor All-Flash Array VMWare ESXi 6.0 iscsi Multipath Configuration Guide 1 Table of Contents Introduction... 3 Prerequisites... 3 Hardware Configurations... 4 Storage... 4 VMWare ESXi Server... 4

More information

Creating a VMware Software-Defined Data Center REFERENCE ARCHITECTURE VERSION 1.5

Creating a VMware Software-Defined Data Center REFERENCE ARCHITECTURE VERSION 1.5 Software-Defined Data Center REFERENCE ARCHITECTURE VERSION 1.5 Table of Contents Executive Summary....4 Audience....4 Overview....4 VMware Software Components....6 Architectural Overview... 7 Cluster...

More information

Setting Up Cisco Prime LMS for High Availability, Live Migration, and Storage VMotion Using VMware

Setting Up Cisco Prime LMS for High Availability, Live Migration, and Storage VMotion Using VMware CHAPTER 5 Setting Up Cisco Prime LMS for High Availability, Live Migration, and Storage VMotion Using VMware This chapter explains setting up LMS for High Availability (HA), Live migration, and, Storage

More information

Virtual Machine Manager Domains

Virtual Machine Manager Domains This chapter contains the following sections: Cisco ACI VM Networking Support for Virtual Machine Managers, page 1 VMM Domain Policy Model, page 3 Virtual Machine Manager Domain Main Components, page 3,

More information

SD-WAN Deployment Guide (CVD)

SD-WAN Deployment Guide (CVD) SD-WAN Deployment Guide (CVD) All Cisco Meraki security appliances are equipped with SD-WAN capabilities that enable administrators to maximize network resiliency and bandwidth efficiency. This guide introduces

More information

VMWARE SOLUTIONS AND THE DATACENTER. Fredric Linder

VMWARE SOLUTIONS AND THE DATACENTER. Fredric Linder VMWARE SOLUTIONS AND THE DATACENTER Fredric Linder MORE THAN VSPHERE vsphere vcenter Core vcenter Operations Suite vcenter Operations Management Vmware Cloud vcloud Director Chargeback VMware IT Business

More information

Hypervisors networking: best practices for interconnecting with Cisco switches

Hypervisors networking: best practices for interconnecting with Cisco switches Hypervisors networking: best practices for interconnecting with Cisco switches Ramses Smeyers Customer Support Engineer Agenda What is this session about? Networking virtualization concepts Hypervisor

More information

Introducing VMware Validated Designs for Software-Defined Data Center

Introducing VMware Validated Designs for Software-Defined Data Center Introducing VMware Validated Designs for Software-Defined Data Center VMware Validated Design 4.0 VMware Validated Design for Software-Defined Data Center 4.0 You can find the most up-to-date technical

More information

Question No: 2 What three shares are available when configuring a Resource Pool? (Choose three.)

Question No: 2 What three shares are available when configuring a Resource Pool? (Choose three.) Volume: 70 Questions Question No: 1 A VMware vsphere 6.x Administrator sees the following output In esxtop: What does the %ROY column represent? A. CPU Cycle Walt Percentage B. CPU Utilization C. CPU Ready

More information

VMware vsphere 6.0 / 6.5 Advanced Infrastructure Deployment (AID)

VMware vsphere 6.0 / 6.5 Advanced Infrastructure Deployment (AID) Title: Summary: Length: Overview: VMware vsphere 6.0 / 6.5 Advanced Infrastructure Deployment (AID) Class formats available: Online Learning (OLL) Live In-Classroom Training (LICT) Mixed class with Classroom

More information

iscsi Configuration for ESXi using VSC Express Guide

iscsi Configuration for ESXi using VSC Express Guide ONTAP 9 iscsi Configuration for ESXi using VSC Express Guide May 2018 215-11181_E0 doccomments@netapp.com Updated for ONTAP 9.4 Table of Contents 3 Contents Deciding whether to use this guide... 4 iscsi

More information