
Dell FS8600 with VMware vSphere
Deployment and Configuration Best Practices

Dell Engineering
April 2014

Revisions

Date          Revision   Author                                      Description
April 2014    1.0        Sammy Frish, FluidFS System Engineering     Initial release

THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

© 2014 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.

PRODUCT WARRANTIES APPLICABLE TO THE DELL PRODUCTS DESCRIBED IN THIS DOCUMENT MAY BE FOUND AT: http://www.dell.com/learn/us/en/9/terms-of-sale-commercial-and-public-sector

Performance of network reference architectures discussed in this document may vary with differing deployment conditions, network loads, and the like. Third party products may be included in reference architectures for the convenience of the reader. Inclusion of such third party products does not necessarily constitute Dell's recommendation of those products. Please consult your Dell representative for additional information.

Trademarks used in this text: Dell, the Dell logo, PowerEdge, PowerVault, PowerConnect, EqualLogic, Compellent and Force10 are trademarks of Dell Inc. Other Dell trademarks may be used in this document. VMware, Virtual SMP, vMotion, vCenter and vSphere are registered trademarks or trademarks of VMware, Inc. in the United States or other countries. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and/or names or their products and are the property of their respective owners. Dell disclaims proprietary interest in the marks and names of others.

Table of contents

Revisions
1 Preface
  1.1 Audience
  1.2 Purpose
  1.3 Additional Resources
  1.4 Terms and Abbreviations
  1.5 Disclaimer
2 Introduction
  2.1 Solution Overview
  2.2 Fluid File System Overview
    2.2.1 FluidFS Architecture
    2.2.2 Network Load-Balancing Layer
3 Network Infrastructure and Configuration
  3.1 Managing Client Files and Datastores in a Single File System
  3.2 Switch Topology
  3.3 ESX Host NICs for Datastore Traffic
  3.4 FS8600 Client Network Configuration
  3.5 Load Balancing Between FS8600 Controllers
4 NFS Datastore Traffic Load Sharing
  4.1 Subnet-Based Load Sharing
  4.2 Link Aggregation (LACP) Based Load Sharing
  4.3 Selecting the Load Sharing Method
5 Virtual Network Configuration
  5.1 NFS Datastore Traffic Configuration
  5.2 Virtual Machine Traffic Configuration
6 Datastore Configuration
  6.1 Determining the Number of Datastores in the Environment
  6.2 Increasing the Maximum Number of NFS Datastores per ESX Host
  6.3 Configuring NFS Datastores in the FS8600
    6.3.1 FS8600 NAS Volumes Configuration Procedure
  6.4 Cloning Datastores in the FS8600
  6.5 Migrating Datastores/Virtual Machines to the FS8600
7 Summary
A Appendix I: ESX Host Outbound Link Selection when Load Balancing Based on IP Hash

1 Preface

1.1 Audience

This document is intended for system, network and/or storage administrators and integrators who plan to deploy the FS8600 as a storage solution for a VMware vSphere-based virtualization platform. It is assumed throughout the document that the reader is familiar with the following topics:

1. FS8600 network attached storage platform functionality, features, installation, user interface and operation.
2. Architecture, configuration and deployment of VMware's vSphere platform.
3. Network File System (NFS) protocol implementation and terminology.
4. Network infrastructure design, configuration and deployment.

1.2 Purpose

This document outlines key points to consider throughout the design, deployment and ongoing maintenance of a vSphere solution with Dell's FS8600 file-level storage. The topics included in this document provide the reader with the fundamental knowledge and tools needed to make vital decisions to optimize the solution. This guide constitutes a conceptual approach with directives to set up and tune the environment. The reader should not expect to find step-by-step procedures for recommended configurations; these are described in other documents, which are referenced throughout this document where necessary.

1.3 Additional Resources

Recommended publications available in the Compellent Knowledge Center (http://kc.compellent.com):
- FS8600 Administrator's Guide

Recommended publications available in DellTechCenter (http://delltechcenter.com):
- Dell Fluid File System Overview
- FS8600 Snapshot and Volume Cloning Best Practices
- Dell Compellent FluidFS (FS8600) Networking Best Practices

Other publications:
- Best Practices for Running VMware vSphere on Network-Attached Storage (NAS) (http://www.vmware.com/resources/techresources/0096)

1.4 Terms and Abbreviations

The following terms and abbreviations are used in this document:

Adaptive Load Balancing (ALB): MAC address-based mechanism that balances client connections across all available network interfaces within an FS8600 system.

Link Aggregation Control Protocol (LACP): Also known as 802.3ad, LACP allows the combination of multiple physical links to create a single logical channel.

Network File System (NFS): Standard protocol for file and directory sharing over a network.

NFS Datastore: Logical container of virtual machine files that represents a remote file system accessed over NFS.

NFS Datastore Traffic: The virtual machine file I/O generated by ESX hosts against an NFS datastore server. NFS datastore traffic is created by the creation or cloning of virtual machines, as well as by guest OS-level I/O operations on the virtual machines' local hard drives.

NFS Export: Directory shared by an NFS server for remote client access.

Virtual IP (VIP): IP address that connects the NAS cluster solution to the client or LAN network. The VIP address allows clients to access the NAS solution as a single entity, thereby providing access to the file system. It enables the FS8600 system to perform load balancing between controllers, and ensures that service continues even if a controller fails.

Virtual Machine Files: Set of files that store, among other things, virtual machine configuration, virtual hard disk data and snapshots.

Application and User Files: Output files created by a client or user in applications on top of the operating system, such as documents, browser data, media files, and so on.

VMDK: File containing a virtual machine's virtual hard drive.

VMFS: VMware's clustered file system.

VMknic/VMkernel port: Virtual device in the VMkernel. VMkernel ports are used by the TCP/IP stack that services vMotion, NFS and software iSCSI clients running at the VMkernel level, as well as remote console traffic. In the context of NFS datastore traffic, VMkernel ports are referred to as interfaces for I/O over NFS.

VMNIC: ESX host physical network interface card.

1.5 Disclaimer

The information contained within this best practices document is intended to provide general recommendations only. Actual configurations in customer environments may vary due to individual circumstances, budget constraints, service level agreements, applicable industry-specific regulations, or other factors. Configurations should be tested before implementing them in a production environment.

2 Introduction

In vSphere-based solutions, virtual machine data is stored in a set of files holding information such as virtual hardware configuration, virtual hard drives and snapshot data, which ESX hosts store in logical data containers called datastores. Datastores are representations of some underlying physical storage space, either in the ESX server's local hard drives or in external storage systems. External storage system options supported by vSphere include block devices presenting LUNs (via iSCSI or Fibre Channel) formatted with VMware's VMFS file system, and external file systems that support remote access via NFS (Network File System protocol). The latter are incorporated into the virtualization solution by the ESX hosts, which are able to mount folders exported by the external file system (NFS exports) as datastores.

This document addresses the deployment of the FS8600 as a complete storage solution for a vSphere platform implementation. As such, the FS8600 can act as the storage both for virtual machine data files via NFS (datastores) and for virtual machine user files via NFS and/or CIFS.

2.1 Solution Overview

Using standard network infrastructure, Dell's FS8600 offers a scalable, flexible and highly available storage solution for vSphere. The key advantages of the solution are:

- A single FS8600 system can act as an NFS datastore server for ESX hosts and, at the same time, as a file server for Windows/Linux/UNIX clients. The combination of these two functions best leverages the FS8600 file-level storage, access, protection and management features on client data, and constitutes a cost-effective solution for the storage of both virtual machine files and user files in vSphere.
- Independent management of storage capacity for virtual machine files and application data, even though they all reside in the same storage system.
- Fast and simple datastore provisioning and management. The creation of NAS volumes in the FS8600 is a metadata operation regardless of volume size. Datastores residing in FS8600 NAS volumes can be rapidly and non-disruptively scaled up to meet a virtual data center's growing requirements. NAS volume thick and thin provisioning features allow the administrator to reserve storage space for datastores or to share the entire NAS pool capacity among them or other workloads. Thin-provisioned datastores can present to hosts a higher capacity than is physically available.
- Since all datastores in the FS8600 reside within a single file system, they can be cloned quickly and easily. NAS volume cloning in the FS8600 is a metadata operation. A clone volume consists of a new set of pointers to its parent volume's data blocks. New data written either to the base or the cloned volume is exclusively owned and referenced by that volume, while the two volumes continue sharing their common data blocks.

2.2 Fluid File System Overview

Fluid File System (FluidFS) is an enterprise-class, fully distributed file system that provides customers with the tools necessary to manage file data in an efficient and simple manner. The underlying software architecture leverages a symmetric clustering model with distributed metadata, native load balancing, advanced caching capabilities and a rich set of enterprise-class features. FluidFS removes the scalability limitations associated with traditional file systems, such as limited volume size, and supports high-capacity, performance-intensive workloads by scaling up (adding capacity to the system) and by scaling out (adding nodes, or performance, to the system).

2.2.1 FluidFS Architecture

The FluidFS architecture is an open standards-based Network Attached Storage (NAS) file system that supports industry standard protocols, including NFS v3 and v4 as well as CIFS/SMB v1.0, v2.0 and v2.1. It offers innovative features that provide high availability, performance, efficient data management, data integrity, and data protection. As a core component of the Dell Fluid Data architecture, FluidFS brings differentiated value to the various Dell storage offerings, including the Dell Compellent FS8600 as well as the Dell EqualLogic FS7600 and FS7610.

2.2.2 Network Load-Balancing Layer

Figure 1 Load Balancing in a FluidFS Cluster

FluidFS presents multiple network ports from multiple controllers to the client network. To simplify access and administration, FluidFS supports virtual IP addresses. Physical distribution of client sessions is managed by a built-in load balancing capability. By default, the load is balanced between the interfaces within a controller using Adaptive Load Balancing (ALB). ALB is a MAC address-based balancing mechanism that does not require any configuration on the network switch. For client access within the same Layer 2 network, FluidFS balances client connections across all available network interfaces within the cluster, based on client MAC addresses. When load balancing across a routed network boundary, FluidFS utilizes DNS round robin in addition to the MAC-based address lookup. Each NAS controller can access and serve all data stored in the FluidFS system. Figure 1 shows load balancing in a FluidFS cluster on a single Layer 2 network.

3 Network Infrastructure and Configuration

This section describes various considerations and recommendations related to the physical network infrastructure as well as its logical configuration.

3.1 Managing Client Files and Datastores in a Single File System

The FS8600 in a vSphere environment can act as an NFS datastore server for virtual machine files and, at the same time, as a file server for application and user files. Storing client files outside virtual machine hard drives and directly in the FS8600 is the best way to leverage the native FS8600 file-level features on application and user data. This methodology has several benefits:

- Seamless differentiation and separate management of guest operating system data and application and user data.
- Faster and more flexible recovery of application and user data from corruption and loss using snapshots.
- Increased efficiency of application and user data replication.
- Accessibility of application and user data that is independent of the virtual machine.

Dell recommends logically separating virtual machine files from application and user data in different NAS volumes to consistently realize the benefits above. Achieve this by creating a set of NAS volumes, one for each datastore, and a different set of volumes with their corresponding CIFS shares and/or NFS exports for client files.

Figure 2 Network Schema

In addition, for better security and management, Dell recommends segmenting the network paths physically and logically. Assign separate physical NIC teams for each path and split the traffic logically into VLANs. See Figure 2 for reference.

It is VMware's best practice to minimize the number of hops on the NFS datastore network. Routing between the ESX host NFS datastore traffic links and the FS8600 client ports should be avoided. VLANs and subnets can and should be used to logically separate workloads and load-share the network throughput, as long as the links between the ESX hosts and the FS8600 reside in the same broadcast domain.

3.2 Switch Topology

The switch infrastructure that connects to the FS8600 should be redundant (see the example in Figure 3) in order to support and maintain the appliance's high availability. Architecturally, three guiding principles should be used to construct a best practice switch connectivity topology:

- Avoid any single points of failure.
- Ensure sufficient inter-switch throughput.
- Keep the FS8600 client ports and the ESX hosts' datastore traffic ports in the same broadcast domain.

Figure 3 Sample ESX Host to FS8600 Interconnect Network Switch Topology

3.3 ESX Host NICs for Datastore Traffic

Dell recommends assigning at least one pair of VMNICs (both of the same line speed) on every ESX host to exclusively transport NFS datastore traffic between the host and the FS8600. These links provide the network infrastructure required for the creation, deletion and cloning of virtual machines, as well as for the I/O workload on their virtual hard drives. Determining the VMNIC throughput required for this type of workload should take into consideration the environment and the throughput requirements of the virtual machines in use. The VMNICs should be teamed to provide both fault tolerance and increased aggregated throughput by applying ESX host load sharing methods (see section 4, NFS Datastore Traffic Load Sharing).

3.4 FS8600 Client Network Configuration

The client network in the FS8600 should be configured to accommodate the logical separation of guest OS data and application/user data into VLANs and subnets. Dell recommends having at least two subnets, each in a different VLAN, to allocate these two main workloads. Additional subnets may be needed to support the NFS datastore traffic load sharing methodology chosen for the solution (see section 4, NFS Datastore Traffic Load Sharing). Since these additional subnets are primarily used for traffic load sharing, they can be allocated in the same VLAN, as illustrated in Figure 4.

The FS8600 can be configured with multiple network (subnet) access to/from the client network links. The configuration of each client network requires at least one virtual IP (VIP) and one IP address for each node. Client networks have an optional VLAN tagging configuration. All networks configured in the FS8600 are accessible to/from all client network ports simultaneously, thus granting high availability and no single point of failure.

Figure 4 FS8600 Client Network Configuration

3.5 Load Balancing Between FS8600 Controllers

The FS8600 automatically distributes client connections among all controllers in the cluster and their physical network links. Uniform distribution is guaranteed when many clients are served by the appliance, where each is assigned a node and a physical interface. However, in deployments where only a few clients are served, review the client distribution status and verify its uniformity. For example, some environments may have a limited number of ESX hosts generating significant NFS workload over the links. After mounting their datastores for the first time, review the client balancing status and, if necessary, rebalance the hosts manually. Figure 5 shows an example of three ESX hosts load balanced between the two controllers in the FS8600 appliance. Ideally, each ESX host link should connect to a different controller, and all the connections assigned to a controller should be load balanced among its client network interfaces.

Figure 5 Load Balancing Between FS8600 Controllers

4 NFS Datastore Traffic Load Sharing

Figure 6 Load Sharing among ESX Host Links

Using more than a single physical NIC on each ESX host provides fault tolerance and, when properly set up, can significantly increase the aggregated logical link throughput. The NICs on each ESX host that are allocated for NFS datastore traffic should be configured as a team when the user desires traffic distribution among them. Effective load sharing among the links of an ESX host NIC team is achieved via one of the following methods:

- Subnetting: a datastore's outbound link is determined by the destination subnet.
- Link aggregation (LACP): a datastore's outbound link is determined by source and destination IP address hashing.

These two methods are described in detail in the following sub-sections.

4.1 Subnet-Based Load Sharing

For outbound NFS datastore traffic, an ESX host uses a VMkernel port on the same subnet as the destination NFS server IP. If several VMkernel ports are available in the same subnet, the host will only use a single VMkernel port and a single VMNIC. The subnet-based load sharing method therefore uses multiple destination subnets and multiple VMkernel ports to access them. NFS datastore traffic from an ESX host can be distributed among its physical links if the following conditions are met:

a. The corresponding NICs are teamed.
b. The VMkernel ports are configured with IP addresses in different subnets.

Note: When applying subnet-based load sharing, the load balancing method of the VMkernel port group should be set to Route based on originating virtual port (see section 5.1, NFS Datastore Traffic Configuration).
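To make the selection rule concrete before the detailed example that follows, the short sketch below (plain Python, with hypothetical addresses) mimics in simplified form how a host ends up using a different VMkernel port, and therefore a different physical link, for VIPs in different subnets.

```python
# Illustrative sketch (not ESX internals): pick the VMkernel port whose subnet
# contains the target FS8600 VIP. Addresses are hypothetical.
import ipaddress

# VMkernel ports configured on the host, one per subnet (and per teamed VMNIC).
vmkernel_ports = {
    "vmk0": ipaddress.ip_interface("10.10.10.1/24"),
    "vmk1": ipaddress.ip_interface("10.10.11.1/24"),
}

def select_vmkernel_port(nfs_server_vip: str) -> str:
    """Return the VMkernel port on the same subnet as the FS8600 VIP."""
    vip = ipaddress.ip_address(nfs_server_vip)
    for name, iface in vmkernel_ports.items():
        if vip in iface.network:
            return name
    raise LookupError("no VMkernel port on the VIP's subnet; traffic would be routed")

# Datastores mounted against VIPs in different subnets use different vmk ports,
# and therefore different physical links.
print(select_vmkernel_port("10.10.10.10"))  # -> vmk0
print(select_vmkernel_port("10.10.11.10"))  # -> vmk1
```

If no VMkernel port shares the VIP's subnet, the traffic would have to be routed, which section 3.1 advises against for NFS datastore traffic.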

In a subnet-based load sharing configuration, first determine the number of subnets required to effectively distribute the traffic among the links on a team.

Figure 7 Switch Infrastructure: Subnet-Based Load Sharing (VMK0 = 10.10.10.1/24 and VMK1 = 10.10.11.1/24 on the ESX host, with one FS8600 VIP in each of the subnets 10.10.10.0/24 and 10.10.11.0/24)

For example, a host with two teamed VMNICs and two VMkernel ports, vmk0 (10.10.10.1/24) and vmk1 (10.10.11.1/24), will use vmk0 to reach 10.10.10.0/24 and vmk1 to reach 10.10.11.0/24, as shown in Figure 7. When the traffic to each destination subnet is routed via a different VMkernel port, it is also transmitted via a different physical VMNIC accordingly. Therefore, the number of subnets required to effectively share NFS datastore traffic among teamed VMNICs on an ESX host needs to be equal to the number of NICs on the team. In a solution where multiple hosts have different numbers of VMNICs assigned for datastore traffic, choose the number of subnets according to the host with the largest number of VMNICs. Finally, configure the corresponding subnets in the FS8600 client network with one VIP each (see section 3.4, FS8600 Client Network Configuration).

4.2 Link Aggregation (LACP) Based Load Sharing

The use of LACP to load-share traffic between ESX host VMNICs on a team requires a switch that supports the Link Aggregation Control Protocol (LACP), and the ports need to be configured as a link aggregation channel with a source/destination IP hashing policy. The configuration requires one port group in the virtual switch with the load balancing policy set to Route based on IP hash (see section 5.1, NFS Datastore Traffic Configuration). When using LACP for load sharing, the number of VIPs required in the FS8600 to effectively share NFS datastore traffic among teamed VMNICs on an ESX host needs to be equal to the number of VMNICs in the host. Distribution of the network load among all the VMNICs is best achieved using consecutive VIPs (VIPn = VIPn-1 + 1). If this is not feasible, see Appendix I, ESX Host Outbound Link Selection when Load Balancing Based on IP Hash, for information on an alternate way to choose the VIPs. When LACP is properly set up, each VMkernel source IP and VIP pair will be routed through a different physical link.

Figure 8 Switch Infrastructure: Load Sharing via Link Aggregation (a single VMkernel port on the ESX host reaching two FS8600 VIPs over different physical paths)

4.3 Selecting the Load Sharing Method

Using the subnetting method to configure multiple source IP addresses per ESX host adds additional value in terms of performance in environments with fewer ESX hosts (eight or fewer). The uniformity of traffic distribution across the file system resources is proportional to the number of source IP addresses; thus, assigning more than one IP address per host can increase the maximum aggregated throughput that each individual ESX host can achieve. In an environment where fewer than eight ESX hosts are served by a single FS8600 appliance (two controllers), Dell recommends applying the subnet-based load sharing method. For solutions with a larger number of ESX hosts, there is no significant difference between the LACP and subnet-based load sharing methods. In such cases, the choice primarily depends on network requirements and/or environment considerations.

5 Virtual Network Configuration

As mentioned in section 3.1, Managing Client Files and Datastores in a Single File System, segmentation of traffic types is desired for better management and increased security. The implementation of this approach starts with the vSphere virtual network configuration, as described below.

5.1 NFS Datastore Traffic Configuration

Dell recommends creating a vSwitch (standard or distributed) exclusively for NFS datastore traffic in its own VLAN, with the following configuration:

1. Add the VMNICs associated with NFS datastore traffic to the physical side of the vSwitch.
2. Configure one port group on the virtual side, with the proper VLAN configuration, destined to include the VMkernel ports.
3. Add one VMkernel port per subnet to the port group, in accordance with the load sharing method chosen on the ESX host (see section 4, NFS Datastore Traffic Load Sharing).

Figure 9 Sample Configuration of an NFS Datastore Traffic Switch

Set the proper load balancing policy under the teaming and failover settings of the VMkernel port group:

- For subnet-based load sharing, set the load balancing method to Route based on originating virtual port.
- For LACP-based load sharing, set the load balancing method to Route based on IP hash.

Note: Setting Route based on IP hash as the load balancing policy in the port group requires the configuration of LACP in the switch. See Figure 10 for a sample screenshot of the teaming and failover settings for port groups.
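The three steps above can also be scripted. The following is a minimal sketch using pyVmomi against a standard vSwitch; it assumes an already connected vim.HostSystem object, and the NIC names, VLAN ID, port group name and addresses are hypothetical placeholders.

```python
# Minimal sketch: scripting the vSwitch/port group steps above with pyVmomi on a
# standard vSwitch. Host object, NIC names, VLAN ID and addresses are hypothetical.
from pyVmomi import vim

def configure_nfs_networking(host: vim.HostSystem) -> None:
    net_sys = host.configManager.networkSystem

    # 1. vSwitch backed by the VMNICs reserved for NFS datastore traffic.
    vss_spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2", "vmnic3"]),
    )
    net_sys.AddVirtualSwitch(vswitchName="vSwitchNFS", spec=vss_spec)

    # 2. Port group in the NFS VLAN. "loadbalance_srcid" corresponds to
    #    "Route based on originating virtual port"; use "loadbalance_ip" for
    #    IP-hash (LACP-based) load sharing.
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(policy="loadbalance_srcid")
    pg_spec = vim.host.PortGroup.Specification(
        name="NFS-Datastore-Traffic",
        vlanId=20,
        vswitchName="vSwitchNFS",
        policy=vim.host.NetworkPolicy(nicTeaming=teaming),
    )
    net_sys.AddPortGroup(portgrp=pg_spec)

    # 3. One VMkernel port per subnet (repeat with a second address when using
    #    subnet-based load sharing).
    vnic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress="10.10.10.1", subnetMask="255.255.255.0")
    )
    net_sys.AddVirtualNic(portgroup="NFS-Datastore-Traffic", nic=vnic_spec)
```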

Figure 10 Teaming and Failover Settings

5.2 Virtual Machine Traffic Configuration

Ideally, use a separate virtual switch, with separate physical links and in a separate VLAN, for the virtual machines' outbound traffic to the FS8600 client data exports and shares.

Figure 11 Virtual Machines Network Switch

6 Datastore Configuration

NFS datastores in vSphere represent ESX host mounts, each accessing an NFS export over an IP address. Note that an ESX host creates a single I/O connection to the FS8600 for every individual datastore. Regardless of the teaming or load sharing method, this connection is given (and limited to) a single physical path to the FS8600, composed of one physical link from the ESX host to the switch and one from the switch to the FS8600 controller. As described in this section, creating a minimum number of datastores allows you to fully utilize the paths between the ESX hosts and the FS8600, maximize the aggregated throughput and add an additional level of throughput management across the physical links.

6.1 Determining the Number of Datastores in the Environment

The recommended number of datastores in an environment varies and depends on several considerations:

1. Number of physical links between the ESX host and the switch: Creating at least one datastore per link ensures full utilization of the maximum aggregate throughput. Their configuration depends on the load sharing method in use (see section 4, NFS Datastore Traffic Load Sharing). For subnet-based load sharing, configure at least one datastore per subnet. For LACP-based load sharing, configure at least one datastore per destination VIP.
2. Link speed distribution among the ESX host links: Two or more datastores in the same subnet will maintain their connections over the same physical link; hence their aggregated throughput will be limited by the link speed. This mode of operation adds an additional level of throughput management that, if leveraged, can also impact the number of datastores. For example, on an ESX host with a two-physical-link connection, an administrator can grant maximum link speed to a single datastore by creating it in the first subnet and all others in the second.

6.2 Increasing the Maximum Number of NFS Datastores per ESX Host

The parameter that defines the maximum number of NFS mounts (datastores) on each ESX host is set to 8 by default. In environments where more than 8 NFS datastores are needed, this parameter can be increased (up to 256). When increasing this number, VMware recommends also increasing the TCP/IP heap size, which is the amount of memory in megabytes allocated up front by the VMkernel to TCP/IP as heap. The maximum number of NFS mounts an ESX host can have is determined by the NFS.MaxVolumes parameter. When setting it to 256, also set Net.TcpipHeapSize to 32 and Net.TcpipHeapMax to 128. The configuration of these parameters is done via the ESX host advanced settings.

Note: Changes to Net.TcpipHeapSize and Net.TcpipHeapMax require a host reboot to take effect.
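These advanced settings can be applied per host from the vSphere Client or scripted. The following is a hedged pyVmomi sketch, assuming a connected vim.HostSystem object; the exact value typing accepted for integer options can vary by ESXi version.

```python
# Hypothetical sketch: raising the NFS mount limit and TCP/IP heap settings on
# an ESXi host with pyVmomi. Some ESXi versions expect these integer options to
# be passed as long values; adjust as needed, or set the same keys interactively
# in the host advanced settings.
from pyVmomi import vim

def raise_nfs_mount_limits(host: vim.HostSystem) -> None:
    adv = host.configManager.advancedOption      # vim.option.OptionManager
    changes = [
        vim.option.OptionValue(key="NFS.MaxVolumes", value=256),    # max NFS datastores per host
        vim.option.OptionValue(key="Net.TcpipHeapSize", value=32),  # MB allocated up front
        vim.option.OptionValue(key="Net.TcpipHeapMax", value=128),  # MB heap ceiling
    ]
    adv.UpdateOptions(changedValue=changes)
    # Net.TcpipHeapSize / Net.TcpipHeapMax changes take effect after a host reboot.
```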

6.3 Configuring NFS Datastores in the FS8600

NFS exports are mounted by ESX hosts as datastores. These NFS exports are configured inside NAS volumes, which are virtual data containers in the FS8600. The administrator can configure each NAS volume to a specific size, and each volume has its own administrative policies that apply to all files, folders and subfolders contained within. NAS volumes are the baselines for snapshots, which in turn constitute the infrastructure for data replication and backup. Each NAS volume has its own directory hierarchy that starts at the root folder.

Dell recommends configuring only one datastore on each NAS volume in the FS8600. Maintaining a one-to-one datastore to NAS volume relationship ensures that all virtual machines inside the same datastore are governed by common policies in the FS8600. NAS volumes created for datastores should be used exclusively for the storage of virtual machine files. It is Dell's best practice to store and manage other data types in separate NAS volumes (see section 3.1, Managing Client Files and Datastores in a Single File System). Grant access to datastore NAS volumes by exporting only their root folders. Groups of virtual machines with different management policies should be stored in different datastores, each contained in a different NAS volume. Once all client networks are configured, grant all ESX hosts access to all exported root folders on datastore NAS volumes to ensure that high availability is maintained.

6.3.1 FS8600 NAS Volumes Configuration Procedure

To configure FS8600 NAS volumes as datastores:

1. Create the NAS volumes destined to be mounted as datastores and configure them with the native UNIX security style.
2. Export the root folder of each NAS volume.
3. On each export, grant read and write permissions to everyone, limited to the source IP addresses of the ESX hosts, on the NAS volumes where you want to allow creation, deletion, cloning and migration of virtual machines.
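After the export is created and permissions are granted, mounting it as a datastore on each ESX host can also be scripted. The sketch below uses pyVmomi, with the FS8600 VIP, export path and datastore name as hypothetical placeholders.

```python
# Hypothetical sketch: mounting an FS8600 NFS export as a datastore on an ESX
# host with pyVmomi. The VIP, export path and datastore name are placeholders.
from pyVmomi import vim

def mount_fs8600_datastore(host: vim.HostSystem,
                           vip: str = "10.10.10.10",
                           export_path: str = "/Datastore1",
                           name: str = "FS8600-Datastore1") -> vim.Datastore:
    spec = vim.host.NasVolume.Specification(
        remoteHost=vip,          # FS8600 client VIP serving the export
        remotePath=export_path,  # exported root folder of the NAS volume
        localPath=name,          # datastore name as seen by the host
        accessMode="readWrite",
        type="NFS",
    )
    return host.configManager.datastoreSystem.CreateNasDatastore(spec)
```

Repeating the mount on every host, and spreading datastores across the configured VIPs, preserves the load sharing scheme described in section 4.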

6.4 Cloning Datastores in the FS8600

The built-in thin cloning feature of the FS8600 is a time- and space-efficient way to create a replica of a NAS volume. NAS volume clones are essentially writable snapshots created by means of metadata operations. The result of volume cloning is a new set of pointers to the parent volume's data blocks, making the space cost of volume cloning negligible. From the moment a volume is cloned, new data written either to the base or the cloned volume is exclusively owned and referenced by that volume, while the two volumes continue to share their common data blocks.

To ensure consistency, use the following procedure to clone datastores:

1. Ensure the virtual machines residing in the base datastore are shut down.
2. In the FS8600, take a snapshot of the datastore NAS volume.
3. Make a NAS volume clone from the snapshot.
4. Export the root folder of the clone volume and grant permissions to the ESX hosts' IP addresses to access it.
5. Mount the clone volume as a new datastore.
6. Browse the new datastore and import the virtual machines into the vCenter inventory.

6.5 Migrating Datastores/Virtual Machines to the FS8600

When the FS8600 is deployed in an existing environment and virtual machines or datastores need to be migrated from another backend to the FS8600, Dell recommends performing the migration using vSphere migration. Virtual machine files written to the FS8600 by the ESX hosts will be better distributed among the file system resources, resulting in better overall performance. Dell does not recommend bypassing the hosts and using external clients or tools to migrate virtual machine files.
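As a sketch of such a host-driven migration, the following pyVmomi fragment relocates a virtual machine's files to an FS8600-backed datastore with Storage vMotion; a connected vim.VirtualMachine object and the target vim.Datastore are assumed to have been obtained elsewhere.

```python
# Hypothetical sketch: host-driven migration of a virtual machine's files to an
# FS8600-backed NFS datastore using Storage vMotion (pyVmomi). Names are placeholders.
from pyVim.task import WaitForTask
from pyVmomi import vim

def migrate_vm_to_fs8600(vm: vim.VirtualMachine, target_ds: vim.Datastore) -> None:
    spec = vim.vm.RelocateSpec(datastore=target_ds)   # move all VM files to the target datastore
    WaitForTask(vm.RelocateVM_Task(spec=spec))
```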

7 Summary

The deployment of the FS8600 as a storage system for a vSphere solution is a multi-layered process consisting of network infrastructure considerations, configuration procedures and the management of multiple workloads over different protocols. It is therefore key to make a preliminary design based on the topics covered in this document in order to achieve maximum resource utilization and high availability. A design, deployment and implementation based on the information included in this guide helps maximize the solution's robustness and administration simplicity, as well as minimize ongoing management efforts and improve the time efficiency of overall operations.

A Appendix I: ESX Host Outbound Link Selection when Load Balancing Based on IP Hash

When LACP is the chosen method to load-share the network traffic between the N physical NICs of an ESX host, a single VMkernel port and N VIPs must be configured. An ESX host selects the outbound link of a connection according to the result of the following formula:

Outbound_link = (VMkernel_IP xor FS8600_VIP) % N

where N is the number of links on the team and % refers to the modulo (mod) operator. The possible values for Outbound_link are integers between 0 and N-1. For example, given 4 outbound links, the possible results are 0, 1, 2 or 3 (links 1 to 4).

To route each VMkernel_IP to FS8600_VIP connection through a different physical link, the formula needs to return a different result for each VIP. This occurs only if the least significant bits of the last octet differ across all VIPs. In a team of 2 NICs, the last bit needs to differ between the two VIPs; in a team of 3 to 4 NICs, the last two bits; and in a team of 5 to 8 NICs, the last three bits. A straightforward method to achieve this is using consecutive VIPs, i.e. X.X.X.6, X.X.X.7, X.X.X.8 and so on. When consecutive VIPs are not viable, the addresses need to be chosen carefully to meet the least-significant-bits requirement. This is illustrated in the following example.

There are 2 NICs on a team assigned for NFS datastore traffic. One VMkernel port is configured with IP address vmk0 = 1.0.0.1, and there are two FS8600 VIPs: VIP0 = 1.0.0.10 and VIP1 = 1.0.0.16. The calculation of the outbound link is done as follows:

1. Convert the last octet of the addresses to hex:
   vmk0 = 1.0.0.1  => Hex(1) = 0x01
   VIP0 = 1.0.0.10 => Hex(10) = 0x0a
   VIP1 = 1.0.0.16 => Hex(16) = 0x10

2. Determine the outbound link using the formula:
   Outbound_link(vmk0_to_VIP0) = (0x01 xor 0x0a) % 2 = 1
   Outbound_link(vmk0_to_VIP1) = (0x01 xor 0x10) % 2 = 1

Given VIPs 1.0.0.10 and 1.0.0.16, there is no load sharing: all connections are routed through the same link. This is because there are two VIPs and the least significant bit of both is equal: Bin(0x0a) = 1010 and Bin(0x10) = 10000. It can be verified in the formula that incrementing or decrementing one of the VIPs by 1 would ensure each source-destination IP pair is routed through a different link.
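For quick verification when planning VIPs, the small sketch below reproduces the calculation above in Python; like the appendix example, it assumes that only the last octet differs between the addresses.

```python
# Worked check of the outbound-link formula above, using the last octet of each
# address as in the appendix example. Addresses match the example as reconstructed.
def outbound_link(vmk_ip: str, vip: str, n_links: int) -> int:
    """Return the link index (0 .. n_links-1) chosen for the vmk_ip -> vip connection."""
    last_octet = lambda addr: int(addr.split(".")[-1])
    return (last_octet(vmk_ip) ^ last_octet(vip)) % n_links

vmk0 = "1.0.0.1"
print([outbound_link(vmk0, vip, 2) for vip in ["1.0.0.10", "1.0.0.16"]])  # [1, 1] -> no load sharing
print([outbound_link(vmk0, vip, 2) for vip in ["1.0.0.10", "1.0.0.11"]])  # [1, 0] -> consecutive VIPs use both links
```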