HUAWEI SAN Storage Host Connectivity Guide for VMware ESXi Servers


Technical White Paper
HUAWEI SAN Storage Host Connectivity Guide for VMware ESXi Servers
OceanStor Storage, VMware
Huawei Technologies Co., Ltd.

Copyright Huawei Technologies Co., Ltd. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd.

Trademarks and Permissions
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd. All other trademarks and trade names mentioned in this document are the property of their respective holders.

Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the customer. All or part of the products, services and features described in this document may not be within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information, and recommendations in this document are provided "AS IS" without warranties, guarantees or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure accuracy of the contents, but all statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied.

Huawei Technologies Co., Ltd.
Address: Huawei Industrial Base, Bantian, Longgang, Shenzhen, People's Republic of China
Website:

About This Document

Overview
This document details the configuration methods and precautions for connecting Huawei SAN storage devices to VMware ESXi hosts.

Intended Audience
This document is intended for:
- Huawei technical support engineers
- Technical engineers of Huawei's partners

Conventions

Symbol Conventions
The symbols that may be found in this document are defined as follows:
- Indicates a hazard with a high level of risk which, if not avoided, will result in death or serious injury.
- Indicates a hazard with a medium or low level of risk which, if not avoided, could result in minor or moderate injury.
- Indicates a potentially hazardous situation which, if not avoided, could result in equipment damage, data loss, performance degradation, or unexpected results.
- Indicates a tip that may help you solve a problem or save time.
- Provides additional information to emphasize or supplement important points of the main text.

General Conventions
- Times New Roman: Normal paragraphs are in Times New Roman.
- Boldface: Names of files, directories, folders, and users are in boldface. For example, log in as user root.
- Italic: Book titles are in italics.
- Courier New: Examples of information displayed on the screen are in Courier New.

Command Conventions
- Boldface: The keywords of a command line are in boldface.
- Italic: Command arguments are in italics.

Contents

About This Document
1 Introduction to VMware ESXi
  1.1 VMware Infrastructure
  1.2 File Systems in VMware
  1.3 VMware RDM
  1.4 VMware Cluster
  1.5 Specifications
2 Network Planning
  2.1 Non-HyperMetro Networking
    Fibre Channel Networking Diagram
  2.2 HyperMetro Networking
    Fibre Channel Networking Diagram
3 Preparations Before Configuration (on a Host)
  3.1 HBA Identification
  3.2 HBA Information
    Versions Earlier than VMware ESXi 5.5
    VMware ESXi 5.5 and Later Versions
4 Preparations Before Configuration (on a Storage System)
5 Configuring Switches
  5.1 Fibre Channel Switch
    Querying the Switch Model and Version
    Configuring Zones
    Precautions
  5.2 Ethernet Switch
    Configuring VLANs
    Binding Ports
6 Establishing Fibre Channel Connections
  6.1 Checking Topology Modes
    OceanStor 18000/T V2/V3/Dorado V3 Series Storage
    OceanStor T V1 Storage System
  6.2 Adding Initiators
  6.3 Establishing Connections
7 Establishing iSCSI Connections
  7.1 Host Configurations
    Configuring Service IP Addresses
    Configuring Host Initiators
    Configuring CHAP Authentication
  7.2 HUAWEI Storage System Configuration
    OceanStor 18000/T V2/V3/Dorado V3 Enterprise Storage System
    OceanStor T V1 Storage System
8 Mapping and Using LUNs
  8.1 Mapping LUNs to a Host
    OceanStor 18000/T V2/V3/Dorado V3 Enterprise Storage System
    OceanStor T V1 Storage System
  8.2 Scanning for LUNs on a Host
  8.3 Using the Mapped LUNs
    Raw Device Mapping (RDM)
    Creating Datastores
    Creating Virtual Disks
9 VMware NMP-based Multipathing Management
  9.1 Overview
    VMware PSA Overview
    VMware NMP
  9.2 VMware Path Selection Policy
    Functions and Features
    VMware NMP Path Selection Policy
10 VMware NMP Policy Configuration
  10.1 Introduction to ALUA
  10.2 Recommended NMP Configuration for OceanStor V3 Series
    HyperMetro Working Modes
    Working Principles and Failover
    Initiator Mode Introduction and Configuration
    Recommended VMware NMP Configuration
    Configuring the ALUA Mode
    Checking VMware NMP
  10.3 Recommended NMP Configuration for OceanStor Dorado V3 Series
    Recommended VMware NMP Configuration
    Configuring the ALUA Mode
    Configuring the AA Mode
    Manually Modifying Path Rules
  10.4 Recommended NMP Configuration for Old-Version HUAWEI Storage
    Recommended VMware NMP Configuration
    Configuring the ALUA Mode
    Configuring the AA Mode
    Manually Modifying Path Rules
  10.5 Querying and Modifying the Path Selection Policy
    Querying the Path Policy of a Single LUN
    Modifying the Path Policy for a Single LUN
11 FAQs
  11.1 VMware APD and PDL
  11.2 How Can I Select a Fixed Preferred Path for a Storage Device with Active-Active Controllers?
  11.3 How Can I Determine Which Controller a Path Is Connected to?
  11.4 Differences Between iSCSI Multi-path Networks with Single and Multiple HBAs
    iSCSI Multi-Path Network with a Single HBA
    iSCSI Multi-path Network with Multiple HBAs
Common Commands
Acronyms and Abbreviations

Figures

Figure 1-1 VMware Infrastructure virtual data center
Figure 1-2 Storage architecture in VMware Infrastructure
Figure 1-3 VMFS architecture
Figure 1-4 Structure of a VMFS volume
Figure 1-5 RDM mechanism
Figure 2-1 Fibre Channel multi-path direct-connection networking diagram (dual-controller)
Figure 2-2 Fibre Channel multi-path direct-connection networking diagram (four-controller)
Figure 2-3 Fibre Channel multi-path switch-based networking diagram (dual-controller)
Figure 2-4 Fibre Channel multi-path switch-based networking diagram (four-controller)
Figure 2-5 Fibre Channel multi-path switch-based networking diagram (dual-controller)
Figure 2-6 Fibre Channel multi-path switch-based networking diagram (four-controller)
Figure 3-1 Viewing the HBA information
Figure 5-1 Switch information
Figure 5-2 Switch port indicator status
Figure 5-3 Zone tab page
Figure 5-4 Zone configuration
Figure 5-5 Zone Config tab page
Figure 5-6 Name Server page
Figure 6-1 Fibre Channel port details
Figure 6-2 Fibre Channel port details
Figure 7-1 Adding VMkernel
Figure 7-2 Creating a vSphere standard switch
Figure 7-3 Specifying the network label
Figure 7-4 Entering the iSCSI service IP address
Figure 7-5 Information summary
Figure 7-6 iSCSI multi-path network with dual adapters
Figure 7-7 Home page on vSphere Web Client
Figure 7-8 Navigating to the Networking tab page
Figure 7-9 Adding VMkernel adapters
Figure 7-10 Selecting the connection type
Figure 7-11 Selecting the target device
Figure 7-12 Adding a physical adapter
Figure 7-13 Setting port properties
Figure 7-14 Specifying the service IP address
Figure 7-15 Checking the settings
Figure 7-16 Information summary
Figure 7-17 Multi-path networking
Figure 7-18 Adding storage adapters
Figure 7-19 Adding iSCSI initiators
Figure 7-20 iSCSI Software Adapter
Figure 7-21 Initiator properties
Figure 7-22 iSCSI initiator properties
Figure 7-23 Binding with a new VMkernel network adapter
Figure 7-24 Initiator properties after virtual network binding
Figure 7-25 Adding a send target server
Figure 7-26 Checking the storage adapter
Figure 7-27 Adding a storage adapter
Figure 7-28 Checking the created iSCSI adapter
Figure 7-29 Initiator properties
Figure 7-30 Binding a virtual network to the initiator
Figure 7-31 After VMkernel port binding
Figure 7-32 Dynamic discovery
Figure 7-33 Adding a target
Figure 7-34 General tab page
Figure 7-35 CHAP credentials dialog box
Figure 7-36 Editing initiator authentication parameter settings
Figure 7-37 Selecting an authentication method
Figure 7-38 Setting CHAP authentication parameters
Figure 7-39 Modifying IPv4 addresses
Figure 7-40 Initiator CHAP configuration
Figure 7-41 CHAP Configuration dialog box
Figure 7-42 Create CHAP dialog box
Figure 7-43 Assigning the CHAP account to the initiator
Figure 7-44 Setting CHAP status
Figure 7-45 Enabling CHAP
Figure 7-46 Initiator status after CHAP is enabled
Figure 8-1 Scanning for the mapped LUNs
Figure 8-2 Scanning for the mapped LUNs (on vSphere Web Client)
Figure 8-3 Editing host settings
Figure 8-4 Adding disks
Figure 8-5 Selecting disks
Figure 8-6 Selecting a target LUN
Figure 8-7 Selecting a datastore
Figure 8-8 Selecting a compatibility mode
Figure 8-9 Selecting a virtual device node
Figure 8-10 Confirming the information about the disk to be added
Figure 8-11 Adding raw disk mappings
Figure 8-12 Editing host settings
Figure 8-13 Adding RDM disks
Figure 8-14 Selecting disks to add
Figure 8-15 Completing the disk addition operation
Figure 8-16 Checking whether the disk is successfully added
Figure 8-17 Adding storage
Figure 8-18 Selecting a storage type
Figure 8-19 Selecting a disk/LUN
Figure 8-20 Selecting a file system version
Figure 8-21 Viewing the current disk layout
Figure 8-22 Entering a datastore name
Figure 8-23 Specifying a capacity
Figure 8-24 Confirming the disk layout
Figure 8-25 Checking the datastores
Figure 8-26 Creating the datastore type
Figure 8-27 Specifying the datastore name and selecting disks
Figure 8-28 Selecting the file system version
Figure 8-29 Configuring the partition layout
Figure 8-30 Verifying the datastore configurations
Figure 8-31 Checking for the datastore
Figure 8-32 Editing VM settings
Figure 8-33 Adding disks
Figure 8-34 Creating a new virtual disk
Figure 8-35 Specifying the disk capacity
Figure 8-36 Selecting a datastore
Figure 8-37 Selecting a virtual device node
Figure 8-38 Viewing virtual disk information
Figure 8-39 Editing the host settings
Figure 8-40 Selecting to add a hard disk
Figure 8-41 Checking the information of the added disk
Figure 8-42 Modifying disk properties
Figure 9-1 VMware PSA
Figure 9-2 VMkernel architecture
Figure 10-1 Going to the host configuration page
Figure 10-2 Selecting an initiator whose information you want to modify
Figure 10-3 Modifying initiator information
Figure 10-4 Querying the special mode type
Figure 10-5 Enabling the Shell and SSH services
Figure 10-6 Cluster configuration
Figure 10-7 Cluster configuration
Figure 10-8 Command output
Figure 10-9 VMware path information
Figure Enabling the Shell and SSH services
Figure Setting the preferred path for a storage device
Figure Querying the storage array type
Figure Modifying the storage path policy
Figure Enabling ALUA for T series V100R005/Dorado2100/Dorado5100/Dorado2100 G
Figure Enabling ALUA for T series V200R002/18000 series/V3 series/18000 V3 series
Figure 11-1 Port information about a path
Figure 11-2 iSCSI network with a single HBA
Figure 11-3 Network A with multiple HBAs
Figure 11-4 Port mapping of network A with multiple HBAs
Figure 11-5 Network B with multiple HBAs
Figure 11-6 Port mapping of network B with multiple HBAs
Figure 12-1 Selecting a VMware version

Tables

Table 1-1 Major specifications of VMware
Table 2-1 Networking modes
Table 5-1 Mapping between switch types and names
Table 5-2 Comparison of link aggregation modes
Table 9-1 Path selection policies
Table 10-1 Configuration methods and application scenarios of the typical working modes
Table 10-2 Initiator parameter description
Table 10-3 Recommended VMware NMP configuration for OceanStor V3 non-HyperMetro configuration
Table 10-4 Recommended VMware NMP configuration for OceanStor V3 HyperMetro configuration
Table 10-5 Configuration on non-HyperMetro OceanStor V3 storage when interconnected with VMware ESXi
Table 10-6 Configuration on HyperMetro OceanStor V3 storage when interconnected with VMware ESXi
Table 10-7 Huawei storage vendor and model information
Table 10-8 Recommended NMP configurations when different ESXi versions interconnect with OceanStor Dorado V3
Table 10-9 AA mode configuration
Table Recommended NMP configurations when different ESX/ESXi versions interconnect with HUAWEI old-version storage

1 Introduction to VMware ESXi

1.1 VMware Infrastructure
Today's x86 computers are typically designed to run a single operating system or application, so most of them are underutilized. With virtualization technologies, a physical machine can host multiple virtual machines (VMs), and its resources can be shared among multiple environments. A physical machine can host VMs running different operating systems and applications, improving x86 hardware utilization.
VMware virtualization adds a thin software layer on the computer hardware or in the host operating system. This software layer includes a VM monitor that allocates hardware resources dynamically and transparently, so that each operating system or application can access the resources it needs when it needs them. As an outstanding software solution for x86 virtualization, VMware enables users to manage their virtual environments effectively and easily.
Figure 1-1 shows the VMware Infrastructure virtual data center, which consists of x86 computing servers, storage networks, storage arrays, IP networks, management servers, and desktop clients.

Figure 1-1 VMware Infrastructure virtual data center
Figure 1-2 provides an example of storage architecture in VMware Infrastructure. A Virtual Machine File System (VMFS) volume contains one or more LUNs belonging to different storage arrays. Multiple ESX servers share one VMFS volume and create virtual disks on the VMFS volume for VMs.

Figure 1-2 Storage architecture in VMware Infrastructure
VMware uses VMFS to centrally manage storage systems. VMFS is a shared cluster file system designed for VMs. It employs a distributed lock mechanism to coordinate access to disks, ensuring that a VM is accessed by only one physical host at a time. Raw Device Mapping (RDM) acts as the agent for raw devices on a VMFS volume.

1.2 File Systems in VMware

Features of VMFS
VMware VMFS is a high-performance cluster file system that allows multiple systems to concurrently access shared storage, laying a solid foundation for the management of VMware clusters and dynamic resources. Its features include:
- Automated maintenance of the directory structure
- File lock mechanism
- Distributed logical volume management
- Dynamic capacity expansion
- Cluster file system
- Journal logging
- Optimized VM data storage

Advantages of VMFS
- Improved storage utilization
- Simplified storage management
- ESX server clusters with enhanced performance and reliability

Architecture of VMFS
In the VMFS architecture shown in Figure 1-3, a LUN is formatted into a VMFS file system, whose storage space is shared by three ESX servers, each carrying two VMs. Each VM has a Virtual Machine Disk (VMDK) file that is stored in a directory (named after the VM) automatically generated by VMFS. VMFS adds a lock for each VMDK to prevent a VMDK from being accessed by two VMs at the same time.
Figure 1-3 VMFS architecture

Structure of a VMFS Volume
Figure 1-4 shows the structure of a VMFS volume. A VMFS volume consists of one or more partitions that are arranged in line. A partition can be used only after the preceding partition has been used up. The identity information about the VMFS volume is recorded in the first partition.

Figure 1-4 Structure of a VMFS volume
VMFS divides each extent into multiple blocks, each of which is then divided into smaller sub-blocks. This block-based management is well suited to VMs. Files stored on VMs can be categorized as large files (such as VMDK files, snapshots, and memory swap files) and small files (such as log files, configuration files, and VM BIOS files). Large and small blocks are allocated to large and small files respectively. In this way, storage space is used effectively and the number of fragments in the file system is minimized, improving the storage performance of VMs.
The VMFS-3 file system supports four data block sizes: 1 MB, 2 MB, 4 MB, and 8 MB. The maximum file and volume sizes supported by a VMFS-3 file system vary with the file system's block size. The VMFS-5 file system uses a fixed data block size of 1 MB. With VMFS-5, VMware ESXi 5.0/5.1 supports a maximum VMDK file size of 2 TB, and VMware ESXi 5.5/6.0/6.5 supports a maximum VMDK file size of 62 TB. The VMFS-6 file system also uses a fixed data block size of 1 MB. With VMFS-6, VMware ESXi 6.5 supports a maximum VMDK file size of 62 TB.

1.3 VMware RDM
VMware RDM enables VMs to directly access storage. As shown in Figure 1-5, an RDM disk exists as an address mapping file on the VMFS volume. This mapping file can be considered a symbolic link that maps a VM's access to the RDM disk to the underlying LUN.
Figure 1-5 RDM mechanism
RDM provides two compatibility modes, both of which support vMotion, Distributed Resource Scheduler (DRS), and High Availability (HA):
- Virtual compatibility: fully simulates VMDK files and supports snapshots.
- Physical compatibility: directly accesses SCSI devices and does not support snapshots.
RDMs are applicable in the following scenarios:
- Physical to Virtual (P2V): migrating services from a physical machine to a virtual machine.
- Virtual to Physical (V2P): migrating services from a virtual machine to a physical machine.
- Clustering physical machines and virtual machines.
The two compatibility modes map to different vmkfstools options, as shown in the sketch below.
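A minimal command-line sketch of creating RDM mapping files from the ESXi shell. The device identifier naa.600601... and the datastore/VM paths are illustrative placeholders, not values from this guide:

~ # vmkfstools -r /vmfs/devices/disks/naa.600601xxxxxxxxxx /vmfs/volumes/datastore1/vm1/vm1_rdm.vmdk    (virtual compatibility mode, supports snapshots)
~ # vmkfstools -z /vmfs/devices/disks/naa.600601xxxxxxxxxx /vmfs/volumes/datastore1/vm1/vm1_rdmp.vmdk   (physical compatibility mode, passes SCSI commands through)

The resulting .vmdk mapping file is then attached to the VM as an existing disk.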

1.4 VMware Cluster
A VMware cluster consists of a group of ESX servers that jointly manage VMs, dynamically assign hardware resources, and automatically allocate VMs. With VMware Cluster, loads on VMs can be dynamically transferred among ESX hosts. VMware Cluster is the foundation for Fault Tolerance (FT), High Availability (HA), Distributed Resource Scheduler (DRS), and Storage DRS.

1.5 Specifications
VMware specifications vary with VMware versions. Table 1-1 lists major VMware specifications (where a specification differs across ESX/ESXi versions, the values are listed per version).
Table 1-1 Major specifications of VMware
iSCSI
- LUNs per server

Physical
- Paths to a LUN
- Number of total paths on a server
Fibre Channel
- LUNs per host
- LUN size: 512 B-2 TB / 512 B-2 TB / - / 64 TB / 64 TB / 64 TB / 64 TB
- LUN ID
- Number of paths to a LUN
- Number of total paths on a server
- Number of HBAs of any type
- HBA ports
FCoE / NFS
- Targets per HBA
- Software FCoE adapters
- Default NFS datastores
- NFS datastores: 64 (requires changes to advanced settings)
VMFS
- Raw device mapping (RDM) size: 2 TB-512 B / 2 TB-512 B
- Volume size: 64 TB-16 KB / 64 TB / 64 TB / 64 TB
- Volumes per host
VMFS-2
- Files per volume (64 x additional extents)
- Block size: 256 MB

VMFS-3
- VMFS-3 volumes configured per host
- Files per volume: ~30,720 (2)
- Block size: 8 MB
- Volume size: 64 TB (3)
VMFS-5
- Volume size: 64 TB (4)
- Block size: 1 MB
- Files per volume: ~
VMFS-6
- Volume size: TB
- Block size: MB
- Files per volume: ~
Notes:
1. Local disks are included.
2. The file quantity is sufficient to support the maximum number of VMs.
3. If the block size supported by the file system is 1 MB, the maximum volume size is 50 TB.
4. The volume size is also subject to RAID controllers or adapter drivers.
Table 1-1 lists only part of the specifications. For more information, see:
- VMware vSphere Configuration Maximums (4.0)
- VMware vSphere Configuration Maximums (4.1)
- VMware vSphere Configuration Maximums (5.0)
- VMware vSphere Configuration Maximums (5.1)
- VMware vSphere Configuration Maximums (5.5)
- VMware vSphere Configuration Maximums (6.0)
- VMware vSphere Configuration Maximums (6.5)
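The VMFS version and block size of an existing datastore (see section 1.2) can be checked from the ESXi shell. A minimal sketch, assuming a datastore named datastore1; vmkfstools -P queries the file system and -h prints human-readable sizes:

~ # vmkfstools -P -h /vmfs/volumes/datastore1

The output reports the VMFS version, capacity, free space, and file block size of the volume.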

2 Network Planning

VMware hosts and storage systems can be networked based on different criteria. Table 2-1 describes the typical networking modes.
Table 2-1 Networking modes
- Interface module type: Fibre Channel network/iSCSI network
- Whether switches are used: direct-connection network (no switches are used)/switch-based network (switches are used)
- Whether multiple paths exist: single-path network/multi-path network
- Whether HyperMetro is configured: HyperMetro networking/non-HyperMetro networking
The Fibre Channel network is the most widely used network for VMware ESXi. To ensure service data security, both direct-connection networks and switch-based networks are multi-path networks. The following details commonly used Fibre Channel and iSCSI networks.

2.1 Non-HyperMetro Networking

Fibre Channel Networking Diagram

Multi-Path Direct-Connection Networking
Huawei provides dual-controller and multi-controller storage systems, whose network diagrams differ. The following describes the network diagrams of dual-controller and multi-controller storage systems respectively.
Dual-Controller
The following uses HUAWEI OceanStor S5500T as an example to explain how to connect a VMware host to a storage system over a Fibre Channel multi-path direct-connection network, as shown in Figure 2-1.

Figure 2-1 Fibre Channel multi-path direct-connection networking diagram (dual-controller)
On this network, both controllers of the storage system are connected to the host's HBAs through optical fibers.
Multi-Controller
The following uses HUAWEI OceanStor (four-controller) as an example to explain how to connect a VMware host to a storage system over a Fibre Channel multi-path direct-connection network, as shown in Figure 2-2.
Figure 2-2 Fibre Channel multi-path direct-connection networking diagram (four-controller)
On this network, the four controllers of the storage system are connected to the host's HBAs through optical fibers.

Multi-Path Switch-Based Networking
Huawei provides dual-controller and multi-controller storage systems, whose network diagrams differ. The following describes the network diagrams of dual-controller and multi-controller storage systems respectively.
Dual-Controller
The following uses HUAWEI OceanStor S5500T as an example to explain how to connect a VMware host to a storage system over a Fibre Channel multi-path switch-based network, as shown in Figure 2-3.

Figure 2-3 Fibre Channel multi-path switch-based networking diagram (dual-controller)
On this network, the storage system is connected to the host via two switches. Both controllers of the storage system are connected to the switches through optical fibers, and both switches are connected to the host through optical fibers. To ensure connectivity between the host and the storage system, each zone contains only one storage port and its corresponding host port.
Multi-Controller
The following uses HUAWEI OceanStor (four-controller) as an example to explain how to connect a VMware host to a storage system over a Fibre Channel multi-path switch-based network, as shown in Figure 2-4.
Figure 2-4 Fibre Channel multi-path switch-based networking diagram (four-controller)

On this network, the storage system is connected to the host via two switches. All controllers of the storage system are connected to the switches through optical fibers, and both switches are connected to the host through optical fibers. To ensure connectivity between the host and the storage system, each zone contains only one storage port and its corresponding host port.

2.2 HyperMetro Networking
HyperMetro with the OS native multipathing function has the following networking requirements:
- Multi-path switch-based networking is used by default.
- In the switches' zone configuration, a zone may contain only one initiator and one target.
- You are advised to use dual-switch networking to prevent single points of failure.

Fibre Channel Networking Diagram

Multi-Path Switch-Based Networking
Huawei provides dual-controller and multi-controller storage systems, whose network diagrams differ. The following describes the network diagrams of dual-controller and multi-controller storage systems respectively.
Dual-Controller
The following uses HUAWEI OceanStor 6800 V3 (dual-controller) as an example to explain how to connect a VMware host to a storage system over a Fibre Channel multi-path switch-based network, as shown in Figure 2-5.

Figure 2-5 Fibre Channel multi-path switch-based networking diagram (dual-controller)
On this network, each storage system is connected to the host via two switches. The two controllers of each storage system are connected to the switches through optical fibers, and both switches are connected to the host through optical fibers. To ensure connectivity between the host and the storage system, each zone contains only one storage port and its corresponding host port. In this example, the two storage systems' controllers are interconnected through optical cables to form replication links. Alternatively, you can connect the controllers through a switch to form replication links.
Multi-Controller
The following uses HUAWEI OceanStor 6800 V3 (four-controller) as an example to explain how to connect a VMware host to a storage system over a Fibre Channel multi-path switch-based network, as shown in Figure 2-6.

Figure 2-6 Fibre Channel multi-path switch-based networking diagram (four-controller)
On this network, each storage system is connected to the host via two switches. The four controllers of each storage system are connected to the switches through optical fibers, and both switches are connected to the host through optical fibers. To ensure connectivity between the host and the storage system, each zone contains only one storage port and its corresponding host port. In this example, the two storage systems' four controllers are interconnected through optical cables to form replication links. Alternatively, you can connect the controllers through two switches to form replication links.

3 Preparations Before Configuration (on a Host)

Before connecting a host to a storage system, make sure that the host HBAs are identified and working correctly. You also need to obtain the WWNs of the HBA ports, which will be used in subsequent configuration on the storage system. This chapter details how to check the HBA status and query the WWNs of HBA ports.

3.1 HBA Identification
After an HBA is installed on a host, view information about the HBA on the host. Go to the configuration management page and choose Storage Adapters in the navigation tree. The function pane displays the hardware devices on the host, as shown in Figure 3-1.
Figure 3-1 Viewing the HBA information

3.2 HBA Information
After a host identifies a newly installed HBA, you can view the HBA's properties on the host. The method of querying HBA information varies with operating system versions. The following details how to query HBA information on versions earlier than ESXi 5.5 and on ESXi 5.5 and later.

Versions Earlier than VMware ESXi 5.5
The command for viewing HBA properties varies according to the HBA type:
- QLogic HBA:
  cat /proc/scsi/qla2xxx/n
  The command output provides information such as the HBA driver version, topology, WWN, and negotiated rate.
- Emulex HBA:
  cat /proc/scsi/lpfcxxx/n
  The command output provides information such as the HBA model and driver.
- Brocade HBA:
  cat /proc/scsi/bfaxxx/n

VMware ESXi 5.5 and Later Versions
Since VMware ESXi 5.5, the /proc/scsi/ directory contains no content. Run the following commands to query HBA information:
~ # esxcli storage core adapter list
HBA Name  Driver        Link State  UID         Description
vmhba0    ahci          link-n/a    sata.vmhba0 (0:0:31.2) Intel Corporation Patsburg 6 Port SATA AHCI Controller
vmhba1    megaraid_sas  link-n/a    unknown.vmhba1 (0:3:0.0) LSI / Symbios Logic MegaRAID SAS Fusion Controller
vmhba2    rste          link-n/a    pscsi.vmhba2 (0:4:0.0) Intel Corporation Patsburg 4-Port SATA Storage Control Unit
vmhba3    qlnativefc    link-up     fc d222d: d222c (0:129:0.0) QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI Express HBA
vmhba4    qlnativefc    link-up     fc d222f: d222e (0:129:0.1) QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI Express HBA
~ #
~ # esxcfg-module -i qlnativefc
esxcfg-module module information

input file: /usr/lib/vmware/vmkmod/qlnativefc
License: GPLv2
Version: vmw
Name-space:
Required name-spaces: com.vmware.vmkapi@v2_2_0_0
Parameters:
ql2xallocfwdump: int
  Option to enable allocation of memory for a firmware dump during HBA initialization. Memory allocation requirements vary by ISP type. Default is 1 - allocate memory.
ql2xattemptdumponpanic: int
  Attempt fw dump for each function on PSOD. Default is 0 - don't attempt fw dump.
ql2xbypass_log_throttle: int
  Option to bypass log throttling. Default is 0 - throttling enabled. 1 - log all errors.
ql2xcmdtimeout: int
  Timeout value in seconds for scsi command. Default is 20.
The preceding output provides information such as the HBA model, WWN, and driver. You can run the following command to obtain more HBA details:
# /usr/lib/vmware/vmkmgmt_keyval/vmkmgmt_keyval -a
The command output provides more detailed HBA information. For more information, and for details about how to modify the HBA queue depth, see the corresponding VMware Knowledge Base articles.
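On ESXi 5.5 and later, the WWNN and WWPN of each Fibre Channel adapter can also be listed directly. A minimal sketch; esxcli storage san fc list is a standard ESXi command, and the field values shown are illustrative:

~ # esxcli storage san fc list
   Adapter: vmhba3
   Port ID: 0A0100
   Node Name: 20:00:00:24:ff:xx:xx:xx
   Port Name: 21:00:00:24:ff:xx:xx:xx
   Speed: 8 Gbps
   Port Type: NPort
   Port State: ONLINE

The Port Name value is the WWPN to register on the storage system.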

4 Preparations Before Configuration (on a Storage System)

Make sure that the storage pools, RAID groups, LUNs, and hosts are correctly created on the storage systems. These configurations are common and therefore not detailed here.

5 Configuring Switches

VMware ESXi hosts and storage systems can be connected over a Fibre Channel switch-based network or an iSCSI switch-based network. A Fibre Channel switch-based network uses Fibre Channel switches, and an iSCSI network uses Ethernet switches. This chapter describes how to configure a Fibre Channel switch and an Ethernet switch respectively.

5.1 Fibre Channel Switch
The commonly used Fibre Channel switches are mainly from Brocade, Cisco, and QLogic. The following uses a Brocade switch as an example to explain how to configure switches.

Querying the Switch Model and Version
Perform the following steps to query the switch model and version:
Step 1 Log in to the Brocade switch from a web page.
On the web page, enter the IP address of the Brocade switch. The Web Tools switch login dialog box is displayed. Enter the account and password. The default account and password are admin and password. The switch management page is displayed.
CAUTION
Web Tools works correctly only when Java is installed on the host. Java 1.6 or later is recommended.
Step 2 View the switch information.
On the switch management page that is displayed, click Switch Information. The switch information is displayed, as shown in Figure 5-1.

Figure 5-1 Switch information
Note the following parameters:
- Fabric OS version: indicates the switch version. The interoperability between switches and storage systems varies with the switch version. Only switches of authenticated versions can interconnect correctly with storage systems.
- Type: a decimal number consisting of an integer and a decimal fraction. The integer indicates the switch model and the decimal fraction indicates the switch template version. You only need to pay attention to the switch model. Table 5-1 describes the switch model mapping.
Table 5-1 Mapping between switch types and names (Switch Type: Switch Name)
1: Brocade 1000 Switch
58: Brocade 5000 Switch
2, 6: Brocade 2800 Switch
61: Brocade 4424 Embedded Switch
3: Brocade 2100, 2400 Switches
62: Brocade DCX Backbone
4: Brocade 20x0, 2010, 2040, 2050 Switches
5: Brocade 22x0, 2210, 2240, 2250 Switches
64: Brocade 5300 Switch
66: Brocade 5100 Switch
7: Brocade 2000 Switch
67: Brocade Encryption Switch

9: Brocade 3800 Switch
69: Brocade 5410 Blade
10: Brocade Director
70: Brocade 5410 Embedded Switch
12: Brocade 3900 Switch
71: Brocade 300 Switch
16: Brocade 3200 Switch
72: Brocade 5480 Embedded Switch
17: Brocade 3800VL
73: Brocade 5470 Embedded Switch
18: Brocade 3000 Switch
75: Brocade M5424 Embedded Switch
21: Brocade Director
76: Brocade 8000 Switch
22: Brocade 3016 Switch
77: Brocade DCX-4S Backbone
26: Brocade 3850 Switch
83: Brocade 7800 Extension Switch
27: Brocade 3250 Switch
86: Brocade 5450 Embedded Switch
29: Brocade 4012 Embedded Switch
87: Brocade 5460 Embedded Switch
32: Brocade 4100 Switch
90: Brocade 8470 Embedded Switch
33: Brocade 3014 Switch
92: Brocade VA-40FC Switch
34: Brocade 200E Switch
95: Brocade VDX Data Center Switch
37: Brocade 4020 Embedded Switch
96: Brocade VDX Data Center Switch
38: Brocade 7420 SAN Router
97: Brocade VDX Data Center Switch
40: Fibre Channel Routing (FCR) Front Domain
98: Brocade VDX Data Center Switch
41: Fibre Channel Routing (FCR) Xlate Domain
108: Dell M8428-k FCoE Embedded Switch
42: Brocade Director
109: Brocade 6510 Switch

43: Brocade 4024 Embedded Switch
116: Brocade VDX 6710 Data Center Switch
44: Brocade 4900 Switch
117: Brocade 6547 Embedded Switch
45: Brocade 4016 Embedded Switch
118: Brocade 6505 Switch
46: Brocade 7500 Switch
120: Brocade DCX Backbone
51: Brocade 4018 Embedded Switch
121: Brocade DCX Backbone
55.2: Brocade 7600 Switch
- Ethernet IPv4: indicates the switch IP address.
- Effective Configuration: indicates the currently effective configuration. This parameter is important and is related to zone configurations. In this example, the currently effective configuration is ss.
----End
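If CLI access to the switch is available, the same information can be obtained over SSH or telnet. A minimal sketch using standard Brocade Fabric OS commands:

switch:admin> version       (displays the Fabric OS firmware version)
switch:admin> switchshow    (displays the switch name, type, state, and per-port status)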

Configuring Zones
Zone configuration is important for Fibre Channel switches. Perform the following steps to configure switch zones:
Step 1 Log in to the Brocade switch from a web page.
This step is the same as that in section "Querying the Switch Model and Version."
Step 2 Check the switch port status.
Normally, the switch port indicators are steady green, as shown in Figure 5-2.
Figure 5-2 Switch port indicator status
If the port indicators are abnormal, check the topology mode and rate. Proceed with the next step only after all indicators are normal.
Step 3 Go to the Zone Admin page.
In the navigation tree of Web Tools, choose Task > Manage > Zone Admin. You can also choose Manage > Zone Admin in the navigation bar.
Step 4 Check whether the switch identifies hosts and storage systems.
On the Zone Admin page, click the Zone tab. In Ports & Attached Devices, check whether all related ports are identified, as shown in Figure 5-3.
Figure 5-3 Zone tab page
The preceding figure shows that ports 1,8 and 1,9 in use are correctly identified by the switch.
Step 5 Create a zone.
On the Zone tab page, click New Zone to create a zone and name it zone_8_9. Select ports 1,8 and 1,9 and click Add Member to add them to the new zone, as shown in Figure 5-4.
Figure 5-4 Zone configuration

CAUTION
To ensure that data is transferred separately, each zone must contain only one initiator and one target.
Step 6 Add the new zone to the configuration file and activate it.
On the Zone Admin page, click the Zone Config tab. In the Name drop-down list, choose the currently effective configuration ss. In Member Selection List, select zone_8_9 and click Add Member to add it to the configuration file. Click Save Config to save the configuration and click Enable Config to make it effective. Figure 5-5 shows the Zone Config page.
Figure 5-5 Zone Config tab page
Step 7 Verify that the configuration takes effect.
In the navigation tree of Web Tools, choose Task > Monitor > Name Server to go to the Name Server page. You can also choose Monitor > Name Server in the navigation bar. Figure 5-6 shows the Name Server page.

Figure 5-6 Name Server page
The preceding figure shows that ports 8 and 9 are members of zone_8_9, which is now effective. An effective zone is marked by an asterisk (*).
----End
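For reference, the same zoning can be performed from the Fabric OS CLI. A minimal sketch using standard commands, with the zone and configuration names from this example:

switch:admin> zonecreate "zone_8_9", "1,8; 1,9"    (create the zone containing ports 1,8 and 1,9)
switch:admin> cfgadd "ss", "zone_8_9"              (add the zone to the effective configuration ss)
switch:admin> cfgsave                              (save the zoning configuration)
switch:admin> cfgenable "ss"                       (activate the configuration)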

Precautions
Note the following when connecting a Brocade switch to a storage system at a rate of 8 Gbit/s:
- The topology mode of the storage system must be set to switch.
- The fill word of the ports through which the switch is connected to the storage system must be set to 0. To configure this parameter, run the portcfgfillword <port number> 0 command on the switch.
- When the switch is connected to an HP VC 8Gb 20-port FC module or an HP VC FlexFabric 10Gb/24-port module, change the switch configuration. For details, see the related HP documentation.

5.2 Ethernet Switch
This section describes how to configure Ethernet switches, including configuring VLANs and binding ports.

Configuring VLANs
On an Ethernet network to which many hosts are connected, a large number of broadcast packets are generated during host communication. Broadcast packets sent from one host are received by all other hosts on the network, consuming extra bandwidth. Moreover, all hosts on the network can access each other, resulting in data security risks. To save bandwidth and prevent security risks, hosts on an Ethernet network are divided into multiple logical groups. Each logical group is a VLAN.
The following uses a HUAWEI Quidway 2700 Ethernet switch as an example to explain how to configure VLANs. In the following example, two VLANs (VLAN 1000 and VLAN 2000) are created. VLAN 1000 contains ports GE 1/0/1 to GE 1/0/16. VLAN 2000 contains ports GE 1/0/20 to GE 1/0/24.
Step 1 Go to the system view.
<Quidway>system-view
System View: return to User View with Ctrl+Z.
Step 2 Create VLAN 1000 and add ports to it.
[Quidway]VLAN 1000
[Quidway-vlan1000]port GigabitEthernet 1/0/1 to GigabitEthernet 1/0/16
Step 3 Configure the IP address of VLAN 1000.
[Quidway-vlan1000]interface VLAN 1000
[Quidway-Vlan-interface1000]ip address
Step 4 Create VLAN 2000, add ports, and configure the IP address.
[Quidway]VLAN 2000
[Quidway-vlan2000]port GigabitEthernet 1/0/20 to GigabitEthernet 1/0/24
[Quidway-vlan2000]interface VLAN 2000
[Quidway-Vlan-interface2000]ip address
----End
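The VLAN assignments can then be verified on the switch. A minimal sketch, assuming the Quidway command set used above:

[Quidway]display vlan 1000    (shows the ports assigned to VLAN 1000)
[Quidway]display vlan 2000    (shows the ports assigned to VLAN 2000)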

Binding Ports
When storage systems and hosts are connected in point-to-point mode, the existing bandwidth may be insufficient for storage data transmission. Moreover, devices cannot be redundantly connected in point-to-point mode. To address these problems, ports are bound (link aggregation). Port binding improves bandwidth and balances load among multiple links.

Link Aggregation Modes
Three Ethernet link aggregation modes are available:
- Manual aggregation: Manually run a command to add ports to an aggregation group. Ports added to the aggregation group must have the same link type.
- Static aggregation: Manually run a command to add ports to an aggregation group. Ports added to the aggregation group must have the same link type and LACP enabled.
- Dynamic aggregation: The protocol dynamically adds ports to an aggregation group. Ports added in this way must have LACP enabled and the same speed, duplex mode, and link type.
Table 5-2 compares the three link aggregation modes.
Table 5-2 Comparison of link aggregation modes (Packet Exchange / Port Detection / CPU Usage)
- Manual aggregation: No / No / Low
- Static aggregation: Yes / Yes / High
- Dynamic aggregation: Yes / Yes / High

Configuration
HUAWEI OceanStor storage devices support 802.3ad link aggregation (dynamic aggregation). In this link aggregation mode, multiple network ports are in an active aggregation group and work in duplex mode and at the same speed. After binding iSCSI host ports on a storage device, enable aggregation for their peer ports on the switch. Otherwise, links are unavailable between the storage device and the switch.
This section uses switch ports GE 1/0/1 and GE 1/0/2 and iSCSI host ports P2 and P3 as examples to explain how to bind ports. You can adjust related parameters based on site requirements.
Bind the iSCSI host ports:
Step 1 Log in to DeviceManager and go to the page for binding ports.
In the DeviceManager navigation tree, choose Provisioning > Ports > Ethernet Ports.
Step 2 Bind the ports.
Select the ports that you want to bind and choose Bind Ports > Bind in the menu bar. In this example, the ports to be bound are P2 and P3. The Bind iSCSI Port dialog box is displayed. In Bond name, enter a name for the port bond and click OK. The Warning dialog box is displayed. Select I have read the warning message carefully and click OK. The Information dialog box is displayed, indicating that the operation succeeded. Click OK.
After the storage system ports are bound, configure link aggregation on the switch. Run the following commands on the switch:
<Quidway>system-view
System View: return to User View with Ctrl+Z.
[Quidway-Switch]interface GigabitEthernet 1/0/1
[Quidway-Switch-GigabitEthernet1/0/1]lacp enable
LACP is already enabled on the port!
[Quidway-Switch-GigabitEthernet1/0/1]quit
[Quidway-Switch]interface GigabitEthernet 1/0/2

[Quidway-Switch-GigabitEthernet1/0/2]lacp enable
LACP is already enabled on the port!
[Quidway-Switch-GigabitEthernet1/0/2]quit
After the commands are executed, LACP is enabled for ports GE 1/0/1 and GE 1/0/2. The ports can then be automatically detected and added to an aggregation group.
----End

6 Establishing Fibre Channel Connections

After connecting a host to a storage system, check the topology modes of the host and the storage system. Fibre Channel connections are established between the host and the storage system after the host initiators are identified by the storage system. The following describes how to check topology modes and add initiators.

6.1 Checking Topology Modes
On direct-connection networks, HBAs support specific topology modes. The topology mode of a storage system must be consistent with that supported by the host HBAs. You can use the DeviceManager storage management software to manually change the topology mode of a storage system to one supported by the host HBAs. If the storage ports connected to the host HBAs are adaptive, there is no need to manually change the storage system topology mode.
The method for checking topology modes varies with storage systems. The following describes how to check the topology mode of the OceanStor T series storage system and the OceanStor series enterprise storage system.

OceanStor 18000/T V2/V3/Dorado V3 Series Storage
In the DeviceManager navigation tree, choose System. Then click the device view icon in the upper right corner. Choose Controller Enclosure CTE0 > Controller > Interface Module > FC Port and click the port whose details you want to view, as shown in Figure 6-1. In the navigation tree, you can see controller A and controller B, each of which has different interface modules. Choose a controller and an interface module based on actual conditions.

Figure 6-1 Fibre Channel port details
As shown in the preceding figure, the port working mode of the OceanStor 18000/T V2/V3 storage system is P2P.

OceanStor T V1 Storage System
Figure 6-2 shows the details about a Fibre Channel port.
Figure 6-2 Fibre Channel port details
As shown in the preceding figure, the topology mode of the OceanStor T series storage system is Public Loop.

6.2 Adding Initiators
This section describes how to add host HBA initiators on a storage system. Perform the following steps to add initiators:
Step 1 Check the HBA WWNs on the host.

Step 2 Check the host WWNs on the storage system and add the identified WWNs to the host.
The method for checking host WWNs varies with storage systems. The following describes how to check WWNs on the OceanStor T series storage system and the OceanStor V3/Dorado V3 storage system.
- OceanStor T series storage system (V100 and V200R001): Log in to DeviceManager and choose SAN Services > Mappings > Initiators in the navigation tree. In the function pane, check the initiator information. Ensure that the WWNs in step 1 are identified. If the WWNs are not identified, check the Fibre Channel port status. Ensure that the port status is normal.
- OceanStor 18000/V3 series/Dorado V3 series enterprise storage system: Log in to DeviceManager and choose Provisioning in the navigation tree, then choose Host. On the Initiator tab page, click Add Initiator and check that the WWNs in step 1 are found. If the WWNs are not identified, check the Fibre Channel port status. Ensure that the port status is normal.
----End

6.3 Establishing Connections
Add the WWNs (initiators) to the host and ensure that the initiator connection status is Online. If the initiator status is Online, Fibre Channel connections are established correctly. If the initiator status is Offline, check the physical links and topology mode.
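Once the initiators are online and LUNs have been mapped, connectivity can also be confirmed from the ESXi shell. A minimal sketch using standard esxcli commands:

~ # esxcli storage core adapter rescan --all    (rescan all HBAs for new devices)
~ # esxcli storage core path list               (list all paths and their states; active paths report "state: active")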

7 Establishing iSCSI Connections

Both the host and the storage system need to be configured before iSCSI connections are established between them. This chapter describes how to configure a host and a storage system before establishing iSCSI connections.

7.1 Host Configurations

Configuring Service IP Addresses

For vSphere Client
In vSphere Client, you configure service IP addresses on a VMware host by adding virtual networks. Perform the following steps:
Step 1 In vSphere Client, choose Network > Add Network.
Step 2 In the Add Network Wizard that is displayed, select VMkernel, as shown in Figure 7-1.
Figure 7-1 Adding VMkernel

Click Next.
Step 3 Select the iSCSI service network port, as shown in Figure 7-2.
Figure 7-2 Creating a vSphere standard switch
Step 4 Specify the network label, as shown in Figure 7-3.
Figure 7-3 Specifying the network label
Step 5 Enter the iSCSI service IP address, as shown in Figure 7-4.

Figure 7-4 Entering the iSCSI service IP address
Step 6 Confirm the information that you have configured, as shown in Figure 7-5.
Figure 7-5 Information summary
For a single-path network, the configuration is complete. For a multi-path network, proceed with the next step.
Step 7 Repeat steps 1 to 6 to create another virtual network. Figure 7-6 shows the completed configuration for a multi-path network.

Figure 7-6 iSCSI multi-path network with dual adapters
----End

For vSphere Web Client
In vSphere Web Client, perform the following steps to configure service IP addresses:
Step 1 In vSphere Web Client, click Hosts and Clusters on the Home page.
Figure 7-7 Home page on vSphere Web Client
Step 2 Select the target host, click the Manage tab, and then click the Networking tab.

Figure 7-8 Navigating to the Networking tab page
Step 3 Add VMkernel adapters.
Figure 7-9 Adding VMkernel adapters
Step 4 On the displayed Add Networking page, select the VMkernel Network Adapter option in 1 Select connection type.
Figure 7-10 Selecting the connection type
Step 5 In 2 Select target device, select the New standard switch option and click Next.

Figure 7-11 Selecting the target device
Step 6 In 3 Create a Standard Switch, add physical adapters and click Next.
Figure 7-12 Adding a physical adapter
Step 7 Set the port properties and click Next.

Figure 7-13 Setting port properties
Step 8 Specify the service IP address and click Next.
Figure 7-14 Specifying the service IP address
Step 9 Confirm the information and click Finish.
Figure 7-15 Checking the settings

Step 10 Confirm the information that you have configured, as shown in Figure 7-16.
Figure 7-16 Information summary
If you only need to configure one path, the configuration is complete and you do not need to perform the next step. To configure multiple paths, proceed with the next step.
Step 11 Repeat the preceding steps to create another virtual network. Figure 7-17 shows a multi-path networking configuration.
Figure 7-17 Multi-path networking
----End

Configuring Host Initiators
Host initiator configuration includes creating host initiators, binding initiators to the virtual networks created in section "Configuring Service IP Addresses", and discovering targets.

VMware ESXi 5.0
In VMware ESX 4.1 and earlier versions, the storage adapters include iSCSI adapters, and you only need to enable them. In VMware ESXi 5.0 and later versions, you need to manually add iSCSI initiators. This section uses VMware ESXi 5.0 as an example to explain how to configure host initiators.
Step 1 Choose Storage Adapters and right-click in the function pane, as shown in Figure 7-18.
Figure 7-18 Adding storage adapters
Step 2 Choose Add Software iSCSI Adapter from the shortcut menu. In the dialog box that is displayed, click OK, as shown in Figure 7-19.
Figure 7-19 Adding iSCSI initiators
The newly added iSCSI initiator is displayed, as shown in Figure 7-20.

Figure 7-20 iSCSI Software Adapter
Step 3 Right-click the newly created iSCSI initiator and choose Properties from the shortcut menu, as shown in Figure 7-21.
Figure 7-21 Initiator properties
Step 4 In the dialog box that is displayed, click the Network Configuration tab and click Add, as shown in Figure 7-22.

Figure 7-22 iSCSI initiator properties
Step 5 Select a virtual network that you created in section "Configuring Service IP Addresses" and click OK, as shown in Figure 7-23.
Figure 7-23 Binding with a new VMkernel network adapter
Figure 7-24 shows the properties of an initiator bound to the virtual network.

Figure 7-24 Initiator properties after virtual network binding
Step 6 In the dialog box for configuring initiator properties, click the Dynamic Discovery tab, click Add, and enter the target IP address (the service IP address of the storage system), as shown in Figure 7-25.
Figure 7-25 Adding a send target server
----End

vSphere Web Client
In vSphere Web Client, perform the following steps to configure the host initiator:
Step 1 In vSphere Web Client, click the Manage tab and then the Storage tab to check the storage adapters.

Figure 7-26 Checking the storage adapter
Step 2 Add a storage adapter. In the displayed Add Software iSCSI Adapter dialog box, click OK.
Figure 7-27 Adding a storage adapter
Step 3 Check the created iSCSI adapter.
Figure 7-28 Checking the created iSCSI adapter
Step 4 In the Adapter Details area, click the Network Port Binding tab and click the + icon.

Figure 7-29 Initiator properties
Step 5 Select a virtual network and bind it to the initiator.
Figure 7-30 Binding a virtual network to the initiator
After the binding, the adapter properties are as follows:
Figure 7-31 After VMkernel port binding

Step 6 In the Adapter Details area, click the Targets tab. Click the Dynamic Discovery button and click Add.
Figure 7-32 Dynamic discovery
Step 7 Enter the target's IP address (the storage system's service IP address) and click OK.
Figure 7-33 Adding a target
The host initiator configuration is complete.
----End
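The same initiator configuration can be scripted from the ESXi shell instead of the GUI. A minimal sketch using standard esxcli iscsi commands; the adapter name vmhba33, VMkernel port vmk1, and target address 192.168.100.10 are illustrative assumptions:

~ # esxcli iscsi software set --enabled=true                                          (enable the software iSCSI adapter)
~ # esxcli iscsi networkportal add -A vmhba33 -n vmk1                                 (bind the VMkernel port to the initiator)
~ # esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.100.10:3260   (add the dynamic discovery target)
~ # esxcli storage core adapter rescan -A vmhba33                                     (rescan the adapter to discover LUNs)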

Configuring CHAP Authentication
If CHAP authentication is required between a storage system and a host, perform the following steps to configure it.

vSphere Client
In vSphere Client, perform the following steps to configure CHAP authentication:
Step 1 In the dialog box for configuring iSCSI initiator properties, click the General tab and click CHAP in the lower left corner, as shown in Figure 7-34.
Figure 7-34 General tab page
Step 2 In the CHAP Credentials dialog box that is displayed, choose Use CHAP from the Select option drop-down list. Enter the CHAP user name and password configured on the storage system, as shown in Figure 7-35.
Figure 7-35 CHAP credentials dialog box

Step 3 Click OK.
----End

vSphere Web Client
In vSphere Web Client, perform the following steps to configure CHAP authentication:
Step 1 In the Adapter Details area, click the Properties tab. On the tab page, click Edit next to Authentication.
Figure 7-36 Editing initiator authentication parameter settings
Step 2 In the displayed Edit Authentication dialog box, select Use Unidirectional CHAP (for example) as the Authentication Method.
Figure 7-37 Selecting an authentication method
Enter the storage system's CHAP name and secret, and click OK.

Figure 7-38 Setting CHAP authentication parameters
----End

7.2 HUAWEI Storage System Configuration
Different versions of storage systems support different IP protocols. Specify the IP protocols for storage systems based on the actual storage system versions and application scenarios.
Observe the following principles when configuring the IP addresses of iSCSI ports on storage systems:
- The IP addresses of an iSCSI host port and a management network port must reside on different network segments.
- The IP addresses of an iSCSI host port and a maintenance network port must reside on different network segments.
- The IP addresses of an iSCSI host port and a heartbeat network port must reside on different network segments.
- The IP addresses of iSCSI host ports on the same controller must reside on different network segments. In some storage systems of the latest versions, IP addresses of iSCSI host ports on the same controller can reside on the same network segment. However, this configuration is not recommended.
- The IP address of an iSCSI host port must be able to communicate with the IP address of the host service network port to which the iSCSI host port connects, or with the iSCSI host ports of other storage devices that connect to this port.
CAUTION
Read-only users are not allowed to modify the IP address of an iSCSI host port. Modifying the IP address of an iSCSI host port will interrupt the services on the port.
The IP address configuration varies with storage systems. The following explains how to configure IPv4 addresses on the OceanStor T series storage system and the OceanStor series enterprise storage system.
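As an illustration of these principles, a hypothetical addressing plan for a dual-controller array might look as follows (all addresses are examples only):

Management network port:       192.168.10.10/24
Controller A, iSCSI port P0:   10.10.1.10/24   (host vmk1: 10.10.1.100)
Controller A, iSCSI port P1:   10.10.2.10/24   (host vmk2: 10.10.2.100)
Controller B, iSCSI port P0:   10.10.1.11/24
Controller B, iSCSI port P1:   10.10.2.11/24

Ports on the same controller use different subnets, and the management network is on its own segment, while each host VMkernel port shares a subnet with the storage ports it connects to.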

OceanStor 18000/T V2/V3/Dorado V3 Enterprise Storage System
Perform the following steps to configure the iSCSI service on the OceanStor 18000/T V2/V3/Dorado V3 enterprise storage system:
Step 1 Go to the iSCSI Host Port dialog box:
1. On the home page, click .
2. In the basic information area of the function pane, click the device icon.
3. In the middle function pane, click the cabinet whose iSCSI ports you want to view.
4. Click the controller enclosure where the desired iSCSI host ports reside. The controller enclosure view is displayed.
5. Click to switch to the rear view.
6. Click the iSCSI host port whose information you want to modify. The iSCSI Host Port dialog box is displayed.
7. Click Modify.
Step 2 Modify the iSCSI host port:
1. In IPv4 Address or IPv6 Address, enter the IP address of the iSCSI host port.
2. In Subnet Mask or Prefix, enter the subnet mask or prefix of the iSCSI host port.
3. In MTU (Byte), enter the maximum size of a data packet that can be transferred between the iSCSI host port and the host. The value is an integer ranging from 1500 to
Step 3 Confirm the iSCSI host port modification:
1. Click Apply. The Danger dialog box is displayed.
2. Carefully read the contents of the dialog box. Then select the check box next to the statement I have read the previous information and understood subsequences of the operation.
3. Click OK. The Success dialog box is displayed, indicating that the operation succeeded.
4. Click OK.
Step 4 Configure CHAP authentication:
1. Select the initiator for which you want to configure CHAP authentication. The initiator configuration dialog box is displayed.
2. Select Enable CHAP. The CHAP configuration dialog box is displayed.
3. Enter the user name and password for CHAP authentication and click OK.
CHAP authentication is now configured on the storage system.
----End
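After the service IP addresses are set on both sides, the host-to-array iSCSI network can be checked from the ESXi shell. A minimal sketch; vmk1 and the portal address 10.10.1.10 are the illustrative values from the sample plan above:

~ # vmkping -I vmk1 10.10.1.10    (pings the storage iSCSI port through the specified VMkernel interface)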

OceanStor T V1 Storage System

Perform the following steps to configure the iSCSI service on an OceanStor T V1 storage system:

Step 1 Configure the service IP address.
In the DeviceManager navigation tree, choose Device Info > Storage Unit > Ports. In the function pane, click iSCSI Host Ports. Select a port and choose IP Address > Modify IPv4 Address in the toolbar, as shown in Figure 7-39.

Figure 7-39 Modifying IPv4 addresses

In the dialog box that is displayed, enter the new IP address and subnet mask and click OK.

If CHAP authentication is not required between the storage system and the host, the initiator configuration is complete. If CHAP authentication is required, proceed with the following steps to configure CHAP authentication on the storage system.

Step 2 Configure CHAP authentication.
In the DeviceManager navigation tree, choose SAN Services > Mappings > Initiators. In the function pane, select the initiator for which you want to configure CHAP authentication and choose CHAP > CHAP Configuration in the navigation bar, as shown in Figure 7-40.

Figure 7-40 Initiator CHAP configuration

Step 3 In the CHAP Configuration dialog box that is displayed, click Create in the lower right corner, as shown in Figure 7-41.

Figure 7-41 CHAP Configuration dialog box

In the Create CHAP dialog box that is displayed, enter the CHAP user name and password, as shown in Figure 7-42.

Figure 7-42 Create CHAP dialog box

CAUTION
The CHAP user name contains 4 to 25 characters and the password contains 12 to 16 characters. The restrictions on the CHAP user name and password vary with storage systems. For details, see the help documentation of the corresponding storage system.

Step 4 Assign the CHAP user name and password to the initiator, as shown in Figure 7-43.

Figure 7-43 Assigning the CHAP account to the initiator

Step 5 Enable the CHAP account that is assigned to the host.
In the DeviceManager navigation tree, choose SAN Services > Mappings > Initiators. In the function pane, select the initiator whose CHAP account is to be enabled and choose CHAP > Status Settings in the navigation bar, as shown in Figure 7-44.

Figure 7-44 Setting CHAP status

In the Status Settings dialog box that is displayed, choose Enabled from the CHAP Status drop-down list, as shown in Figure 7-45.

Figure 7-45 Enabling CHAP

On the ISM, view the initiator status, as shown in Figure 7-46.

Figure 7-46 Initiator status after CHAP is enabled

----End
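If CHAP authentication is enabled on the storage system, the same credentials must also be configured on the ESXi initiator; otherwise the host cannot log in to the target. A minimal ESXi Shell sketch follows; the adapter name and credentials are placeholders, not values from this guide.

# Configure unidirectional CHAP on the software iSCSI adapter.
esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=uni --level=required --authname=chapuser01 --secret=Chap_Password_01

# Rescan so that the host re-logs in to the target with the new credentials.
esxcli storage core adapter rescan --adapter=vmhba33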

8 Mapping and Using LUNs

8.1 Mapping LUNs to a Host

OceanStor 18000/T V2/V3/Dorado V3 Enterprise Storage System

After a storage system is connected to a VMware ESXi host, map the storage system LUNs to the host.

Prerequisites
LUNs, LUN groups, hosts, and host groups have been created.

Procedure
Step 1 Go to the Create Mapping View dialog box. Then perform the following steps:
1. On the Provisioning page, click Mapping View.
2. Click Create. The Create Mapping View dialog box is displayed.

Step 2 Set basic properties for the mapping view.
1. In the Name text box, enter a name for the mapping view.
2. (Optional) In the Description text box, describe the mapping view.

Step 3 Add a LUN group to the mapping view.
1. Click the LUN group icon. The Select LUN Group dialog box is displayed. If your service requires a new LUN group, click Create to create one. You can select Shows only the LUN groups that do not belong to any mapping view to quickly locate LUN groups.
2. From the LUN group list, select the LUN groups you want to add to the mapping view.
3. Click OK.

Step 4 Add a host group to the mapping view.

1. Click the host group icon. The Select Host Group dialog box is displayed. If your service requires a new host group, click Create to create one.
2. From the host group list, select the host groups you want to add to the mapping view.
3. Click OK.

Step 5 (Optional) Add a port group to the mapping view.
1. Select Port Group.
2. Click the port group icon. The Select Port Group dialog box is displayed. If your service requires a new port group, click Create to create one.
3. From the port group list, select the port group you want to add to the mapping view.
4. Click OK.

Step 6 Confirm the creation of the mapping view.
1. Click OK. The Execution Result dialog box is displayed, indicating that the operation succeeded.
2. Click Close.

----End

OceanStor T V1 Storage System

After a storage system is connected to a VMware host, map the storage system LUNs to the host. Two methods are available for mapping LUNs:
- Mapping LUNs to a host: applicable to scenarios where only one small-scale client is deployed.
- Mapping LUNs to a host group: applicable to cluster environments or scenarios where multiple clients are deployed.

Prerequisites
RAID groups have been created on the storage system. LUNs have been created on the RAID groups.

Procedure
This document explains how to map LUNs to a host. Perform the following steps:

Step 1 In the DeviceManager navigation tree, choose SAN Services > Mappings > Hosts.
Step 2 In the function pane, select the desired host. In the navigation bar, choose Mapping > Add LUN Mapping. The Add LUN Mapping dialog box is displayed.
Step 3 Select the LUNs that you want to map to the host and click OK.

----End
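On the enterprise storage systems, the resulting mappings can be verified from the storage CLI using the show host lun command referenced later in this guide; the host ID below is a placeholder.

# Query all LUNs mapped to host 1; the output lists each LUN's host LUN ID.
show host lun host_id=1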

CAUTION
The LUNs mapped to the host from the storage system must include a LUN with host LUN ID 0.

8.2 Scanning for LUNs on a Host

After LUNs are mapped on a storage system, scan for the mapped LUNs on the host.
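The rescan can be triggered from the ESXi Shell as well as from the GUI clients shown below; a minimal sketch:

# Rescan all adapters for new devices and VMFS volumes.
esxcli storage core adapter rescan --all

# List the detected devices; Huawei LUNs report Vendor: HUAWEI.
esxcli storage core device list | grep -i huawei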

vSphere Client

Figure 8-1 Scanning for the mapped LUNs

vSphere Web Client

Figure 8-2 Scanning for the mapped LUNs (on vSphere Web Client)

8.3 Using the Mapped LUNs

After the mapped LUNs are detected on a host, you can use them directly as raw devices to configure services, or use them after creating a file system.

Raw Device Mapping (RDM)

RDM uses raw devices as disks for VMs. Perform the following steps to map raw devices.

vSphere Client

On vSphere Client, perform the following steps to configure RDM:

Step 1 Right-click a VM and choose Edit Settings from the shortcut menu, as shown in Figure 8-3.

Figure 8-3 Editing host settings

Step 2 On the Hardware tab page, click Add. In the Add Hardware dialog box that is displayed, choose Hard Disk in Device Type and click Next, as shown in Figure 8-4.

Figure 8-4 Adding disks

Step 3 Select a disk. You can create a new virtual disk, use an existing virtual disk, or use raw device mappings, as shown in Figure 8-5.

Figure 8-5 Selecting disks

Select Raw Device Mappings and click Next.

Step 4 Select a target LUN and click Next, as shown in Figure 8-6.

Figure 8-6 Selecting a target LUN

Step 5 Select a datastore. The default datastore is in the same directory as the VM storage. Click Next, as shown in Figure 8-7.

Figure 8-7 Selecting a datastore

Step 6 Select a compatibility mode based on site requirements and click Next, as shown in Figure 8-8. Snapshots are unavailable if the compatibility mode is set to physical.

Figure 8-8 Selecting a compatibility mode

Step 7 In Advanced Options, keep the default virtual device node unchanged, as shown in Figure 8-9.

Figure 8-9 Selecting a virtual device node

Step 8 In Ready to Complete, confirm the information about the disk to be added, as shown in Figure 8-10.

Figure 8-10 Confirming the information about the disk to be added

Click Finish. The system starts to add the disk, as shown in Figure 8-11.

Figure 8-11 Adding raw disk mappings

After a raw disk is mapped, the type of the newly created disk is Mapped Raw LUN.

----End

vSphere Web Client

On vSphere Web Client, perform the following steps to configure RDM:

Step 1 On the Related Objects tab page, click the Virtual Machines tab. In the left pane, select the target VM, right-click it, and choose Edit Settings from the shortcut menu.

Figure 8-12 Editing host settings

Step 2 In the displayed Edit Settings dialog box, click the Virtual Hardware tab. On the tab page, select RDM Disk from the New Device option list at the bottom.

Figure 8-13 Adding RDM disks

Step 3 Click Add to add the target disk.

Figure 8-14 Selecting disks to add

Step 4 Verify the disk information and click OK.

Figure 8-15 Completing the disk addition operation

Step 5 Navigate to the Edit Settings tab page again to check whether the target disk is added successfully.

Figure 8-16 Checking whether the disk is successfully added

----End
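An RDM pointer file can also be created directly from the ESXi Shell with vmkfstools and then attached to a VM as an existing disk. This is a hedged sketch, not part of the GUI procedures above; the device ID, datastore, and file names are placeholders.

# Create a physical compatibility mode RDM (-z); use -r instead for virtual compatibility mode.
vmkfstools -z /vmfs/devices/disks/naa.6xxxxxxx /vmfs/volumes/datastore1/vm1/vm1-rdm.vmdk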

Creating Datastores

Before creating a virtual disk, create a file system. In VMware, file systems are created as datastores. This section details how to create a datastore.

vSphere Client

On vSphere Client, perform the following steps to create a datastore:

Step 1 On the Configuration tab page, choose Storage in the navigation tree. On the Datastores tab page that is displayed, click Add Storage, as shown in Figure 8-17.

Figure 8-17 Adding storage

Step 2 Select a storage type and click Next, as shown in Figure 8-18. The default storage type is Disk/LUN.

Figure 8-18 Selecting a storage type

Step 3 On the Select Disk/LUN page that is displayed, select a desired disk and click Next, as shown in Figure 8-19.

Figure 8-19 Selecting a disk/LUN

Step 4 Select a file system version. VMFS-5 is selected in this example, as shown in Figure 8-20.

Figure 8-20 Selecting a file system version

Step 5 View the current disk layout and device information, as shown in Figure 8-21.

Figure 8-21 Viewing the current disk layout

Step 6 Enter the name of the datastore, as shown in Figure 8-22.

Figure 8-22 Entering a datastore name

Step 7 Specify a disk capacity. Normally, Maximum available space is selected. If you want to test LUN expansion, customize a capacity, as shown in Figure 8-23.

Figure 8-23 Specifying a capacity

Step 8 Confirm the disk layout. If the disk layout is correct, click Finish, as shown in Figure 8-24.

Figure 8-24 Confirming the disk layout

----End

vSphere Web Client

On vSphere Web Client, perform the following steps to create a datastore:

Step 1 On the Related Objects tab page, click the Datastores tab.

Figure 8-25 Checking the datastores

Step 2 Click the create icon to open the New Datastore page. On this page, select VMFS as Type and click Next.

Figure 8-26 Creating the datastore type

Step 3 Specify the datastore name, select the disks, and click Next.

Figure 8-27 Specifying the datastore name and selecting disks

Step 4 Select the file system version (VMFS 5, for example), and click Next.

Figure 8-28 Selecting the file system version

Step 5 Configure the datastore partition and click Next.

Figure 8-29 Configuring the partition layout

Step 6 Verify the datastore configurations and click Finish.

Figure 8-30 Verifying the datastore configurations

Step 7 Check whether the datastore is successfully created.

Figure 8-31 Checking for the datastore

----End

Creating Virtual Disks

This section describes how to add LUNs to VMs as virtual disks.

vSphere Client

On vSphere Client, perform the following steps to create virtual disks:

Step 1 Right-click a VM and choose Edit Settings from the shortcut menu, as shown in Figure 8-32.

Figure 8-32 Editing VM settings

Step 2 Click Add, select Hard Disk, and click Next, as shown in Figure 8-33.

Figure 8-33 Adding disks

Step 3 In Select a Disk, select Create a new virtual disk, as shown in Figure 8-34.

Figure 8-34 Creating a new virtual disk

Step 4 Specify the disk capacity based on site requirements, as shown in Figure 8-35.

Figure 8-35 Specifying the disk capacity

Step 5 Select a datastore. In this example, the datastore is disk1 and the file system type is VMFS-5, as shown in Figure 8-36.

Figure 8-36 Selecting a datastore

Step 6 Select a virtual device node. If there are no special requirements, keep the default virtual device node unchanged, as shown in Figure 8-37.

Figure 8-37 Selecting a virtual device node

Step 7 View the basic information about the virtual disk, as shown in Figure 8-38.

Figure 8-38 Viewing virtual disk information

As shown in the preceding figure, hard disk 1 that you have added is a virtual disk.

----End

vSphere Web Client

On vSphere Web Client, perform the following steps to create virtual disks:

Step 1 On the Related Objects tab page, click the Virtual Machines tab. On the tab page, select the VM for which you need to create virtual disks, right-click it, and choose Edit Settings from the shortcut menu.

Figure 8-39 Editing the host settings

Step 2 On the Virtual Hardware tab page, select New Hard Disk from the New Device option list.

Figure 8-40 Selecting to add a hard disk

Step 3 Click Add and check the information of the added disk.

Figure 8-41 Checking the information of the added disk

Step 4 To modify any disk properties, expand New Hard disk by clicking the arrow icon on its left.

Figure 8-42 Modifying disk properties

----End
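Both operations in this chapter, creating a datastore and creating a virtual disk, can also be performed from the ESXi Shell. The following sketch is illustrative only; the device ID, partition geometry, datastore name, VM folder, and disk size are assumed values.

# Write a GPT partition table with one VMFS partition spanning the LUN.
# Compute the end sector for your device with: partedUtil getUsableSectors <device>
partedUtil setptbl /vmfs/devices/disks/naa.6xxxxxxx gpt "1 2048 41943006 AA31E02A400F11DB9590000C2911D1B8 0"

# Format partition 1 with VMFS-5 and label the datastore.
vmkfstools -C vmfs5 -S datastore1 /vmfs/devices/disks/naa.6xxxxxxx:1

# Create a 10 GB thin-provisioned virtual disk in the new datastore.
mkdir /vmfs/volumes/datastore1/vm1
vmkfstools -c 10G -d thin /vmfs/volumes/datastore1/vm1/vm1_data.vmdk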

9 VMware NMP-based Multipathing Management

9.1 Overview

VMware ESXi has its own multipathing software, the Native Multipathing (NMP) module, which is available without extra configuration. This chapter details the NMP multipathing software.

9.2 VMware PSA

9.2.1 Overview

VMware ESXi 4.0 incorporated a new module, the Pluggable Storage Architecture (PSA), which can be integrated with third-party multipathing plugins (MPPs) or the NMP to provide storage-specific plug-ins such as Storage Array Type Plug-ins (SATPs) and Path Selection Plugins (PSPs), enabling optimal path selection and I/O performance.

Figure 9-1 VMware PSA
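The SATP and PSP modules that the PSA framework has loaded, together with each SATP's default PSP, can be listed from the ESXi Shell (ESXi 5.0 and later syntax):

# List the loaded Storage Array Type Plugins and their default PSPs.
esxcli storage nmp satp list

# List the available Path Selection Plugins.
esxcli storage nmp psp list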

9.2.2 VMware NMP

NMP is the default multipathing module of VMware. This module provides two submodules to implement failover and load balancing:
- SATP: monitors path availability, reports path status to the NMP, and implements failover.
- PSP: selects optimal I/O paths.

PSA is compatible with the following third-party multipathing plugins:
- Third-party SATPs: Storage vendors can use the VMware API to customize SATPs for their storage features and optimize VMware path selection.
- Third-party PSPs: Storage vendors or third-party software vendors can use the VMware API to develop more sophisticated I/O load-balancing algorithms and achieve larger throughput from multiple paths.

9.2.3 VMware Path Selection Policy

Built-in PSPs
By default, the PSP of VMware ESXi 5.0 or later supports three I/O policies: Most Recently Used (MRU), Round Robin, and Fixed. VMware ESXi 4.1 supports an additional policy: Fixed AP.

Third-Party Software
A third-party multipathing plugin (MPP) supports comprehensive fault tolerance and performance processing, and runs on the same layer as the NMP. For some storage systems, a third-party MPP can substitute for the NMP to implement path failover and load balancing.

9.3 Functions and Features

To manage storage multipathing, ESX/ESXi uses a special VMkernel layer, the Pluggable Storage Architecture (PSA). The PSA is an open modular framework that coordinates the simultaneous operation of multiple multipathing plugins (MPPs). The VMkernel multipathing plugin that ESX/ESXi provides by default is the VMware Native Multipathing (NMP) plugin. The NMP is an extensible module that manages subplugins. There are two types of NMP subplugins: Storage Array Type Plugins (SATPs) and Path Selection Plugins (PSPs). Figure 9-2 shows the architecture of the VMkernel.

Figure 9-2 VMkernel architecture

If more multipathing functionality is required, a third party can also provide an MPP to run in addition to, or as a replacement for, the default NMP. When coordinating with the VMware NMP and any installed third-party MPPs, the PSA performs the following tasks:
- Loads and unloads multipathing plug-ins.
- Hides virtual machine specifics from a particular plug-in.
- Routes I/O requests for a specific logical device to the MPP managing that device.
- Handles I/O queuing to the logical devices.
- Implements logical device bandwidth sharing between virtual machines.
- Handles I/O queuing to the physical storage HBAs.
- Handles physical path discovery and removal.
- Provides logical device and physical path I/O statistics.

9.4 VMware NMP Path Selection Policy

Policies and Differences

VMware supports the path selection policies described in Table 9-1.

Table 9-1 Path selection policies

Most Recently Used
- Active/Active array: Administrator action is required to fail back after path failure.
- Active/Passive array: Administrator action is required to fail back after path failure.

Fixed
- Active/Active array: VMkernel resumes using the preferred path when connectivity is restored.
- Active/Passive array: VMkernel attempts to resume using the preferred path. This can cause path thrashing or failure when another SP now owns the LUN.

Round Robin
- Active/Active array: The host uses an automatic path selection algorithm to ensure that I/Os are delivered on all active paths in turn. It will not switch back even after a faulty path recovers.
- Active/Passive array: The host uses an automatic path selection algorithm to always select the next path in the RR scheduling queue, thereby ensuring that I/Os are delivered on all active paths in turn.

Fixed AP
- For ALUA arrays, VMkernel picks the path set as the preferred path.
- For A/A, A/P, and ALUA arrays, VMkernel resumes using the preferred path, but only if the path-thrashing avoidance algorithm allows the failback.
- Fixed AP is available only in VMware ESX/ESXi 4.1.

The following details each policy.

Most Recently Used (VMW_PSP_MRU)
The host selects the path that was used most recently. When the path becomes unavailable, the host selects an alternative path. The host does not revert to the original path when that path becomes available again. There is no preferred path setting with the MRU policy. MRU is the default policy for active-passive storage devices.
Working principle: uses the most recently used path for I/O transfer. When the path fails, I/O is automatically switched to the last used path among the multiple available paths (if any). When the failed path recovers, I/O is not switched back to that path.

Round Robin (VMW_PSP_RR)
The host uses an automatic path selection algorithm rotating through all available active paths to enable load balancing across the paths. Load balancing is a process of distributing host I/Os across all available paths. The purpose of load balancing is to achieve optimal throughput performance (IOPS, MB/s, and response time).
Working principle: uses all available paths for I/O transfer.

Fixed (VMW_PSP_FIXED)
The host always uses the preferred path to the disk when that path is available. If the host cannot access the disk through the preferred path, it tries the alternative paths. Fixed is the default policy for active-active storage devices. After the preferred path recovers from a fault, VMkernel continues to use it. This attempt may result in path thrashing or failure because another SP may now own the LUN.
Working principle: uses the fixed path for I/O transfer. When the current path fails, I/O is automatically switched to a random path among the multiple available paths (if any). When the original path recovers, I/O is switched back to the original path.

Fixed AP (VMW_PSP_FIXED_AP)
This policy is only supported by VMware ESX/ESXi 4.1.x and was incorporated into VMW_PSP_FIXED in later ESXi versions. Fixed AP extends the Fixed functionality to active-passive and ALUA mode arrays.

10 VMware NMP Policy Configuration

Different OS versions support different VMware NMP policies. This chapter describes the VMware NMP policies recommended by HUAWEI for establishing connections between VMware ESXi and HUAWEI storage systems:
- New-version HUAWEI storage (storage that supports multi-controller ALUA and ALUA HyperMetro): OceanStor V3/18000 V3 series V300R003C20 (V300R003C20SPC200 and later)/V300R006C00 (V300R006C00SPC100 and later), Dorado V3 V300R001C01 (V300R001C01SPC100 and later)
- Old-version HUAWEI storage (storage that does not support multi-controller ALUA or ALUA HyperMetro): OceanStor T V1/T V2/18000 V1/V300R001/V300R002/V300R003C00/V300R003C10/V300R005, Dorado V300R001C00

10.1 Introduction to ALUA

ALUA Definition

Asymmetric Logical Unit Access (ALUA) is a multi-target-port access model. In a multipathing state, the ALUA model provides a way of presenting active/passive LUNs to a host and offers a port status switching interface to switch over the working controller. For example, when a host multipathing program that supports ALUA detects a port status change (the port becomes unavailable) on a faulty controller, the program automatically switches subsequent I/Os to the other controller.

ALUA Impacts

ALUA is mainly applicable to a storage system that has only one preferred LUN controller. All host I/Os can be routed through different controllers to the working controller for execution. The storage ALUA instructs the hosts to deliver I/Os preferentially through the LUN's working controller, thereby reducing the I/O routing resources consumed on the non-working controllers. Once all I/O paths to the LUN's working controller are disconnected, host I/Os are delivered only through a non-working controller and then routed to the working controller for execution. This scenario must be avoided.
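On an ESXi host, the AO and AN states described in this chapter surface as path group states that can be inspected from the ESXi Shell; the device ID in this sketch is a placeholder.

# List all paths of a device. On ALUA arrays, "Group State: active" marks
# AO paths and "Group State: active unoptimized" marks AN paths.
esxcli storage nmp path list -d naa.6xxxxxxx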

Suggestions for Using ALUA on HUAWEI Storage

To prevent I/Os from being delivered to a non-working controller, you are advised to:
- Ensure that the LUN home/working controllers are evenly distributed on a storage system. A change to the storage system (node fault or replacement) may cause an I/O path switchover.
- Ensure that the host always tries its best to select the optimal path to deliver I/Os. Prevent all host service I/Os from being delivered to only one controller, thereby preventing load imbalance on the storage system.

10.2 Recommended NMP Configuration for OceanStor V3 Series

HyperMetro Working Modes

Typically, HyperMetro works in load balancing mode or local preferred mode. The typical working modes are valid only when both the storage system and the host use ALUA. It is advised to set the host's path selection policy to round-robin. If HyperMetro works in load balancing mode, the host's path selection policy must be round-robin. If the host does not use ALUA or its path selection policy is not round-robin, the host's multipathing policy determines the working mode of HyperMetro.

HyperMetro storage arrays can be classified into a local array and a remote array by their distance to the host. The one closer to the host is the local array and the other one is the remote array.

Table 10-1 describes the configuration methods and application scenarios of the typical working modes.

Table 10-1 Configuration methods and application scenarios of the typical working modes

Load balancing mode
- Configuration method: Enable ALUA on the host and set the path selection policy to round-robin. Configure a switchover mode that supports ALUA for both HyperMetro storage arrays' initiators that are added to the host. Set the path type for both storage arrays' initiators to the optimal path.
- Application scenario: The distance between the HyperMetro storage arrays is less than 1 km. For example, they are in the same equipment room or on the same floor.

Local preferred mode
- Configuration method: Enable ALUA on the host. It is advised to set the path selection policy to round-robin. Configure a switchover mode that supports ALUA for both HyperMetro storage arrays' initiators that are added to the host. Set the path type for the local storage array's initiators to the optimal path and that for the remote storage array's initiators to the non-optimal path.
- Application scenario: The distance between the HyperMetro storage arrays is greater than 1 km. For example, they are in different locations or data centers.

Other modes
- Configuration method: Set the initiator switchover mode for the HyperMetro storage arrays by following the instructions in the follow-up chapters of this guide. The path type does not require manual configuration.
- Application scenario: User-defined.

Working Principles and Failover

When ALUA works, the host multipathing software divides the physical paths to disks into Active Optimized (AO) and Active Non-optimized (AN) paths. The host delivers services to the storage system via the AO paths preferentially. An AO path is the optimal I/O access path and connects to a working controller. An AN path is the suboptimal I/O access path and connects to a non-working controller.

When HyperMetro works in load balancing mode, the host multipathing software selects the paths to the working controllers on both HyperMetro storage arrays as the AO paths, and those to the other controllers as the AN paths. The host accesses the storage arrays via the AO paths. If an AO path fails, the host delivers I/Os to another AO path. If the working controller of a storage array fails, the system switches the other controller to the working mode and maintains load balancing.

(Diagram: AO/AN path distribution across Site A and Site B in load balancing mode, showing behavior upon a path failure and upon an SP failure.)

When HyperMetro works in local preferred mode, the host multipathing software selects the paths to the working controller on the local storage array as the AO paths. This ensures that the host delivers I/Os only to the working controller on the local storage array, reducing link consumption. If all AO paths fail, the host delivers I/Os to the AN paths on the non-working controller.

If the working controller of the local storage array fails, the system switches the other controller to the working mode and maintains the local preferred mode.

(Diagram: AO/AN path distribution across Site A and Site B in local preferred mode, showing behavior upon a path failure and upon an SP failure.)

Initiator Mode Introduction and Configuration

Initiator Parameter Description

Table 10-2 Initiator parameter description

Uses third-party multipath software
- Description: This parameter is displayed only after an initiator has been added to the host. If LUNs have been mapped to the host before you enable or disable this parameter, restart the host after you configure this parameter. You do not need to enable this parameter on a host with UltraPath.
- Example: Enabled

Switchover Mode (path switchover mode)
- Description: The system supports the following modes:
  - early-version ALUA: the default value of Switchover Mode after an upgrade from an earlier version to the current version. The detailed requirements are as follows: the storage system is upgraded from V300R003C10 or earlier to V300R003C20, or to V300R006C00SPC100 and later; from V300R005 to V300R006C00SPC100 and later; or from Dorado V300R001C00 to Dorado V300R001C01SPC100 and later. Before the upgrade, the storage system has a single controller or dual controllers and has ALUA enabled.
  - common ALUA: applies to V300R003C20 and later, V300R006C00SPC100 and later, or Dorado V300R001C01SPC100 and later. The detailed requirements are as follows: the storage system version is V300R003C20, V300R006C00SPC100, Dorado V300R001C01SPC100, or later.

    The OS of the host that connects to the storage system is SUSE, Red Hat 6.X, Windows Server 2012 (using Emulex HBAs), Windows Server 2008 (using Emulex HBAs), or HP-UX 11i V3.
  - ALUA not used: does not support ALUA or HyperMetro. This mode is used when a host such as HP-UX 11i V2 does not support ALUA or when ALUA is not needed.
  - Special mode: supports ALUA and has multiple values. It applies to V300R003C20 and later, V300R006C00SPC100 and later, or Dorado V300R001C01SPC100 and later. It is used by host operating systems that are not supported by the common ALUA mode. The detailed requirements are as follows: the storage system version is V300R003C20, V300R006C00SPC100, Dorado V300R001C01SPC100, or later. The OS of the host that connects to the storage system is VMware, AIX, Red Hat 7.X, Windows Server 2012 (using QLogic HBAs), or Windows Server 2008 (using QLogic HBAs).

Special mode type
- Description: Special modes support ALUA and apply to V300R003C20 and later, V300R006C00SPC100 and later, or Dorado V300R001C01SPC100 and later. The detailed requirements are as follows:
  - Mode 0: The host and storage system must be connected over a Fibre Channel network. The OS of the host that connects to the storage system is Red Hat 7.X, Windows Server 2012 (using QLogic HBAs), or Windows Server 2008 (using QLogic HBAs).
  - Mode 1: The OS of the host that connects to the storage system is AIX or VMware. HyperMetro works in load balancing mode.
  - Mode 2: The OS of the host that connects to the storage system is AIX or VMware. HyperMetro works in local preferred mode.
- Example: Mode 0

Path Type
- Description: The value can be either Optimal Path or Non-Optimal Path.
- Example: Optimal Path

When HyperMetro works in load balancing mode, set the Path Type for the initiators of both the local and remote storage arrays to Optimal Path, and enable ALUA on both the host and the storage arrays. If the host uses the round-robin multipathing policy, it delivers I/Os to both storage arrays in round-robin mode.

When HyperMetro works in local preferred mode, set the Path Type for the initiators of the local storage array to Optimal Path and that of the remote storage array to Non-Optimal Path, and enable ALUA on both the host and the storage arrays. The host then delivers I/Os to the local storage array preferentially.

Configure the initiators according to the requirements of each OS. The initiators that are added to the same host must be configured with the same switchover mode. Otherwise, host services may be interrupted. After the initiator mode is configured on a storage array, you must restart the host for the configuration to take effect.

Configuring the Initiators

To configure the initiator mode, perform the following operations.

Step 1 Go to the host configuration page.
Open OceanStor DeviceManager. In the right navigation tree, click Provisioning and then click Host, as shown in the following figure.

Figure 10-1 Going to the host configuration page

Step 2 Select the initiator whose information you want to modify.

On the Host tab page, select the host you want to modify. Then select the initiator (on the host) you want to modify and click Modify.

Figure 10-2 Selecting an initiator whose information you want to modify

Step 3 Modify the initiator information.
In the Modify Initiator dialog box that is displayed, modify the initiator information based on the requirements of your operating system. The following figure shows the initiator information modification page.

Figure 10-3 Modifying initiator information

Step 4 Repeat the preceding operations to modify the information about other initiators on the host.

Step 5 Restart the host for the configuration to take effect.

----End

Recommended VMware NMP Configuration

This section provides the recommended VMware NMP configurations for HyperMetro and non-HyperMetro deployments with different ESXi versions.

For Non-HyperMetro Storage

Table 10-3 Recommended VMware NMP configuration for OceanStor V3 non-HyperMetro configuration

- ESXi 5.0.*, V3/18000 V3 series V300R003C20 and later: ALUA enabled (Y), VM cluster N/A, recommended SATP VMW_SATP_ALUA, recommended PSP VMW_PSP_FIXED. See notes 1, 2, and 3.
- ESXi 5.1.*, V3/18000 V3 series V300R003C20 and later: ALUA enabled (Y). With a VM cluster (Y): VMW_SATP_ALUA, VMW_PSP_FIXED (see notes 1, 2, and 3). Without a VM cluster (N): VMW_SATP_ALUA, VMW_PSP_RR (see notes 1, 2, 3, 4, and 5).
- ESXi 5.5.*, 6.0.*, 6.5.*, V3/18000 V3 series V300R003C20 and later: ALUA enabled (Y), VM cluster N/A, VMW_SATP_ALUA, VMW_PSP_RR. See notes 1, 2, 3, 4, and 5.

Notes:
1. Failback is supported upon recovery from a path fault.
2. On the VMware ESXi command line interface, run the following command to add the rule:
esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P VMW_PSP_RR -c tpgs_on
Change HUAWEI and XSG1 based on your actual situation. For details, see Table 10-7. After the command is executed, the new rule immediately takes effect for newly mapped LUNs, but does not take effect for previously mapped LUNs unless ESXi is restarted.
3. This configuration is recommended for ALUA-enabled storage.
4. For the MSCS and WSFC clusters deployed on VMware ESXi 5.1 or earlier VMs, you cannot set the RDM LUN to Round Robin, but you can set it to Fixed. For details, see section "Modifying the Path Policy for a Single LUN" or the corresponding VMware KB article.
5. When using an all-flash array, it is recommended to set the IO Operation Limit to 1 on ESXi. For ESXi 5.x and 6.x, run:
esxcli storage nmp psp roundrobin deviceconfig set --device=device_naa** --iops=1 --type iops
Change device_naa** based on your actual situation.

For supported ESXi versions, see the interoperability matrix.

For HyperMetro Storage

Table 10-4 Recommended VMware NMP configuration for OceanStor V3 HyperMetro configuration

- VMware ESXi with Dorado V3 series and V3/18000 V3 series V300R003C20 and later: number of controllers N/A, ALUA enabled (Y), VM cluster N/A, recommended SATP VMW_SATP_ALUA, recommended PSP VMW_PSP_RR. See notes 1 and 2.

Notes:
1. For the MSCS and WSFC clusters deployed on VMware ESXi 5.1 or earlier VMs, you cannot set the RDM LUN to Round Robin, but you can set it to Fixed. For details, see section "Modifying the Path Policy for a Single LUN" or the corresponding VMware KB article.
2. When using an all-flash array, it is recommended to set the IO Operation Limit to 1 on ESXi. For ESXi 5.x and 6.x, run:
esxcli storage nmp psp roundrobin deviceconfig set --device=device_naa** --iops=1 --type iops
Change device_naa** based on your actual situation.

For supported ESXi versions, see the interoperability matrix.

Precautions

When using HyperMetro with VMware ESXi, note the following precautions:
- HyperMetro pairs' LUN mappings on the two active-active storage arrays must be consistent. That is, the two LUNs in a HyperMetro pair must use the same host LUN ID when being mapped to a host. On the storage arrays, you can run the show host lun host_id=xx command to query all LUNs mapped to a host, where xx indicates the host ID. To modify the ID, run the change mapping_view mapping_view_id=x host_lun_id_list=LUN ID:Host LUN ID command.
- For OceanStor V3 V300R003C20SPC200, a single storage array with ALUA supports a maximum of 8 controllers. In a HyperMetro with ALUA configuration, the controllers of the two arrays added together must not exceed 8.
- VMware ESXi 6.0 U2 and later versions support the HyperMetro configuration. Versions earlier than VMware ESXi 6.0 U2 have known defects.

WARNING
Before deploying the HyperMetro solution based on VMware ESXi NMP, you must consider the compatibility between components (such as the storage system, operating system, HBAs, and switches) and the application software. Check the interoperability matrix before deployment.
This document provides the configuration methods only for HyperMetro interoperability-related components. For specific interoperability configuration scenarios, you must check the corresponding HyperMetro interoperability matrix.

Configuring the ALUA Mode

Configuration on the Storage System

For Non-HyperMetro Configuration

For non-HyperMetro configuration, use the configuration listed in the following table.

Table 10-5 Configuration on non-HyperMetro OceanStor V3 storage when interconnected with VMware ESXi

- ESXi 5.1.x, 5.5.x, 6.0.x, 6.5.x (dual-controller or multi-controller storage): storage OS setting VMware ESX; third-party multipathing software enabled; switchover mode Special mode; special mode Mode 1; path type Optimal Path.
- Other ESXi versions and updates (dual-controller or multi-controller storage): storage OS setting VMware ESX; third-party multipathing software enabled; switchover mode early-version ALUA or ALUA not used; special mode N/A; path type Optimal Path.

For supported ESXi versions, see the interoperability matrix.

WARNING
After the initiator mode is configured on a storage array, you must restart the host for the new configuration to take effect.

For HyperMetro Configuration

Table 10-6 lists the storage array configurations.

Table 10-6 Configuration on HyperMetro OceanStor V3 storage when interconnected with VMware ESXi

- Load balancing mode (VMware ESXi): Local storage array: OS VMware ESX, third-party multipathing software enabled, switchover mode Special mode, special mode type Mode 1, path type Optimal path. Remote storage array: OS VMware ESX, third-party multipathing software enabled, switchover mode Special mode, special mode type Mode 1, path type Optimal path.

- Local preferred mode (VMware ESXi): Local storage array: OS VMware ESX, third-party multipathing software enabled, switchover mode Special mode, special mode type Mode 2, path type Optimal path. Remote storage array: OS VMware ESX, third-party multipathing software enabled, switchover mode Special mode, special mode type Mode 2, path type Non-optimal path.

For details about the supported VMware ESXi versions, see the compatibility list.

After the initiator mode is configured on a storage array, you must restart the host for the configuration to take effect.

In OceanStor V3 V300R003C20, Mode 1 and Mode 2 are disabled by default. For details about how to enable them, see the OceanStor 5300 V3&5500 V3&5600 V3&5800 V3&6800 V3 Storage System V300R003C20 Restricted Command Reference or the OceanStor 18500 V3&18800 V3 Storage System V300R003C20 Restricted Command Reference. Contact Huawei technical support engineers to obtain the documents.

In OceanStor V3 V300R006C00SPC100, Dorado V3 V300R001C01SPC100, and later versions, you can configure Mode 1 and Mode 2 on DeviceManager directly.

Figure 10-4 Querying the special mode type

Configuration on ESXi Hosts

Enabling the Shell and SSH Services on the ESXi Hosts

Start the ESXi Shell and SSH services, as shown in Figure 10-5. If you no longer need the Shell and SSH services, you can disable them.

Figure 10-5 Enabling the Shell and SSH services

For Non-HyperMetro Configuration

For non-HyperMetro configuration, perform the following steps to configure VMware NMP. After enabling ALUA on the Huawei storage, add the multipathing rule on the ESXi hosts:

Step 1 Check the vendor and model information of the storage systems.
Use an SSH tool to log in to the ESXi Shell and run esxcli storage core device list to view the Vendor and Model information of the storage system.

[root@localhost:~] esxcli storage core device list
naa.630d17e100b...d125f
   Display Name: HUAWEI Fibre Channel Disk (naa.630d17e100b...d125f)
   Has Settable Display Name: true
   Size: ...
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.630d17e100b...d125f
   Vendor: HUAWEI
   Model: XSG1
   Revision: 4303
   SCSI Level: 6
   Is Pseudo: false
   Status: on

Step 2 Add the multipathing rules.
Run the configuration command for the multipathing mode you use:
- VMW_PSP_FIXED: esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P VMW_PSP_FIXED -c tpgs_on
- VMW_PSP_RR: esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P VMW_PSP_RR -c tpgs_on

In these commands, HUAWEI is an example of the storage Vendor and XSG1 is an example of the storage Model. Change the two values based on your actual storage configuration. Table 10-7 provides the vendor and model information of Huawei mainstream storage devices.

Table 10-7 Huawei storage vendor and model information

- S2200T/S2600T/S5500T/S5600T/S5800T/S6800T: Vendor HUAWEI/SYMANTEC/HUASY; Model S2200T/S2600T/S5500T/S5600T/S5800T/S6800T
- Dorado2100 G2: Vendor HUAWEI/SYMANTEC/HUASY; Model Dorado2100 G2
- Dorado5100: Vendor HUAWEI/SYMANTEC/HUASY; Model Dorado5100
- 18500: Vendor HUAWEI; Model HVS85T
- 18800/18800F: Vendor HUAWEI; Model HVS88T
- V3 series/18000 V3 series/Dorado V3 series: Vendor HUAWEI; Model XSG1

To delete existing multipathing rules, replace [path policy] with the configured path mode (for example, VMW_PSP_FIXED) and run:
esxcli storage nmp satp rule remove -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P [path policy] -c tpgs_on

Step 3 Confirm that the rule is added successfully:
esxcli storage nmp satp rule list | grep HUAWEI

----End

WARNING
After the command is executed, the new rule immediately takes effect for newly mapped LUNs, but does not take effect for previously mapped LUNs unless ESXi is restarted.
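The IO Operation Limit recommendation from the preceding tables can be applied to every Huawei LUN in one pass from the ESXi Shell. This loop is a hedged sketch; it assumes the Huawei devices already use the round-robin PSP.

# Set the round-robin IO operation limit to 1 for each HUAWEI device.
for dev in $(esxcli storage core device list | grep '^naa.'); do
  if esxcli storage core device list -d $dev | grep -q 'Vendor: HUAWEI'; then
    esxcli storage nmp psp roundrobin deviceconfig set --device=$dev --iops=1 --type iops
  fi
done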

For HyperMetro Storage

For HyperMetro storage, perform the following configuration steps.

Setting the VMware NMP Multipathing Rules

Run the following command on the host:
esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P VMW_PSP_RR -c tpgs_on

In this command, HUAWEI is an example of the storage Vendor and XSG1 is an example of the storage Model. Change the two values based on your actual storage configuration. Table 10-7 provides the vendor and model information of Huawei mainstream storage devices. After the command is executed, the new rule immediately takes effect for newly mapped LUNs, but does not take effect for previously mapped LUNs unless ESXi is restarted. Restart the host for the configuration to take effect.

Configuring a VMware Cluster

If you want to configure VMware clusters, see section "Virtualization Platform Configuration" in the corresponding HyperMetro documentation. The contents of that section are as follows.

Mandatory Configuration Items

- Deploy ESXi hosts across data centers in an HA cluster and, for VMware vSphere 5.1 and later versions, configure the cluster with the HA advanced parameter das.maskCleanShutdownEnabled = True.
- A VM service network requires Layer 2 interworking between data centers so that VMs can migrate between data centers without affecting VM services.
- For VMware vSphere 5.1 to 5.5, configure all ESXi hosts with the following advanced parameters:
  - For VMware vSphere 5.1, set Disk.terminateVMOnPDLDefault = True. To configure this setting, log in to the CLI and add Disk.terminateVMOnPDLDefault = True to the /etc/vmware/settings file of each ESXi host.

  - For VMware vSphere 5.5 and later versions, set VMkernel.Boot.terminateVMOnPDL = True. This parameter forcibly powers off VMs on a datastore when the datastore enters the PDL state.
  - For VMware vSphere 5.5, also set Disk.AutoremoveOnPDL = 0.
- For VMware vSphere 6.0 U2 and later versions, connect to vCenter through the Web Client (the Google Chrome browser is recommended) and enter the cluster HA configuration. The configuration requirements are shown in the following figures.

Figure 10-6 Cluster configuration-1

Figure 10-7 Cluster configuration-2

- For VMware vSphere 5.1 to 5.5, restart the hosts for the configuration to take effect. For VMware vSphere 6.0 U2 and later versions, re-enable the HA cluster to make the configuration take effect without restarting the hosts.
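On vSphere 5.5 hosts, the two host-level PDL parameters above can also be set from the ESXi Shell instead of the GUI. This is a hedged sketch; verify the option names against your ESXi build, and note that the kernel setting takes effect only after a reboot.

# Disable automatic removal of devices in the PDL state (Disk.AutoremoveOnPDL = 0).
esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 0

# Power off VMs when their datastore enters PDL (VMkernel.Boot.terminateVMOnPDL).
esxcli system settings kernel set -s terminateVMOnPDL -v TRUE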

Recommended Configuration Items

- The vMotion network, service network, and management network should be configured with different VLAN IDs to avoid network interference.
- The management network includes the vCenter Server management node and ESXi hosts, and is not accessible to external applications.
- The service network is divided into VLANs based on service requirements to ensure logical isolation and control broadcast domains.
- In a single cluster, the number of hosts should not exceed 16. If a cluster has more than 16 hosts, you are advised to use the hosts to create multiple clusters across data centers.
- A DRS group is configured to ensure that VMs can be recovered first in the local data center in the event of a single host breakdown.

Checking VMware NMP

After the configuration, run the following command to confirm that the multipathing software configuration is correct:
esxcli storage nmp satp rule list | grep -i huawei

The following figure shows the command output.

Figure 10-8 Command output

Run the following command to check whether the path information takes effect:
esxcli storage nmp device list -d naa.6xxxxxxx

The following figure shows the command output.

Figure 10-9 VMware path information

The path information is displayed per port group rather than per individual path.

10.3 Recommended NMP Configuration for OceanStor Dorado V3 Series

Recommended VMware NMP Configuration

Table 10-8 provides the recommended NMP configurations when different ESX/ESXi versions interconnect with HUAWEI storage.

WARNING
The recommended NMP configuration is a universal configuration, but may not be the best configuration in your storage environment. For example, VMW_PSP_RR provides better performance than VMW_PSP_FIXED, but VMW_PSP_RR has some use restrictions: for the MSCS and WSFC clusters deployed on VMs, you can set the RDM LUN to PSP_RR only in VMware ESXi 5.5 and later versions. If you want to configure an optimal path policy, contact local Huawei support.

Table 10-8 Recommended NMP configurations when different ESXi versions interconnect with OceanStor Dorado V3

- ESXi 5.0.*, Dorado V3 series, 2 controllers: ALUA enabled (Y), VM cluster N/A, VMW_SATP_ALUA, VMW_PSP_FIXED. See notes 2, 3, and 4.
- ESXi 5.0.*, Dorado V3 series, 4 or more controllers: ALUA disabled (N), VM cluster N/A, VMW_SATP_DEFAULT_AA, VMW_PSP_FIXED. See notes 1, 2, and 5.
- ESXi 5.1.*, Dorado V3 series, controllers N/A: ALUA enabled (Y). With a VM cluster (Y): VMW_SATP_ALUA, VMW_PSP_FIXED (see notes 2, 3, and 4). Without a VM cluster (N): VMW_SATP_ALUA, VMW_PSP_RR (see notes 2, 3, 5, 6, and 7).
- ESXi 5.5.*, 6.0.*, 6.5.*, Dorado V3 series, controllers N/A: ALUA enabled (Y), VM cluster N/A, VMW_SATP_ALUA, VMW_PSP_RR. See notes 2, 3, 4, and 7.

Notes:
1. You need to manually set the primary path for each LUN on the vSphere Client. For the default preferred LUN, you can first set a non-preferred path and then set the preferred path.
2. Failback is supported upon recovery from a path fault.
3. On the VMware command line interface, run the following command to add the rule:
esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P VMW_PSP_RR -c tpgs_on
Change HUAWEI and XSG1 based on your actual situation. For details, see Table 10-7. After the command is executed, the new rule immediately takes effect for newly mapped LUNs, but does not take effect for previously mapped LUNs unless ESXi is restarted.
4. This configuration is recommended for ALUA-enabled storage.
5. This configuration is recommended for ALUA-disabled storage.
6. For the MSCS and WSFC clusters deployed on VMware ESXi 5.1 and earlier versions, you cannot set the RDM LUN to Round Robin, but you can set it to Fixed. For details, see section "Modifying the Path Policy for a Single LUN" or the corresponding VMware KB article.
7. When using an all-flash array, it is recommended to set the IO Operation Limit to 1 on ESXi. For ESXi 5.x and 6.x, run:
esxcli storage nmp psp roundrobin deviceconfig set --device=device_naa** --iops=1 --type iops
Change device_naa** based on your actual situation.

For supported ESXi versions, see the interoperability matrix.

Configuring the ALUA Mode

Configuration on the Storage System

Open OceanStor DeviceManager. In the right navigation tree, click Provisioning and then click Host. Select the target host and the target initiator you want to modify. Click Modify > Enable ALUA.

Configuration on ESXi Hosts

Enabling the Shell and SSH Services on the ESXi Hosts

Enable the ESXi Shell and SSH services, as shown in Figure 10-10. If you no longer need the Shell and SSH services, you can disable them.

Figure 10-10 Enabling the Shell and SSH services

Running Commands to Add Path Rules

After enabling ALUA on the Huawei storage, perform the following steps to add the multipathing rule on the ESXi hosts:

Step 1 Check the vendor and model information of the storage systems.
Use an SSH tool to log in to the ESXi Shell and run esxcli storage core device list to view the Vendor and Model information of the storage system.

[root@localhost:~] esxcli storage core device list
naa.630d17e100b...d125f
   Display Name: HUAWEI Fibre Channel Disk (naa.630d17e100b...d125f)
   Has Settable Display Name: true
   Size: ...
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.630d17e100b...d125f
   Vendor: HUAWEI
   Model: XSG1
   Revision: 4303
   SCSI Level: 6
   Is Pseudo: false
   Status: on

Step 2 Add the multipathing rules.
Run the configuration command for the multipathing mode you use:
- VMW_PSP_FIXED: esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P VMW_PSP_FIXED -c tpgs_on
- VMW_PSP_RR: esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P VMW_PSP_RR -c tpgs_on

In these commands, HUAWEI is an example of the storage Vendor and XSG1 is an example of the storage Model. Change the two values based on your actual storage configuration. Table 10-7 provides the vendor and model information of Huawei mainstream storage devices.

To delete existing multipathing rules, replace [path policy] with the configured path mode (for example, VMW_PSP_FIXED) and run:
esxcli storage nmp satp rule remove -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P [path policy] -c tpgs_on

Step 3 Confirm that the rule is added successfully:
esxcli storage nmp satp rule list | grep HUAWEI

----End

WARNING
After the command is executed, the new rule immediately takes effect for newly mapped LUNs, but does not take effect for previously mapped LUNs unless ESXi is restarted.

Configuring the AA Mode

By default, the storage system's host initiator is set to the AA mode. Table 10-9 describes the methods for setting VMware path rules.

Table 10-9 AA mode configuration

- Storage devices: T series V200R series, Dorado V1 series, Dorado V3 series, V3 series, 18000 V3 series
- PSP type: VMW_PSP_FIXED
- Configuration method: Manually set preferred paths for all Huawei storage devices' LUNs connected to the ESXi hosts.

Manually Setting the Preferred Path

Select one path of the storage system as its preferred path, as shown in Figure 10-11. For a storage device that already has a preferred path selected, first set another path as the preferred path and then set the original path as the preferred path again.

Figure 10-11 Setting the preferred path for a storage device

Manually Modifying Path Rules

Query the storage array type identified by the ESXi host, as shown in Figure 10-12.

Figure 10-12 Querying the storage array type

If Storage Array Type is VMW_SATP_DEFAULT_AA, manually set the preferred path by following the instructions in section "Manually Setting the Preferred Path".

If Storage Array Type is VMW_SATP_ALUA, change the ESXi host's path selection policy, as shown in Figure 10-13.

Figure 10-13 Modifying the storage path policy

10.4 Recommended NMP Configuration for Old-Version HUAWEI Storage

Recommended VMware NMP Configuration

Table 10-10 provides the recommended NMP configurations when different ESX/ESXi versions interconnect with HUAWEI storage.

WARNING
The recommended NMP configuration is a universal configuration, but may not be the best configuration in your storage environment. For example, VMW_PSP_RR provides better performance than VMW_PSP_FIXED, but VMW_PSP_RR has some use restrictions: for the MSCS and WSFC clusters deployed on VMs, you can set the RDM LUN to PSP_RR only in VMware ESXi 5.5 and later versions. If you want to configure an optimal path policy, contact local Huawei support.

Table 10-10 Recommended NMP configurations when different ESX/ESXi versions interconnect with HUAWEI old-version storage

ESX 4.0.*
- S2600, S5000 series (2 controllers): ALUA disabled (N), VM cluster N/A, VMW_SATP_DEFAULT_AA, VMW_PSP_FIXED. See notes 1, 2, and 5.
- T V1 series, Dorado5100, Dorado2100 G2, T V2 series, 18000 V1 series, V3 series, 18000 V3 series (4 or more controllers): ALUA disabled (N), VM cluster N/A, VMW_SATP_DEFAULT_AA, VMW_PSP_FIXED. See notes 1, 2, and 5.

ESXi 4.1.*
- S2600, S5000 series (2 controllers): ALUA enabled (Y), VM cluster N/A, VMW_SATP_ALUA, VMW_PSP_FIXED_AP. See notes 2, 3, and 4.
- T V1 series, Dorado5100, Dorado2100 G2, T V2 series, 18000 V1 series, V3 series, 18000 V3 series (4 or more controllers): ALUA disabled (N), VM cluster N/A, VMW_SATP_DEFAULT_AA, VMW_PSP_FIXED. See notes 1, 2, and 5.

ESXi 5.0.*
- S2600, S5000 series (2 controllers): ALUA disabled (N), VM cluster N/A, VMW_SATP_DEFAULT_AA, VMW_PSP_FIXED. See notes 1, 2, and 5.
- T V1 series, Dorado5100, Dorado2100 G2 (2 controllers): ALUA enabled (Y), VM cluster N/A, VMW_SATP_ALUA, VMW_PSP_FIXED. See notes 2, 3, and 4.

- T V2 series, 18000 V1 series, V3 series, 18000 V3 series (4 or more controllers): ALUA disabled (N), VM cluster N/A, VMW_SATP_DEFAULT_AA, VMW_PSP_FIXED. See notes 1, 2, 5, and 7.

ESXi 5.1.*
- S2600, S5000 series (2 controllers): ALUA disabled (N), VM cluster N/A, VMW_SATP_DEFAULT_AA, VMW_PSP_FIXED. See notes 1, 2, and 5.
- T V1 series, Dorado5100, Dorado2100 G2 (2 controllers): ALUA enabled (Y). With a VM cluster (Y): VMW_SATP_ALUA, VMW_PSP_FIXED (see notes 2, 3, and 4). Without a VM cluster (N): VMW_SATP_ALUA, VMW_PSP_RR (see notes 2, 3, 5, and 6).
- T V2 series, 18000 V1 series, V3 series, 18000 V3 series (4 or more controllers): ALUA disabled (N), VMW_SATP_DEFAULT_AA, VMW_PSP_FIXED. See notes 1, 2, 5, and 7.

ESXi 5.5.*, 6.0.*, 6.5.*
- S2600, S5000 series (2 controllers): ALUA disabled (N), VM cluster N/A, VMW_SATP_DEFAULT_AA, VMW_PSP_FIXED. See notes 1, 2, and 5.
- T V1 series, Dorado5100, Dorado2100 G2 (2 controllers): ALUA enabled (Y), VM cluster N/A, VMW_SATP_ALUA, VMW_PSP_RR. See notes 2, 3, and 4.
- T V2 series, 18000 V1 series, V3 series, 18000 V3 series (4 or more controllers): ALUA disabled (N), VM cluster N/A, VMW_SATP_DEFAULT_AA, VMW_PSP_FIXED. See notes 1, 2, 5, and 7.

Notes:
1. You need to manually set the primary path for each LUN on the vSphere Client. For the default preferred LUN, you can first set a non-preferred path and then set the preferred path.
2. A switchback is supported upon recovery from a path fault.
3. On the VMware command line interface, run the following commands to add rules:
- For ESX/ESXi 4.x: esxcli nmp satp addrule -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P VMW_PSP_RR -c tpgs_on
- For ESXi 5.0 and later: esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P VMW_PSP_RR -c tpgs_on
Change HUAWEI and XSG1 based on your actual situation. For details, see Table 10-7. After the command is executed, the new rule immediately takes effect for newly mapped LUNs, but does not take effect for previously mapped LUNs unless ESXi is restarted.
4. This configuration is recommended for ALUA-enabled storage.
5. This configuration is recommended for ALUA-disabled storage.
6. For the MSCS and WSFC clusters deployed on VMware ESXi 5.1 and earlier versions, you cannot set the RDM LUN to Round Robin, but you can set it to Fixed. For details, see section "Modifying the Path Policy for a Single LUN" or the corresponding VMware KB article.
7. For any future controller expansion purpose, you are advised to disable ALUA and configure VMW_SATP_DEFAULT_AA.

WARNING
To avoid the ping-pong effect in VMware ESX 4.0 clusters, you are advised to disable ALUA.
If a path policy or preferred path is set on the VMware page before or after rules are added, that setting prevails. The newly added rule will not take effect on any LUN that has already been configured with a path policy or preferred path. For a LUN already configured with a preferred path, first switch to a non-preferred path and then set the path back to the preferred path, thereby ensuring a normal switchback upon recovery from a fault.
OceanStor 18000/T V2/V3 supports two or more controllers. When the storage systems have two controllers, they support ALUA and A/A. When the storage systems have more than two controllers, they support only A/A but not ALUA (as of the release of this document). To facilitate future capacity expansion, you are advised to disable ALUA on the OceanStor 18000/T V2/V3 and its hosts.

Configuring the ALUA Mode

Configuration on the Storage System

T Series V100R005/Dorado2100/Dorado5100/Dorado2100 G2

Use the Huawei OceanStor storage management system to enable ALUA for all the host initiators, as shown in Figure 10-14.

Figure 10-14 Enabling ALUA for T series V100R005/Dorado2100/Dorado5100/Dorado2100 G2

T Series V200R002/18000 Series/V3 Series/18000 V3 Series

Use the Huawei OceanStor storage management system to enable ALUA for all the host initiators, as shown in Figure 10-15.

Figure: Enabling ALUA for T series V200R002/18000 series/V3 series/18000 V3 series

If there are more than two controllers, ALUA is disabled by default and the ALUA status cannot be changed.

Configuration on ESXi Hosts

The same as the earlier section "Configuration on ESXi Hosts."

Configuring the AA Mode

The same as the earlier section "Configuring the AA Mode."

Manually Modifying Path Rules

The same as the earlier section "Manually Modifying Path Rules."

10.5 Querying and Modifying the Path Selection Policy

This section describes how to use commands to query and modify the path policy.

Querying the Path Policy of a Single LUN

ESX/ESXi 4.0

The following is an example command for querying a path policy:

[root@e4 ~]# esxcli nmp device list -d naa f
naa f
Device Display Name: HUASY iSCSI Disk (naa f)
Storage Array Type: VMW_SATP_DEFAULT_AA

Storage Array Type Device Config:
Path Selection Policy: VMW_PSP_FIXED
Path Selection Policy Device Config: {preferred=vmhba33:C0:T1:L0;current=vmhba33:C0:T0:L0}
Working Paths: vmhba33:C0:T0:L0
[root@e4 ~]#

ESX/ESXi 4.1

The following is an example command for querying a path policy:

[root@localhost ~]# esxcli corestorage device list -d naa.60022a b2a9d a
Display Name: HUASY Fibre Channel Disk (naa.60022a b2a9d a)
Size:
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/naa.60022a b2a9d a
Vendor: HUASY
Model: S5600T
Revision: 2105
SCSI Level: 4
Is Pseudo: false
Status: on
Is RDM Capable: true
Is Local: false
Is Removable: false
Attached Filters:
VAAI Status: unknown
Other UIDs: vml a b2a9d a

[root@localhost ~]# esxcli nmp device list -d naa.60022a b2a9d a
Device Display Name: HUASY Fibre Channel Disk (naa.60022a b2a9d a)
Storage Array Type: VMW_SATP_ALUA
Storage Array Type Device Config: {implicit_support=on; explicit_support=on; explicit_allow=on; alua_followover=on; {TPG_id=2,TPG_state=AO}}
Path Selection Policy: VMW_PSP_FIXED
Path Selection Policy Device Config: Current Path=vmhba1:C0:T0:L1
Working Paths: vmhba1:C0:T0:L1

esxcli corestorage device list is used to display existing disks. esxcli nmp device list is used to display disk paths.

ESXi 5.0 and later

The following is an example command for querying a path policy:

~ # esxcli storage nmp device list -d naa b85d

Device Display Name: HUASY iSCSI Disk (naa b85d)
Storage Array Type: VMW_SATP_ALUA
Storage Array Type Device Config: {implicit_support=on; explicit_support=on; explicit_allow=on; alua_followover=on; {TPG_id=1,TPG_state=AO}{TPG_id=2,TPG_state=ANO}}
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: Current Path=vmhba39:C0:T0:L2
Path Selection Policy Device Custom Config:
Working Paths: vmhba39:C0:T0:L2

Modifying the Path Policy for a Single LUN

Before modifying the path policy of a LUN, you can use the commands in the previous section to query its current policy. Run the following command to modify the path policy (replace the device ID and PSP name as required):

VMware ESXi/ESX 4.1
# esxcli nmp device setpolicy -d naa d00cff95e65664ee011 --psp=VMW_PSP_FIXED

VMware ESXi 5.0 and later
# esxcli storage nmp device set -d naa d00cff95e65664ee011 --psp=VMW_PSP_FIXED

Run the following command to check the modification result:

VMware ESXi/ESX 4.1
# esxcli nmp device list -d naa d00cff95e65664ee011 | grep PSP

VMware ESXi 5.0 and later
# esxcli storage nmp device list -d naa d00cff95e65664ee011 | grep PSP
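In addition to per-LUN changes, ESXi can change the default PSP that a given SATP assigns to the devices it claims. The following command is a sketch for ESXi 5.0 and later; it is not part of the original procedure, the SATP and PSP names are examples, and the new default applies to devices claimed after the command is run (for example, after a reboot or reclaim):

# esxcli storage nmp satp set -s VMW_SATP_ALUA -P VMW_PSP_RR (Sets VMW_PSP_RR as the default PSP for devices claimed by VMW_SATP_ALUA.)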

11 FAQs

11.1 VMware APD and PDL

For VMware APD (All-Paths-Down) and PDL (Permanent Device Loss), refer to the corresponding article in the VMware Knowledge Base.

11.2 How Can I Select a Fixed Preferred Path for a Storage Device with Active-Active Controllers?

Determine which path is preferred based on the performance optimization and load balancing principles:
Select a path connected to the working controller of the LUN.
If there are multiple paths connected to the working controller of the LUN, distribute the preferred paths of the LUNs evenly across those paths.

11.3 How Can I Determine Which Controller a Path Is Connected to?

A path can be located using the initiator (host port) and the target (storage device port). Figure 11-1 illustrates how to obtain the path information about a storage device.
For a Fibre Channel storage device, identify the path by the initiator name's former part (for example, vmhba4:C0) and the target name. Then, based on the target name's latter part (for example, 20:08:f8:4a:bf:57:af:b7), determine the storage device's Fibre Channel port with the matching WWPN. Combining this information tells you which controller the path connects to.
For an iSCSI storage device, determine the Ethernet port's IPv4 address from the target name's latter part (the IP address followed by :3260). Then you can determine which controller the path connects to.
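The same initiator and target information can also be read on the ESXi command line. The following is an illustrative example for ESXi 5.0 and later; it is not from the original document, and the device ID is a placeholder:

~ # esxcli storage core path list -d naa.xxxx (Lists every path of the device. For each path, Runtime Name shows the initiator part, for example vmhba4:C0:T1:L0, and for Fibre Channel the Target Identifier contains the WWPN of the storage port.)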

Figure 11-1 Port information about a path

11.4 Differences Between iSCSI Multi-path Networks with Single and Multiple HBAs

This section describes the differences between iSCSI multi-path networks with single and multiple HBAs.

iSCSI Multi-Path Network with a Single HBA

A blade server generally has only one HBA apart from the one used for management. For example, an IBM HS22 with eight network ports can provide only one HBA during VMkernel creation. In this case, you can bind two VMkernel ports to that HBA. Practical experience has proven this configuration applicable. Figure 11-2 shows this configuration.

Figure 11-2 iSCSI network with a single HBA

iSCSI Multi-path Network with Multiple HBAs

If two or more HBAs are available, you can bind VMkernel ports to different HBAs to set up a cross-connection network.
Figure 11-3 shows a parallel network where two VMkernel ports are bound to two HBA network ports, both connected to controller A.

Figure 11-3 Network A with multiple HBAs

Figure 11-4 shows the port mapping.

Figure 11-4 Port mapping of network A with multiple HBAs
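The VMkernel binding described in this section is performed with esxcli. The following commands are an illustrative sketch for ESXi 5.x; they are not part of the original document, and vmhba33, vmk1, and vmk2 are placeholder names:

~ # esxcli iscsi networkportal add -A vmhba33 -n vmk1 (Binds VMkernel port vmk1 to the iSCSI adapter.)
~ # esxcli iscsi networkportal add -A vmhba33 -n vmk2 (Binds the second VMkernel port to enable multipathing.)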

Figure 11-5 shows a cross-connection network where two VMkernel ports are bound to two HBAs, one connected to controller A and the other to controller B.

Figure 11-5 Network B with multiple HBAs

In this configuration, both NIC 1 and NIC 2 are connected to controller A and controller B, forming a cross-connection network. Services are not interrupted when any single path fails. Figure 11-6 shows the port mapping.

Figure 11-6 Port mapping of network B with multiple HBAs
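After the bindings are in place, you can confirm them from the host. The following check is an illustrative example for ESXi 5.x; vmhba33 is a placeholder:

~ # esxcli iscsi networkportal list -A vmhba33 (Lists the VMkernel ports bound to the adapter, including their NICs and IPv4 addresses.)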

12 Common Commands

This chapter describes common commands for VMware.

Viewing the Version

Run the following commands to view the VMware version:

~ # vmware -l
VMware ESXi GA
~ # vmware -v
VMware ESXi build
~ #

Viewing Hardware Information

Run the following commands to view hardware information, including the ESX hardware and kernel:

esxcfg-info -a (Displays all related information.)
esxcfg-info -w (Displays ESX hardware information.)

Obtaining Help Documentation

Command syntax varies with host system versions. Perform the following steps to obtain help documentation for different versions of host systems.

Step 1 Log in to the VMware official website.
Step 2 Select a VMware version.
Select the latest version of VMware and click vSphere Command-Line Interface Reference, as shown in Figure 12-1.

Figure 12-1 Selecting a VMware version

After that, you are navigated to the help page of the selected VMware version.

----End
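On ESXi 5.0 and later, esxcli provides an equivalent way to query the version shown by the vmware command above. This is a supplementary example, not part of the original command list:

~ # esxcli system version get (Displays the product name, version, and build number of the ESXi host.)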
