
FUJITSU Storage ETERNUS DX, ETERNUS AF Configuration Guide -Server Connection- (Fibre Channel) for VMware ESX


Preface

This manual briefly explains the operations that need to be performed by the user in order to connect an ETERNUS DX/AF to a server running VMware ESX via a Fibre Channel interface. This manual should be used in conjunction with any other applicable user manuals, such as those for the ETERNUS DX/AF, server, OS, Fibre Channel cards, and drivers. Refer to "Configuration Guide -Server Connection- Notations" for the notations used in this manual, such as product trademarks and product names. For storage systems that are supported by the OS, refer to the Server Support Matrix of the ETERNUS DX/AF.

28th Edition, April 2018

The Contents and Structure of this Manual

This manual is composed of the following 12 chapters.

"Chapter 1 Workflow" (page 6)
This chapter describes how to connect the ETERNUS DX/AF storage systems to a server running VMware ESX.

"Chapter 2 Checking the Server Environment" (page 9)
This chapter describes which servers can be connected to ETERNUS DX/AF storage systems.

"Chapter 3 Notes" (page 11)
This chapter describes issues that should be noted when connecting the ETERNUS DX/AF storage systems and the server.

"Chapter 4 Installing and Setting Up ETERNUSmgr" (page 16)
This chapter describes how to install ETERNUSmgr.

"Chapter 5 Setting Up the ETERNUS DX/AF" (page 17)
This chapter describes how to set up an ETERNUS DX/AF.

"Chapter 6 Setting Up the Fibre Channel Switches" (page 18)
This chapter describes how to set up the Fibre Channel switches.

"Chapter 7 Setting Up the VMware ESX Server" (page 20)
This chapter describes how to set up the VMware ESX server.

"Chapter 8 For VMware vSphere" (page 21)
This chapter describes how to set up for use with VMware vSphere.

"Chapter 9 Virtual Machine" (page 51)
This chapter describes how to operate the Virtual Machine.

"Chapter 10 Dynamic Recognition of LUNs Added While VMware ESX is in Use" (page 53)
This chapter describes dynamic recognition of LUNs added while VMware ESX is in use.

"Chapter 11 Storage Migration" (page 54)
This chapter describes the procedures that are related to Storage Migration.

"Chapter 12 Non-disruptive Storage Migration" (page 59)
This chapter describes the procedures that are related to Non-disruptive Storage Migration.

Table of Contents

Chapter 1 Workflow 6

Chapter 2 Checking the Server Environment 9
2.1 Hardware 9
2.2 Fibre Channel Cards 9
2.3 Connection Compatibility of ETERNUS DX/AF Storage Systems to VMware ESX 10
2.4 Virtual Machine 10

Chapter 3 Notes 11
3.1 Connection Notes 11
3.2 VMware ESX Operating Notes 12
3.3 ETERNUS DX/AF Setup Notes 12
3.4 SAN Boot Notes 13
3.5 ETERNUS DX/AF to VMware VCB Proxy Server (on Windows) Connection Notes 13
3.6 Server Startup and Power Supply Control Notes 13
3.7 Notes on WWN Instance Management Table for the Server 13
3.8 System Design Sheet Notes 14
3.9 VMware Multipath Plug-in 14
3.10 Notes on Installing the Storage Cluster Function 14
3.11 VMware vSphere Virtual Volumes (VVOL) Notes 14

Chapter 4 Installing and Setting Up ETERNUSmgr 16

Chapter 5 Setting Up the ETERNUS DX/AF 17

Chapter 6 Setting Up the Fibre Channel Switches 18

Chapter 7 Setting Up the VMware ESX Server 20

Chapter 8 For VMware vSphere 21
8.1 Checking the LUNs 21

8.1.1 Turning On the Devices 21
8.1.2 Checking the Connected Devices 21
8.1.3 Enabling ALUA 22
8.1.4 Checking the LUNs 24
8.2 Checking the Active Paths 27
8.2.1 Checking the Active Path for a Single-Path Configuration 27
8.2.2 Checking the Active Path for a Multi-Path Configuration 31
8.3 Setting Up the Volumes 40
8.3.1 For VMware vSphere 6.0 or Later 40
8.3.2 For VMware vSphere 5.5 or Earlier 42
8.4 Checking the Maximum Command Queue Depth of the Fibre Channel Card 45
8.5 Changing the Maximum Outstanding Disk Requests for Virtual Machines 46
8.5.1 For VMware vSphere 6.0 or Later 46
8.5.2 For VMware vSphere 5.5 47
8.5.3 For VMware vSphere 5.1 or Earlier 49
8.6 Plug-in Settings 50
8.7 Changing the Multipath Operation 50

Chapter 9 Virtual Machine 51
9.1 For Windows 51
9.1.1 Checking the Registry Information 51
9.1.2 Notes for Configuring Windows Server Failover Clustering 52
9.2 For Linux 52
9.2.1 Applying Virtual Machine Updates 52

Chapter 10 Dynamic Recognition of LUNs Added While VMware ESX is in Use 53

Chapter 11 Storage Migration 54

Chapter 12 Non-disruptive Storage Migration 59

Chapter 1 Workflow

This chapter describes how to connect the ETERNUS DX/AF storage systems to a server running VMware ESX. This manual explains how to perform various operations not referenced in other documents.

Required Documents

- "Server Support Matrix"
- "Server Support Matrix for FC-SWITCH"
- "Configuration Guide -Server Connection- Storage System Settings" that corresponds to the ETERNUS DX/AF to be connected
- "Configuration Guide -Server Connection- (Fibre Channel) Fibre Channel Switch Settings"
- "Configuration Guide -Server Connection- (Fibre Channel) for VMware ESX Driver Settings for Fujitsu Fibre Channel Cards"
- "Configuration Guide -Server Connection- (Fibre Channel) for VMware ESX Driver Settings for Non-Fujitsu Fibre Channel Cards"
- "ETERNUS Web GUI User's Guide"
- "ETERNUSmgr Install Guide"
- "ETERNUSmgr User Guide"
- Manuals supplied with the Fibre Channel card and software

Workflow

Installing ETERNUSmgr and Setting Up the ETERNUS DX/AF

If ETERNUSmgr is to be used, install it and set up the ETERNUS DX/AF.
"Chapter 4 Installing and Setting Up ETERNUSmgr" (page 16)
"Chapter 5 Setting Up the ETERNUS DX/AF" (page 17)

- Checking the setup and maintenance operations: "ETERNUS Web GUI User's Guide"
- Installing ETERNUSmgr: "ETERNUSmgr Install Guide"
- Checking the ETERNUSmgr operational procedures: "ETERNUSmgr User Guide"
- Setting up the ETERNUS DX/AF: "Configuration Guide -Server Connection- Storage System Settings" that corresponds to the ETERNUS DX/AF to be connected

Setting Up the Fibre Channel Switches

If a Fibre Channel switch is to be used, set it up now.
"Chapter 6 Setting Up the Fibre Channel Switches" (page 18)

- Setting up the Fibre Channel switches: "Configuration Guide -Server Connection- (Fibre Channel) Fibre Channel Switch Settings"
- Checking the Fibre Channel switch connection requirements: "Server Support Matrix for FC-SWITCH"

Setting Up the VMware ESX Server

Install the Fibre Channel card in the server, then identify and record the Fibre Channel card details. For a SAN Boot configuration, also perform the SAN Boot settings.
"Chapter 7 Setting Up the VMware ESX Server" (page 20)

- Setting up the VMware ESX server, SAN Boot, and Fibre Channel card BIOS:
  "Configuration Guide -Server Connection- (Fibre Channel) for VMware ESX Driver Settings for Fujitsu Fibre Channel Cards"
  "Configuration Guide -Server Connection- (Fibre Channel) for VMware ESX Driver Settings for Non-Fujitsu Fibre Channel Cards"
- Checking the Fibre Channel card driver versions: "Server Support Matrix"

Checking the LUNs

Confirm the VMware ESX LUNs (logical units).
For VMware vSphere: "8.1 Checking the LUNs" (page 21)

Checking the Active Paths

Confirm the VMware ESX path(s).
For VMware vSphere: "8.2 Checking the Active Paths" (page 27)

Creating the Volumes

Create the VMware ESX volumes.
For VMware vSphere: "8.3 Setting Up the Volumes" (page 40)
- For VMware vSphere 6.0 or later
- For VMware vSphere 5.5 or earlier

Checking the Maximum Command Queue Depth of the Fibre Channel Card

Check the maximum command queue depth of the Fibre Channel card.
"8.4 Checking the Maximum Command Queue Depth of the Fibre Channel Card" (page 45)

Changing the Maximum Outstanding Disk Requests for Virtual Machines

Change the Maximum Outstanding Disk Requests for virtual machines.
"8.5 Changing the Maximum Outstanding Disk Requests for Virtual Machines" (page 46)

Setting Up the Plug-in (when a VMware multipath plug-in is used)

Set up the VMware multipath plug-in.
For VMware vSphere: "8.6 Plug-in Settings" (page 50)

Starting the Virtual Machine

Notes on Virtual Machine operation and related settings are given in this manual and other documents.
"Chapter 9 Virtual Machine" (page 51)

Chapter 2 Checking the Server Environment

Connection to servers is possible in the following environments. Check the "Server Support Matrix" for server environment conditions.

2.1 Hardware

VMware ESX connection may require the use of Fibre Channel switches, depending on the version of VMware ESX used. Fibre Channel switches that can be connected to the ETERNUS DX/AF vary depending on the connection environment (OS or the ETERNUS DX/AF). Refer to "Server Support Matrix for FC-SWITCH" to check the available Fibre Channel switches in advance.

Access the following documents or use the "Server Support Matrix" to determine which models are compatible with which servers.
- Search the VMware Compatibility Guide
  https://www.vmware.com/resources/compatibility/search.php

ETERNUS DX/AF storage systems support SAN Boot. For SAN Boot details, refer to "3.4 SAN Boot Notes" (page 13). VMware ESX may be SAN Booted if a SAN Boot capable Fibre Channel card is used. For more details of VMware ESX SAN Booting, refer to the following website and documents:
- Reference URL: https://www.vmware.com/support/pubs/
- Reference documents:
  "VMware vSphere Documentation: vSphere Storage Guide"
  "VMware vSphere 4 Documentation: Fibre Channel SAN Configuration Guide"

2.2 Fibre Channel Cards

Refer to the "Server Support Matrix".

2.3 Connection Compatibility of ETERNUS DX/AF Storage Systems to VMware ESX

Access the following website to check which ETERNUS DX/AF models may be connected to VMware ESX:
- Search the VMware Compatibility Guide
  https://www.vmware.com/resources/compatibility/search.php

2.4 Virtual Machine

Virtual Machine is a virtual machine created on the VMware ESX server. Details of how to install an OS on the VMware ESX Virtual Machine may be checked via the following URL:
http://partnerweb.vmware.com/gosig/home.html

Chapter 3 Notes

Note the following issues when connecting the ETERNUS DX/AF to a server.

3.1 Connection Notes

To ensure reliable access to the ETERNUS DX/AF storage systems, the following methods are recommended:
- Connection via multiple paths
- Balancing of active paths
- VMware multipath plug-in

[Figure: Example configuration with optimized load balancing of the LUNs. A VMware ESX host running the VMware multipath plug-in is connected via HBA#1 and HBA#2, through the SN200#1 and SN200#2 Fibre Channel switches, to CM#0 CA#0 Port#0 and CM#1 CA#0 Port#0 on the storage system; active path#1 serves LUN0 and active path#2 serves LUN1.]

Setting up multiple access paths between the server and the ETERNUS DX/AF creates a multipath configuration. A multipath configuration provides improved access redundancy. For VMware multipath plug-in details, refer to "3.9 VMware Multipath Plug-in" (page 14).

The recommended "Path Selection Policy" setting is as follows:
For VMware vSphere: "Round Robin (VMware)"
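As a supplementary sketch that is not part of this manual: on ESXi 5.x/6.x hosts, the recommended policy can also be applied per device from the ESXi shell, where the naa.* device name below is a placeholder for an actual ETERNUS DX/AF LUN:

# esxcli storage nmp device set -d naa.600000e00d0000000000000100040000 -P VMW_PSP_RR

Repeat the command for each ETERNUS DX/AF LUN; the change takes effect immediately for the specified device.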

Note, however, that when Windows Server Failover Clustering is used in virtual machines and VMware vSphere 5.1 Update 2 or earlier is used, set the policy to "Most Recently Used (MRU)". For more details, refer to KB1010041 on the VMware website.

It is recommended that system design sheets be drawn up. When deciding the access paths for the multipath function, first design a load-balanced system, and then create the corresponding system design sheets.

Even if settings and connections are made according to the "Configuration Guide -Server Connection-", the LUNs may still fail to be recognized. In this case, set the transfer speed for the FC switch port to a fixed value based on the maximum transfer speed of the server's Fibre Channel card or the maximum transfer speed of the storage system CA port.

3.2 VMware ESX Operating Notes

Refer to the following website for the number of LUNs that VMware ESX can recognize.
https://www.vmware.com/support/pubs/

When Windows is used on the Virtual Machine, the registry will need to be modified after Windows is installed. For details, refer to "Chapter 9 Virtual Machine" (page 51).

The VMware ESX multipath function supports path failover, meaning that server access can continue unaffected by any problems that might arise in the Fibre Channel cables or switches. Note, however, that path failover can fail to occur when multiple SCSI sense codes are repeatedly and simultaneously received.

The ETERNUS Multipath Driver and GR Multipath Driver do not need to be installed on a VMware ESX Virtual Machine. The multipath function of VMware ESX provides path redundancy.

3.3 ETERNUS DX/AF Setup Notes

When connecting a VMware ESX server to the ETERNUS DX/AF, check that the firmware version is as specified in the "Server Support Matrix".

When LUN groups (Affinity Groups) are configured for each of the physical servers in a configuration where the physical servers share LUNs (such as in a vMotion configuration), the LUN group (Affinity Group) mapping should ensure that each shared LUN is assigned the same LUN number across every physical server.

When connecting to VMware ESX with multiple paths that share a single LUN, the Reset Group setting may need to be changed from the default value for some ETERNUS DX/AF models. Follow the instructions that are provided in the "Configuration Guide -Server Connection- Storage System Settings" that corresponds to the ETERNUS DX/AF to be connected.

When adding LUNs while VMware ESX is in use (recognizing LUNs dynamically), refer to "Chapter 10 Dynamic Recognition of LUNs Added While VMware ESX is in Use" (page 53).

3.4 SAN Boot Notes

All ETERNUS DX/AF storage systems support SAN Boot. VMware ESX refers to SAN Boot as "Boot from SAN", which means booting the VMware ESX server from a Storage Area Network (SAN). Only set SAN Boot for supported Fibre Channel cards. For the Fibre Channel card settings, refer to the "Configuration Guide -Server Connection- (Fibre Channel) for VMware ESX" for the Fibre Channel cards to be used.

3.5 ETERNUS DX/AF to VMware VCB Proxy Server (on Windows) Connection Notes

When connecting a Windows-based VMware VCB (VMware Consolidated Backup) Proxy server to an ETERNUS DX60 S2, the VMware ESX shares the LUNs, and the ETERNUS DX60 S2 settings must be adjusted. For details of the settings, refer to the "Configuration Guide -Server Connection- Storage System Settings" that corresponds to the ETERNUS DX/AF to be connected.

3.6 Server Startup and Power Supply Control Notes

Before turning the server on, check that the ETERNUS DX/AF storage systems and Fibre Channel switches are all "Ready". If the server is turned on while they are not "Ready", the server will not be able to recognize the ETERNUS DX/AF storage systems. Also, when the ETERNUS DX/AF power supply is controlled by a connected server, make sure that the ETERNUS DX/AF does not shut down before the connected servers. Similarly, the Fibre Channel switches must be turned off only after the connected servers have been shut down. If they are turned off too early, data writes from the running server cannot be saved to the ETERNUS DX/AF storage systems, and already saved data may also be affected.

3.7 Notes on WWN Instance Management Table for the Server

The WWN instance management table for the server is a worksheet that helps simplify the process of installing an ETERNUS DX/AF. It is important that the system details be recorded after first installing the system and also each time the system is subsequently modified, expanded, or has maintenance work performed on it. Creating an instance management table makes installation and maintenance of the system easy. Use the template instance management tables provided in "Appendix Various Management Tables (Template)" of the "Configuration Guide -Server Connection- (Fibre Channel) for Windows Driver Settings" for the Fibre Channel card being used.

3.8 System Design Sheet Notes

The system design sheet is a spreadsheet-program worksheet that is used to simplify the process of installing the ETERNUS DX/AF. It is important that the system details be recorded after first installing the system and also each time the system is subsequently modified, expanded, or has maintenance work performed on it. Creating a system design sheet makes installation and maintenance of the system easy.

3.9 VMware Multipath Plug-in

For improved system reliability, installing the VMware multipath plug-in for the ETERNUS DX/AF is recommended. The VMware multipath plug-in for the ETERNUS DX/AF can prevent access from being stopped when a path fails by optimizing the VMware multipath path-failover function for the ETERNUS DX/AF. For setting details, refer to "8.6 Plug-in Settings" (page 50). For details on how to obtain the VMware Multi-Pathing plug-in for ETERNUS, contact your sales representative.

3.10 Notes on Installing the Storage Cluster Function

To install the Storage Cluster function, rebooting the server is necessary after the TFO group is set up.

3.11 VMware vSphere Virtual Volumes (VVOL) Notes

In a VVOL environment, RHEL 6.4 may become unable to mount its swap space. As a result, the swap space mounting process during the virtual machine startup stalls and prevents RHEL 6.4 from booting.

Event Cause

This problem always occurs when all of the following conditions are met.
- The ETERNUS DX S4/S3 series or the ETERNUS AF series is used with firmware version V10L70 or later and earlier than V10L80
- The connected server is running VMware vSphere 6.5 and is configured with a VVOL environment
- The virtual machine hardware version is 13
- The OS of the virtual machine is RHEL 6.4
- The swap space is created in a physical volume (*1) using the RHEL 6.4 side settings
  Example: /dev/sda1 → /boot, /dev/sda2 → /swap

*1: The swap space is created in the LVM volume during a default installation, so this problem does not occur.
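As a quick check (a sketch, not part of the original conditions list; the device names are examples), whether a RHEL 6.4 guest's swap space resides on a physical partition rather than an LVM logical volume can be confirmed inside the guest with:

# swapon -s
Filename        Type        Size     Used  Priority
/dev/sda2       partition   2097148  0     -1

A Filename such as /dev/sda2 indicates swap on a physical partition (the affected case); a /dev/mapper/* device indicates swap in an LVM volume (the unaffected default).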

Coping Method

If a different firmware version can be applied to the ETERNUS DX S4/S3 series or the ETERNUS AF series:
- Apply firmware version V10L80 or later.

If a different firmware version cannot be applied to the ETERNUS DX S4/S3 series or the ETERNUS AF series:
The system administrator must perform one of the following operations and then restart the virtual machine.
- Change the virtual machine hardware version to a version other than 13
- Update RHEL 6.4
- Create a swap space in the LVM logical volume
- Change the virtual machine environment from a VVOL environment to a VMFS environment

Chapter 4 Installing and Setting Up ETERNUSmgr

If ETERNUSmgr is to be used, install it according to the directions given in the "ETERNUSmgr Install Guide". After installation is complete, follow the instructions in the "ETERNUSmgr User Guide" and set up ETERNUSmgr.

Chapter 5 Setting Up the ETERNUS DX/AF

Set up the ETERNUS DX/AF storage systems using ETERNUS Web GUI or ETERNUSmgr. ETERNUS DX/AF setup can be performed independently of server setup. For details on how to perform these settings, refer to the following manuals.
- "Configuration Guide -Server Connection- Storage System Settings" that corresponds to the ETERNUS DX/AF to be connected
- "ETERNUS Web GUI User's Guide" or "ETERNUSmgr User Guide"

Chapter 6 Setting Up the Fibre Channel Switches

Perform the settings required to connect the ETERNUS DX/AF storage systems and server via the Fibre Channel switch, according to the "Configuration Guide -Server Connection- (Fibre Channel) Fibre Channel Switch Settings". If the access path is set with ETERNUS SF Storage Cruiser, the Host Response settings are set to the default values. If the Host Response settings have been changed from the default values, set the Host Response again.

The following examples show configurations in which a server is connected to a Fibre Channel switch with zoning. The first example shows a configuration in which multiple servers are connected to multiple CAs.

[Figure: Multiple servers connected to multiple CAs. Server#1 and Server#2 (Port0/Port1 each) connect through zones ZONE1-ZONE4 on two cascaded FC switches (Port0-Port7 each) to CM0 and CM1 of RAID#1, an ETERNUS DX/AF.]

The following example shows a configuration in which a single server is connected to multiple CAs.

[Figure: A single server connected to multiple CAs. The server (Port0/Port1) connects through zones ZONE1-ZONE4 on two cascaded FC switches (Port0-Port7 each) to CM0 and CM1 of an ETERNUS DX/AF.]
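The zoning itself is defined on the switch. As an illustrative sketch only (not taken from this manual; the actual commands depend on the switch model and firmware, so refer to the Fibre Channel switch documentation and the guide cited above), WWN zoning corresponding to ZONE1 on a Brocade-based SN200 might look like the following, with placeholder WWNs standing for the server HBA port and the ETERNUS DX/AF CA port:

zonecreate "ZONE1", "21:00:00:24:ff:xx:xx:xx; 50:00:00:e0:d0:xx:xx:xx"
cfgcreate "CFG1", "ZONE1"
cfgenable "CFG1"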

Chapter 7 Setting Up the VMware ESX Server

Set up the VMware ESX server as detailed in the "Configuration Guide -Server Connection- (Fibre Channel) for VMware ESX" for the Fibre Channel card being used.
- "Configuration Guide -Server Connection- (Fibre Channel) for VMware ESX Driver Settings for Fujitsu Fibre Channel Cards"
- "Configuration Guide -Server Connection- (Fibre Channel) for VMware ESX Driver Settings for Non-Fujitsu Fibre Channel Cards"

Chapter 8 For VMware vSphere

8.1 Checking the LUNs

This section describes how to check LUNs.

8.1.1 Turning On the Devices

To turn on the connected devices, use the following procedure:
1. Turn on the Fibre Channel switch power (if used).
2. Check that the Ready LED (or equivalent) is lit on the Fibre Channel switch.
3. Turn on the ETERNUS DX/AF.
4. Check that the Ready LED is lit on the ETERNUS DX/AF.
5. Turn on the server.

8.1.2 Checking the Connected Devices

Check the state of the connection between the server and the ETERNUS DX/AF storage systems. Use the vSphere Web Client or the vSphere Client to check that the LUNs of the ETERNUS DX/AF storage systems are recognized by the server. For details on how to check the connected devices, refer to the Installation and Setup Guide at the following URL:
Reference URL: https://www.vmware.com/support/pubs/

8.1.3 Enabling ALUA

This procedure is only required when VMware vSphere contains VMware ESX 4.0/VMware ESXi 4.0. It does not need to be performed for VMware ESX 4.0 Update 1/VMware ESXi 4.0 Update 1 or later versions; in that case, proceed to "8.1.4 Checking the LUNs" (page 24).

After the VMware ESX server has been installed, Asymmetric Logical Unit Access (ALUA) should be enabled according to the following procedure.

1. Use the terminal to log in to VMware ESX as "root".

2. Execute the following command in the terminal:

When connecting to an ETERNUS DX60 S2/DX80 S2/DX90 S2:
esxcli nmp satp addrule --satp="VMW_SATP_ALUA" --vendor="FUJITSU" --model="ETERNUS_DXL" --description="ETERNUS DXL with ALUA" --claim-option tpgs_on

When connecting to an ETERNUS DX400 S2 series:
esxcli nmp satp addrule --satp="VMW_SATP_ALUA" --vendor="FUJITSU" --model="ETERNUS_DX400" --description="ETERNUS DX400 with ALUA" --claim-option tpgs_on

When connecting to an ETERNUS DX8000 S2 series:
esxcli nmp satp addrule --satp="VMW_SATP_ALUA" --vendor="FUJITSU" --model="ETERNUS_DX8000" --description="ETERNUS DX8000 with ALUA" --claim-option tpgs_on

3. Execute the following command in the terminal:

esxcli corestorage claiming unclaim --type location

Ignore the following error message if it appears when the preceding command is executed:
Errors: Unable to perform unclaim. Error message was : Unable to unclaim paths. Busy or in use devices detected. See VMkernel logs for more information.

4. Execute the following command in the terminal:

esxcli corestorage claimrule run

5. Check that the connected ETERNUS DX/AF is shown when the following command is executed in the terminal:

esxcli nmp satp listrules --satp VMW_SATP_ALUA

When connecting to an ETERNUS DX60 S2/DX80 S2/DX90 S2:
VMW_SATP_ALUA  FUJITSU  ETERNUS_DXL  tpgs_on  ETERNUS DXL with ALUA

When connecting to an ETERNUS DX400 S2 series:
VMW_SATP_ALUA  FUJITSU  ETERNUS_DX400  tpgs_on  ETERNUS DX400 with ALUA

When connecting to an ETERNUS DX8000 S2 series:
VMW_SATP_ALUA  FUJITSU  ETERNUS_DX8000  tpgs_on  ETERNUS DX8000 with ALUA

6. Log out of the terminal, and restart the VMware ESX server.

Refer to the following for details of the above commands:
Reference URL: https://www.vmware.com/support/pubs/
Reference documents: "vSphere Command-Line Interface Concepts and Examples"

8.1.4 Checking the LUNs

The following procedure describes how to check LUN recognition using the vSphere Web Client or the vSphere Client.

8.1.4.1 For VMware vSphere 6.0 or Later

Log in to VMware ESX from the vSphere Web Client, and then check whether the ETERNUS DX/AF LUNs are recognized. The procedure is as follows:

1. Log in to the vSphere Web Client and select, in order, [Hosts and Clusters] > target host.

2. Select, in order, the [Manage] tab > [Storage] > [Storage Adapters].

3. Select [Rescan...]. After selecting [Rescan...], VMware ESX attempts to recognize the ETERNUS DX/AF storage systems' LUNs again.

4. Select the virtual HBA (vmhba3 in this example) for the Fibre Channel card from the [Storage Adapters] area. The recognized devices are shown in the [Adapter Details] area.

5. In [Storage Devices], check [Path Selection Policy] for all the LUNs in the ETERNUS DX/AF. When the VMware ESX server recognizes the LUNs, the active path for each LUN is automatically set. Even though [Path Selection Policy] is initially set to [Most Recently Used (VMware)], changing the [Path Selection Policy] for all the LUNs to [Round Robin (VMware)] is recommended.

The vSphere command line can also be used to change the [Path Selection Policy] settings. For more details, refer to KB2000552 on the VMware website. If a modification method for the relevant version is not described, modify the setting with the ESXi 5.x method.

8.1.4.2 For VMware vSphere 5.5 or Earlier

Log in to VMware ESX from the vSphere Client, and then check whether the ETERNUS DX/AF LUNs are recognized. The procedure is as follows:

1. Use the vSphere Client to log in to VMware ESX as "root".

2. Select the [Configuration] tab.

3. Select [Storage Adapters] from the [Hardware] list.

4. Select [Rescan...].

After selecting [Rescan...], VMware ESX attempts to recognize the ETERNUS DX/AF storage systems' LUNs again.

5. Select the virtual HBA (vmhba3 in this example) for the Fibre Channel card from the [Storage Adapters] area. The recognized devices are shown in the [Details] area.

6. Connect to the VMware ESX server via the vSphere Client and check [Path Selection] for all the LUNs in the ETERNUS DX/AF. When the VMware ESX server recognizes the LUNs, the active path for each LUN is automatically set. Even though [Path Selection] is initially set to [Most Recently Used (VMware)], changing the [Path Selection] for all the LUNs to [Round Robin (VMware)] is recommended.

The vSphere command line can also be used to change the [Path Selection Policy] settings. For more details, refer to KB2000552 on the VMware website. If a modification method for the relevant version is not described, modify the setting with the ESXi 5.x method.

8.2 Checking the Active Paths

8.2.1 Checking the Active Path for a Single-Path Configuration

The following example configuration is used to describe how to check and set an active path with the vSphere Web Client or the vSphere Client.

[Figure: Single-path example configuration. VMware ESX with HBA#1 connects through SN200#1 (Fibre Channel switch) to CM#0 CA#0 Port#0, which is the active path; CM#1 CA#0 Port#0 is also present. The storage system holds RAID Group#0: LUN0 and RAID Group#1: LUN1.]

The outline of checking and setting the active path is as follows:
1. Check the WWN of HBA#1.
2. Check the WWN of CM#0 CA#0 Port#0.
3. Check the connection of HBA#1 and CM#0 CA#0 Port#0 with the WWN.

The procedure is as follows:

8.2.1.1 For VMware vSphere 6.0 or Later

1. Log in to the vSphere Web Client and select, in order, [Hosts and Clusters] > target host.

2. Select, in order, the [Manage] tab > [Storage] > [Storage Adapters].

3. Select the virtual HBA (vmhba3 in this example) for the Fibre Channel card from the [Storage Adapters] area. The recognized devices are shown in the [Adapter Details] area.

4. Click the virtual device (FUJITSU Fibre Channel Disk [naa.600000e00d0000000000000100040000] in this example) in the [Adapter Details] area.

5. Check the WWNs of the ETERNUS DX/AF storage systems' CA ports that are connected with the Fibre Channel cards. The following connection would be displayed for the current configuration example:
HBA#1 to ETERNUS DX/AF CM#0 CA#0 Port#0
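The same mapping can also be read from the ESXi shell; a minimal sketch, reusing the example device name from the procedure above:

# esxcli storage core path list -d naa.600000e00d0000000000000100040000

Each path entry lists the adapter (vmhba) with its WWN, the target CA port WWN, and the path state, so the HBA-to-CA-port connection can be confirmed without the Web Client.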

8.2.1.2 For VMware vSphere 5.5 or Earlier

1. Use the vSphere Client to log in to VMware ESX as "root".

2. Select the [Configuration] tab.

3. Select [Storage Adapters] from the [Hardware] list.

4. Select the virtual HBA (vmhba3 in this example) for the Fibre Channel card from the [Storage Adapters] area. The recognized devices are shown in the [Details] area.

5. Select and right-click the virtual device (vmhba3:C0:T0:L21 in this example) in the [Details] area, then select [Manage Paths] from the pop-up menu.

6. Check the WWNs of the ETERNUS DX/AF storage systems' CA ports that are connected with the Fibre Channel cards. The following connection would be displayed for the current configuration example:
HBA#1 to ETERNUS DX/AF CM#0 CA#0 Port#0

8.2.2 Checking the Active Path for a Multi-Path Configuration

The active path for each LUN is automatically set. The following procedure describes how to check the active paths.

8.2.2.1 For VMware vSphere 6.0 or Later

1. Log in to the vSphere Web Client and select, in order, [Hosts and Clusters] > target host.

2. Select, in order, the [Manage] tab > [Storage] > [Storage Adapters].

3. Select the virtual HBA (vmhba3 in this example) for the Fibre Channel card from the [Storage Adapters] area. The recognized devices are shown in the [Adapter Details] area.

4. Click the virtual device (FUJITSU Fibre Channel Disk [naa.600000e00d0000000000000100040000] in this example) in the [Adapter Details] area.

5. Check the WWNs of the ETERNUS DX/AF storage systems' CA ports that are connected with the Fibre Channel cards. The following connections would be displayed for the current configuration example:

RAID Group#0:LUN0 has the following access paths:
- HBA#1 to ETERNUS DX/AF CM#0 CA#0 Port#0
- HBA#2 to ETERNUS DX/AF CM#1 CA#0 Port#0

RAID Group#1:LUN1 has the following access paths:
- HBA#1 to ETERNUS DX/AF CM#0 CA#0 Port#0
- HBA#2 to ETERNUS DX/AF CM#1 CA#0 Port#0

6. Check the active path for each virtual device.

When [Path Selection Policy] is [Most Recently Used (VMware)]:
In this example, it can be confirmed that the operating paths for RAID Group#0:LUN0 and RAID Group#1:LUN1 are as follows.

- RAID Group#0:LUN0
  ETERNUS DX/AF CM#0 CA#0 Port#0 to HBA#1 is the "Active (I/O)" path
  ETERNUS DX/AF CM#1 CA#0 Port#0 to HBA#2 is the "Active" path
  RAID Group#0:LUN0 paths can be checked in the corresponding path details window.

- RAID Group#1:LUN1
  ETERNUS DX/AF CM#0 CA#0 Port#0 to HBA#1 is the "Active" path

ETERNUS DX/AF CM#1 CA#0 Port#0 to HBA#2 is the "Active (I/O)" path
  RAID Group#1:LUN1 paths can be checked in the corresponding path details window.

When [Path Selection Policy] is [Round Robin (VMware)]:
In the active path confirmation window, the active paths for 4-path configuration LUNs can be confirmed as follows:
- ETERNUS DX/AF CM#0 CA#0 Port#0 to HBA#1 is the "Active (I/O)" path
- ETERNUS DX/AF CM#1 CA#0 Port#0 to HBA#2 is the "Active (I/O)" path
- ETERNUS DX/AF CM#0 CA#0 Port#0 to HBA#3 is the "Active (I/O)" path
- ETERNUS DX/AF CM#1 CA#0 Port#0 to HBA#4 is the "Active (I/O)" path
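As an alternative to the window-based check, the path states can also be listed from the ESXi shell; a sketch with a placeholder device name:

# esxcli storage nmp path list -d naa.600000e00d0000000000000100040000

For each path, the Group State field shows whether the path is active or standby from the storage system's ALUA perspective.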

8.2.2.2 For VMware vSphere 5.5 or Earlier

1. Use the vSphere Client to log in to VMware ESX as "root".

2. Select the [Configuration] tab.

3. Select [Storage Adapters] from the [Hardware] list.

4. Select the virtual HBA (vmhba3 in this example) for the Fibre Channel card from the [Storage Adapters] area. The recognized devices are shown in the [Details] area.

5. Right-click the virtual device in the [Details] area and select [Manage Paths]. In this example, the vmhba3:C0:T0 path status is shown.

6. Check the WWNs of the ETERNUS DX/AF storage systems' CA ports that are connected with the Fibre Channel cards. The following connections would be displayed for the current configuration example:

RAID Group#0:LUN0 has the following access paths:
- HBA#1 to ETERNUS DX/AF CM#0 CA#0 Port#0
- HBA#2 to ETERNUS DX/AF CM#1 CA#0 Port#0

RAID Group#1:LUN1 has the following access paths:
- HBA#1 to ETERNUS DX/AF CM#0 CA#0 Port#0
- HBA#2 to ETERNUS DX/AF CM#1 CA#0 Port#0

7. Check the active path for each virtual device.

When [Path Selection Policy] is [Most Recently Used (VMware)]:
In this example, it can be confirmed that the operating paths for RAID Group#0:LUN0 and RAID Group#1:LUN1 are as follows.

- RAID Group#0:LUN0
  ETERNUS DX/AF CM#0 CA#0 Port#0 to HBA#1 is the "Active (I/O)" path
  ETERNUS DX/AF CM#1 CA#0 Port#0 to HBA#2 is the "Active" path
  RAID Group#0:LUN0 paths can be checked in the corresponding path details window.

- RAID Group#1:LUN1
  ETERNUS DX/AF CM#0 CA#0 Port#0 to HBA#1 is the "Active" path
  ETERNUS DX/AF CM#1 CA#0 Port#0 to HBA#2 is the "Active (I/O)" path
  RAID Group#1:LUN1 paths can be checked in the corresponding path details window.

When [Path Selection Policy] is [Round Robin (VMware)]:
In the active path confirmation window, the active paths for 4-path configuration LUNs can be confirmed as follows:
- ETERNUS DX/AF CM#0 CA#0 Port#0 to HBA#1 is the "Active (I/O)" path
- ETERNUS DX/AF CM#1 CA#0 Port#0 to HBA#2 is the "Active (I/O)" path
- ETERNUS DX/AF CM#0 CA#0 Port#1 to HBA#3 is the "Active (I/O)" path
- ETERNUS DX/AF CM#1 CA#0 Port#1 to HBA#4 is the "Active (I/O)" path

8.3 Setting Up the Volumes

Start the VMware ESX. The following ETERNUS DX/AF connection example describes how to create a volume (VMFS).

8.3.1 For VMware vSphere 6.0 or Later

1. Log in to the vSphere Web Client.

2. Select [Home] > [Storage].

3. Right-click [Datacenter] and select, in order, [Storage] > [New Datastore...]. A configuration pop-up window is displayed.
(1) Click the [Next] button.
(2) Select [VMFS] and click the [Next] button.
(3) Enter a datastore name in the [Datastore name] field, select the target LUN, and click the [Next] button.
(4) Select values for the partition configuration and the maximum capacity, then click the [Next] button.
(5) The settings are displayed. Check them, and click the [Finish] button if they are correct.

A new datastore is displayed under [Datacenter].

4. To check or change the settings of the created volume, select, in order, the datastore's [Manage] tab > [Settings] > [General].

8.3.2 For VMware vSphere 5.5 or Earlier

1. Use the vSphere Client to log in to VMware ESX as "root".

2. Select the [Configuration] tab.

3. Select [Storage] from the [Hardware] list.

4. Select the [Add Storage...] function in the [Storage] area.

5. Select [Disk/LUN] and click the [Next] button.

6. Select the desired LUN and click the [Next] button. Device information is displayed.

7. Enter the datastore name and click the [Next] button.

8. Select the [Maximum file size] and click the [Next] button.

9. The settings are displayed. Check them, and click the [Finish] button if they are correct.

10. Check the created volume in the [Storage] area.

11. To check or change the settings of the created volume, select the target LUN in the [Datastores] area. Click the [Properties...] link after the volume details are displayed in the [Datastore Details] area.

12. Change the settings in the [Volume Properties] window. This example shows how to change the datastore name.

Click the [Rename] button in the [General] area.

13. Change the datastore name and click the [OK] button.

8.4 Checking the Maximum Command Queue Depth of the Fibre Channel Card

Confirm whether the configured driver parameters of the Fibre Channel card are applied to the LUN. This procedure is only required when using VMware vSphere 6.0 or later.

1. Check the Device Max Queue Depth of the LUNs in the ETERNUS DX/AF that is connected with VMware ESXi. If the default driver value is still shown for the Device Max Queue Depth even after the parameter was changed to the same value as the QueueDepth of the Fibre Channel card driver, the configuration may have failed due to reasons such as invalid values. In that case, check the settings again. The value specified for the driver parameter and the Device Max Queue Depth may differ depending on the Fibre Channel card.

Check "FUJITSU Fibre Channel Disk (naa.xxx)" and "Device Max Queue Depth" in the following example.

# esxcli storage core device list | grep -E '(Display Name:|Device Max Queue Depth:)'
Display Name: Local LSI Disk (naa.60030057013345401a6af1560de849bc)
Has Settable Display Name: true
Device Max Queue Depth: 128
Display Name: FUJITSU Fibre Channel Disk (naa.6000b5d0006a0000006a0b9f01490000)
Has Settable Display Name: true
Device Max Queue Depth: 8
Display Name: FUJITSU Fibre Channel Disk (naa.6000b5d0006a0000006a0b9f014a0000)
Has Settable Display Name: true
Device Max Queue Depth: 8
Display Name: Local MATSHITA CD-ROM (mpx.vmhba32:C0:T0:L0)
Has Settable Display Name: false
Device Max Queue Depth: 1

8.5 Changing the Maximum Outstanding Disk Requests for Virtual Machines

Change the Maximum Outstanding Disk Requests for virtual machines to match the queue depth that is specified for the driver parameter. Only change this parameter if multiple virtual machines are installed in a LUN. Changing this parameter is not required when only one virtual machine, or none, is installed in a LUN.

8.5.1 For VMware vSphere 6.0 or Later

1. Check the Maximum Outstanding Disk Requests for virtual machines and the device names of the ETERNUS DX/AF that is connected with VMware ESXi. Perform Step 2 and later steps if the Maximum Outstanding Disk Requests for virtual machines differs from the value (QueueDepth) that is specified for the driver parameter of the Fibre Channel card. Check "FUJITSU Fibre Channel Disk (naa.xxx)" and "No of outstanding IOs with competing worlds" in the following example.

# esxcli storage core device list | grep -E '(Display Name:|No of outstanding IOs with competing worlds:)'
Display Name: Local LSI Disk (naa.60030057013345401a6af1560de849bc)
Has Settable Display Name: true
No of outstanding IOs with competing worlds: 32
Display Name: FUJITSU Fibre Channel Disk (naa.6000b5d0006a0000006a0b9f01490000)
Has Settable Display Name: true
No of outstanding IOs with competing worlds: 32
Display Name: FUJITSU Fibre Channel Disk (naa.6000b5d0006a0000006a0b9f014a0000)
Has Settable Display Name: true
No of outstanding IOs with competing worlds: 32
Display Name: Local MATSHITA CD-ROM (mpx.vmhba32:C0:T0:L0)
Has Settable Display Name: false
No of outstanding IOs with competing worlds: 32

2. Change the setting value for the Maximum Outstanding Disk Requests for virtual machines.

Parameter: No of outstanding IOs with competing worlds
Setting value: the value (Device Max Queue Depth) that was checked in "8.4 Checking the Maximum Command Queue Depth of the Fibre Channel Card" (page 45)

Use the following commands to set the same value for each LUN if the Maximum Outstanding Disk Requests for virtual machines differs from the value described above. If that value exceeds the maximum that the OS allows for the Maximum Outstanding Disk Requests for virtual machines, use the OS maximum.

# esxcli storage core device set -d naa.6000b5d0006a0000006a0b9f01490000 -O 8
# esxcli storage core device set -d naa.6000b5d0006a0000006a0b9f014a0000 -O 8

3. Reboot VMware ESXi.

# reboot

4. After rebooting VMware ESXi, check the setting values.

# esxcli storage core device list | grep -E '(Display Name:|No of outstanding IOs with competing worlds:)'
Display Name: Local LSI Disk (naa.60030057013345401a6af1560de849bc)
Has Settable Display Name: true
No of outstanding IOs with competing worlds: 32
Display Name: FUJITSU Fibre Channel Disk (naa.6000b5d0006a0000006a0b9f01490000)
Has Settable Display Name: true
No of outstanding IOs with competing worlds: 8
Display Name: FUJITSU Fibre Channel Disk (naa.6000b5d0006a0000006a0b9f014a0000)
Has Settable Display Name: true
No of outstanding IOs with competing worlds: 8
Display Name: Local MATSHITA CD-ROM (mpx.vmhba32:C0:T0:L0)
Has Settable Display Name: false
No of outstanding IOs with competing worlds: 32

8.5.2 For VMware vSphere 5.5

1. Check the Maximum Outstanding Disk Requests for virtual machines and the device names of the ETERNUS DX/AF that is connected with VMware ESXi. Perform Step 2 and later steps if the Maximum Outstanding Disk Requests for virtual machines differs from the value (QueueDepth) that is specified for the driver parameter of the Fibre Channel card.

Check "FUJITSU Fibre Channel Disk (naa.xxx)" and "No of outstanding IOs with competing worlds" in the following example.

# esxcli storage core device list | grep -E '(Display Name:|No of outstanding IOs with competing worlds:)'
Display Name: Local LSI Disk (naa.60030057013345401a6af1560de849bc)
Has Settable Display Name: true
No of outstanding IOs with competing worlds: 32
Display Name: FUJITSU Fibre Channel Disk (naa.6000b5d0006a0000006a0b9f01490000)
Has Settable Display Name: true
No of outstanding IOs with competing worlds: 32
Display Name: FUJITSU Fibre Channel Disk (naa.6000b5d0006a0000006a0b9f014a0000)
Has Settable Display Name: true
No of outstanding IOs with competing worlds: 32
Display Name: Local MATSHITA CD-ROM (mpx.vmhba32:C0:T0:L0)
Has Settable Display Name: false
No of outstanding IOs with competing worlds: 32

2. Change the setting value for the Maximum Outstanding Disk Requests for virtual machines.

Parameter: No of outstanding IOs with competing worlds
Setting value: the value (QueueDepth) that is specified for the driver parameter of the Fibre Channel card

Use the following commands to set the same value for each LUN when the Maximum Outstanding Disk Requests for virtual machines differs from the QueueDepth setting for the driver parameter. If that value exceeds the maximum that the OS allows for the Maximum Outstanding Disk Requests for virtual machines, use the OS maximum.

# esxcli storage core device set -d naa.6000b5d0006a0000006a0b9f01490000 -O 8
# esxcli storage core device set -d naa.6000b5d0006a0000006a0b9f014a0000 -O 8

3. Reboot VMware ESXi.

# reboot

4. After rebooting VMware ESXi, check the setting values.

# esxcli storage core device list | grep -E '(Display Name:|No of outstanding IOs with competing worlds:)'
Display Name: Local LSI Disk (naa.60030057013345401a6af1560de849bc)
Has Settable Display Name: true
No of outstanding IOs with competing worlds: 32
Display Name: FUJITSU Fibre Channel Disk (naa.6000b5d0006a0000006a0b9f01490000)
Has Settable Display Name: true
No of outstanding IOs with competing worlds: 8
Display Name: FUJITSU Fibre Channel Disk (naa.6000b5d0006a0000006a0b9f014a0000)
Has Settable Display Name: true
No of outstanding IOs with competing worlds: 8
Display Name: Local MATSHITA CD-ROM (mpx.vmhba32:C0:T0:L0)
Has Settable Display Name: false
No of outstanding IOs with competing worlds: 32

8.5.3 For VMware vSphere 5.1 or Earlier

1. Check the Maximum Outstanding Disk Requests for virtual machines. Perform Step 2 and later steps if the Maximum Outstanding Disk Requests for virtual machines differs from the value (QueueDepth) that is specified for the driver parameter of the Fibre Channel card.

# vim-cmd hostsvc/advopt/view Disk.SchedNumReqOutstanding
(vim.option.OptionValue) [
   (vim.option.OptionValue) {
      dynamicType = <unset>,
      key = "Disk.SchedNumReqOutstanding",
      value = 32,
   }
]

2. Change the setting value for the Maximum Outstanding Disk Requests for virtual machines.

Parameter: Disk.SchedNumReqOutstanding
Setting value: the value (QueueDepth) that is specified for the driver parameter of the Fibre Channel card

Use the following command to set the same value when the Maximum Outstanding Disk Requests for virtual machines differs from the QueueDepth setting for the driver parameter. If that value exceeds the maximum that the OS allows for the maximum outstanding disk requests, use the OS maximum.

# vim-cmd hostsvc/advopt/update Disk.SchedNumReqOutstanding long 8

3. Reboot VMware ESXi or ESX.

# reboot

4. After rebooting VMware ESXi or ESX, check the setting values.

# vim-cmd hostsvc/advopt/view Disk.SchedNumReqOutstanding
(vim.option.OptionValue) [
   (vim.option.OptionValue) {
      dynamicType = <unset>,
      key = "Disk.SchedNumReqOutstanding",
      value = 8,
   }
]
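The QueueDepth referred to throughout 8.4 and 8.5 is a parameter of the Fibre Channel card's driver module. As a sketch (the module name qlnativefc is only an example; it varies by card and driver, for instance lpfc for Emulex cards), the currently configured module parameters can be inspected with:

# esxcli system module parameters list -m qlnativefc | grep -i depth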

8.6 Plug-in Settings

Set up the VMware Multi-Pathing plug-in for ETERNUS by following "Software Information", a manual that is provided with the VMware Multi-Pathing plug-in for ETERNUS.

8.7 Changing the Multipath Operation

If "VMware Multi-Pathing plug-in for ETERNUS" is not used with ESXi 6.0 or later, change the multipath operation as shown below. For more details, refer to KB2106770 on the VMware website.

1. Check the device names of all the LUNs in the ETERNUS DX/AF that is connected with VMware ESXi. Check "FUJITSU Fibre Channel Disk (naa.xxx)" in the following example.

# esxcli storage nmp device list | grep -E 'Device Display Name:'
Device Display Name: Local LSI Disk (naa.60030057013345401a6af1560de849bc)
Device Display Name: FUJITSU Fibre Channel Disk (naa.6000b5d0006a0000006a0b9f01490000)
Device Display Name: FUJITSU Fibre Channel Disk (naa.6000b5d0006a0000006a0b9f014a0000)
Device Display Name: Local MATSHITA CD-ROM (mpx.vmhba32:C0:T0:L0)

2. Change the multipath operation. Use the following commands to set the parameter for all LUNs in the ETERNUS DX/AF.

# esxcli storage nmp satp generic deviceconfig set -c enable_action_onretryerrors -d naa.6000b5d0006a0000006a0b9f01490000
# esxcli storage nmp satp generic deviceconfig set -c enable_action_onretryerrors -d naa.6000b5d0006a0000006a0b9f014a0000

3. Confirm the settings. Confirm that "action_onretryerrors=on" is included in "Storage Array Type Device Config" for the LUNs in the ETERNUS DX/AF.

# esxcli storage nmp device list | grep -E '(Device Display Name:|Storage Array Type Device Config:)'
Device Display Name: Local LSI Disk (naa.60030057013345401a6af1560de849bc)
Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
Device Display Name: FUJITSU Fibre Channel Disk (naa.6000b5d0006a0000006a0b9f01490000)
Storage Array Type Device Config: {implicit_support=on; explicit_support=off; explicit_allow=on; alua_followover=on; action_onretryerrors=on; {TPG_id=32897,TPG_state=AO}{TPG_id=32896,TPG_state=AO}{TPG_id=32913,TPG_state=ANO}{TPG_id=32912,TPG_state=ANO}}
Device Display Name: FUJITSU Fibre Channel Disk (naa.6000b5d0006a0000006a0b9f014a0000)
Storage Array Type Device Config: {implicit_support=on; explicit_support=off; explicit_allow=on; alua_followover=on; action_onretryerrors=on; {TPG_id=32897,TPG_state=AO}{TPG_id=32896,TPG_state=AO}{TPG_id=32913,TPG_state=ANO}{TPG_id=32912,TPG_state=ANO}}
Device Display Name: Local MATSHITA CD-ROM (mpx.vmhba32:C0:T0:L0)
Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
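If this change later needs to be undone, for example after installing the VMware Multi-Pathing plug-in for ETERNUS, KB2106770 also documents a corresponding disable option; a sketch using the same placeholder device name as above:

# esxcli storage nmp satp generic deviceconfig set -c disable_action_onretryerrors -d naa.6000b5d0006a0000006a0b9f01490000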

Chapter 9 Virtual Machine

This chapter describes the notes on the Virtual Machine and its settings. To configure the Virtual Machine, access the following website and check the "Guest Operating System Installation Guide".

Guest Operating System Installation Guide
http://partnerweb.vmware.com/gosig/home.html

9.1 For Windows

9.1.1 Checking the Registry Information

Check the value of the "TimeOutValue" registry key. If the "TimeOutValue" registry key does not exist, then create it. The registry file should be backed up before creating or changing any registry values.

1. Click the [Start] button, and then click [Run].

2. In the [Run] dialog box, type "regedit", and then click the [OK] button. Registry Editor starts.

3. Follow the path described below:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk

4. Check the value of the "TimeOutValue" registry key. Check that the value of the "TimeOutValue" registry key is "0x3C". If the value is not "0x3C", change it to "0x3C". (A command-line sketch of this check is given at the end of this chapter.)

If the "TimeOutValue" key does not exist, add a registry key with the following values (the "Name" field is case-sensitive):

Name: TimeOutValue
Type: REG_DWORD
Radix: Hexadecimal
Data: 3C

5. If any registry values were added or modified, reboot the server. The modified settings are enabled after the reboot.

9.1.2 Notes for Configuring Windows Server Failover Clustering

Refer to KB1004617 on the VMware website and check the support guidelines before configuring Windows Server Failover Clustering. Ignore the following warning message if it appears during a verification test for Windows Server Failover Clustering:

warning: Validate Storage Spaces Persistent Reservation

If the verification test fails, refer to KB2085702 on the VMware website.

9.2 For Linux

9.2.1 Applying Virtual Machine Updates

This section describes notes for applying Virtual Machine updates. For the following OSs, the Virtual Machine file system may be restricted to being read-only. For details, refer to KB51306 on the VMware website and apply the Virtual Machine updates.
- Red Hat Enterprise Linux 5
- Red Hat Enterprise Linux AS v.4 Update 4
- Red Hat Enterprise Linux AS v.4 Update 3
- SUSE Linux Enterprise Server 10
- SUSE Linux Enterprise Server 9 Service Pack 3
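As a supplement to "9.1.1 Checking the Registry Information", the TimeOutValue check and change can also be performed from an elevated command prompt in the Windows guest; a sketch, not taken from this manual:

C:\> reg query HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue
C:\> reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 0x3C /f

As with the Registry Editor procedure, reboot the virtual machine afterwards so that the change takes effect.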

Chapter 10 Dynamic Recognition of LUNs Added While VMware ESX is in Use

When adding LUNs while VMware ESX is in use (recognizing LUNs dynamically), first assign the added volumes to an ETERNUS DX/AF LUN group (Affinity Group), and then have the LUNs recognized with the "Rescan" operation via the VMware Infrastructure Client (Infrastructure 3), the vSphere Web Client, or the vSphere Client, as described in "8.1 Checking the LUNs" (page 21).
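The rescan can also be triggered from the ESXi shell; a sketch (the esxcli form below is available on recent ESXi releases):

# esxcli storage core adapter rescan --all

On older releases, the equivalent is esxcfg-rescan vmhbaN for the relevant adapter.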

Chapter 11 Storage Migration

This chapter explains how to configure the server for performing Storage Migration. When Storage Migration is performed, configure the settings so that the ETERNUS DX/AF LUNs can be used from the server (VMware ESXi).

Setting Procedure Outline

Execute the operation that starts Storage Migration from ETERNUS Web GUI. Perform all other operations on the server (VMware ESXi).

1. Delete the guest (VM) that is in the datastore of the LUN targeted for migration from the inventory.
2. Turn off the server.
3. Start Storage Migration, change the connection destination to the migration destination ETERNUS DX/AF, and then reboot the server.
4. The datastore is registered automatically, so register the VM in the inventory.
5. If the LUN targeted for migration is used for Raw Device Mapping (RDM), delete the virtual hard disk and then add the RDM again.

Example Setting Procedure

The following procedure shows an example configuration for VMware ESXi 5.0.

1. Delete the guest (VM) that is in the datastore of the LUN targeted for migration from the inventory.
(1) Stop the target VM, then right-click the target VM in the vSphere Client and select [Remove from Inventory].

2. Turn off the server.

3. Start Storage Migration from ETERNUS Web GUI, change the connection destination to the migration destination ETERNUS DX/AF, and reboot the server.

4. The datastore is registered automatically, so register the VM in the inventory.
(1) Right-click the target datastore in the vSphere Client, and select [Browse Datastore...].
(2) In the [Datastore Browser] window, right-click the virtual machine file (*.vmx) and select [Add to Inventory].

(3) Specify the registration destination for the VM in the dialog box, and then click the [Finish] button. The VM is registered in the datastore inventory.

5. If the LUN targeted for migration is used for Raw Device Mapping (RDM), the virtual hard disk that was mapped using the RDM cannot be used. Therefore, delete the virtual hard disk, re-create the RDM, and then add it again.
(1) Right-click the target VM in the vSphere Client, and select [Edit Settings...].
(2) Confirm that the RDM virtual hard disk has a VMDK of size 0.

(3) Delete all virtual hard disks for RDM. It is not necessary to delete virtual hard disks that were VMDK before any editing was performed.
(4) Create the RDM once again on the migration destination volume.
(5) Replace all size-0 VMDKs with RDMs. The VM can now be started.

After this procedure is completed, the datastore can be used in the migration destination ETERNUS DX/AF the same way as before the migration.
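Step (4) can also be performed from the ESXi shell with vmkfstools; a sketch only, with placeholder paths, assuming a physical compatibility mode RDM (use -r instead of -z for virtual compatibility mode):

# vmkfstools -z /vmfs/devices/disks/naa.6000b5d0006a0000006a0b9f01490000 /vmfs/volumes/datastore1/vm1/vm1_rdm.vmdk

The resulting mapping file is then attached to the virtual machine in place of the deleted size-0 VMDK.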

Chapter 12 Non-disruptive Storage Migration

This chapter describes the procedures for connecting and disconnecting paths, and provides notes for when the Non-disruptive Storage Migration function is used. The example is a vSphere HA environment that uses native multipathing and runs ESXi 5.5 Update 3.

Connecting Paths

The following procedure shows how to add a path to the migration destination storage system from the server (ESXi 5.5 Update 3) after the migration destination storage system is connected.

1. Check the multipath status. In the following example, the LUN consists of a two-path configuration.

2. Connect the multipath: add the host affinity setting to the migration destination storage system.

3. The connected multipath is recognized automatically. If recognition of the connected multipath takes a while, select the host in the vSphere Client, click [Configuration] > [Storage Adapters], and then select the relevant adapter.

4. Right-click the relevant adapter and select [Rescan].

5. After the rescan is complete, right-click the LUN in the [Details] field and select [Manage Paths].
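The multipath status in step 1, and the result of the rescan, can also be confirmed from the ESXi shell; a minimal sketch (field names may vary slightly by release):

# esxcli storage nmp device list | grep -E 'Device Display Name|Working Paths'

For each ETERNUS DX/AF LUN, the number of listed working paths should increase once the paths to the migration destination storage system are recognized.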