HUAWEI SAN Storage Host Connectivity Guide for AIX


Technical White Paper
HUAWEI SAN Storage Host Connectivity Guide for AIX (OceanStor Storage)
Huawei Technologies Co., Ltd.

Copyright Huawei Technologies Co., Ltd. All rights reserved.

No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd.

Trademarks and Permissions

Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd. All other trademarks and trade names mentioned in this document are the property of their respective holders.

Notice

The purchased products, services and features are stipulated by the contract made between Huawei and the customer. All or part of the products, services and features described in this document may not be within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information, and recommendations in this document are provided "AS IS" without warranties, guarantees or representations of any kind, either express or implied.

The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure accuracy of the contents, but all statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied.

Huawei Technologies Co., Ltd.
Address: Huawei Industrial Base, Bantian, Longgang, Shenzhen, People's Republic of China
Website:

About This Document

Overview

This document details the configuration methods and precautions for connecting Huawei SAN storage devices to Advanced Interactive eXecutive (AIX) hosts.

Intended Audience

This document is intended for:
- Huawei technical support engineers
- Technical engineers of Huawei's partners

Symbol Conventions

The symbols that may be found in this document are defined as follows:
- DANGER: Indicates a hazard with a high level of risk which, if not avoided, will result in death or serious injury.
- WARNING: Indicates a hazard with a medium or low level of risk which, if not avoided, could result in minor or moderate injury.
- CAUTION: Indicates a potentially hazardous situation which, if not avoided, could result in equipment damage, data loss, performance degradation, or unexpected results.
- TIP: Indicates a tip that may help you solve a problem or save time.
- NOTE: Provides additional information to emphasize or supplement important points of the main text.

General Conventions
- Times New Roman: Normal paragraphs are in Times New Roman.
- Boldface: Names of files, directories, folders, and users are in boldface. For example, log in as user root.
- Italic: Book titles are in italics.
- Courier New: Examples of information displayed on the screen are in Courier New.

Command Conventions
- Boldface: The keywords of a command line are in boldface.
- Italic: Command arguments are in italics.

Contents

About This Document
1 AIX Operating System
    Introduction to AIX
    File Systems in AIX
    Directory Structure in AIX
    Common Management Tools and Commands
        Management Tool
    Management Commands
    Querying and Updating the Operating System Version
        Querying the Current Version
        Querying Files That Must Be Updated Before a System Upgrade
        Viewing the File Version
    Application Scenarios
    Interoperability Between AIX and Storage Systems
2 Network Planning
    Non-HyperMetro Network
        Fibre Channel Networking Diagram
        iSCSI Network Diagram
    HyperMetro Network
        Fibre Channel Networking Diagram
3 Preparations Before Configuration (on a Host)
    Adjusting the Directory Size
    Changing the File Size Limit
    Viewing and Configuring HBAs
        HBA Identification
        HBA WWNs
        HBA Physical Device Identifier Properties
        HBA Virtual Device Identifier Properties
        HBA Parameters
4 Preparations Before Configuration (on a Storage System)
5 Switch Configuration

    5.1 Fibre Channel Switch
        Querying the Switch Model and Version
        Configuring Zones
        Precautions
    Ethernet Switch
        Configuring VLANs
        Binding Ports (Link Aggregation)
    FCoE Switch
        Command Introduction
        Creating a VSAN
        Creating a VLAN
        Configuring a Port and Adding It to the VLAN
        Creating a Zone and Adding the Port to It
        Creating a Zoneset and Adding the Created Zone to It
6 Establishing Fibre Channel Connections
    Checking Topology Modes
        OceanStor T Series Storage System
        OceanStor 18000/T V2/V3/Dorado V3 Series Enterprise Storage System
    Adding Initiators
    Establishing Connections
7 Establishing iSCSI Connections
    Checking iSCSI Software on the Host
    Configuring Service IP Addresses
        Storage System
        Host
    Configuring Initiators on a Host
    Checking Storage System Targets
        OceanStor T Series Storage System
        OceanStor 18000/T V2/V3/Dorado V3 Series Enterprise Storage System
    Configuring the Host iSCSI Service
        S2000 Series/S2600/S5000 Series/S6800E
        OceanStor S2200T
        S2600T/S5500T/S5600T/S5800T/S6800T
        S2900/S3900/S5900/S
        OceanStor Series Enterprise Storage System
    Establishing Connections
8 Mapping and Scanning for LUNs
    Mapping LUNs to a Host
        OceanStor T Series Storage System
        OceanStor 18000/T V2/V3/Dorado V3 Series Enterprise Storage System
    Scanning for LUNs on a Host

9 Multipathing Management Software
    Overview
    UltraPath
        Functions
        Installation and Uninstallation
    MPIO
        Configuring and Enabling Multipathing Function
        Multipathing Configuration for New-Version HUAWEI Storage
        Multipathing Configuration for Old-Version HUAWEI Storage
10 Volume Management Software
    LVM
        Overview
        Installation
        Common Configuration Commands
    VxVM
        Overview
        Installation
        Common Configuration Commands
11 Host High-Availability
    Overview
    Version Compatibility
    Installation and Configuration
    Cluster Maintenance
        Common Maintenance Commands
        Cluster Log Analysis
A Acronyms and Abbreviations

Figures

Figure 1-1 Comparison between JFS2 and JFS
Figure 1-2 JFS2 size limits in 32-bit and 64-bit kernel AIX operating systems
Figure 1-3 SMIT menu
Figure 1-4 Interoperability query page
Figure 2-1 Fibre Channel multi-path directly-connected network (dual-controller)
Figure 2-2 Fibre Channel multi-path directly-connected network (four-controller)
Figure 2-3 Fibre Channel multi-path switch-connected network diagram (dual-controller)
Figure 2-4 Fibre Channel multi-path switch-connected network diagram (four-controller)
Figure 2-5 Fibre Channel multi-path switch-connected networking diagram (dual-controller)
Figure 2-6 Fibre Channel multi-path switch-connected networking diagram (four-controller)
Figure 3-1 Changing fsize in configuration file /etc/security/limits
Figure 5-1 Switch information
Figure 5-2 Switch port indicator status
Figure 5-3 Zone tab page
Figure 5-4 Zone configuration
Figure 5-5 Zone Config tab page
Figure 5-6 Name Server page
Figure 5-7 Process for configuring an FCoE switch
Figure 6-1 Fibre Channel port details
Figure 6-2 Fibre Channel port details
Figure 7-1 Screen for selecting the installation source
Figure 7-2 Software installation screen
Figure 7-3 Modifying IPv4 addresses
Figure 7-4 Screen for configuring IP addresses
Figure 7-5 Change/Show Characteristics of an iSCSI Adapter screen
Figure 9-1 Going to the host configuration page

Figure 9-2 Selecting an initiator whose information you want to modify
Figure 9-3 Modifying initiator information
Figure 9-4 Querying the special mode type
Figure 9-5 Enabling ALUA for T series V100R005/Dorado2100/Dorado5100/Dorado2100 G
Figure 9-6 Enabling ALUA for T Series V200R002/18000 Series/V3 Series
Figure 10-1 Screen for configuring volume groups
Figure 10-2 Screen for configuring logical volume properties
Figure 10-3 Screen for configuring file systems (logical volumes available)
Figure 10-4 Screen for configuring file systems (no logical volumes)
Figure 11-1 Cluster process status
Figure 11-2 Cluster service status

Tables

Table 1-1 Commonly used directories in AIX
Table 1-2 Common AIX commands
Table 2-1 Networking modes
Table 5-1 Switch model mapping
Table 5-2 Comparison of link aggregation modes
Table 9-1 Configuration methods and application scenarios of the typical working modes
Table 9-2 HUAWEI storage's support for ALUA
Table 9-3 Initiator parameter description
Table 9-4 Multipathing configuration on non-HyperMetro Huawei storage interconnected with AIX
Table 9-5 Multipathing configuration on HyperMetro Huawei storage interconnected with AIX
Table 10-1 VG limitations
Table 11-1 Compatibility between HACMP and the AIX operating system

1 AIX Operating System

1.1 Introduction to AIX

AIX is a UNIX operating system developed by IBM. Complying with the Open Group's UNIX 98 branding, AIX supports the concurrent running of 32-bit and 64-bit applications and flexible application expansion. AIX can run on IBM P series and IBM RS/6000 workstations, servers, and large-scale parallel supercomputers.

AIX is IBM's proprietary UNIX operating system. The current versions of AIX include AIX 5.2, AIX 5.3, AIX 6.1, AIX 7.1, and AIX 7.2. Each basic AIX version has its patches subsequently released. For details about AIX version releases, visit:

AIX boasts virtual services, high operating efficiency, thorough cluster management, robust reliability, and ensured security. Therefore, AIX is seldom used in desktop systems. Instead, it is mainly used to run large-scale database systems such as Oracle, Sybase, and DB2.

1.2 File Systems in AIX

AIX supports the following file systems:
- JFS/JFS2: Journaled File System (JFS) uses journals to keep structure integrity. Enhanced Journaled File System (JFS2) is the enhanced JFS. JFS2 supports larger file systems and larger files than JFS and delivers higher performance.
- NFS: Network File System (NFS) is a distributed file system that allows users to access files and directories on remote PCs in the same way as on local PCs.
- CDRFS: CD-ROM File System (CDRFS) allows access to CD-ROM contents from common file system interfaces.

In traditional UNIX operating systems, files may be damaged after a system fault, particularly the files that are constantly updated. When the contents of a file change, AIX records the structure change of the file to a database log before updating the file contents.

The log used for recording file structure (metadata) changes is called a JFS log. After an accident such as a file system breakdown, AIX uses the JFS log to recover the file system. JFS and JFS2 are the most widely applied file systems in common applications. The two file systems are compared in Figure 1-1.

Figure 1-1 Comparison between JFS2 and JFS

For more information, visit:
n/doc/baseadmndita/fs_jfs_jfs2.htm

Note that AIX 6.1 and later support only 64-bit kernels. AIX 5.1, 5.2, and 5.3 support both 32-bit and 64-bit kernels. The maximum size of a JFS2 file system and the maximum size of a JFS2 file vary with the AIX kernel. For details, see Figure 1-2.

Figure 1-2 JFS2 size limits in 32-bit and 64-bit kernel AIX operating systems

For more information, visit:

n/doc/baseadmndita/jfs2sizelim.htm

1.3 Directory Structure in AIX

AIX uses the same file and directory structures as other UNIX operating systems. The structures are called file trees. In a file tree, directories are root nodes, which orderly organize data and programs in groups. Files are leaf nodes owned by directories. Table 1-1 describes the commonly used directories in AIX.

Table 1-1 Commonly used directories in AIX
- /: Starts a UNIX file system file tree. This directory contains key directories and their files (for example, /sbin, /dev, and /etc) as well as files used in system startup.
- /etc: Stores configuration files of the system and applications.
- /dev: Stores device files.
- /home: Root directory that stores all accounts except account root.
- /u: Link directory that navigates to /home.
- /tmp: Stores temporary files created by users or the system.
- /usr: Stores AIX operation commands, databases, and other applications.
- /var: Stores system operation logs.
- /opt: Used for installing common application systems.
- /admin: Used for system management.
- /sbin: Stores commands and scripts that are important for the /usr file system and system startup.
- /lost+found: Stores files found by the fsck command.

1.4 Common Management Tools and Commands

Management Tool

AIX uses the System Management Interface Tool (SMIT) to manage system functions. SMIT provides users with a menu-based user interface to perform management tasks. SMIT is easy to use and provides most AIX system management functions. Figure 1-3 shows the SMIT menu. The menu covers almost all AIX functions.

Figure 1-3 SMIT menu

1.5 Management Commands

Table 1-2 lists the management commands used for connecting an AIX host to a Huawei storage system.

Table 1-2 Common AIX commands
- bootinfo -s hdisk#: Views the capacity of hdisk#.
- cfgmgr -v: Scans for physical hardware.
- chdev -l fcs# -a max_xfer_size=0x...: Changes the value of max_xfer_size in fcs#.
- chfs: Changes the directory size.
- lsattr -EHl fcs#: Views the properties of fcs#.
- lsattr -EHl fscsi#: Views the properties of fscsi#.
- lsattr -El hdisk#: Views the properties of hdisk#.
- lsattr -Rl fcs# -a max_xfer_size: Views the available values of max_xfer_size in fcs#.
- lscfg -vpl fcs#: Views information about the adapter fcs#.
- lscfg -vpl hdisk#: Views the properties of hdisk#.

- lscfg | grep scsi: Displays the existing or system-defined SCSI I/O controllers.
- lsdev -Cc adapter: Views information about the adapters identified by the host.
- lsdev -Cc disk: Displays information about the disks identified by the host.
- lslpp -l: Views the software installed on the host.
- lsvg -l vgname: Displays the specified volume group's logical volumes, file system type, logical partitions, physical partitions, and status.
- lsvg -o: Displays all activated volume groups.
- mount: Mounts a file system.
- varyonvg vgname: Activates a volume group.
- varyoffvg vgname: Deactivates a volume group.

The pound sign (#) in this table indicates a numerical digit that can be specified based on your actual conditions.

1.6 Querying and Updating the Operating System Version

The version of the AIX operating system is a digit string in the format of AAAA-BB-CC-DDEE, for example, 6100-06-03-1048.
- AAAA: indicates the AIX release version.
- BB: indicates a technology level (TL).
- CC: indicates a service pack (SP).
- DDEE: indicates a release number, where DD indicates the last two digits of the release year and EE indicates the release week. For example, if AIX 6.1 TL6 SP3 was released in the 48th week of 2010, its version is 6100-06-03-1048.

Querying the Current Version

Run the following command to query the current operating system version:

bash-3.00# oslevel -s
bash-3.00#

Querying Files That Must Be Updated Before a System Upgrade

Run the following command to query the files that must be updated before upgrading the current system version to a specific target version:

bash-3.00# oslevel -rl
Fileset Actual Level Recommended ML
Java5.ext.java3d
printers.epsonlq1600k_cn.rte
printers.escpj84_jp.rte
printers.hindi.rte
printers.hplj-2p_cn.rte
printers.ibm4332_hi.rte
printers.ibmgb18030_cn.rte
printers.ibmuniversal.rte
printers.starar2463_cn.rte
bash-3.00#

This example command output shows the files that must be updated before upgrading the system version.

Viewing the File Version

Run the following command to view the version of a specific file:

bash-3.00# lslpp -L UltraPath AIX6.1.ppc_64.rte
Fileset Level State Type Description (Uninstaller)
UltraPath AIX6.1.ppc_64.rte C F ODM definitions for Array disk devices

State codes:
A -- Applied.
B -- Broken.
C -- Committed.
E -- EFIX Locked.
O -- Obsolete. (partially migrated to newer version)
? -- Inconsistent State...Run lppchk -v.

Type codes:
F -- Installp Fileset
P -- Product
C -- Component
T -- Feature
R -- RPM Package
E -- Interim Fix
bash-3.00#

This command output shows the version of UltraPath AIX6.1.ppc_64.rte.

1.7 Application Scenarios

AIX and storage systems generally work together in industries (for example, large banks, telcos, and multinationals) that have high data security requirements. When interworking with AIX, the storage system must ensure high availability, performance, and security of data in the operating system.
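Returning to the version format in section 1.6, the AAAA-BB-CC-DDEE string can be split mechanically when scripting version checks. A minimal portable sketch; the version value below is illustrative (a real value would come from oslevel -s on the host):

```shell
# Split an AIX version string of the form AAAA-BB-CC-DDEE.
# 6100-06-03-1048 is an illustrative value: AIX release 6100,
# TL 06, SP 03, built in week 48 of 2010.
ver="6100-06-03-1048"
release=${ver%%-*}        # AAAA: AIX release version
rest=${ver#*-}
tl=${rest%%-*}            # BB: technology level (TL)
rest=${rest#*-}
sp=${rest%%-*}            # CC: service pack (SP)
ddee=${rest#*-}           # DDEE: year digits plus release week
year="20${ddee%??}"       # DD: last two digits of the release year
week=${ddee#??}           # EE: release week
echo "release=$release TL=$tl SP=$sp year=$year week=$week"
# prints: release=6100 TL=06 SP=03 year=2010 week=48
```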

1.8 Interoperability Between AIX and Storage Systems

When connecting a storage system to an AIX host, consider the interoperability of components (such as storage systems, AIX systems, HBAs, and switches) and upper-layer applications in the environment. You can query the latest compatibility information by performing the following steps:

Step 1 Log in to the website support-open.huawei.com.
Step 2 On the home page, choose Interoperability Center > Storage Interoperability.

Figure 1-4 Interoperability query page

The OceanStor Interoperability Navigator is displayed.

CAUTION: When connecting to a storage system, an AIX host must use only IBM's HBAs (not HBAs from other vendors).

----End
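The lslpp -L state codes listed in section 1.6 can also be decoded programmatically when scripting installation checks. A minimal sketch; state_desc is a helper function defined here for illustration, not an AIX command:

```shell
# Map an "lslpp -L" state code to its meaning, following the state
# code legend printed in the lslpp output. Note that "?" in that
# legend is a literal code; the catch-all branch below handles it
# along with any unrecognized code.
state_desc() {
  case "$1" in
    A) echo "Applied" ;;
    B) echo "Broken" ;;
    C) echo "Committed" ;;
    E) echo "EFIX Locked" ;;
    O) echo "Obsolete" ;;
    *) echo "Inconsistent; run lppchk -v" ;;
  esac
}
state_desc C   # prints: Committed
```

For example, a health-check script could flag any fileset whose state is not C (Committed) before a system upgrade.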

2 Network Planning

AIX hosts and storage systems support various networking modes.

Table 2-1 Networking modes
- By interface module type: Fibre Channel network or iSCSI network
- By whether switches are used: directly connected network (no switches are used) or switch-connected network (switches are used)
- By whether multiple paths exist: single-path network or multi-path network
- By whether HyperMetro is used: HyperMetro network or non-HyperMetro network

Generally, the directly-connected network applies to small-scale storage systems (such as those for university libraries and small hospitals); the switch-connected network applies to large-scale storage systems (such as those for banks, financial institutions, and large-scale enterprises), which need to manage a massive amount of service data. The Fibre Channel network is the most widely used networking mode. To ensure service data security, both the directly-connected network and the switch-connected network are generally deployed as multi-path networks.

This chapter mainly introduces the Fibre Channel multi-path directly-connected network and the Fibre Channel multi-path switch-connected network.

2.1 Non-HyperMetro Network

Fibre Channel Networking Diagram

Multi-Path Directly Connected Network

Huawei provides dual-controller and multi-controller storage systems, which support different networkings. The following describes the network diagrams of dual-controller and multi-controller storage systems respectively.

Dual-Controller

The following explains how to connect an AIX host and a storage system (a HUAWEI OceanStor S5500T, for example) over a Fibre Channel multi-path directly-connected network, as shown in Figure 2-1.

Figure 2-1 Fibre Channel multi-path directly-connected network (dual-controller)

In this networking, both controllers of the storage system are connected to the host's HBAs through optical fibers.

Multi-Controller

The following explains how to connect an AIX host and a storage system (a four-controller HUAWEI OceanStor 18800, for example) over a Fibre Channel multi-path directly-connected network, as shown in Figure 2-2.

Figure 2-2 Fibre Channel multi-path directly-connected network (four-controller)

In this networking, the four controllers of the storage system are connected to the host's HBAs through optical fibers.

Multi-Path Switch-Connected Network

Huawei provides dual-controller and multi-controller storage systems, which support different networkings. The following describes the network diagrams of dual-controller and multi-controller storage systems respectively.

Dual-Controller

The following explains how to connect an AIX host and a storage system (a HUAWEI OceanStor S5500T, for example) over a Fibre Channel multi-path switch-connected network, as shown in Figure 2-3.

Figure 2-3 Fibre Channel multi-path switch-connected network diagram (dual-controller)

In this networking example, the storage system is connected to the host through two switches. Both controllers of the storage system are connected to the switches through optical fibers, and both switches are connected to the host through optical fibers. To ensure the connectivity between the host and the storage system, each zone contains only one storage port and its corresponding host port.

Multi-Controller

The following explains how to connect an AIX host and a storage system (a four-controller HUAWEI OceanStor 18800, for example) over a Fibre Channel multi-path switch-connected network, as shown in Figure 2-4.

Figure 2-4 Fibre Channel multi-path switch-connected network diagram (four-controller)

In this networking example, the storage system is connected to the host through two switches. All controllers of the storage system are connected to the switches through optical fibers, and both switches are connected to the host through optical fibers. To ensure the connectivity between the host and the storage system, each zone contains only one storage port and its corresponding host port.

iSCSI Network Diagram

As of the release of this document, no multipathing software is applicable to iSCSI networks. Therefore, iSCSI networks can only be single-path networks (directly-connected or switch-connected). iSCSI single-path networkings are simple and therefore not detailed here.

2.2 HyperMetro Network

HyperMetro using the OS native multipathing function has the following networking requirements:
- Uses the multi-path switch-connected networking by default.
- In the switches' zone configuration, a zone may contain only one initiator and one target.
- You are advised to use dual-switch networking to prevent single points of failure.
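The one-initiator-one-target rule above implies one zone per host-port/storage-port pair on each fabric. A sketch that enumerates the zones one fabric would need; all port names are hypothetical placeholders, not real WWPNs or switch commands:

```shell
# Enumerate single-initiator/single-target zones for one fabric,
# pairing each host HBA port with each storage controller port as the
# HyperMetro zoning requirement demands. Names are placeholders only;
# actual zones would be created on the Fibre Channel switch.
initiators="host_fcs0 host_fcs1"
targets="ctrlA_p0 ctrlB_p0"
zones=""
for i in $initiators; do
  for t in $targets; do
    zones="${zones}zone_${i}_${t}: $i $t
"
  done
done
printf '%s' "$zones"
```

Two host ports and two storage ports thus yield four zones per fabric, each containing exactly one initiator and one target.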

Fibre Channel Networking Diagram

Multi-Path Switch-Connected Network

Huawei provides dual-controller and multi-controller storage systems, which support different networkings. The following describes the network diagrams of dual-controller and multi-controller storage systems respectively.

Dual-Controller

The following explains how to connect an AIX host and a storage system (a dual-controller HUAWEI OceanStor 6800 V3, for example) over a Fibre Channel multi-path switch-connected network, as shown in Figure 2-5.

Figure 2-5 Fibre Channel multi-path switch-connected networking diagram (dual-controller)

In this networking example, the storage systems are connected to the host through two switches. Each storage system's two controllers are connected to the switches through optical fibers, and both switches are connected to the host through optical fibers. To ensure the connectivity between the host and the storage system, each zone contains only one storage port and its corresponding host port. The two storage systems' controllers are interconnected through optical cables to form replication links. Alternatively, you can connect the two storage systems' controllers through a switch to form replication links.

Multi-Controller

The following explains how to connect an AIX host and a storage system (a four-controller HUAWEI OceanStor 6800 V3, for example) over a Fibre Channel multi-path switch-connected network, as shown in Figure 2-6.

Figure 2-6 Fibre Channel multi-path switch-connected networking diagram (four-controller)

In this networking, the storage systems are connected to the host through two switches. Each storage system's four controllers are connected to the switches through optical fibers, and both switches are connected to the host through optical fibers. To ensure the connectivity between the host and the storage system, each zone contains only one storage port and its corresponding host port. The two storage systems' controllers are interconnected through optical cables to form replication links. Alternatively, you can connect the controllers through two switches to form replication links.

3 Preparations Before Configuration (on a Host)

3.1 Adjusting the Directory Size

The default directory size is small upon AIX installation. You need to manually adjust directory sizes based on site requirements. Otherwise, later operations may fail. Capacities of directories such as /, /home, and /usr need to be expanded. Expand the directories based on actual disk capacities. Usually, the directory size can be larger than 10 GB.

Perform the following steps to expand directory capacities:

Step 1 Display directory capacities.

bash-3.00# df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd % % /
/dev/hd % % /usr
/dev/hd9var % 681 1% /var
/dev/hd % 70 1% /tmp
/dev/fwdump % 4 1% /var/adm/ras/platform
/dev/hd % 841 1% /home
/proc /proc
/dev/hd10opt % % /opt
/dev/lv % 21 1% /audit
bash-3.00#

Step 2 Expand the capacities of the desired directories.

The command used for expanding directory capacities in AIX is chfs -a size=capacity directory.

bash-3.00# chfs -a size=5g /tmp
Filesystem size changed to
bash-3.00#

This example command expands the capacity of /tmp to 5 GB.

Step 3 Verify the capacity expansion.

View the directory capacity again to check that the capacity has been expanded successfully.

bash-3.00# df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd % % /
/dev/hd % % /usr
/dev/hd9var % 681 1% /var
/dev/hd % 70 1% /tmp
/dev/fwdump % 4 1% /var/adm/ras/platform
/dev/hd % 841 1% /home
/proc /proc
/dev/hd10opt % % /opt
/dev/lv % 21 1% /audit
bash-3.00#

----End

3.2 Changing the File Size Limit

By default, the maximum file size is 2 GB after AIX is installed. Files larger than 2 GB cannot be created in any directory. However, files larger than 2 GB are common. To ensure normal file creation, you need to change the file size limit.

To change the file size limit, set fsize in the file size limit configuration file /etc/security/limits to -1, where -1 indicates no limit on file size, as shown in Figure 3-1. The change takes effect immediately without the need to restart the system.

Figure 3-1 Changing fsize in configuration file /etc/security/limits

After fsize is changed, run the following command to verify that the change takes effect:

bash-3.00# ulimit -a
core file size (blocks, -c)
data seg size (kbytes, -d)
file size (blocks, -f) unlimited
max memory size (kbytes, -m)
open files (-n) 2000
pipe size (512 bytes, -p) 64
stack size (kbytes, -s)
cpu time (seconds, -t) unlimited
max user processes (-u)
virtual memory (kbytes, -v) unlimited
bash-3.00#

In the output, file size is changed to unlimited, indicating that the change takes effect.
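The verification step above can also be scripted. A minimal sketch that reads only the file-size value from the current shell; on AIX, an fsize of -1 in /etc/security/limits is reported here as unlimited:

```shell
# Query the effective per-process file size limit, as "ulimit -a" does
# above but for the single value of interest. After setting fsize = -1
# in /etc/security/limits on AIX, this reports "unlimited".
fsize_limit=$(ulimit -f)
echo "file size limit: $fsize_limit"
```

A pre-install check for large files (such as database data files) could simply fail when this value is anything other than unlimited.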

3.3 Viewing and Configuring HBAs

Ensure that the HBAs installed on a host are correctly identified. Then configure the HBA parameters based on site requirements.

HBA Identification

After an HBA is installed on a host, run the following command on the host to check whether the HBA is identified:

bash-3.00# lsdev -Cc adapter | grep fc
fcs0 Available 4Gb FC PCI Express Adapter (df1000fe)
fcs1 Available 4Gb FC PCI Express Adapter (df1000fe)
fcs2 Available 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs3 Available 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
bash-3.00#

The output shows that two 4 Gbit/s Fibre Channel host ports and two 8 Gbit/s Fibre Channel host ports are identified. The output is consistent with the ports on the two newly installed HBAs: one dual-port 4 Gbit/s HBA and one dual-port 8 Gbit/s HBA. This means that the host has identified the HBAs correctly. The output also shows the physical device identifier of each HBA port, for example, fcs0. These identifiers are used in follow-up query commands.

HBA WWNs

After the host identifies a newly installed HBA, you can view the properties of the HBA on the host. Run the following command to view the world wide name (WWN) of the HBA:

bash-3.00# lscfg -vpl fcs2
fcs2 (df1000f114108a03) U78A0.001.DNWGHBR-P1-C2-T1 8Gb PCI Express Dual Port FC Adapter
Part Number...10N9824
Serial Number...1B B
Manufacturer...001B
EC Level...D76482B
Customer Card ID Number...577D
FRU Number...10N9824
Device Specific.(ZM)...3
Network Address...C99B5D94
ROS Level and ID...
Device Specific.(Z0)...
Device Specific.(Z1)...
Device Specific.(Z2)...
Device Specific.(Z3)...
Device Specific.(Z4)...FF
Device Specific.(Z5)...
Device Specific.(Z6)...
Device Specific.(Z7)...0B7C1135
Device Specific.(Z8)...C99B5D94
Device Specific.(Z9)...US1.10X5
Device Specific.(ZA)...U2D1.10X5

Device Specific.(ZB)...U3K1.10X5
Device Specific.(ZC)...EF
Hardware Location Code...U78A0.001.DNWGHBR-P1-C2-T1

PLATFORM SPECIFIC
Name: fibre-channel
Model: 10N9824
Node: fibre-channel@0
Device Type: fcp
Physical Location: U78A0.001.DNWGHBR-P1-C2-T1

The output shows the HBA specifications (Part Number and Customer Card ID Number) and WWN (Network Address).

HBA Physical Device Identifier Properties

AIX assigns a unique physical device identifier (fcs#) and a virtual device identifier (fscsi#) to each HBA port. The properties of these two identifiers are used in the interaction among storage systems, AIX, and upper-layer applications. Therefore, configure these properties correctly based on site requirements.

Run the following command to view the properties of an HBA's physical device identifier:

bash-3.00# lsattr -EHl fcs0
attribute value description user_settable
bus_intr_lvl Bus interrupt level False
bus_io_addr 0xff800 Bus I/O address False
bus_mem_addr 0xffe7e000 Bus memory address False
init_link al INIT Link flags True
intr_msi_1 581 Bus interrupt level False
intr_priority 3 Interrupt priority False
lg_term_dma 0x800000 Long term DMA True
max_xfer_size 0x100000 Maximum Transfer Size True
num_cmd_elems 200 Maximum number of COMMANDS to queue to the adapter True
pref_alpa 0x1 Preferred AL_PA True
sw_fc_class 2 FC Class for Fabric True
bash-3.00#

Among the preceding properties, note the following parameters:

init_link
Indicates the Fibre Channel HBA port mode. Possible values are auto, al, and pt2pt, indicating three connection modes. Connection modes vary with HBAs. For example, some HBAs support only al and pt2pt and some support only auto.

lg_term_dma
Indicates the size of the memory where fcs# stores I/O commands and data. By default, the value is 0x800000, namely, 8 MB. This parameter is related to read/write performance.

max_xfer_size
Indicates the maximum I/O transfer length of fcs#.
By default, the value is 0x100000, namely, 1 MB. This property is related to read/write performance.

num_cmd_elems

Indicates the number of concurrent I/Os of fcs#. By default, the value is 200. This parameter is related to read/write performance.

The preceding parameters need to be adjusted only when the connection between the host and the storage system fails or the read/write performance is poor.

HBA Virtual Device Identifier Properties

Run the following command to view the properties of an HBA's virtual device identifier:

bash-3.00# lsattr -EHl fscsi0
attribute value description user_settable
attach none How this adapter is CONNECTED False
dyntrk no Dynamic Tracking of FC Devices True
fc_err_recov delayed_fail FC Fabric Event Error RECOVERY Policy True
scsi_id Adapter SCSI ID False
sw_fc_class 3 FC Class for Fabric True
bash-3.00#

Among the preceding properties, note the following parameters:

dyntrk
Indicates the status of the dynamic tracking function. By default, the value is no. When dynamic tracking is enabled, HBA service status is monitored in a timely manner.

fc_err_recov
Indicates the status of the fast error recovery function. By default, the value is delayed_fail. This parameter determines the time an HBA spends in fault diagnosis.

These parameters are related to service path selection. Configure them based on site requirements when multiple paths exist. For details about how to configure the parameters, see the user guides specific to the multipathing software.

HBA Parameters

Before changing a parameter value, run the following command to view the available values of the parameter:

bash-3.00# lsattr -Rl fcs0 -a max_xfer_size
0x100000
0x200000
0x400000
0x800000
0x1000000
bash-3.00#

The output shows the five possible values of max_xfer_size of fcs0.

Run the following command to change the value of max_xfer_size of fcs0:

bash-3.00# chdev -l fcs0 -a max_xfer_size=0x200000

After changing the parameter value, run the lsattr -EHl fcs0 command to verify that the change is successful.
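The max_xfer_size candidates are hexadecimal byte counts, so a quick conversion shows what each value means before applying one with chdev. A sketch; the value list is typical for these adapters and should be confirmed with lsattr -Rl on the host, and the chdev command is only printed here because it exists on the AIX host, not in this sandbox:

```shell
# Convert candidate max_xfer_size values (hexadecimal bytes) to MB and
# print the chdev command that would apply one of them. Confirm the
# real candidate list with "lsattr -Rl fcs0 -a max_xfer_size"; chdev
# is only echoed, never executed, in this sketch.
for hex in 0x100000 0x200000 0x400000 0x800000 0x1000000; do
  bytes=$((hex))                     # shell arithmetic accepts 0x constants
  echo "$hex = $((bytes / 1048576)) MB"
done
echo "would run: chdev -l fcs0 -a max_xfer_size=0x200000"
```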

4 Preparations Before Configuration (on a Storage System)

Make sure that RAID, LUNs, and hosts are created correctly on the storage system. These configurations are common and therefore not detailed here.

5 Switch Configuration

A Fibre Channel multi-path network is recommended. This chapter details the Fibre Channel switches used in this network.

5.1 Fibre Channel Switch

The commonly used Fibre Channel switches are mainly from Brocade, Cisco, and QLogic. The following uses a Brocade switch as an example to explain how to configure switches.

Querying the Switch Model and Version

Perform the following steps to query the switch model and version:

Step 1 Log in to the Brocade switch from a web page.

In a web browser, enter the IP address of the Brocade switch. The Web Tools switch login dialog box is displayed. Enter the account and password. The default account and password are admin and password. The switch management page is displayed.

CAUTION
Web Tools works correctly only when Java is installed on the host. Java 1.6 or later is recommended.

Step 2 View the switch information.

On the switch management page that is displayed, click Switch Information.

Figure 5-1 Switch information

Note the following parameters:

Fabric OS version: indicates the switch version information. The interoperability between switches and storage systems varies with the switch version. Only switches of authenticated versions can interconnect correctly with storage systems.

Type: a decimal number that consists of an integer part and a fractional part. The integer part indicates the switch model and the fractional part indicates the switch template version. You only need to pay attention to the switch model. Table 5-1 describes the switch model mapping.

Table 5-1 Switch model mapping

Switch Type   B-Series Switch Model   Switch Type   B-Series Switch Model
              Brocade DCX                           Brocade Encryption Switch E M

Switch Type   B-Series Switch Model
              Brocade DCX-4S

Ethernet IPv4: indicates the switch IP address.

Effective Configuration: indicates the currently effective configurations. This parameter is important and is related to zone configurations. In this example, the currently effective configuration is ss.

----End

Configuring Zones

Zone configuration is important for Fibre Channel switches. Perform the following steps to configure switch zones:

Log in to the Brocade switch from a web page. This step is the same as that in section "Querying the Switch Model and Version."

Step 1 Check the switch port status.

Normally, the switch port indicators are steady green, as shown in Figure 5-2.

Figure 5-2 Switch port indicator status

If the port indicators are abnormal, check the topology mode and rate. Proceed with the next step after all indicators are normal.

Step 2 Go to the Zone Admin page.

In the navigation tree of Web Tools, choose Task > Manage > Zone Admin. You can also choose Manage > Zone Admin in the navigation bar.

Step 3 Check whether the switch identifies hosts and storage systems.

On the Zone Admin page, click the Zone tab. In Ports&Attached Devices, check whether all related ports are identified, as shown in Figure 5-3.

Figure 5-3 Zone tab page

The preceding figure shows that ports 1,8 and 1,9 in use are correctly identified by the switch.

Step 4 Create a zone.

On the Zone tab page, click New Zone to create a new zone and name it zone_8_9. Select ports 1,8 and 1,9 and click Add Member to add them to the new zone, as shown in Figure 5-4.

Figure 5-4 Zone configuration

Step 5 Add the new zone to the configuration file and activate the new zone.

On the Zone Admin page, click the Zone Config tab. In the Name drop-down list, choose the currently effective configuration ss.

In Member Selection List, select zone zone_8_9 and click Add Member to add it to the configuration file. Click Save Config to save the configuration and click Enable Config to make the configuration effective.

Figure 5-5 Zone Config tab page

Step 6 Verify that the configuration takes effect.

In the navigation tree of Web Tools, choose Task > Monitor > Name Server to go to the Name Server page. You can also choose Monitor > Name Server in the navigation bar.

Figure 5-6 Name Server page

The preceding figure shows that ports 8 and 9 are members of zone_8_9, which is now effective. An effective zone is marked by an asterisk (*).

----End

Precautions

Note the following when connecting a Brocade switch to a storage system at a rate of 8 Gbit/s:

The topology mode of the storage system must be set to switch.

The fill word of ports through which the switch is connected to the storage system must be set to 0. To configure this parameter, run the portcfgfillword <port number> 0 command on the switch.

When the switch is connected to module HP VC 8Gb 20-port FC or HP VC FlexFabric 10Gb/24-port, change the switch configuration. For details, see the related HP documentation.

5.2 Ethernet Switch

This section describes how to configure Ethernet switches, including configuring VLANs and binding ports.

Configuring VLANs

On an Ethernet network to which many hosts are connected, a large number of broadcast packets are generated during host communication. Broadcast packets sent from one host are received by all other hosts on the network, consuming more bandwidth. Moreover, all hosts on the network can access each other, resulting in data security risks. To save bandwidth and prevent security risks, hosts on an Ethernet network are divided into multiple logical groups. Each logical group is a VLAN.

The following uses the HUAWEI Quidway 2700 Ethernet switch as an example to explain how to configure VLANs. In the following example, two VLANs (VLAN 1000 and VLAN 2000) are created. VLAN 1000 contains ports GE 1/0/1 to 1/0/16. VLAN 2000 contains ports GE 1/0/20 to 1/0/24.

Step 1 Go to the system view.

<Quidway>system-view
System View: return to User View with Ctrl+Z.

Step 2 Create VLAN 1000 and add ports to it.
[Quidway]VLAN 1000
[Quidway-vlan1000]port GigabitEthernet 1/0/1 to GigabitEthernet 1/0/16

Step 3 Configure the IP address of VLAN 1000.

[Quidway-vlan1000]interface VLAN 1000
[Quidway-Vlan-interface1000]ip address

Step 4 Create VLAN 2000, add ports, and configure the IP address.

[Quidway]VLAN 2000
[Quidway-vlan2000]port GigabitEthernet 1/0/20 to GigabitEthernet 1/0/24
[Quidway-vlan2000]interface VLAN 2000
[Quidway-Vlan-interface2000]ip address

----End

Binding Ports (Link Aggregation)

When storage devices and application servers are connected in point-to-point mode, the existing bandwidth may be insufficient for storage data transmission. Moreover, devices cannot be redundantly connected in point-to-point mode. To address these problems, ports are bound (link aggregation). Port binding can improve bandwidth and balance load among multiple links.

Link Aggregation Modes

Three Ethernet link aggregation modes are available:

Manual aggregation: Manually run a command to add ports to an aggregation group. Ports added to the aggregation group must have the same link type.

Static aggregation: Manually run a command to add ports to an aggregation group. Ports added to the aggregation group must have the same link type and LACP enabled.

Dynamic aggregation: The protocol dynamically adds ports to an aggregation group. Ports added in this way must have LACP enabled and the same speed, duplex mode, and link type.

Table 5-2 compares the three link aggregation modes.

Table 5-2 Comparison of link aggregation modes

Link Aggregation Mode   Packet Exchange   Port Detection   CPU Usage
Manual aggregation      No                No               Low
Static aggregation      Yes               Yes              High
Dynamic aggregation     Yes               Yes              High

Procedure

HUAWEI OceanStor storage devices support 802.3ad link aggregation (dynamic aggregation). In this link aggregation mode, multiple network ports are in an active aggregation group and work in duplex mode and at the same speed. After binding iscsi host ports on a storage device, enable aggregation for their peer ports on the switch. Otherwise, links are unavailable between the storage device and the switch.
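When the same VLAN layout must be applied to several switches, the configuration commands shown in the VLAN steps above can be generated from a small plan instead of typed by hand. The sketch below (not part of the guide) emits the command strings in the syntax used in this section; VLAN IDs and port ranges are the example values from the text.

```python
# Sketch: generate Quidway-style VLAN configuration lines from a VLAN plan.
# The command syntax mirrors the example in this section; the VLAN IDs and
# GigabitEthernet port ranges below are the example values from the text.

def vlan_config(vlan_id: int, first_port: int, last_port: int) -> list:
    """Return the view-entry and port-assignment commands for one VLAN."""
    return [
        f"vlan {vlan_id}",
        f"port GigabitEthernet 1/0/{first_port} to GigabitEthernet 1/0/{last_port}",
    ]

lines = vlan_config(1000, 1, 16) + vlan_config(2000, 20, 24)
for line in lines:
    print(line)
```

The generated lines can then be pasted into the switch's system view in order.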

This section uses switch ports GE 1/0/1 and GE 1/0/2 and iscsi host ports P2 and P3 as examples to explain how to bind ports. You can adjust related parameters based on site requirements.

Bind the iscsi host ports.

Step 1 Log in to the ISM and go to the page for binding ports.

In the ISM navigation tree, choose Device Info > Storage Unit > Ports. In the function pane, click iscsi Host Ports.

Step 2 Bind ports.

Select the ports that you want to bind and choose Bind Ports > Bind in the menu bar. In this example, the ports to be bound are P2 and P3. The Bind iscsi Port dialog box is displayed. In Bond name, enter the name for the port bond and click OK. The Warning dialog box is displayed. In the Warning dialog box, select I have read the warning message carefully and click OK. The Information dialog box is displayed, indicating that the operation succeeded. Click OK.

After the storage system ports are bound, configure link aggregation on the switch. Run the following commands on the switch:

<Quidway>system-view
System View: return to User View with Ctrl+Z.
[Quidway-Switch]interface GigabitEthernet 1/0/1
[Quidway-Switch-GigabitEthernet1/0/1]lacp enable
LACP is already enabled on the port!
[Quidway-Switch-GigabitEthernet1/0/1]quit
[Quidway-Switch]interface GigabitEthernet 1/0/2
[Quidway-Switch-GigabitEthernet1/0/2]lacp enable
LACP is already enabled on the port!
[Quidway-Switch-GigabitEthernet1/0/2]quit

After the commands are executed, LACP is enabled for ports GE 1/0/1 and GE 1/0/2. The ports can then be automatically detected and added to an aggregation group.

5.3 FCoE Switch

The configurations of FCoE switches are different from those of FC switches and Ethernet switches. For details, see the configuration guide provided by the specific switch vendor. Taking Cisco Nexus5548 as an example, Figure 5-7 shows an FCoE configuration process.

Figure 5-7 Process for configuring an FCoE switch

Command Introduction

When using SSH to log in to and manage an FCoE switch, you can display all supported commands by entering "?":

switch# ?
callhome         Callhome commands
cd               Change current directory
cfs              CFS parameters
checkpoint       Create configuration rollback checkpoint
clear            Reset functions
cli              CLI commands
clock            Manage the system clock
configure        Enter configuration mode
copy             Copy from one file to another
debug            Debugging functions
debug-filter     Enable filtering for debugging functions
delete           Delete a file or directory
diff-clean       Remove temp files created by 'diff' filters
dir              List files in a directory
discover         Discover information
dos2nxos         DOS to NXOS text file format converter
echo             Echo argument back to screen (useful for scripts)
ethanalyzer      Configure cisco packet analyzer
event            Event Manager commands
fcdomain         Fcdomain internal command
fcping           Ping an N-Port
fctrace          Trace the route for an N-Port
find             Find a file below the current directory
fips             Enable/Disable FIPS mode
gunzip           Uncompresses LZ77 coded files
gzip             Compresses file using LZ77 coding
hardware         Change hardware usage settings
install          Upgrade software
ip               Configure IP features
ipv6             Configure IPv6 features
load             Load system image
locator-led      Turn on locator beacon
mkdir            Create new directory
modem            Modem commands
move             Move files

mping            Run mping
mtrace           Trace multicast path from receiver to source
no               Negate a command or set its defaults
ntp              NTP configuration
ping             Test network reachability
ping6            Test IPv6 network reachability
pktmgr           Display Packet Manager information
purge            Deletes unused data
pwd              View current directory
reload           Reboot the entire box
restart          Manually restart a component
rmdir            Delete a directory
rollback         Rollback configuration
routing-context  Set the routing context
run-script       Run shell scripts
san-port-channel Port-Channel related commands
scripting        Configure scripting parameters
send             Send message to open sessions
setup            Run the basic SETUP command facility
show             Show running system information
sleep            Sleep for the specified number of seconds
sockets          Display sockets status and configuration
ssh              SSH to another system
system           System management commands
system           System configuration commands
tac-pac          Save tac info in a compressed .gz file at specific location
tail             Display the last part of a file
tar              Archiving operations
tclsh            Source tclsh script
telnet           Telnet to another system
telnet6          Telnet6 to another system using IPv6 addressing
terminal         Set terminal line parameters
test             Test command
traceroute       Traceroute to destination
traceroute6      Traceroute6 to destination
undebug          Disable Debugging functions (See also debug)
write            Write current configuration
xml              Xml agent
xml              Module XML agent
zone             Execute Zone Server commands
zoneset          Execute zoneset commands
end              Go to exec mode
exit             Exit from command interpreter
pop              Pop mode from stack or restore from name
push             Push current mode to stack or save it under name
where            Shows the cli context you are in
switch#

For example, to query the model and version, run the following command:

switch# show version
Cisco Nexus Operating System (NX-OS) Software
TAC support:
Documents:

Copyright (c) , Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained herein are owned by other third parties and are used and distributed under license. Some parts of this software are covered under the GNU Public License. A copy of the license is available at

Software
  BIOS:      version
  loader:    version N/A
  kickstart: version 5.1(3)N1(1a)
  system:    version 5.1(3)N1(1a)
  power-seq: Module 1: version v1.0
             Module 3: version v2.0
  uc:        version v
  SFP uc:    Module 1: v
  BIOS compile time: 02/03/2011
  kickstart image file is: bootflash:///n5000-uk9-kickstart n1.1a.bin
  kickstart compile time: 2/7/ :00:00 [02/08/ :49:30]
  system image file is: bootflash:///n5000-uk n1.1a.bin
  system compile time: 2/7/ :00:00 [02/08/ :44:33]

Hardware
  cisco Nexus5548 Chassis ("O2 32X10GE/Modular Universal Platform Supervisor")
  Intel(R) Xeon(R) CPU with kb of memory.
  Processor Board ID FOC16256KUW
  Device name: switch
  bootflash: KB

Kernel uptime is 15 day(s), 1 hour(s), 59 minute(s), 8 second(s)

Last reset at usecs after Wed Feb 18 05:48:
  Reason: Reset Requested by CLI command reload
  System version: 5.1(3)N1(1a)
  Service:

plugin
  Core Plugin, Ethernet Plugin, Fc Plugin

Creating a VSAN

To create a VSAN on a Cisco Nexus5548, do as follows:

Step 1 Activate FCoE.

switch# conf t
Enter configuration commands, one per line. End with CNTL/Z.
switch(config)# feature fcoe
fcoe      fcoe-npv
switch(config)# feature fcoe
switch(config)# show fcoe
Global FCF details
FCF-MAC is 54:7f:ee:b4:f8:20

FC-MAP is 0e:fc:00
FCF Priority is 128
FKA Advertisement period for FCF is 8 seconds

Step 2 Create a VSAN.

In the following output, the vsan 200 command run in the VSAN database view creates the VSAN. Additionally, you can run the show vsan command to check whether the VSAN is created successfully.

switch(config)# show vsan
vsan 1 information
name:vsan0001 state:active
interoperability mode:default
loadbalancing:src-id/dst-id/oxid
operational state:down

vsan 100 information
name:vsan0100 state:active
interoperability mode:default
loadbalancing:src-id/dst-id/oxid
operational state:up

vsan 4079:evfp_isolated_vsan
vsan 4094:isolated_vsan

switch(config)# vsan database
switch(config-vsan-db)# vsan 200
switch(config-vsan-db)# exit
switch(config)# show vsan
vsan 1 information
name:vsan0001 state:active
interoperability mode:default
loadbalancing:src-id/dst-id/oxid
operational state:down

vsan 100 information
name:vsan0100 state:active
interoperability mode:default
loadbalancing:src-id/dst-id/oxid
operational state:up

vsan 200 information
name:vsan0200 state:active
interoperability mode:default
loadbalancing:src-id/dst-id/oxid
operational state:down

vsan 4079:evfp_isolated_vsan
vsan 4094:isolated_vsan

----End
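The `show vsan` check in Step 2 above can also be automated when many switches must be verified. The sketch below (not from the guide) extracts the VSAN IDs from captured `show vsan` output; the sample text mirrors the format shown in this section.

```python
# Sketch: confirm that a VSAN appears in captured `show vsan` output by
# extracting IDs from lines of the form "vsan <id> information".

import re

def vsan_ids(show_vsan_output: str) -> set:
    """Return the set of VSAN IDs present in the output."""
    return {int(m.group(1))
            for m in re.finditer(r"^vsan (\d+) information",
                                 show_vsan_output, re.MULTILINE)}

sample = """\
vsan 1 information
name:vsan0001 state:active
vsan 100 information
name:vsan0100 state:active
vsan 200 information
name:vsan0200 state:active"""

print(200 in vsan_ids(sample))  # True
```

Feeding the function real output collected over SSH would allow a script to fail fast when an expected VSAN is missing.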

Creating a VLAN

To create a VLAN on a Cisco Nexus5548, do as follows:

Step 1 Check for existing VLANs.

switch(config)# show vlan

VLAN Name     Status  Ports
---- -------- ------- ------------------------------
1    default  active  Eth1/1, Eth1/2, Eth1/4, Eth1/5
                      Eth1/6, Eth1/7, Eth1/8, Eth1/15
                      Eth1/21, Eth1/22, Eth1/23
                      Eth1/24, Eth1/25, Eth1/26
                      Eth1/27, Eth1/28
100  VLAN0100 active  Eth1/1, Eth1/2, Eth1/3, Eth1/4
                      Eth1/5, Eth1/6, Eth1/7, Eth1/8
                      Eth1/9, Eth1/10, Eth1/11
                      Eth1/12, Eth1/13, Eth1/14
                      Eth1/15, Eth1/16, Eth1/17
                      Eth1/18, Eth1/19, Eth1/20

VLAN Type Vlan-mode
---- ---- ---------
1    enet CE
100  enet CE

Remote SPAN VLANs
-----------------

Primary Secondary Type Ports
------- --------- ---- -----

Step 2 Create a VLAN and check whether the creation is successful.

switch(config)# vlan 200
switch(config-vlan)# show vlan

VLAN Name     Status  Ports
---- -------- ------- ------------------------------
1    default  active  Eth1/1, Eth1/2, Eth1/4, Eth1/5
                      Eth1/6, Eth1/7, Eth1/8, Eth1/15
                      Eth1/21, Eth1/22, Eth1/23
                      Eth1/24, Eth1/25, Eth1/26
                      Eth1/27, Eth1/28
100  VLAN0100 active  Eth1/1, Eth1/2, Eth1/3, Eth1/4
                      Eth1/5, Eth1/6, Eth1/7, Eth1/8
                      Eth1/9, Eth1/10, Eth1/11
                      Eth1/12, Eth1/13, Eth1/14
                      Eth1/15, Eth1/16, Eth1/17
                      Eth1/18, Eth1/19, Eth1/20
200  VLAN0200 active  Eth1/1, Eth1/2, Eth1/4, Eth1/5
                      Eth1/6, Eth1/7, Eth1/8, Eth1/15

VLAN Type Vlan-mode
---- ---- ---------

1    enet CE
100  enet CE
200  enet CE

Remote SPAN VLANs
-----------------

Primary Secondary Type Ports
------- --------- ---- -----

----End

Configuring a Port and Adding It to the VLAN

To configure a port and add it to a created VLAN, do as follows:

Step 1 Configure the port running mode and add it to the VLAN.

switch(config)# interface ethernet 1/1
switch(config-if)# switchport mode trunk
switch(config-if)# spanning-tree port type edge trunk

Step 2 Create a VFC and bind it to the physical port.

switch(config)# interface vfc 1
switch(config-if)# bind interface ethernet 1/1
switch(config-if)# no shutdown

Step 3 Add the new VFC to the VSAN.

NEXUS(config)# vsan database
NEXUS(config-vsan-db)# vsan 2 interface vfc 1

----End

Creating a Zone and Adding the Port to It

To create a zone and add a port to it on a Cisco Nexus5548, do as follows:

Step 1 Check the WWN of the FCoE device connected to the Cisco Nexus5548 switch:

switch# show flogi database
INTERFACE  VSAN  FCID  PORT NAME                NODE NAME
vfc        0x2b  20:00:00:0e:1e:0a:6b:ab  20:00:00:0e:1e:0a:6b:ab
vfc        0x2b  21:00:00:c0:dd:13:e2:a1  20:00:00:c0:dd:13:e2:a1 [lzh1]
vfc        0x2b  10:00:00:07:43:ab:ce:07  10:00:00:07:43:ab:ce:07
vfc        0x2b  21:00:00:c0:dd:13:e2:a3  20:00:00:c0:dd:13:e2:a3 [lzh2]
Total number of flogi = 4.

Step 2 On the switch, register a device name for the FCoE device. Then, either the device name or the WWN can be used during later operations such as zone division.

switch(config)# device-alias database

switch(config-device-alias-db)# device-alias name test1 pwwn 20:00:00:0e:1e:0a:6b:ab
switch(config-device-alias-db)# device-alias name test2 pwwn 10:00:00:07:43:ab:ce:07
switch(config-device-alias-db)# device-alias commit
switch(config-device-alias-db)# show device-alias database
device-alias name lzh1 pwwn 21:00:00:c0:dd:13:e2:a1
device-alias name lzh2 pwwn 21:00:00:c0:dd:13:e2:a3
device-alias name lzh3 pwwn 20:00:00:07:43:ab:cd:ef
device-alias name lzh4 pwwn 20:00:00:07:43:ab:cd:f7
device-alias name test1 pwwn 20:00:00:0e:1e:0a:6b:ab
device-alias name test2 pwwn 10:00:00:07:43:ab:ce:07

Step 3 Add the device name to the zone.

switch# show zone
zone name zonexzh vsan 100
  pwwn 21:00:00:0e:1e:0a:6b:ab
  pwwn 00:00:00:07:43:ab:cd:f7
  pwwn 20:00:00:07:43:ab:ce:07
zone name zonexzh02 vsan 100
  pwwn 21:00:00:0e:1e:0a:6b:af
zone name zonexz vsan 100
  pwwn 21:00:00:c0:dd:12:06:03
  pwwn 20:00:00:07:43:ab:cd:ff
zone name lzhzone1 vsan 100
  pwwn 21:00:00:c0:dd:13:e2:a1 [lzh1]
  pwwn 20:00:00:07:43:ab:cd:ef [lzh3]
zone name lzhzone2 vsan 100
  pwwn 21:00:00:c0:dd:13:e2:a3 [lzh2]
  pwwn 20:00:00:07:43:ab:cd:f7 [lzh4]
zone name lzhzone3 vsan 100
switch(config)# zone name lzhzone3 vsan 100
switch(config-zone)# member device-alias test1
switch(config-zone)# member device-alias test2
switch(config-zone)# show zone
zone name zonexzh vsan 100
  pwwn 21:00:00:0e:1e:0a:6b:ab
  pwwn 00:00:00:07:43:ab:cd:f7
  pwwn 20:00:00:07:43:ab:ce:07
zone name zonexzh02 vsan 100
  pwwn 21:00:00:0e:1e:0a:6b:af
zone name zonexz vsan 100
  pwwn 21:00:00:c0:dd:12:06:03
  pwwn 20:00:00:07:43:ab:cd:ff
zone name lzhzone1 vsan 100
  pwwn 21:00:00:c0:dd:13:e2:a1 [lzh1]
  pwwn 20:00:00:07:43:ab:cd:ef [lzh3]
zone name lzhzone2 vsan 100

  pwwn 21:00:00:c0:dd:13:e2:a3 [lzh2]
  pwwn 20:00:00:07:43:ab:cd:f7 [lzh4]
zone name lzhzone3 vsan 100
  pwwn 20:00:00:0e:1e:0a:6b:ab [test1]
  pwwn 10:00:00:07:43:ab:ce:07 [test2]

----End

Creating a Zoneset and Adding the Created Zone to It

To create a zoneset and add a zone to it, do as follows:

Step 1 Create a zoneset in the VSAN.

switch(config)# zoneset name lzhzoneset5 vsan 100
switch(config-zoneset)# show zoneset
zoneset name zoneset100 vsan 100
  zone name zonexzh vsan 100
    pwwn 21:00:00:0e:1e:0a:6b:ab
    pwwn 00:00:00:07:43:ab:cd:f7
    pwwn 20:00:00:07:43:ab:ce:07
  zone name zonexzh02 vsan 100
    pwwn 21:00:00:0e:1e:0a:6b:af
  zone name zonexz vsan 100
    pwwn 21:00:00:c0:dd:12:06:03
    pwwn 20:00:00:07:43:ab:cd:ff
  zone name lzhzone1 vsan 100
    pwwn 21:00:00:c0:dd:13:e2:a1 [lzh1]
    pwwn 20:00:00:07:43:ab:cd:ef [lzh3]
  zone name lzhzone2 vsan 100
    pwwn 21:00:00:c0:dd:13:e2:a3 [lzh2]
    pwwn 20:00:00:07:43:ab:cd:f7 [lzh4]
zoneset name lzhzoneset5 vsan 100

Step 2 Add the zone to the created zoneset.

switch(config-zoneset)# member lzhzone3
switch(config-zoneset)# show zoneset
zoneset name zoneset100 vsan 100
  zone name zonexzh vsan 100
    pwwn 21:00:00:0e:1e:0a:6b:ab
    pwwn 00:00:00:07:43:ab:cd:f7
    pwwn 20:00:00:07:43:ab:ce:07
  zone name zonexzh02 vsan 100
    pwwn 21:00:00:0e:1e:0a:6b:af
  zone name zonexz vsan 100
    pwwn 21:00:00:c0:dd:12:06:03
    pwwn 20:00:00:07:43:ab:cd:ff

  zone name lzhzone1 vsan 100
    pwwn 21:00:00:c0:dd:13:e2:a1 [lzh1]
    pwwn 20:00:00:07:43:ab:cd:ef [lzh3]
  zone name lzhzone2 vsan 100
    pwwn 21:00:00:c0:dd:13:e2:a3 [lzh2]
    pwwn 20:00:00:07:43:ab:cd:f7 [lzh4]
zoneset name lzhzoneset5 vsan 100
  zone name lzhzone3 vsan 100
    pwwn 20:00:00:0e:1e:0a:6b:ab [test1]
    pwwn 10:00:00:07:43:ab:ce:07 [test2]

Step 3 Activate the zoneset.

switch(config)# zoneset activate name zoneset_1 vsan 2
zoneset activation initiated. check zone status

WARNING
Generally, for an FCoE switch, only one zoneset can be activated. Therefore, it is advisable to keep all the zones in the same zoneset, preventing impacts on other services.
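Zone members throughout this chapter are identified by WWPNs written as eight colon-separated hexadecimal byte pairs. Mistyped WWPNs are a common cause of hosts silently failing to log in, so they are worth validating before zoning. The following is a minimal sketch (not part of the guide) of such a check:

```python
# Sketch: check that a zone member WWPN has the form used throughout this
# chapter: eight colon-separated hex byte pairs, e.g. 21:00:00:c0:dd:13:e2:a1.

import re

WWPN_RE = re.compile(r"^([0-9a-f]{2}:){7}[0-9a-f]{2}$")

def is_valid_wwpn(wwpn: str) -> bool:
    """Accept upper- or lower-case hex digits."""
    return bool(WWPN_RE.match(wwpn.lower()))

print(is_valid_wwpn("21:00:00:c0:dd:13:e2:a1"))  # True
print(is_valid_wwpn("21:00:00:c0:dd:13:e2"))     # False (only 7 byte pairs)
```

Running every planned zone member through such a check before pasting commands into the switch catches truncated or mis-delimited WWPNs early.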

6 Establishing Fibre Channel Connections

After connecting a host to a storage system, check the topology modes of the host and the storage system. Fibre Channel connections are established between the host and the storage system after host initiators are identified by the storage system. The following describes how to check topology modes and add initiators.

6.1 Checking Topology Modes

If a storage system is connected to an AIX host over a directly connected network, the topology mode must be arbitrated loop. If a storage system is connected to an AIX host over a switch-connected network, any topology mode is applicable.

The method for checking topology modes varies with storage systems. The following describes how to check the topology mode of the OceanStor T series storage system and the OceanStor series enterprise storage system.

OceanStor T Series Storage System

The check method is as follows:

In the ISM navigation tree, choose Device Info > Storage Unit > Ports. In the function pane, click FC Host Ports. Select a port connected to the host and then view the port details, as shown in Figure 6-1.

Figure 6-1 Fibre Channel port details

As shown in the preceding figure, the topology mode of the OceanStor T series storage system is Public Loop. On the host, check that the HBA port mode is al or auto (if supported). For details about how to check and modify the port mode, see section "HBA Physical Device Identifier Properties" and section "HBA Parameters" respectively.

OceanStor 18000/T V2/V3/Dorado V3 Series Enterprise Storage System

The check method is as follows:

In the ISM navigation tree, choose System. Then click the device view icon in the upper right corner. Choose Controller Enclosure ENG0 > Controller > Interface Module > FC Port and click the port whose details you want to view, as shown in Figure 6-2. In the navigation tree, you can see controller A and controller B, each of which has different interface modules. Choose a controller and an interface module based on actual conditions.

Figure 6-2 Fibre Channel port details

As shown in the preceding figure, the port working mode of the OceanStor 18000/T V2/V3/Dorado V3 storage system is P2P. On the host, check that the HBA port mode is al or auto (if supported). For details about how to check and modify the port mode, see section "HBA Physical Device Identifier Properties" and section "HBA Parameters" respectively.

6.2 Adding Initiators

This section describes how to add host HBA initiators on a storage system. Perform the following steps to add initiators:

Step 1 Check HBA WWNs on the host. For details, see section "HBA WWNs."

Step 2 Run the cfgmgr -v command twice on the host to scan for hardware devices.

Step 3 Check host WWNs on the storage system.

The method for checking host WWNs varies with storage systems. The following describes how to check WWNs on the OceanStor T series storage system and the OceanStor series enterprise storage system.

OceanStor T series storage system

Log in to the ISM and choose SAN Services > Mappings > Initiators in the navigation tree. In the function pane, check the initiator information. Ensure that the WWNs in step 1 are found. If the WWNs are not found, check the Fibre Channel port status. Ensure that the port status is normal.

OceanStor series enterprise storage system

Log in to the ISM and choose Host in the navigation tree. On the Host tab page that is displayed, select a host, click Add Initiator, and check that the WWNs in step 1 are found. If the WWNs are not identified, check the Fibre Channel port status. Ensure that the port status is normal.

----End

6.3 Establishing Connections

Add the WWNs (initiators) to the host and ensure that the initiator connection status is Online. If the initiator connection status is Offline, run the cfgmgr -v command on the host to scan for hardware devices. After initiators are added to the host, Fibre Channel links are established between the host and storage system.

7 Establishing iscsi Connections

IP addresses and iscsi services need to be configured before you establish iscsi connections. The procedure for establishing iscsi connections is as follows:

1. Confirm that the required software packages are installed on the host.
2. Configure service IP addresses on the host and the storage system.
3. Configure iscsi initiators on the host.
4. Check the iscsi targets of the storage system.
5. Configure the iscsi service on the host.
6. Check initiators on the storage system and establish connections.

The following details each step in this procedure.

7.1 Checking iscsi Software on the Host

By default, iscsi software is installed on the host during the AIX system installation. To check the iscsi software installation, run the following command:

bash-3.2# lslpp -l | grep devices.iscsi
devices.iscsi.disk.rte   COMMITTED  iscsi Disk Software
devices.iscsi.tape.rte   COMMITTED  iscsi Tape Software
devices.iscsi_sw.rte     COMMITTED  iscsi Software Device Driver
devices.iscsi_sw.rte     COMMITTED  iscsi Software Device Driver
bash-3.2#

If the preceding software is not installed, install the software using the operating system installation CD-ROM. Perform the following steps to install the software:

Step 1 Insert the operating system installation CD-ROM into the host's CD-ROM drive.

Step 2 Run the smitty update_all command on the host. On the installation configuration screen, press Esc+4. On the screen for selecting the installation source, choose /dev/cd0, as shown in Figure 7-1.

Figure 7-1 Screen for selecting the installation source

Press Enter to display the software installation screen.

Step 3 Select the software that you want to install and start installing the software. Figure 7-2 shows the software installation screen.

Figure 7-2 Software installation screen

Move the cursor to SOFTWARE to update and press Esc+4. On the screen for selecting software packages, press F7 to choose the four iscsi software packages. Then set ACCEPT new license agreements to YES. Press Enter to start the software installation.

----End

7.2 Configuring Service IP Addresses

Storage System

Storage systems and hosts use IP addresses to identify each other in iscsi services. Therefore, service IP addresses must be configured for storage systems and hosts. The following describes how to configure service IP addresses for a storage system and a host.

Different versions of storage systems support different IP protocols. Specify the IP protocols for storage systems based on actual storage system versions and application scenarios.
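When planning the service IP addresses discussed in this section, ports that must not share a network segment (for example, an iscsi host port versus the management or heartbeat network) can be sanity-checked with a short script. The sketch below is not part of the guide and uses made-up example addresses; it relies on Python's standard ipaddress module.

```python
# Sketch: verify that planned iscsi host port addresses do not land on the
# same network segment as other networks. All addresses below are examples.

from ipaddress import ip_interface

def same_segment(a: str, b: str) -> bool:
    """True if two address/prefix pairs fall on the same network segment."""
    return ip_interface(a).network == ip_interface(b).network

mgmt = "192.168.10.5/24"       # management network port (example)
iscsi_p2 = "192.168.20.11/24"  # iscsi host port P2 (example)
iscsi_p3 = "192.168.30.11/24"  # iscsi host port P3 (example)

print(same_segment(mgmt, iscsi_p2))      # False: different segments, OK
print(same_segment(iscsi_p2, iscsi_p3))  # False: different segments, OK
```

Running such a check over the full address plan before configuration avoids reworking port IP addresses later, which would interrupt services on the ports.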

Observe the following principles when configuring IP addresses of iscsi ports on storage systems:

The IP addresses of an iscsi host port and a management network port must reside on different network segments.

The IP addresses of an iscsi host port and a heartbeat network port must reside on different network segments.

The IP addresses of iscsi host ports on the same controller must reside on different network segments. In some storage systems of the latest versions, IP addresses of iscsi host ports on the same controller can reside on the same network segment. However, this configuration is not recommended.

CAUTION
Read-only users are not allowed to modify the IP address of an iscsi host port. Modifying the IP address of an iscsi host port will interrupt the services on the port.

The IP address configuration varies with storage systems. The following explains how to configure IPv4 addresses on the OceanStor T series storage system and the OceanStor series enterprise storage system.

OceanStor T Series Storage System

In the ISM navigation tree, choose Device Info > Storage Unit > Ports. In the function pane, click iscsi Host Ports. Select a port and choose IP Address > Modify IPv4 Address in the tool bar, as shown in Figure 7-3.

Figure 7-3 Modifying IPv4 addresses

In the dialog box that is displayed, enter the new IP address and subnet mask and click OK.

OceanStor 18000/T V2/V3/Dorado V3 Series Enterprise Storage System

Step 1 Go to the iscsi Host Port dialog box.

Then perform the following steps:

1. On the right navigation bar, click the system icon.
2. In the basic information area of the function pane, click the device icon.
3. In the middle function pane, click the cabinet whose iscsi ports you want to view.
4. Click the controller enclosure where the desired iscsi host ports reside. The controller enclosure view is displayed.
5. Click the switchover icon to switch to the rear view.
6. Click the iscsi host port whose information you want to modify.
7. The iscsi Host Port dialog box is displayed.
8. Click Modify.

Step 2 Modify the iscsi host port.

1. In the IPv4 Address or IPv6 Address text box, enter the IP address of the iscsi host port.
2. In the Subnet Mask or Prefix text box, enter the subnet mask or prefix of the iscsi host port.
3. In the MTU (Byte) text box, enter the maximum size of a data packet that can be transferred between the iscsi host port and the host. The value is an integer ranging from 1500 to the maximum value supported by the storage system.

Step 3 Confirm the iscsi host port modification.

1. Click Apply. The Danger dialog box is displayed.
2. Carefully read the contents of the dialog box. Then select the check box next to the statement I have read the previous information and understood the consequences of the operation.
3. Click OK. The Success dialog box is displayed, indicating that the operation succeeded.
4. Click OK.

----End

Host

Run the smit tcpip command on a host to configure IP addresses. Perform the following steps to configure IP addresses:

Step 1 Go to the screen for configuring network ports.

Run the smitty tcpip command and choose Minimum Configuration & Startup. On the screen that is displayed, choose the desired iscsi host port and press Enter.

Step 2 Configure the IP address.

Figure 7-4 Screen for configuring IP addresses

On the preceding screen, configure the following parameters:
Internet ADDRESS (dotted decimal): IP address of a network port
Network MASK (dotted decimal): subnet mask of a network port
Default Gateway Address (dotted decimal or symbolic name): gateway of a network port

Configure the parameters based on site requirements and press Enter. After IP addresses are configured for hosts and storage systems, run the ping command to check the link connectivity. If the link connectivity is abnormal, check the physical links and IP address configurations.
----End

7.3 Configuring Initiators on a Host

After the network connectivity between the host and storage system is normal, configure iSCSI initiators on the host. Perform the following steps to configure initiators:

Step 1 Run the smit iscsi command on the host to go to the iSCSI screen.
Step 2 Choose iSCSI Protocol Device and press Enter to go to the iSCSI Protocol Device screen.
Step 3 Choose Change/Show Characteristics of an iSCSI Adapter and press Enter. Select the device whose initiators you want to configure.
Step 4 Go to the Change/Show Characteristics of an iSCSI Adapter screen, as shown in Figure 7-5.

Figure 7-5 Change/Show Characteristics of an iSCSI Adapter screen

Step 5 Modify iSCSI Initiator Name. In this example, this field is changed to iqn com.ibm:h7f

An iSCSI initiator name must comply with the following format: iqn.domaindate.reverse.domain.name:optional name
An iSCSI initiator name contains only:
Special characters: hyphens (-), periods (.), and colons (:)
Lowercase letters (a to z)
Digits (0 to 9)
An iSCSI initiator name can contain a maximum of 223 characters.

Step 6 Verify the configuration.
Run the following command to verify the configuration:
bash-3.2# lsattr -El iscsi0
disc_filename /etc/iscsi/targets Configuration file False
disc_policy file Discovery Policy True
initiator_name iqn com.ibm:h7f iSCSI Initiator Name True
isns_srvnames auto iSNS Servers IP Addresses True
isns_srvports iSNS Servers Port Numbers True
max_targets 16 Maximum Targets Allowed True
num_cmd_elems 200 Maximum number of commands to queue to driver True
bash-3.2#
----End

7.4 Checking Storage System Targets

Storage system targets are required in host iSCSI service configuration. The method for checking targets varies with storage systems. The following describes how to check the

targets of the OceanStor T series storage system and the OceanStor series enterprise storage system.

OceanStor T Series Storage System

On the CLI of the storage system, run the following command:
admin:/>showiscsitgtname
============================================================================
ISCSI Name
Iscsi Name iqn com.huaweisymantec:oceanspace: :
============================================================================
The output shows that the storage system target name is iqn com.huaweisymantec:oceanspace: :. If you want to change this target name, run the chgiscsitgtname -n iscsi name command on the CLI. You can also change the name on the ISM. The method is as follows: In the ISM navigation tree, choose Settings. In the function pane, choose Advanced > Modify iSCSI Device Name.

OceanStor 18000/T V2/V3/Dorado V3 Series Enterprise Storage System

On the CLI of the storage system, run the following command:
admin:/>show iscsi target_name
iSCSI Target Name : iqn com.huawei:oceanstor: a106025f:
The output shows that the storage system target name is iqn com.huawei:oceanstor: a106025f:. If you want to change this target name, run the change iscsi target_name iscsi_name command on the CLI. You can also change the name on the ISM. The method is as follows: In the ISM navigation tree, choose Settings. In the function pane, select iSCSI Settings to change the name.

7.5 Configuring the Host iSCSI Service

The configuration file of the host iSCSI service is in the /etc/iscsi directory. Hosts do not support iSCSI service configuration commands. Therefore, you can manage the host iSCSI service only by modifying the configuration file. In this example, the configuration file is /etc/iscsi/targets. The method for configuring the host iSCSI service varies with storage systems. The following describes how to configure the host iSCSI service for different storage systems.
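When scripting the host-side configuration, the target name can be captured from the CLI output shown above instead of being copied by hand. The following is a minimal sketch; the sample output string is illustrative (a real array prints its full IQN), and the parsing itself is plain sed:

```shell
# Extract the target name from `show iscsi target_name` output.
# The sample string below stands in for real CLI output.
output='iSCSI Target Name : iqn.2006-08.com.huawei:oceanstor:sample:'
target_name=$(printf '%s\n' "$output" |
    sed -n 's/^iSCSI Target Name[[:space:]]*:[[:space:]]*//p')
echo "$target_name"    # the IQN, colons inside the name preserved
```

Only the label prefix up to the first separating colon is stripped, so colons that are part of the target name itself (including a trailing colon) are kept.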

S2000 Series/S2600/S5000 Series/S6800E

Add the following content to the end of the configuration file:
Storage system service IP address  Port  Storage system target name:Storage system service IP address

The content of a standard targets file is as follows:
1.2 src/bos/usr/lib/methods/cfgiscsi/targets.sh, sysxiscsi, bos610 8/30/04 13:58:05
# IBM_PROLOG_BEGIN_TAG
# This is an automatically generated prolog.
#
# bos610 src/bos/usr/lib/methods/cfgiscsi/targets.sh 1.2
#
# Licensed Materials - Property of IBM
#
# COPYRIGHT International Business Machines Corp. 2003,2004
# All Rights Reserved
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
# IBM_PROLOG_END_TAG
# iscsi targets file
#
# Comments may be used in the file, the comment character is '#', 0x23.
# Anything from a comment char to the end of the line is ignored.
#
# Blank lines are ignored.
#
# The format for a target line is defined according to the Augmented BNF
# syntax as described in rfc2234.
#
# The format for the IPv4Address is taken from rfc2373.
#
# The line continuation character '\' (i.e., back-slash) can be used to make
# each TargetLine easier to read. To ensure no parsing errors, the '\'
# character must be the last character and must be preceded by white space.
#
# ---
#
# comment = %x23 *CHARS LF
#

# TargetLine = *WSP HostNameOrAddr 1*WSP PortNumber 1*WSP ISCSIName *WSP LF
# or
# TargetLine = *WSP HostNameOrAddr 1*WSP PortNumber 1*WSP ISCSIName *WSP ChapSecret *WSP LF
#
# HostNameOrAddr = HostName / IPv4Address
#
# HostName = 1*alphanum *( "." (alphanum / allowedpunc))
#
# IPv4Address = 1*3DIGIT "." 1*3DIGIT "." 1*3DIGIT "." 1*3DIGIT
#
# hexseq = hex4 *( ":" hex4)
#
# hex4 = 1*4HEXDIG
#
# hex64 = 16HEXDIG
#
# PortNumber = 1*5DIGIT
# ; to hold uint16 port number
#
# ISCSIName = "iscsi" / "iqn." iscsinamechars / "eui.0x" hex64
# ; if the ISCSIName is "iscsi", all luns on the device will
# ; be requested.
#
# iscsinamechars = 1*alphanum *( allowedpunc alphanum )
# ; includes alphanumeric, dot, dash, underbar, colon.
#
# alphanum = %x30-39 / %x41-5a / %x61-7a
# ; [0-9] [A-Z] [a-z]
#
# allowedpunc = %x2d / %x2e / %x5f / %x3a
# ; dash, dot, underbar, colon
#
# dot = %x2e
# ; "."
#
# ChapSecret = %x22 *( any character ) %x22
# ; ChapSecret is a string enclosed in double quotes. The
# ; quotes are required, but are not part of the secret.
#
# EXAMPLE 1: iscsi Target without CHAP(MD5) authentication

# Assume the target is at address ,
# the valid port is 5003
# the name of the target is iqn.com.ibm wtt26
# The target line would look like:
# iqn.com.ibm wwt26
#
# EXAMPLE 2: iscsi Target with CHAP(MD5) authentication
# Assume the target is at address
# the valid port is 3260
# the name of the target is iqn.com.ibm-k fc1a
# the CHAP secret is "This is my password."
# The target line would look like:
# iqn.com.ibm-k fc1a "This is my password."
#
# EXAMPLE 3: iscsi Target with CHAP(MD5) authentication and line continuation
# Assume the target is at address
# the valid port is 3260
# the name of the target is iqn com.ibm:00.fcd0ab21.shark128
# the CHAP secret is "123ismysecretpassword.fc1b"
# The target line would look like:
# iqn com.ibm:00.fcd0ab21.shark128 \
# "123ismysecretpassword.fc1b"
#
iqn com.huaweisymantec:oceanspace: ::

CAUTION
The last part of the added content must be in the format of storage system target name:storage system service IP address. Ensure that all three items (the storage system target name, a colon, and the storage system service IP address) are present in this part. The colon cannot be omitted. When a target name itself ends with a colon (:), there are two consecutive colons (::) at the end of the target name, and neither colon can be omitted.

OceanStor S2200T

For a HUAWEI OceanStor S2200T storage system, you need to add a port before modifying the /etc/iscsi/targets configuration file on the host. The method for adding ports varies with storage system versions. The following describes how to add a port for different V100R005 versions.

V100R005 versions earlier than V100R005C00SPC002

Add the port after the IP address. The following is an example:
iqn com.huawei:oceanstor: a ::
V100R005C00SPC002 and V100R005 versions later than V100R005C00SPC002
Add the port before the IP address. The following is an example:
iqn com.huawei:oceanstor: a10b7bb2::20103:

The method for determining a port number is as follows:
The first two digits of a port number indicate the controller. They can be 00 or 01, indicating controller A or controller B.
The next two digits indicate the hardware type. The value 02 indicates a network adapter.
The next two digits indicate the interface module number, for example, 01.
The last two digits indicate the port. 00 indicates port P0, 01 indicates port P1, and so on.
All zeros at the beginning of a port number must be omitted. For example, the number of port P3 on the iSCSI interface module on controller A in an S2200T is , namely,

CAUTION
If the storage system is being upgraded to a later version and the storage system IP address has not changed, do not modify the configuration file on the host. If the storage system is newly deployed, modify the configuration file on the host based on site requirements.

S2600T/S5500T/S5600T/S5800T/S6800T

Add a port before modifying the /etc/iscsi/targets configuration file on the host. The method for adding ports varies with storage system versions. The following describes how to add a port for different V100R005 versions.

V100R005 versions earlier than V100R005C00SPC003
Add the port after the IP address. The following is an example:
iqn com.huawei:oceanstor: a ::
V100R005C00SPC003 and V100R005 versions later than V100R005C00SPC003
Add the port before the IP address. The following is an example:
iqn com.huawei:oceanstor: a10b7bb2::20103:
V2 versions
Add the port before the IP address. The following is an example:
iqn com.huawei:oceanstor: a10b7bb2::20103:

The method for determining a port number is as follows:
The first two digits of a port number indicate the controller. They can be 00 or 01, indicating controller A or controller B.
The next two digits indicate the hardware type. The value 02 indicates a network adapter.

The next two digits indicate the interface module number, for example, 01.
The last two digits indicate the port. 00 indicates port P0, 01 indicates port P1, and so on.
All zeros at the beginning of a port number must be omitted. For example, the number of port P3 on the iSCSI interface module on controller A in an S2600T is , namely,

CAUTION
If the storage system is being upgraded to a later version and the storage system IP address has not changed, do not modify the configuration file on the host. If the storage system is newly deployed, modify the configuration file on the host based on site requirements.

S2900/S3900/S5900/S6900

Add a port before modifying the /etc/iscsi/targets configuration file on the host. The method for adding ports varies with storage system versions. The following describes how to add a port for different V100R002 versions.

V100R002 versions earlier than V100R002C00SPC015
Add the port after the IP address. The following is an example:
iqn com.huawei:oceanstor: a ::
V100R002C00SPC015 and V100R002 versions later than V100R002C00SPC015
Add the port before the IP address. The following is an example:
iqn com.huawei:oceanstor: a10b7bb2::20103:

The method for determining a port number is as follows:
The first two digits of a port number indicate the controller. They can be 00 or 01, indicating controller A or controller B.
The next two digits indicate the hardware type. The value 02 indicates a network adapter.
The next two digits indicate the interface module number, for example, 01.
The last two digits indicate the port. 00 indicates port P0, 01 indicates port P1, and so on.
All zeros at the beginning of a port number must be omitted. For example, the number of port P3 on the iSCSI interface module on controller A in an S2900 is , namely,

CAUTION
If the storage system is being upgraded to a later version and the storage system IP address has not changed, do not modify the configuration file on the host. If the storage system is newly deployed, modify the configuration file on the host based on site requirements.
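The port-number rule above (four two-digit fields concatenated, leading zeros dropped) can be computed mechanically. The following is a minimal sketch; the helper name `iscsi_port_number` is ours, not part of AIX or the storage CLI:

```shell
# Build a port number from its four two-digit fields (controller,
# hardware type, interface module, port) and strip the leading zeros,
# as described above. Helper name is illustrative only.
iscsi_port_number() {
    raw="$1$2$3$4"                     # e.g. 00 02 01 03 -> 00020103
    printf '%s\n' "$raw" | sed 's/^0*//'
}

# Port P3 on the iSCSI interface module (module 01) on controller A:
iscsi_port_number 00 02 01 03    # prints 20103
```

For controller B the first field is 01, so no leading zeros are dropped (for example, 01 02 01 00 yields 1020100).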

OceanStor Series Enterprise Storage System

Add a port before modifying the /etc/iscsi/targets configuration file on the host. Add the port before the IP address. The following is an example:
iqn com.huawei:oceanstor: a106025f::20301:

The method for determining a port number is as follows:
The first two digits of a port number indicate the controller. They can be 00 or 01, indicating controller 0A or controller 0B.
The next two digits indicate the hardware type. The value 02 indicates a network adapter.
The next two digits indicate the interface module number, for example, 01.
The last two digits indicate the port. 00 indicates port P0, 01 indicates port P1, and so on.
All zeros at the beginning of a port number must be omitted. For example, the number of port P3 on the iSCSI interface module on controller 0A in an OceanStor enterprise storage system is , namely,

CAUTION
If the storage system is being upgraded to a later version and the storage system IP address has not changed, do not modify the configuration file on the host. If the storage system is newly deployed, modify the configuration file on the host based on site requirements.

7.6 Establishing Connections

After all conditions are met, run the cfgmgr -v or cfgmgr -l iscsi0 command on the host to scan for hardware devices. Host initiators are detected after the command is executed. iSCSI connections are established after the initiators are added to the host.
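The targets-file line described in section 7.5 can be assembled programmatically before it is appended to /etc/iscsi/targets. The following sketch assumes a target name without a trailing colon (if the name already ends with a colon, two consecutive colons appear, as noted in the CAUTION above); the function name and sample values are ours, not from the guide:

```shell
# Compose an /etc/iscsi/targets line of the form:
#   <service IP> <TCP port> <target name>:<port number>:<service IP>
# Function name and sample values are illustrative only.
build_target_line() {
    svc_ip="$1" tcp_port="$2" tgt_name="$3" port_no="$4"
    printf '%s %s %s:%s:%s\n' "$svc_ip" "$tcp_port" "$tgt_name" "$port_no" "$svc_ip"
}

line=$(build_target_line 192.168.1.10 3260 \
    iqn.2006-08.com.huawei:oceanstor:sample 20301)
echo "$line"
```

On a real host the resulting line would be appended to /etc/iscsi/targets, after which cfgmgr -l iscsi0 (section 7.6) scans for the target.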

8 Mapping and Scanning for LUNs

8.1 Mapping LUNs to a Host

After a storage system is connected to an AIX host, map the storage system LUNs to the host. Two methods are available for mapping LUNs:
Mapping LUNs to a host: This method is applicable to scenarios where only one small-scale client is deployed.
Mapping LUNs to a host group: This method is applicable to cluster environments or scenarios where multiple clients are deployed.

OceanStor T Series Storage System

Prerequisites
RAID groups have been created on the storage system. LUNs have been created on the RAID groups.

Procedure
This document explains how to map LUNs to a host. Perform the following steps to map LUNs to a host:
Step 1 In the ISM navigation tree, choose SAN Services > Mappings > Hosts.
Step 2 In the function pane, select the desired host. In the navigation bar, choose Mapping > Add LUN Mapping. The Add LUN Mapping dialog box is displayed.
Step 3 Select LUNs that you want to map to the host and click OK.
----End

OceanStor 18000/T V2/V3/Dorado V3 Series Enterprise Storage System

After the storage system is connected to the AIX host, run the cfgmgr -v command twice on the host. After the host HBA initiators are detected on the storage system, map the storage system LUNs to the host.

Prerequisites
LUNs, LUN groups, hosts, and host groups have been created.

Procedure
Perform the following steps to map LUNs to a host:

Step 1 Go to the Create Mapping View dialog box. Then perform the following steps:
1. On the right navigation bar, click the corresponding icon.
2. On the host management page, click Mapping View.
3. Click Create. The Create Mapping View dialog box is displayed.

Step 2 Set basic properties for the mapping view.
1. In the Name text box, enter a name for the mapping view.
2. (Optional) In the Description text box, describe the mapping view.

Step 3 Add a LUN group to the mapping view.
1. Click the add icon. The Select LUN Group dialog box is displayed. If your service requires a new LUN group, click Create to create one. You can select Shows only the LUN groups that do not belong to any mapping view to quickly locate LUN groups.
2. From the LUN group list, select the LUN groups you want to add to the mapping view.
3. Click OK.

Step 4 Add a host group to the mapping view.
1. Click the add icon. The Select Host Group dialog box is displayed. If your service requires a new host group, click Create to create one.
2. From the host group list, select the host groups you want to add to the mapping view.
3. Click OK.

Step 5 (Optional) Add a port group to the mapping view.
1. Select Port Group.
2. Click the add icon. The Select Port Group dialog box is displayed. If your service requires a new port group, click Create to create one.
3. From the port group list, select the port group you want to add to the mapping view.
4. Click OK.

Step 6 Confirm the creation of the mapping view.

1. Click OK. The Execution Result dialog box is displayed, indicating that the operation succeeded.
2. Click Close.
----End

8.2 Scanning for LUNs on a Host

After LUNs are mapped on the storage system, scan for the mapped LUNs on the host.

Step 1 Run hardware scan commands. The commands are as follows:
bash-3.2# cfgmgr -l fcs#
bash-3.2# cfgmgr -l iscsi#
bash-3.2# cfgmgr -v
In the preceding commands, the pound sign (#) indicates the hardware identifier of a device. You can specify hardware identifiers based on site requirements. The functions of the commands are as follows:
Command 1: scans for devices whose hardware identifier is fcs#.
Command 2: scans for devices whose hardware identifier is iscsi#.
Command 3: scans for all hardware devices.

Step 2 Display information about disks identified by the host.
Run the lsdev -Cc disk command on the host to display information about disks identified by the host. The following is an example:
bash-3.2# lsdev -Cc disk
hdisk0 Available SAS Disk Drive
hdisk1 Available SAS Disk Drive
hdisk2 Available Other iSCSI Disk Drive
hdisk3 Available Other iSCSI Disk Drive
hdisk4 Available Other iSCSI Disk Drive
You can run the following command to view the disk capacity information:
bash-3.2# bootinfo -s hdisk
bash-3.2#
You can run the lsattr -El hdisk# and lscfg -vpl hdisk# commands to view details about disks.
----End
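When many disks are present, the identification step is easy to script. The sketch below filters lsdev-style output for iSCSI disks; the here-document stands in for real `lsdev -Cc disk` output so the logic is self-contained (on a real host you would pipe `lsdev -Cc disk` into the function):

```shell
# Print only the names of iSCSI disks from lsdev-style output.
# Matching is case-insensitive so both "iSCSI" and "iscsi" work.
list_iscsi_disks() {
    awk 'tolower($0) ~ /iscsi disk drive/ { print $1 }'
}

list_iscsi_disks <<'EOF'
hdisk0 Available SAS Disk Drive
hdisk1 Available SAS Disk Drive
hdisk2 Available Other iSCSI Disk Drive
hdisk3 Available Other iSCSI Disk Drive
EOF
# prints hdisk2 and hdisk3, one per line
```

The printed names can then be fed to lsattr -El or lscfg -vpl for per-disk details.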

9 Multipathing Management Software

9.1 Overview

AIX supports both UltraPath and MPIO:
MPIO is the AIX native multipathing software, which will be detailed in this chapter.
UltraPath is multipathing software developed by Huawei for HUAWEI OceanStor storage systems. This UltraPath version is dedicated to the AIX operating system. The software is installed on an application server to control the application server's access to a storage system. The application server uses the software to select and manage paths between itself and the storage system. UltraPath improves data transmission reliability by securing the paths between an application server and a storage system. It provides users with a simple, fast, and efficient path management solution.

9.2 UltraPath

9.2.1 Functions

UltraPath provides the following functions:
Selection of paths between an application server and a storage system: UltraPath enables an application server to select the optimal path to communicate with the storage device.
Failover: A failover is a service switchover upon a failure. Multiple paths can be set up between an application server and a storage system to ensure highly reliable data transfer. Upon an active path failure, UltraPath automatically switches services from that path to another functional path, preventing single points of failure.
Failback: A failback switches services back after a faulty path returns to normal. When a failed path is recovered and can transfer I/Os again, UltraPath automatically switches services back to the path.
I/O load balancing: I/O load balancing evenly distributes network traffic among multiple paths between an application server and a storage system, easing the network bandwidth pressure.

9.2.2 Installation and Uninstallation

For details about how to install and uninstall UltraPath, see the OceanStor UltraPath User Guide.

HUAWEI storage that does not support multi-controller ALUA and ALUA HyperMetro supports UltraPath regardless of whether ALUA is enabled. For HUAWEI storage that does not support multi-controller ALUA and ALUA HyperMetro, do not select Use Third-Party Multipathing.

9.3 MPIO

MPIO is the AIX native multipathing software. AIX's native MPIO can take charge of HUAWEI storage only after obtaining the ODM library for the HUAWEI storage. AIX ODM for MPIO is storage ODM library software developed by Huawei to allow AIX MPIO to correctly identify and take over HUAWEI storage. It offers basic functions such as shielding physical disks and generating virtual disks. However, it cannot switch the working controller for a LUN. If an AIX application server and a HUAWEI storage device are connected by redundant links and SAN boot needs to be performed between them, the AIX ODM for MPIO software enables the server to identify the HUAWEI storage device and then allows the SAN boot operation to be performed on the HUAWEI disks identified by the server.

Configuring and Enabling the Multipathing Function

This section describes the multipathing configurations on interconnected AIX hosts and HUAWEI storage systems.

HUAWEI storage firmware support for the OS-native multipathing HyperMetro solution is as follows:
Old-version HUAWEI storage (namely, storage that does not support multi-controller ALUA or ALUA HyperMetro): OceanStor T V1/T V2/18000 V1/V300R001/V300R002/V300R003/V300R005/Dorado V300R001C00
New-version HUAWEI storage (namely, storage that supports multi-controller ALUA and ALUA HyperMetro): V300R006C00 (only V300R006C00SPC100 and later)/Dorado V300R001C01 (only V300R001C01SPC100 and later)

Multipathing Configuration for New-Version HUAWEI Storage

HyperMetro Working Modes

Typically, HyperMetro works in load balancing mode or local preferred mode. The typical working modes are valid only when both the storage system and host use ALUA. It is advised to set the host's path selection policy to round-robin. If HyperMetro works in load balancing mode, the host's path selection policy must be round-robin. If the host does not use ALUA or its path selection policy is not round-robin, the host's multipathing policy determines the working mode of HyperMetro.

HyperMetro storage arrays can be classified into a local array and a remote array by their distance to the host. The one closer to the host is the local array and the other one is the remote array.

Table 9-1 describes the configuration methods and application scenarios of the typical working modes.

Table 9-1 Configuration methods and application scenarios of the typical working modes

Load balancing mode
Configuration method: Enable ALUA on the host and set the path selection policy to round-robin. Configure a switchover mode that supports ALUA for the initiators of both HyperMetro storage arrays that are added to the host. Set the path type for both storage arrays' initiators to the optimal path.
Application scenario: The distance between the HyperMetro storage arrays is less than 1 km. For example, they are in the same equipment room or on the same floor.

Local preferred mode
Configuration method: Enable ALUA on the host. It is advised to set the path selection policy to round-robin. Configure a switchover mode that supports ALUA for the initiators of both HyperMetro storage arrays that are added to the host. Set the path type for the local storage array's initiators to the optimal path and that for the remote storage array's initiators to the non-optimal path.
Application scenario: The distance between the HyperMetro storage arrays is greater than 1 km. For example, they are in different locations or data centers.

Other modes
Configuration method: Set the initiator switchover mode for the HyperMetro storage arrays by following the instructions in the follow-up chapters of this guide. The path type does not require manual configuration.
Application scenario: User-defined.

Working Principles and Failover

When ALUA works, the host multipathing software divides the physical paths to disks into Active Optimized (AO) and Active Non-optimized (AN) paths. The host delivers services to the storage system via the AO paths preferentially. An AO path is the optimal I/O access path and leads to a working controller. An AN path is the suboptimal I/O access path and leads to a non-working controller.

When HyperMetro works in load balancing mode, the host multipathing software selects the paths to the working controllers on both HyperMetro storage arrays as the AO paths, and those to the other controllers as the AN paths. The host accesses the storage arrays via the AO paths. If an AO path fails, the host delivers I/Os to another AO path. If the working controller of a storage array fails, the system switches the other controller to the working mode and maintains load balancing.

Figure: Failover in load balancing mode upon a path failure and an SP (controller) failure, showing the AO/AN path changes between the hosts and the arrays at Site A and Site B.

When HyperMetro works in local preferred mode, the host multipathing software selects the paths to the working controller on the local storage array as the AO paths. This ensures that the host delivers I/Os only to the working controller on the local storage array, reducing link consumption. If all AO paths fail, the host delivers I/Os to the AN paths on the non-working controller. If the working controller of the local storage array fails, the system switches the other controller to the working mode and maintains the local preferred mode.

Figure: Failover in local preferred mode upon a path failure and an SP (controller) failure, showing the AO/AN path changes between the hosts and the arrays at Site A and Site B.

Introduction to ALUA

ALUA Definition
Asymmetric Logical Unit Access (ALUA) is a multi-target port access model. In a multipathing state, the ALUA model provides a way of presenting active/passive LUNs to a host and offers a port status switching interface to switch over the working controller. For example, when a host multipathing program that supports ALUA detects a port status change (the port becomes unavailable) on a faulty controller, the program automatically switches subsequent I/Os to the other controller.

Support by HUAWEI Storage
Old-version HUAWEI storage supports ALUA only in dual-controller configuration, not in multi-controller or HyperMetro configuration. New-version HUAWEI storage supports ALUA in dual-controller, multi-controller, and HyperMetro configurations. Table 9-2 describes HUAWEI storage's support for ALUA.

Table 9-2 HUAWEI storage's support for ALUA

Storage that does not support multi-controller ALUA or ALUA HyperMetro
Version: T V1/T V2/18000 V1/V300R001/V300R002/V300R003/V300R005/Dorado V300R001C00

Storage that supports multi-controller ALUA and ALUA HyperMetro
Version: V300R006C00/Dorado V300R001C01
Remarks: V300R006C00 refers to only V300R006C00SPC100 and later versions. Dorado V300R001C01 refers to only V300R001C01SPC100 and later versions.

ALUA Impacts
ALUA is mainly applicable to a storage system that has one (and only one) preferred LUN controller. All host I/Os can be routed through different controllers to the working controller for execution. The storage ALUA instructs the hosts to deliver I/Os preferentially from the LUN working controller, thereby reducing the I/O routing resources consumed on the non-working controllers. Once all I/O paths to the LUN working controller are disconnected, the host I/Os are delivered only from a non-working controller and then routed to the working controller for execution. This scenario must be avoided.

Suggestions for Using ALUA on HUAWEI Storage
To prevent I/Os from being delivered to a non-working controller, you are advised to:
Ensure that the LUN home/working controllers are evenly distributed on storage systems. A change to the storage system (node fault or replacement) may cause an I/O path switchover.
Ensure that the host always tries its best to select the optimal path to deliver I/Os.
Prevent all host service I/Os from being delivered to only one controller, thereby preventing load imbalance on the storage system.

Initiator Mode

Initiator Parameter Description

Table 9-3 Initiator parameter description

Uses third-party multipath software
Description: This parameter is displayed only after an initiator has been added to the host. If LUNs have been mapped to the host before you enable or disable this parameter, restart the host after you configure this parameter. You do not need to enable this parameter on a host with UltraPath.
Example: Enabled

Switchover Mode
Description: Path switchover mode. The system supports the following modes:
Early-version ALUA: default value of Switchover Mode for an upgrade from an earlier version to the current version. The detailed requirements are as follows: The storage system is upgraded from V300R003C10 and earlier to V300R003C20 or V300R006C00SPC100 and later; from V300R005 to V300R006C00SPC100 and later; or from Dorado V300R001C00 to Dorado V300R001C01SPC100 and later. Before the upgrade, the storage system has a single controller or dual controllers and has ALUA enabled.
Common ALUA: applies to V300R003C20 and later, V300R006C00SPC100 and later, or Dorado V300R001C01SPC100 and later. The detailed requirements are as follows: The storage system version is V300R003C20, V300R006C00SPC100, Dorado V300R001C01SPC100, or later. The OS of the host that connects to the storage system is SUSE, Red Hat 6.X, Windows Server 2012 (using Emulex HBAs), Windows Server 2008 (using Emulex HBAs), or HP-UX 11i V3.
ALUA not used: does not support ALUA or HyperMetro. This mode is used when a host such as HP-UX 11i V2 does not support ALUA or ALUA is not needed.
Special mode: supports ALUA and has multiple values. It applies to V300R003C20 and later, V300R006C00SPC100 and later, or Dorado V300R001C01SPC100 and later. It is used by host operating systems that are not supported by the common ALUA mode. The detailed requirements are as follows: The storage system version is V300R003C20, V300R006C00SPC100, Dorado V300R001C01SPC100, or later. The OS of the host that connects to the storage system is VMware, AIX, Red Hat 7.X, Windows Server 2012 (using QLogic HBAs), or Windows Server 2008 (using QLogic HBAs).
Example: Special mode

Special mode type
Description: Special modes support ALUA and apply to V300R003C20 and later, V300R006C00SPC100 and later, or Dorado V300R001C01SPC100 and later. The detailed requirements are as follows:
Mode 0: The host and storage system must be connected over a Fibre Channel network. The OS of the host that connects to the storage system is Red Hat 7.X, Windows Server 2012 (using QLogic HBAs), or Windows Server 2008 (using QLogic HBAs).
Mode 1: The OS of the host that connects to the storage system is AIX or VMware. HyperMetro works in load balancing mode.
Mode 2: The OS of the host that connects to the storage system is AIX or VMware. HyperMetro works in local preferred mode.
Example: Mode 0

Path Type
Description: The value can be either Optimal Path or Non-Optimal Path.
When HyperMetro works in load balancing mode, set the Path Type for the initiators of both the local and remote storage arrays to Optimal Path. Enable ALUA on both the host and storage arrays. If the host uses the round-robin multipathing policy, it delivers I/Os to both storage arrays in round-robin mode.
When HyperMetro works in local preferred mode, set the Path Type for the initiator of the local storage array to Optimal Path, and that of the remote storage array to Non-Optimal Path. Enable ALUA on both the host and storage arrays. The host delivers I/Os to the local storage array preferentially.
Example: Optimal Path

Configure the initiators according to the requirements of each OS. The initiators that are added to the same host must be configured with the same switchover mode. Otherwise, host services may be interrupted. After the initiator mode is configured on a storage array, you must restart the host for the configuration to take effect.
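For AIX hosts, the Special mode type follows directly from the HyperMetro working mode described above: Mode 1 for load balancing and Mode 2 for local preferred. A minimal sketch of that lookup, useful in site-configuration scripts (the function name is ours, not part of any CLI):

```shell
# Map the HyperMetro working mode to the Special mode type required
# for an AIX host, per the Mode 1/Mode 2 descriptions above.
# Function name is illustrative only.
hypermetro_special_mode() {
    case "$1" in
        load-balancing)  echo "Mode 1" ;;
        local-preferred) echo "Mode 2" ;;
        *) echo "unsupported working mode: $1" >&2; return 1 ;;
    esac
}

hypermetro_special_mode load-balancing    # prints: Mode 1
```

Rejecting unknown inputs keeps a misspelled working mode from silently selecting the wrong initiator configuration.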

Configuring the Initiators

If you want to configure the initiator mode, perform the following operations.
Step 1 Go to the host configuration page. Open OceanStor DeviceManager. In the navigation tree, click Provisioning and then click Host, as shown in Figure 9-1.
Figure 9-1 Going to the host configuration page
Step 2 Select the initiator whose information you want to modify. On the Host tab page, select the host you want to modify. Then select the initiator (on the host) you want to modify and click Modify.
Figure 9-2 Selecting the initiator whose information you want to modify
Step 3 Modify the initiator information.

In the Modify Initiator dialog box that is displayed, modify the initiator information based on the requirements of your operating system. Figure 9-3 shows the initiator information modification page.
Figure 9-3 Modifying initiator information
Step 4 Repeat the preceding operations to modify the information about the other initiators on the host.
Step 5 Restart the host for the configuration to take effect.
----End

Storage Array Configuration

For Non-HyperMetro Storage
For non-HyperMetro storage, use the configuration listed in Table 9-4.
Table 9-4 Multipathing configuration on non-HyperMetro Huawei storage interconnected with AIX
Operating system: AIX 7.1 | Storage: dual-controller, multi-controller | OS configured on the storage array: AIX | Third-party multipathing software: Enabled | Switchover Mode: Special mode | Special mode type: Mode 1 | Path Type: Preferred path
Operating system: other AIX versions | Storage: dual-controller, multi-controller | OS configured on the storage array: AIX | Third-party multipathing software: Enabled | Switchover Mode: ALUA not used | Special mode type: - | Path Type: Preferred path

To query compatible AIX versions, refer to:

WARNING
After the initiator mode is configured on a storage array, you must restart the host for the new configuration to take effect.

For HyperMetro Storage
For HyperMetro storage, use the configuration listed in Table 9-5.
Table 9-5 Multipathing configuration on HyperMetro Huawei storage interconnected with AIX
OS: AIX | HyperMetro working mode: load balancing | Storage array: local storage array | Storage OS: AIX | Third-party multipathing software: Enabled | Switchover Mode: Special mode | Special mode type: Mode 1 | Path Type: Optimal path
OS: AIX | HyperMetro working mode: load balancing | Storage array: remote storage array | Storage OS: AIX | Third-party multipathing software: Enabled | Switchover Mode: Special mode | Special mode type: Mode 1 | Path Type: Optimal path
OS: AIX | HyperMetro working mode: local preferred | Storage array: local storage array | Storage OS: AIX | Third-party multipathing software: Enabled | Switchover Mode: Special mode | Special mode type: Mode 2 | Path Type: Optimal path
OS: AIX | HyperMetro working mode: local preferred | Storage array: remote storage array | Storage OS: AIX | Third-party multipathing software: Enabled | Switchover Mode: Special mode | Special mode type: Mode 2 | Path Type: Non-optimal path

To query compatible AIX versions, refer to:

WARNING
After the initiator mode is configured on a storage array, you must restart the host for the new configuration to take effect.

In OceanStor V3 V300R003C20, mode 1 and mode 2 are disabled by default. For details about how to enable them, see the OceanStor 5300 V3&5500 V3&5600 V3&5800 V3&

V3 Storage System V300R003C20 Restricted Command Reference or OceanStor V3&18800 V3 Storage System V300R003C20 Restricted Command Reference. Contact Huawei technical support engineers to obtain the documents. In OceanStor V3 V300R006C00SPC100, Dorado V3 V300R001C01SPC100, and later versions, you can configure mode 1 and mode 2 in DeviceManager directly.
Figure 9-4 Querying the special mode type

Installing and Enabling the Multipathing Software
AIX native MPIO can take over HUAWEI storage disks only if the AIX ODM package has been installed. For details on how to install AIX ODM, see the AIX ODM for MPIO User Guide:
th=
You can verify whether MPIO has taken over the HUAWEI storage disks by listing the disks recognized by the host.

Configuring Multipathing

Initiator Mode Being "Special Mode" on the Storage
The default system I/O policy is fail_over, in which I/Os are delivered on only one path. To allow I/Os to be delivered on the paths of two active-active arrays, change the I/O policy of each disk to round_robin.
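In the absence of the original screenshots, the takeover check and the I/O policy change can be sketched with standard AIX commands as follows (the disk name hdisk2 is illustrative; the exact device description in the lsdev output depends on the installed ODM package):

```shell
# List the disks recognized by the host; after the Huawei ODM package is
# installed, the taken-over LUNs report a Huawei-specific description.
lsdev -Cc disk

# Check the current I/O policy (algorithm attribute) of a Huawei LUN.
lsattr -El hdisk2 -a algorithm

# Change the I/O policy from the default fail_over to round_robin so that
# I/Os are spread across the paths of both active-active arrays.
chdev -l hdisk2 -a algorithm=round_robin
```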

After delivering I/Os, check whether the path priority is correct. In the example, there are two preferred paths and 10 non-preferred paths.

Initiator Mode Being "ALUA Not Used" on the Storage
When the initiator mode is set to ALUA not used on the storage, configure MPIO by referring to the AIX ODM for MPIO User Guide; no other settings are required:
th=
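For the special-mode configuration above, the path-priority check can be sketched as follows (the disk name, parent adapter, and connection string are illustrative; take the actual values from your own lspath output):

```shell
# List all paths of the disk with their parent adapter and connection.
lspath -l hdisk2 -F "name parent connection path_id status"

# Query the priority attribute of a specific path; preferred paths carry
# a higher priority value. The -w connection value comes from the output
# of the previous command.
lspath -AHE -l hdisk2 -p fscsi0 -w "connection_string" -a priority
```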

Multipathing Configuration for Old-Version HUAWEI Storage

Storage Array Configuration
For old-version HUAWEI storage, ALUA is disabled by default and it is advisable to retain this setting. To enable the ALUA function, do as follows:

T Series V100R005/Dorado2100/Dorado5100/Dorado2100 G2
Use the Huawei OceanStor ISM system to enable ALUA for all the host initiators, as shown in Figure 9-5.
Figure 9-5 Enabling ALUA for T series V100R005/Dorado2100/Dorado5100/Dorado2100 G2

T Series V200R002/18000 Series/V3 Series
Use the Huawei OceanStor DeviceManager to enable ALUA for all the host initiators, as shown in Figure 9-6.

Figure 9-6 Enabling ALUA for T Series V200R002/18000 Series/V3 Series

Host Configuration
Multi-controller ALUA is not supported. When there are more than two controllers, ALUA is disabled by default and the ALUA status cannot be changed.

ALUA Enabled on the Storage
AIX native MPIO can take over HUAWEI storage disks only if the AIX ODM package has been installed. For details on how to install AIX ODM, see the AIX ODM for MPIO User Guide:
th=
You can verify whether MPIO has taken over the HUAWEI storage disks by listing the disks recognized by the host.

The default system I/O policy is fail_over, in which I/Os are delivered on only one path. To allow I/Os to be delivered on the paths of two active-active arrays, change the I/O policy of each disk to round_robin.

ALUA Disabled
On the host, configure MPIO according to the AIX ODM for MPIO User Guide; no other settings are required:
th=

10 Volume Management Software

This chapter describes the volume management software applicable to the AIX operating system. The most widely used volume management software in AIX includes the built-in Logical Volume Manager (LVM) and Symantec's Veritas Volume Manager (VxVM). The following sections detail these two products.

10.1 LVM

Overview
LVM provides hosts with a stronger storage management capability. This software helps system administrators allocate storage space to applications and users with ease. Administrators can add, remove, or resize logical volumes (LVs) on demand. Additionally, LVM assigns name identifiers to all managed logical volumes.
Disk storage is managed in a layered structure. Each disk (represented as a physical volume, or PV) has a name, for example, /dev/hdisk0. Each physical volume in use belongs to a volume group (VG). All physical volumes in a volume group are divided into physical partitions (PPs) of the same size. A volume group contains one or more logical volumes. A logical volume represents a group of data on a physical volume. Data that is perceived by users as contiguous on a logical volume can be non-contiguous on a physical volume. Users can resize, relocate, and copy file systems, paging space, and logical volumes across different physical volumes, achieving better flexibility and availability. Each logical volume consists of one or more logical partitions (LPs). A logical partition corresponds to at least one physical partition.
Table 10-1 lists the limitations of VGs in AIX.

Table 10-1 VG limitations
VG Type | Maximum PVs | Maximum LVs | Maximum PPs per VG | Maximum PP size
Normal VG | 32 | 256 | 32,512 (1016 x 32) | 1 GB
Big VG | 128 | 512 | 130,048 (1016 x 128) | 1 GB
Scalable VG | 1024 | 4096 | 2,097,152 | 128 GB
For details, visit:

Installation
By default, LVM is installed together with the host operating system. LVM requires no extra configuration.

Common Configuration Commands

Creating a Physical Volume
After you scan for LUNs on a host, the LUNs are identified as devices named hdisk#. The following uses hdisk2 as an example. Perform the following steps to create a physical volume:
Step 1 Run the chdev command to create a physical volume.
bash-3.2#chdev -l hdisk2 -a pv=yes
Step 2 Run the lspv command to verify the physical volume creation.
bash-3.2#lspv
hdisk0 00c690001d571eda rootvg active
hdisk1 00c ddf6f6 None
hdisk2 00c ddf6f6 None
hdisk3 none None
If a physical volume is created successfully, a physical volume identifier is added to the corresponding disk. In the output, the physical volume identifier is 00c ddf6f6.
----End

Creating a Volume Group
Perform the following steps to create a volume group:
Step 1 Run the smitty mkvg command to create a volume group.
bash-3.2#smitty mkvg
On the screen that is displayed, choose Add a Big Volume Group to go to the screen for configuring volume groups, as shown in Figure 10-1.

Figure 10-1 Screen for configuring volume groups
Configure the following volume group parameters:
VOLUME GROUP name: name of the volume group. This parameter is user-configurable.
PHYSICAL VOLUME names: press Esc+4 to list the physical volumes and select the ones that you want to add to the volume group, for example, hdisk2 and hdisk3.
Volume Group MAJOR NUMBER: major number of the volume group. This parameter is optional and is used only when importing the volume group on other nodes.
Keep the default values of the other parameters and press Enter. The volume group is created.
Step 2 Run the lsvg command to check the volume group information.
bash-3.2# lsvg vg_hacmp
VOLUME GROUP: vg_hacmp VG IDENTIFIER: 00072ea20000d ed376
VG STATE: active PP SIZE: 128 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 1598 ( megabytes)
MAX LVs: 256 FREE PPs: 17 ( megabytes)
LVs: 3 USED PPs: 1581 (1024 megabytes)
OPEN LVs: 3 QUORUM: 2 (Enabled)
TOTAL PVs: 2 VG DESCRIPTORS: 3
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 2 AUTO ON: yes
MAX PPs per VG: MAX PPs per PV: 1016
MAX PVs: 32 LTG size (Dynamic): 1024 kilobyte(s)
AUTO SYNC: no HOT SPARE: no
BB POLICY: relocatable PV RESTRICTION: none
In the output, pay special attention to PP SIZE, which is used to determine the logical volume size during logical volume creation.
----End

Creating a Logical Volume
Perform the following steps to create a logical volume:
Step 1 Run the smitty mklv command to create a logical volume.

bash-3.2# smitty mklv
On the screen that is displayed, press Esc+4. The names of all volume groups are displayed. Choose the volume group for which you want to create a logical volume and press Enter. The screen for configuring logical volume properties is displayed, as shown in Figure 10-2.
Figure 10-2 Screen for configuring logical volume properties
Configure the following logical volume parameters:
Logical volume NAME: name of the logical volume. This parameter is user-configurable.
Number of LOGICAL PARTITIONS: number of logical partitions. Determine this value based on the previously obtained PP SIZE.
PHYSICAL VOLUME names: physical volumes to which the logical volume belongs.
Logical volume TYPE: file system type (JFS/JFS2).
Keep the default values of the other parameters and press Enter. The logical volume is created.
Step 2 Run the lslv command to confirm that the information about the newly created logical volume is correct.
----End
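The smitty screens above can also be driven non-interactively; the following is a minimal sketch using the standard mkvg and mklv commands (the volume group name, PP size, and partition count are illustrative):

```shell
# Create a Big VG named vg_hacmp with a 128 MB PP size on hdisk2 and hdisk3.
mkvg -B -y vg_hacmp -s 128 hdisk2 hdisk3

# Create a JFS2 logical volume of 8 logical partitions
# (8 x PP SIZE = 1 GB here) in the new volume group.
mklv -y lv_data -t jfs2 vg_hacmp 8

# Verify the result.
lsvg vg_hacmp
lslv lv_data
```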

Creating a File System
Perform the following steps to create a file system:
Step 1 Run the smitty crfs command to create a file system. On the screen that is displayed, choose a file system type. The available file system types are as follows:
Add an Enhanced Journaled File System: corresponds to JFS2.
Add a Journaled File System: corresponds to JFS.
Add a CDROM File System: corresponds to an ISO file system in CDROM format.
The following explains how to create a JFS2 file system:
Step 2 If logical volumes have been created, choose Add an Enhanced Journaled File System on a Previously Defined Logical Volume. The screen for configuring file systems is displayed, as shown in Figure 10-3.
Figure 10-3 Screen for configuring file systems (logical volumes available)
Configure the following parameters:
LOGICAL VOLUME name: name of the logical volume.
MOUNT POINT: mount point. It must be different from the mount points of existing volumes.
Mount AUTOMATICALLY at system restart?: whether to automatically mount the file system upon system startup.
Keep the default values of the other parameters.
Step 3 If no logical volumes have been created, choose Add an Enhanced Journaled File System. Choose a volume group to go to the screen for configuring file systems, as shown in Figure 10-4.

Figure 10-4 Screen for configuring file systems (no logical volumes)
Configure the following parameters:
Unit Size: size of a unit. The unit size and the number of units determine the size of the volume.
Number of units: number of units.
MOUNT POINT: mount point.
Mount AUTOMATICALLY at system restart?: whether to automatically mount the file system upon system startup.
Keep the default values of the other parameters.
Step 4 Run the lslv command to confirm that the information about the logical volumes is correct.
Step 5 Run the mount command to mount the logical volume. The command syntax is as follows:
mount /dev/logical volume name mount point
----End

Activating a Volume Group
Activate a volume group after importing it. Only an activated volume group can be mounted for data access. Run the following command to activate a volume group:
varyonvg volume group name

Deactivating a Volume Group
Deactivate a volume group before exporting it. Run the following command to deactivate a volume group:
varyoffvg volume group name
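The file system creation and the volume group activation steps above can be sketched non-interactively as follows (the logical volume, volume group, and mount point names are illustrative):

```shell
# Create a JFS2 file system on the previously defined logical volume,
# with automatic mounting at system restart, and mount it.
crfs -v jfs2 -d lv_data -m /mnt/data -A yes
mount /mnt/data

# Deactivate the volume group (unmount its file systems first), then
# activate it again.
umount /mnt/data
varyoffvg vg_hacmp
varyonvg vg_hacmp
```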

Exporting a Volume Group
In clusters, a volume group needs to be imported or exported during data backup and recovery. Run the following command to export a volume group:
exportvg volume group name
The following is an example:
bash-3.2# lspv
hdisk3 00c b4ccaa vgb
hdisk4 none None
hdisk5 none None
hdisk e72c5a7d7d4 vgb
bash-3.2# exportvg vgb
bash-3.2# lspv
hdisk3 00c b4ccaa None
hdisk4 none None
hdisk5 none None
hdisk e72c5a7d7d4 None

Importing a Volume Group
Run the following command to import a volume group:
importvg -y volume group name physical volume name
The following is an example:
bash-3.2# lspv
hdisk3 00c b4ccaa None
hdisk4 none None
hdisk5 none None
hdisk e72c5a7d7d4 None
bash-3.2# importvg -y vgb hdisk3
bash-3.2# lspv
hdisk3 00c b4ccaa vgb
hdisk4 none None
hdisk5 none None
hdisk e72c5a7d7d4 vgb

Deleting a Logical Volume
Perform the following steps to delete a logical volume:
Step 1 Run the umount command to unmount the logical volume.
Step 2 Run the rmlv command to delete the logical volume.
----End

Deleting a Volume Group
Perform the following steps to delete a volume group:
Step 1 Ensure that all logical volumes contained in the volume group are deleted.
Step 2 Deactivate the volume group.

Step 3 Run the reducevg command to remove all physical volumes from the volume group. When the last physical volume is removed, the volume group is deleted.
----End

Deleting a Physical Volume
Run the following command to delete a physical volume:
chdev -l hdisk# -a pv=clear
The following is an example:
bash-3.00# lspv
hdisk0 none None
hdisk1 00c69f2242c89068 rootvg active
hdisk2 none None
hdisk3 00c69f228c77ae1a None
hdisk4 00c69f228c88fd65 None
hdisk5 none None
bash-3.00# chdev -l hdisk3 -a pv=clear
hdisk3 changed
bash-3.00# lspv
hdisk0 none None
hdisk1 00c69f2242c89068 rootvg active
hdisk2 none None
hdisk3 none None
hdisk4 00c69f228c88fd65 None
hdisk5 none None
bash-3.00#

10.2 VxVM

Overview
Veritas Volume Manager (VxVM) is a storage management subsystem that allows you to manage physical disks as logical devices (volumes). Applications and operating systems detect VxVM volumes as physical disks on which file systems, databases, and other hosted data objects can be configured. VxVM provides powerful, easy-to-use online disk management for computing environments and SANs. It has the following advantages:
Independent RAID models
Enhanced fault tolerance and quick fault rectification
Logical volume management layers across multiple physical disks
Tools that improve performance and ensure data availability and integrity
Dynamic storage configuration while the system is active

Installation
By default, VxVM is not installed together with the operating system. VxVM is not free of charge and is available only after being purchased.

Pre-Installation Check
Run the following command to check whether similar software is installed on the host:
lslpp -l | grep -i vrts
If no similar software is installed, no output is returned.

Procedure
Perform the following steps to install VxVM:
Step 1 Upload the VxVM installation package to any directory in AIX.
Step 2 Go to the directory where the installation package resides and run the chmod +x installer command to grant execution permission to the installer file.
Step 3 Run the ./installer command to install VxVM.
----End

Common Configuration Commands

Loading Disks
AIX can identify the LUNs mapped by storage systems to the host after the LUN scan command is run. VxVM cannot directly manage the identified LUNs; it can manage them only after the disks are loaded. The command to load disks is as follows:
vxdisk scandisks

Displaying Disks Taken Over by VxVM
Run the vxdisk list command to display the disks taken over by VxVM. The following is an example:
bash-3.2#vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:lvm - - LVM
disk_1 auto:lvm - - LVM
huawei-s5500t0_4 auto:none - - error

Initializing Disks
The state of disks taken over by VxVM for the first time is error. Such disks are not initialized and cannot be used. You need to run the vxdisksetup -i disk command to initialize them. The state of a disk that is successfully initialized changes to online. The following is an example:
bash-3.2#vxdisksetup -i huawei-s5500t0_4
bash-3.2#vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:lvm - - LVM
disk_1 auto:lvm - - LVM
huawei-s5500t0_0 auto:cdsdisk - - online

Creating a Disk Group
After initializing disks, run the vxdg init disk group name disk name command to create a disk group. The following is an example:
bash-3.2#vxdg init dg1 huawei-s5500t0_0 huawei-s5500t0_1
bash-3.2#vxdisk list
DEVICE TYPE DISK GROUP STATUS
hdisk0 auto:lvm - - LVM
hdisk1 auto:cdsdisk - - error
hdisk2 auto:cdsdisk - - error
huawei-s5500t0_0 auto:cdsdisk huawei-s5500t0_0 dg1 online invalid
huawei-s5500t0_1 auto:cdsdisk huawei-s5500t0_1 dg1 online invalid
huawei-s5500t0_2 auto:cdsdisk - - online

Creating a Volume
After creating a disk group, run the vxassist -g disk group make volume name capacity command to create a volume. The following is an example:
bash-3.2#vxassist -g dg1 make vol2 10g
bash-3.2#vxprint -g dg1 -t vol2
V Name RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
v vol2 - ENABLED ACTIVE SELECT - fsgen

Mounting a Volume
After creating a volume, run the following command to mount the volume to a specific directory:
mount /dev/vx/dsk/disk group/volume name mount directory

Disabling a Volume
A disabled volume is unavailable to users, and its state changes from ENABLED or DETACHED to DISABLED. Run the following command to disable a volume:
vxvol -g disk group stop volume name

Enabling a Volume
An enabled volume is available to users, and its state changes from DISABLED to ENABLED or DETACHED. Run the following command to enable a volume:
vxvol -g disk group start volume name

Deleting a Volume
Run the following command to delete a volume:
vxedit -g disk group -rf rm volume name
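A combined sketch of the volume lifecycle described above (the disk group, volume, and mount point names are illustrative; a VxFS file system is assumed to be created on the volume before mounting):

```shell
# Create a 10 GB volume in disk group dg1 and put a VxFS file system on it.
vxassist -g dg1 make vol1 10g
mkfs -V vxfs /dev/vx/rdsk/dg1/vol1

# Mount the volume, then unmount, disable, and delete it.
mkdir -p /mnt/vxvol
mount -V vxfs /dev/vx/dsk/dg1/vol1 /mnt/vxvol
umount /mnt/vxvol
vxvol -g dg1 stop vol1
vxedit -g dg1 -rf rm vol1
```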

Exporting a Disk Group
In clusters, a disk group needs to be imported or exported during data backup and recovery. Before exporting a disk group, disable all volumes in the disk group. Then run the vxdg deport disk group command to export the disk group. The following is an example:
bash-3.2#vxvol -g dg1 stop vol1
bash-3.2#vxdg deport dg1
bash-3.2#vxdg list
NAME STATE ID

Importing a Disk Group
Run the following command to import a disk group:
vxdg import disk group name
An imported disk group is available only after being activated. The following exemplifies how to import and activate a disk group:
bash-3.2#vxdg import dg1
bash-3.2#vxdg list
NAME STATE ID
dg1 enabled,cds ibm130
bash-3.2#vxvol -g dg1 startall

Adding a Disk to a Disk Group
You can add disks to a disk group whose capacity is insufficient. Run the following command to add a disk to a disk group:
vxdg -g disk group name adddisk disk name
The following is an example:
bash-3.2#vxdg -g dg1 adddisk huawei-s5500t0_2
bash-3.2#vxdisk list
DEVICE TYPE DISK GROUP STATUS
hdisk0 auto:lvm - - LVM
hdisk1 auto:cdsdisk - - error
hdisk2 auto:cdsdisk - - error
huawei-s5500t0_0 auto:cdsdisk huawei-s5500t0_0 dg1 online invalid
huawei-s5500t0_1 auto:cdsdisk huawei-s5500t0_1 dg1 online invalid
huawei-s5500t0_2 auto:cdsdisk huawei-s5500t0_2 dg1 online invalid

Removing a Disk from a Disk Group
Run the following command to remove a disk from a disk group:
vxdg -g disk group name rmdisk disk name
The following is an example:
bash-3.2#vxdg -g dg1 rmdisk huawei-s5500t0_1
bash-3.2#vxdisk list
DEVICE TYPE DISK GROUP STATUS
hdisk0 auto:lvm - - LVM
hdisk1 auto:cdsdisk - - error
hdisk2 auto:cdsdisk - - error
huawei-s5500t0_0 auto:cdsdisk huawei-s5500t0_0 dg1 online invalid
huawei-s5500t0_1 auto:cdsdisk - - online invalid
huawei-s5500t0_2 auto:cdsdisk huawei-s5500t0_2 dg1 online invalid

11 Host High-Availability

11.1 Overview
As services grow, key applications must be available at all times, and systems must be fault-tolerant. However, fault-tolerant systems are costly, so economical solutions that provide fault tolerance are required. A high availability (HA) solution ensures the availability of applications and data in the event of any system component fault. This solution aims at eliminating single points of failure and minimizing the impact of expected or unexpected system downtime. Moreover, an HA solution requires no special hardware.
High Availability Cluster Multi-Processing (HACMP) is IBM's HA cluster software dedicated to the AIX/Linux operating systems on IBM P series servers. This software eliminates single points of failure and ensures system continuity, availability, security, and reliability. HACMP 5.5 and later versions were renamed Power High Availability (PowerHA).

Version Compatibility
HACMP is not compatible with all versions of the AIX operating system. Table 11-1 describes the compatibility between HACMP and the AIX operating system.
Table 11-1 Compatibility between HACMP and the AIX operating system
AIX version columns: AIX 5.1, AIX 5.1 (64-bit), AIX 5.2, AIX 5.3, AIX 6.1, AIX 7.1
HACMP 4.5: No Yes No Yes No No No
HACMP/ES 4.5: No Yes Yes Yes No No No
HACMP/ES 5.1: No Yes Yes Yes Yes No No
HACMP/ES 5.2: No Yes Yes Yes Yes No No
HACMP/ES 5.3: No No No Yes Yes Yes No

HACMP/ES: No No No TL8+ TL4+ No No
HACMP/ES: No No No TL8+ TL4+ Yes Yes
PowerHA 5.5: No No No No TL9+ TL2, SP1+ Yes
PowerHA 6.1: No No No No TL9+ TL2, SP1+ Yes
PowerHA 7.1: No No No No No TL6+ Yes
PowerHA: No No No No No TL7+ Yes
For more information, visit:

Installation and Configuration
For details, visit:
Huawei also provides HACMP configuration guides. You can obtain them from the Huawei customer service center.

Cluster Maintenance

Common Maintenance Commands

Starting a Cluster
Run the following command to start a cluster:
smit clstart
Then, configure the following parameters:
Start Cluster Services on these nodes: indicates the cluster nodes to be started upon the cluster startup. The two nodes can be started one by one or at the same time.
Startup Cluster Information Daemon: indicates whether to start clinfoES upon the cluster startup. If this parameter is set to false, you cannot run the /usr/sbin/cluster/clstat -a command to view the cluster running status.
Check whether any errors occur during the cluster startup.

If any errors occur, stop the cluster first and rectify the faults based on the error messages. Then, start the cluster again.

Stopping a Cluster
Run the following command to stop a cluster:
smit clstop
Then, configure the following parameters:
Stop Cluster Services on these nodes: indicates the cluster nodes to be stopped.
Select an Action on Resource Groups: indicates the mode in which to stop the cluster. Possible values are bring resource groups offline (graceful), move resource groups (takeover), and unmanage resource groups (forced). The meaning of each value is as follows:
bring resource groups offline: stops the cluster on this node. The peer node is not affected.
move resource groups: stops the cluster on this node. The peer node takes over the resources of this node.
unmanage resource groups: forcibly stops HA on this node without releasing any resources. The peer node is not affected.

Checking Cluster Status
The cluster status includes the cluster process status and the cluster service status. Perform the following steps to check the cluster status:
Step 1 Check the cluster process status on the nodes. The command syntax is as follows:
lssrc -g cluster
The command output is as follows:
Figure 11-1 Cluster process status
Step 2 Check the cluster service status on the nodes. The command syntax is as follows:
/usr/sbin/cluster/clstat -r 2 -a
In this command, 2 indicates that the current status is displayed every 2 seconds. The command output is as follows:

Figure 11-2 Cluster service status
In the output, the service IP address and the resource group of the cluster are on node ibm31 and the node is online, which indicates that the cluster is in the normal state.
----End

Cluster Switchover
Perform the following operations to switch services between two nodes:
Run the smit hacmp command on the host and then choose:
System Management (C-SPOC) > HACMP Resource Group and Application Management > Move a Resource Group to Another Node

Cluster Log Analysis
Cluster logs are used for fault diagnosis when clusters encounter problems. HACMP clusters have the following logs:
/var/hacmp/adm/cluster.log: the major HACMP log file, which contains all HACMP errors and events in chronological order.
/var/hacmp/log/cspoc.log: contains all messages generated by C-SPOC commands. This log file resides on the node that invokes the C-SPOC commands. All messages in this log file are recorded in chronological order.
/var/hacmp/log/hacmp.out: contains all outputs generated by the execution of configuration and startup scripts. This log file is a supplement to /var/hacmp/adm/cluster.log. When an anomaly occurs on a cluster, /var/hacmp/log/hacmp.out is checked first.
For more log information, see /var/hacmp/log.


More information

S Series Switch. Cisco HSRP Replacement. Issue 01. Date HUAWEI TECHNOLOGIES CO., LTD.

S Series Switch. Cisco HSRP Replacement. Issue 01. Date HUAWEI TECHNOLOGIES CO., LTD. Cisco HSRP Replacement Issue 01 Date 2013-08-05 HUAWEI TECHNOLOGIES CO., LTD. 2013. All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means without prior

More information

Energy Saving Technology White Paper HUAWEI TECHNOLOGIES CO., LTD. Issue 01. Date

Energy Saving Technology White Paper HUAWEI TECHNOLOGIES CO., LTD. Issue 01. Date Energy Saving Technology White Paper Issue 01 Date 2012-08-13 HUAWEI TECHNOLOGIES CO., LTD. 2012. All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means

More information

Host Attachment Guide

Host Attachment Guide Version 1.6.x Host Attachment Guide Publication: GA32-0643-07 (June 2011) Book number: GA32 0643 07 This edition applies to IBM XIV Storage System Software and to all subsequent releases and modifications

More information

Upgrading or Downgrading the Cisco Nexus 3500 Series NX-OS Software

Upgrading or Downgrading the Cisco Nexus 3500 Series NX-OS Software Upgrading or Downgrading the Cisco Nexus 3500 Series NX-OS Software This chapter describes how to upgrade or downgrade the Cisco NX-OS software. It contains the following sections: About the Software Image,

More information

Huawei MZ510 NIC V100R001. White Paper. Issue 09 Date HUAWEI TECHNOLOGIES CO., LTD.

Huawei MZ510 NIC V100R001. White Paper. Issue 09 Date HUAWEI TECHNOLOGIES CO., LTD. V100R001 Issue 09 Date 2016-11-21 HUAWEI TECHNOLOGIES CO., LTD. 2016. All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means without prior written consent

More information

ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES. Cisco Server and NetApp Storage Implementation

ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES. Cisco Server and NetApp Storage Implementation ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES I. Executive Summary Superior Court of California, County of Orange (Court) is in the process of conducting a large enterprise hardware refresh. This

More information

Direct Attached Storage

Direct Attached Storage , page 1 Fibre Channel Switching Mode, page 1 Configuring Fibre Channel Switching Mode, page 2 Creating a Storage VSAN, page 3 Creating a VSAN for Fibre Channel Zoning, page 4 Configuring a Fibre Channel

More information

Before Contacting Technical Support

Before Contacting Technical Support APPENDIXA This appendix describes the steps to perform before calling for technical support for any Cisco MDS 9000 Family multilayer director and fabric switch. This appendix includes the following sections:

More information

Vendor: HuaWei. Exam Code: H Exam Name: HCNP-Storage-CUSN(Constructing Unifying Storage Network) Version: Demo

Vendor: HuaWei. Exam Code: H Exam Name: HCNP-Storage-CUSN(Constructing Unifying Storage Network) Version: Demo Vendor: HuaWei Exam Code: H13-621 Exam Name: HCNP-Storage-CUSN(Constructing Unifying Storage Network) Version: Demo QUESTION 1 Which of the following options does not belong to the hardware components

More information

Question: 1 You have a Cisco UCS cluster and you must recover a lost admin password. In which order must you power cycle the fabric interconnects?

Question: 1 You have a Cisco UCS cluster and you must recover a lost admin password. In which order must you power cycle the fabric interconnects? Volume: 327 Questions Question: 1 You have a Cisco UCS cluster and you must recover a lost admin password. In which order must you power cycle the fabric interconnects? A. primary first, and then secondary

More information

American Dynamics RAID Storage System iscsi Software User s Manual

American Dynamics RAID Storage System iscsi Software User s Manual American Dynamics RAID Storage System iscsi Software User s Manual Release v2.0 April 2006 # /tmp/hello Hello, World! 3 + 4 = 7 How to Contact American Dynamics American Dynamics (800) 507-6268 or (561)

More information

The procedure was tested on , , and I don't have a lab system with physical HBAs and 5.3 at the moment.

The procedure was tested on , , and I don't have a lab system with physical HBAs and 5.3 at the moment. I received the following question from an AIX administrator in Germany. Hi Chris, on your blog, you explain how to find out the active value of num_cmd_elems of an fc-adapter by using the kdb. So you can

More information

Huawei OceanStor ReplicationDirector Software Technical White Paper HUAWEI TECHNOLOGIES CO., LTD. Issue 01. Date

Huawei OceanStor ReplicationDirector Software Technical White Paper HUAWEI TECHNOLOGIES CO., LTD. Issue 01. Date Huawei OceanStor Software Issue 01 Date 2015-01-17 HUAWEI TECHNOLOGIES CO., LTD. 2015. All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means without

More information

Cisco Nexus 7000 Series NX-OS Virtual Device Context Command Reference

Cisco Nexus 7000 Series NX-OS Virtual Device Context Command Reference Cisco Nexus 7000 Series NX-OS Virtual Device Context Command Reference July 2011 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408

More information

AIX Host Utilities 6.0 Installation and Setup Guide

AIX Host Utilities 6.0 Installation and Setup Guide IBM System Storage N series AIX Host Utilities 6.0 Installation and Setup Guide GC27-3925-01 Table of Contents 3 Contents Preface... 6 Supported features... 6 Websites... 6 Getting information, help,

More information

HUAWEI OceanStor Enterprise Unified Storage System. HyperReplication Technical White Paper. Issue 01. Date HUAWEI TECHNOLOGIES CO., LTD.

HUAWEI OceanStor Enterprise Unified Storage System. HyperReplication Technical White Paper. Issue 01. Date HUAWEI TECHNOLOGIES CO., LTD. HUAWEI OceanStor Enterprise Unified Storage System HyperReplication Technical White Paper Issue 01 Date 2014-03-20 HUAWEI TECHNOLOGIES CO., LTD. 2014. All rights reserved. No part of this document may

More information

IBM Tivoli Storage Manager for AIX Version Installation Guide IBM

IBM Tivoli Storage Manager for AIX Version Installation Guide IBM IBM Tivoli Storage Manager for AIX Version 7.1.3 Installation Guide IBM IBM Tivoli Storage Manager for AIX Version 7.1.3 Installation Guide IBM Note: Before you use this information and the product it

More information

Virtual Private Cloud. User Guide. Issue 21 Date HUAWEI TECHNOLOGIES CO., LTD.

Virtual Private Cloud. User Guide. Issue 21 Date HUAWEI TECHNOLOGIES CO., LTD. Issue 21 Date 2018-09-30 HUAWEI TECHNOLOGIES CO., LTD. Copyright Huawei Technologies Co., Ltd. 2018. All rights reserved. No part of this document may be reproduced or transmitted in any form or by any

More information

HS22, HS22v, HX5 Boot from SAN with QLogic on IBM UEFI system.

HS22, HS22v, HX5 Boot from SAN with QLogic on IBM UEFI system. HS22, HS22v, HX5 Boot from SAN with QLogic on IBM UEFI system. Martin Gingras Product Field Engineer, Canada mgingras@ca.ibm.com Acknowledgements Thank you to the many people who have contributed and reviewed

More information

The Host Server. AIX Configuration Guide. August The Data Infrastructure Software Company

The Host Server. AIX Configuration Guide. August The Data Infrastructure Software Company The Host Server AIX Configuration Guide August 2017 This guide provides configuration settings and considerations for SANsymphony Hosts running IBM's AIX. Basic AIX administration skills are assumed including

More information

Dell EMC Unity Family

Dell EMC Unity Family Dell EMC Unity Family Version 4.2 Configuring Hosts to Access Fibre Channel (FC) or iscsi Storage 302-002-568 REV 03 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published July

More information

HP StorageWorks Performance Advisor. Installation Guide. Version 1.7A

HP StorageWorks Performance Advisor. Installation Guide. Version 1.7A HP StorageWorks Performance Advisor Installation Guide Version 1.7A notice Copyright 2002-2004 Hewlett-Packard Development Company, L.P. Edition 0402 Part Number B9369-96068 Hewlett-Packard Company makes

More information

Cisco UCS Virtual Interface Card Drivers for Windows Installation Guide

Cisco UCS Virtual Interface Card Drivers for Windows Installation Guide Cisco UCS Virtual Interface Card Drivers for Windows Installation Guide First Published: 2011-09-06 Last Modified: 2015-09-01 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA

More information

Power Systems SAN Multipath Configuration Using NPIV v1.2

Power Systems SAN Multipath Configuration Using NPIV v1.2 v1.2 Bejoy C Alias IBM India Software Lab Revision History Date of this revision: 27-Jan-2011 Date of next revision : TBD Revision Number Revision Date Summary of Changes Changes marked V1.0 23-Sep-2010

More information

"Charting the Course... Troubleshooting Cisco Data Center Infrastructure v6.0 (DCIT) Course Summary

Charting the Course... Troubleshooting Cisco Data Center Infrastructure v6.0 (DCIT) Course Summary Description Troubleshooting Cisco Data Center Infrastructure v6.0 (DCIT) Course Summary v6.0 is a five-day instructor-led course that is designed to help students prepare for the Cisco CCNP Data Center

More information

Cisco UCS Local Zoning

Cisco UCS Local Zoning Configuration Guide Cisco UCS Local Zoning Configuration Guide May 2013 2013 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 30 Contents Overview...

More information

Best Practices of Huawei SAP HANA TDI Active-Passive DR Solution Using BCManager. Huawei Enterprise BG, IT Storage Solution Dept Version 1.

Best Practices of Huawei SAP HANA TDI Active-Passive DR Solution Using BCManager. Huawei Enterprise BG, IT Storage Solution Dept Version 1. Best Practices of Huawei SAP HANA TDI Active-Passive DR Solution Using BCManager Huawei Enterprise BG, IT Storage Solution Dept 2017-8-20 Version 1.0 Contents 1 About This Document... 3 1.1 Overview...

More information

EMC CLARiiON Server Support Products for Windows INSTALLATION GUIDE P/N REV A05

EMC CLARiiON Server Support Products for Windows INSTALLATION GUIDE P/N REV A05 EMC CLARiiON Server Support Products for Windows INSTALLATION GUIDE P/N 300-002-038 REV A05 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2004-2006

More information

BGP/MPLS VPN Technical White Paper

BGP/MPLS VPN Technical White Paper V300R001C10 BGP/MPLS VPN Technical White Paper Issue 01 Date 2013-12-10 HUAWEI TECHNOLOGIES CO., LTD. 2013. All rights reserved. No part of this document may be reproduced or transmitted in any form or

More information

P Commands. Send documentation comments to CHAPTER

P Commands. Send documentation comments to CHAPTER CHAPTER 17 The commands in this chapter apply to the Cisco MDS 9000 Family of multilayer directors and fabric switches. All commands are shown here in alphabetical order regardless of command mode. See

More information

Configuring and Managing Zones

Configuring and Managing Zones Send documentation comments to mdsfeedback-doc@cisco.com CHAPTER 30 Zoning enables you to set up access control between storage devices or user groups. If you have administrator privileges in your fabric,

More information

ExpressCluster X 2.0 for Linux

ExpressCluster X 2.0 for Linux ExpressCluster X 2.0 for Linux Installation and Configuration Guide 03/31/2009 3rd Edition Revision History Edition Revised Date Description First 2008/04/25 New manual Second 2008/10/15 This manual has

More information

H3C SecBlade SSL VPN Card

H3C SecBlade SSL VPN Card H3C SecBlade SSL VPN Card Super Administrator Web Configuration Guide Hangzhou H3C Technologies Co., Ltd. http://www.h3c.com Document version: 5PW105-20130801 Copyright 2003-2013, Hangzhou H3C Technologies

More information

Oracle Enterprise Manager Ops Center

Oracle Enterprise Manager Ops Center Oracle Enterprise Manager Ops Center Configure and Install Guest Domains 12c Release 3 (12.3.2.0.0) E60042-03 June 2016 This guide provides an end-to-end example for how to use Oracle Enterprise Manager

More information

esight V300R001C10 SLA Technical White Paper Issue 01 Date HUAWEI TECHNOLOGIES CO., LTD.

esight V300R001C10 SLA Technical White Paper Issue 01 Date HUAWEI TECHNOLOGIES CO., LTD. V300R001C10 Issue 01 Date 2013-12-10 HUAWEI TECHNOLOGIES CO., LTD. 2013. All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means without prior written

More information

Performing Software Maintenance Upgrades

Performing Software Maintenance Upgrades This chapter describes how to perform software maintenance upgrades (SMUs) on Cisco NX-OS devices. This chapter includes the following sections: About SMUs, page 1 Prerequisites for SMUs, page 3 Guidelines

More information

Configuring Server Boot

Configuring Server Boot This chapter includes the following sections: Boot Policy, page 1 UEFI Boot Mode, page 2 UEFI Secure Boot, page 3 CIMC Secure Boot, page 3 Creating a Boot Policy, page 5 SAN Boot, page 6 iscsi Boot, page

More information

Configuring EtherChannels and Layer 2 Trunk Failover

Configuring EtherChannels and Layer 2 Trunk Failover 35 CHAPTER Configuring EtherChannels and Layer 2 Trunk Failover This chapter describes how to configure EtherChannels on Layer 2 and Layer 3 ports on the switch. EtherChannel provides fault-tolerant high-speed

More information

LAN Ports and Port Channels

LAN Ports and Port Channels Port Modes, page 2 Port Types, page 2 UCS 6300 Breakout 40 GB Ethernet Ports, page 3 Unified Ports, page 7 Changing Port Modes, page 10 Server Ports, page 16 Uplink Ethernet Ports, page 17 Appliance Ports,

More information

EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA

EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA Version 4.0 Configuring Hosts to Access VMware Datastores P/N 302-002-569 REV 01 Copyright 2016 EMC Corporation. All rights reserved.

More information

espace UMS V100R001C01SPC100 Product Description Issue 03 Date HUAWEI TECHNOLOGIES CO., LTD.

espace UMS V100R001C01SPC100 Product Description Issue 03 Date HUAWEI TECHNOLOGIES CO., LTD. V100R001C01SPC100 Issue 03 Date 2012-07-10 HUAWEI TECHNOLOGIES CO., LTD. . 2012. All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means without prior

More information

HP StorageWorks. XP Disk Array Configuration Guide for IBM AIX XP24000, XP12000, XP10000, SVS200

HP StorageWorks. XP Disk Array Configuration Guide for IBM AIX XP24000, XP12000, XP10000, SVS200 HP StorageWorks XP Disk Array Configuration Guide for IBM AIX XP24000, XP12000, XP10000, SVS200 Part number: A5951 047 Ninth edition: June 2007 Legal and notice information Copyright 2003, 2007 Hewlett-Packard

More information

Configuring Fibre Channel Interfaces

Configuring Fibre Channel Interfaces This chapter contains the following sections:, page 1 Information About Fibre Channel Interfaces Licensing Requirements for Fibre Channel On Cisco Nexus 3000 Series switches, Fibre Channel capability is

More information

Configuring Security Features on an External AAA Server

Configuring Security Features on an External AAA Server CHAPTER 3 Configuring Security Features on an External AAA Server The authentication, authorization, and accounting (AAA) feature verifies the identity of, grants access to, and tracks the actions of users

More information

Copy-Based Transition Guide

Copy-Based Transition Guide 7-Mode Transition Tool 3.2 Copy-Based Transition Guide For Transitioning to ONTAP February 2017 215-11978-A0 doccomments@netapp.com Table of Contents 3 Contents Transition overview... 6 Copy-based transition

More information

HP A5120 EI Switch Series IRF. Command Reference. Abstract

HP A5120 EI Switch Series IRF. Command Reference. Abstract HP A5120 EI Switch Series IRF Command Reference Abstract This document describes the commands and command syntax options available for the HP A Series products. This document is intended for network planners,

More information

SAN Configuration Guide

SAN Configuration Guide ONTAP 9 SAN Configuration Guide November 2017 215-11168_G0 doccomments@netapp.com Updated for ONTAP 9.3 Table of Contents 3 Contents Considerations for iscsi configurations... 5 Ways to configure iscsi

More information

HP Supporting the HP ProLiant Storage Server Product Family.

HP Supporting the HP ProLiant Storage Server Product Family. HP HP0-698 Supporting the HP ProLiant Storage Server Product Family https://killexams.com/pass4sure/exam-detail/hp0-698 QUESTION: 1 What does Volume Shadow Copy provide?. A. backup to disks B. LUN duplication

More information

IBM Tivoli Storage Manager for HP-UX Version Installation Guide IBM

IBM Tivoli Storage Manager for HP-UX Version Installation Guide IBM IBM Tivoli Storage Manager for HP-UX Version 7.1.4 Installation Guide IBM IBM Tivoli Storage Manager for HP-UX Version 7.1.4 Installation Guide IBM Note: Before you use this information and the product

More information

ExpressCluster X 3.2 for Linux

ExpressCluster X 3.2 for Linux ExpressCluster X 3.2 for Linux Installation and Configuration Guide 5/23/2014 2nd Edition Revision History Edition Revised Date Description 1st 2/19/2014 New manual 2nd 5/23/2014 Corresponds to the internal

More information

Unit 8 System storage overview

Unit 8 System storage overview Unit 8 System storage overview Course materials may not be reproduced in whole or in part without the prior written permission of IBM. 5.1 Unit objectives After completing this unit, you should be able

More information

UCS Engineering Details for the SAN Administrator

UCS Engineering Details for the SAN Administrator UCS Engineering Details for the SAN Administrator Craig Ashapa 2 First things first: debunking a myth Today (June 2012 UCS 2.02m) there is no FCoE northbound of UCS unless you really really really want

More information

OceanStor 9000 InfiniBand Technical White Paper. Issue V1.01 Date HUAWEI TECHNOLOGIES CO., LTD.

OceanStor 9000 InfiniBand Technical White Paper. Issue V1.01 Date HUAWEI TECHNOLOGIES CO., LTD. OceanStor 9000 Issue V1.01 Date 2014-03-29 HUAWEI TECHNOLOGIES CO., LTD. Copyright Huawei Technologies Co., Ltd. 2014. All rights reserved. No part of this document may be reproduced or transmitted in

More information

Installation Manual. NEXSAN MSIO for AIX. Version 2.1

Installation Manual. NEXSAN MSIO for AIX. Version 2.1 NEXSAN MSIO for AIX Installation Manual Version 2.1 NEXSAN 555 St. Charles Drive, Suite 202, Thousand Oaks, CA 91360 p. 866.4.NEXSAN f. 866.418.2799 COPYRIGHT Copyright 2009 2011 by Nexsan Corporation.

More information

Troubleshoot Firmware

Troubleshoot Firmware Recovering Fabric Interconnect During Upgrade, page 1 Recovering IO Modules During Firmware Upgrade, page 8 Recovering Fabric Interconnect During Upgrade If one or both fabric interconnects fail during

More information

Software Installation Reference

Software Installation Reference SANtricity Storage Manager 11.25 Software Installation Reference December 2016 215-09862_B0 doccomments@netapp.com Table of Contents 3 Contents Deciding whether to use this guide... 6 Deciding on the

More information

Inter-VSAN Routing Configuration

Inter-VSAN Routing Configuration CHAPTER 16 This chapter explains the inter-vsan routing (IVR) feature and provides details on sharing resources across VSANs using IVR management interfaces provided in the switch. This chapter includes

More information

Dell EMC Avamar Backup Clients

Dell EMC Avamar Backup Clients Dell EMC Avamar Backup Clients Version 7.5.1 User Guide 302-004-281 REV 01 Copyright 2001-2017 Dell Inc. or its subsidiaries. All rights reserved. Published September 2017 Dell believes the information

More information

Configuring EtherChannels and Link-State Tracking

Configuring EtherChannels and Link-State Tracking CHAPTER 37 Configuring EtherChannels and Link-State Tracking This chapter describes how to configure EtherChannels on Layer 2 and Layer 3 ports on the switch. EtherChannel provides fault-tolerant high-speed

More information

Configuring and Managing Zones

Configuring and Managing Zones CHAPTER 5 Zoning enables you to set up access control between storage devices or user groups. If you have administrator privileges in your fabric, you can create zones to increase network security and

More information

Cisco Nexus 1000V Software Upgrade Guide, Release 4.0(4)SV1(3d)

Cisco Nexus 1000V Software Upgrade Guide, Release 4.0(4)SV1(3d) Cisco Nexus 1000V Software Upgrade Guide, Release 4.0(4)SV1(3d) Revised: May 21, 2011 This document describes how to upgrade the Cisco Nexus 1000V software on a Virtual Supervisor Module (VSM) virtual

More information

Third-Party Client (s3fs) User Guide

Third-Party Client (s3fs) User Guide Issue 02 Date 2017-09-28 HUAWEI TECHNOLOGIES CO., LTD. 2018. All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of

More information

IBM XIV Host Attachment Kit for AIX. Version Release Notes. First Edition (September 2011)

IBM XIV Host Attachment Kit for AIX. Version Release Notes. First Edition (September 2011) Version 1.7.0 Release Notes First Edition (September 2011) First Edition (September 2011) This document edition applies to version 1.7.0 of the IBM XIV Host Attachment Kit for AIX software package. Newer

More information

Q&As. Troubleshooting Cisco Data Center Unified Computing. Pass Cisco Exam with 100% Guarantee

Q&As. Troubleshooting Cisco Data Center Unified Computing. Pass Cisco Exam with 100% Guarantee 642-035 Q&As Troubleshooting Cisco Data Center Unified Computing Pass Cisco 642-035 Exam with 100% Guarantee Free Download Real Questions & Answers PDF and VCE file from: 100% Passing Guarantee 100% Money

More information

Install ISE on a VMware Virtual Machine

Install ISE on a VMware Virtual Machine Supported VMware Versions, page 1 Support for VMware vmotion, page 1 Support for Open Virtualization Format, page 2 Virtual Machine Requirements, page 3 Virtual Machine Resource and Performance Checks,

More information

Configuring FCoE NPV. Information About FCoE NPV. This chapter contains the following sections:

Configuring FCoE NPV. Information About FCoE NPV. This chapter contains the following sections: This chapter contains the following sections: Information About FCoE NPV, page 1 FCoE NPV Model, page 3 Mapping Requirements, page 4 Port Requirements, page 5 NPV Features, page 5 vpc Topologies, page

More information

FibreQuik Fibre Channel Host Bus Adapter User Guide

FibreQuik Fibre Channel Host Bus Adapter User Guide FibreQuik Fibre Channel Host Bus Adapter User Guide Cambex Corporation 115 Flanders Road Westborough, MA 01581 Customer support support@cambex.com Document: 081-468-032 Date: 6/2/02 Rev.: D Table of Contents

More information

Configuring EtherChannels and Layer 2 Trunk Failover

Configuring EtherChannels and Layer 2 Trunk Failover 28 CHAPTER Configuring EtherChannels and Layer 2 Trunk Failover This chapter describes how to configure EtherChannels on Layer 2 ports on the switch. EtherChannel provides fault-tolerant high-speed links

More information

vsphere Networking Update 1 ESXi 5.1 vcenter Server 5.1 vsphere 5.1 EN

vsphere Networking Update 1 ESXi 5.1 vcenter Server 5.1 vsphere 5.1 EN Update 1 ESXi 5.1 vcenter Server 5.1 vsphere 5.1 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check

More information

IVR Zones and Zonesets

IVR Zones and Zonesets Information about, page 1 Default Settings, page 3 Licensing Requirements, page 3 Guidelines and Limitations, page 3 Configuring, page 4 Verifying IVR Configuration, page 11 Feature History, page 12 Information

More information

Cisco Nexus 1000V Software Upgrade Guide, Release 4.2(1)SV1(4a)

Cisco Nexus 1000V Software Upgrade Guide, Release 4.2(1)SV1(4a) Cisco Nexus 1000V Software Upgrade Guide, Release 4.2(1)SV1(4a) Revised: May 9, 2012 Caution The upgrade procedure for Release 4.2(1)SV1(4a) has changed. We highly recommend that you read this document

More information

C H A P T E R Commands Cisco SFS Product Family Command Reference OL

C H A P T E R Commands Cisco SFS Product Family Command Reference OL CHAPTER 3 This chapter documents the following commands: aaa accounting, page 3-8 aaa authorization, page 3-9 action, page 3-11 addr-option, page 3-12 authentication, page 3-14 auto-negotiate (Ethernet

More information

Cisco Cisco Data Center Associate Level Accelerated - v1.0 (DCAA)

Cisco Cisco Data Center Associate Level Accelerated - v1.0 (DCAA) Course Overview DCAA v1.0 is an extended hours bootcamp class designed to convey the knowledge necessary to understand and work with Cisco data center technologies. Covering the architecture, components

More information

Initial Setup. Cisco APIC Documentation Roadmap. This chapter contains the following sections:

Initial Setup. Cisco APIC Documentation Roadmap. This chapter contains the following sections: This chapter contains the following sections: Cisco APIC Documentation Roadmap, page 1 Simplified Approach to Configuring in Cisco APIC, page 2 Changing the BIOS Default Password, page 2 About the APIC,

More information

Oracle Enterprise Manager Ops Center. Overview. What You Need. Create Oracle Solaris 10 Zones 12c Release 3 ( )

Oracle Enterprise Manager Ops Center. Overview. What You Need. Create Oracle Solaris 10 Zones 12c Release 3 ( ) Oracle Enterprise Manager Ops Center Create Oracle Solaris 10 Zones 12c Release 3 (12.3.0.0.0) E60027-01 June 2015 This guide provides an end-to-end example for how to use Oracle Enterprise Manager Ops

More information

Cisco UCS Manager Firmware Management Using the CLI, Release 3.1

Cisco UCS Manager Firmware Management Using the CLI, Release 3.1 First Published: 2016-01-20 Last Modified: 2017-04-27 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387)

More information

Hitachi Compute Blade HVM Navigator User s Guide - LPAR Configuration

Hitachi Compute Blade HVM Navigator User s Guide - LPAR Configuration Hitachi Compute Blade HVM Navigator User s Guide - LPAR Configuration FASTFIND LINKS Document organization Product version Getting help Contents MK-99COM042-11 2012-2015 Hitachi, Ltd. All rights reserved.

More information

3.1. Storage. Direct Attached Storage (DAS)

3.1. Storage. Direct Attached Storage (DAS) 3.1. Storage Data storage and access is a primary function of a network and selection of the right storage strategy is critical. The following table describes the options for server and network storage.

More information

Cisco Nexus 3500 Series NX-OS Software Upgrade and Downgrade Guide, Release 7.x

Cisco Nexus 3500 Series NX-OS Software Upgrade and Downgrade Guide, Release 7.x Cisco Nexus 3500 Series NX-OS Software Upgrade and Downgrade Guide, Release 7.x First Published: 2018-02-01 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

EMC NetWorker Module for SnapImage Release 2.0 Microsoft Windows Version

EMC NetWorker Module for SnapImage Release 2.0 Microsoft Windows Version EMC NetWorker Module for SnapImage Release 2.0 Microsoft Windows Version Installation and Administration Guide P/N 300-007-130 REV A01 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000

More information

C exam. Number: C Passing Score: 800 Time Limit: 120 min IBM C IBM AIX Administration V1.

C exam. Number: C Passing Score: 800 Time Limit: 120 min IBM C IBM AIX Administration V1. C9010-022.exam Number: C9010-022 Passing Score: 800 Time Limit: 120 min IBM C9010-022 IBM AIX Administration V1 Exam A QUESTION 1 A customer has a virtualized system using Virtual I/O Server with multiple

More information