x10sure 3.2 One to one recover after repair

Contents

1 1:1 with external storage
1.1 After power failure etc.
1.2 After repair
1.3 After replacement of the failed node
1.3.1 Boot device via FC
1.3.2 Boot device via iSCSI (ext. RAID box)
2 Details to set up the replaced node with ext. storage
2.1 BIOS and HBA settings
2.1.1 BX600/900 settings for Fibre Channel daughter cards
2.1.2 QLogic HBA setup
2.1.3 Emulex HBA setup
2.2 Additional RX200/300 production node settings
2.2.1 Enabling slot for FC controller
2.2.2 Configuring IPMI
2.3 Connecting the nodes to the storage
2.3.1 Registering the node on the FibreCAT CX
2.3.2 Registering the node on the FibreCAT SX
2.3.3 Configuring the node for NetApp FCP
2.3.4 Configuring the node for Eternus DX60 / DX80 / DX90
3 Related publications

Purpose

x10sure 1:1 is the entry-level high availability configuration with one production node and one control node. Failure detection and image failover are fully automated. The focus of this document is the manual repair and recovery procedure after a failover. The reader should be familiar with the x10sure documents UserGuide.pdf and SitePreparation.pdf.

1 1:1 with external storage

In general, three scenarios have to be distinguished:

I. Normal operation: machine-1 is the Control Node (Failover Node), machine-2 is the Production Node.
II. After failover: the former Control Node acts as the Production Node.
III. After repair: the repaired machine acts as the Production Node, the other machine as the Control Node (Failover Node).

1.1 After power failure etc.

This covers recovery after a simple failover caused by a power failure, the replacement of defective cables or the like, i.e. without touching the server hardware.

The former ControlNode/FailoverNode is now the ProductionNode. Switch on the former ProductionNode; it will start as the ControlNode/FailoverNode. Once the ControlNode is up, start the Runtime GUI and take both nodes out of Manual Mode so that they are monitored again.

Recovery is now complete.

1.2 After repair

If the repair was done without replacing the main board (BIOS entries), the LAN controller (iSCSI entry) or the SAN controller (WWN), the handling is the same as described in the previous chapter; otherwise see the next chapter.

1.3 After replacement of the failed node

If you have replaced the damaged node with a new one, perform the following.

1.3.1 Boot device via FC

Check the external storage configuration for the existence of:
o a storage group (if you use the CX),
o an initiator group (if you use NetApp),
o an affinity group (if you use the DX), or
o a volume (if you use the SX)
assigned to the damaged node. The goal of this procedure is to connect this storage group/affinity group/volume to the new node.

First delete the HBA registration of the damaged node from the storage as follows:
o CX: Use the CX Navisphere GUI Connectivity Status menu and delete the WWN(s) of the damaged node by selecting the WWN and clicking the Deregister button (refer to section 2.1 BIOS and HBA settings for more details).
o SX: Connect to the SX with Internet Explorer, select MANAGE -> VOLUME MANAGEMENT -> volume mapping -> manage host list, and delete the WWN(s) of the damaged node by clicking the Delete button on the right side of the Global Host Port List.
o DX: Address the DX with Internet Explorer and select Volume Settings -> Host I/F Management -> Configure LUN Mapping, select each port and edit the Affinity Group to disable all entries of the damaged node (select Disable in the box). Do not forget to click the Set button once all entries for one port are done. Then select Volume Settings -> Host I/F Management -> Setup FC Host and delete the entries with the WWN(s) of the damaged node by marking them and clicking the Delete button. Do not forget to click the Set button at the end.
o NetApp: Start the ONTAP GUI (FilerView URL) in your browser. Select LUNs -> Initiator Groups -> Manage and delete/replace the WWN(s) of the damaged node.

Configure the new node as described in section 2.1 BIOS and HBA settings and
o connect/map it to the storage group of the damaged node (if you are using the CX),
o connect/map it to the affinity group at the same ports as the damaged node was connected (if you use the DX), or
o map all volumes of the replaced compute node to the new compute node (if you are using the SX).

Enter the x10sure Configuration Wizard. Select the Server Configuration and choose the proper Volume Name, LUN Name, Storage Group or Affinity Group for the failed (crashed) node. The corresponding new WWPN(s) will be shown automatically. After Tools -> Activate, x10sure will handle the new WWPN(s) of the replaced node. If the node is still in the crash state, select the node action Try boot to reach the online state; otherwise move the node out of Manual Mode, and finally take the FailoverNode out of Manual Mode.

Recovery is now complete.

A detailed description can be found in chapter 2 Details to set up the replaced node with ext. storage.

1.3.2 Boot device via iSCSI (ext. RAID box)

Enable the local disk as documented in chapter 2.1 BIOS and HBA settings (look only for the local disk, ignore the descriptions for FC). Set the IPMI parameters as described in chapter 2.2.2 Configuring IPMI.

1.3.2.1 Get initiator and target from storage

Start the configuration GUI of the external storage.

Check which initiator is used by the damaged node: navigate to LUNs -> Initiator Groups -> Manage and write down the initiator (iqn string) of the damaged node (Figure 2).

Figure 2: Manage Initiator Groups

Determine the iSCSI iqn name of the RAID box. This is the iSCSI target required for the BIOS setup of the replaced node. Navigate to LUNs -> iSCSI -> Manage Names and write down the iSCSI node name (Figure 3).

Figure 3: Manage iSCSI Names

1.3.2.2 Set up BIOS for iSCSI boot

To enter the BIOS, reboot your replaced node and press [F2], then put the Legacy Network Card at the top of the boot priority order.

Figure 4: Boot priority order

In the Advanced menu of the BIOS Setup, select Peripheral Configuration and enter the submenu. Select the entry of the LAN controller that you want to configure for iSCSI boot and set the parameter for this controller to iSCSI. Activate the Option ROM Scan only for the controller you want to use for iSCSI boot. Save the changes and exit the BIOS Setup.

Figure 5: Enable iSCSI for LAN1 Oprom

1.3.2.3 Configure LAN controller for iSCSI

Power on or reset the system and press the key combination for your LAN controller to start the iSCSI Boot Configuration utility:
o For PRIMERGY servers with a Broadcom onboard LAN controller and Broadcom MBA driver, press [Ctrl] + [S] (e.g. RX300-S4).
o For PRIMERGY servers with an Intel onboard LAN controller and Intel MBA driver, press [Ctrl] + [D] (e.g. RX300-S5).

The following example shows the Intel iSCSI Boot Configuration utility only.

Power on or reset the node and press [Ctrl] + [D] to start the Intel iSCSI Boot Configuration Utility. The first menu of the utility displays a list of iSCSI boot-capable adapters (Figure 6). Enable onboard LAN port 0 as the Primary iSCSI boot device by pressing the [P] key while it is highlighted.

Figure 6: Enable iSCSI Port

Press [Return] to open the iSCSI Port Configuration menu.

Figure 7: Enable iSCSI Port Configuration

Select iSCSI Boot Configuration (see Figure 7). Set the Initiator Name, the static Initiator IP, the Target Name and the Target IP to the values the damaged node had before. Port 3260 is the default iSCSI port number.

Figure 8: Enable iSCSI Port

Skip the iSCSI CHAP configuration and select [OK] to reboot the PRIMERGY.
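As a sanity check before rebooting, it can help to note down the four boot parameters and verify that both iqn strings follow the usual iSCSI naming convention (iqn.<year-month>.<reversed domain>[:<identifier>]) and that the target port is the default 3260. The following Python sketch is only an illustration and not part of x10sure; every value shown is invented and must be replaced with the values taken from your storage (section 1.3.2.1).

    # Hypothetical example values; replace with the initiator/target data of the damaged node.
    iscsi_boot = {
        "initiator_name": "iqn.2009-01.com.example:node1",       # example only
        "initiator_ip":   "192.168.10.21",                        # static IP, example only
        "target_name":    "iqn.1992-08.com.netapp:sn.12345678",   # example only
        "target_ip":      "192.168.10.10",                        # example only
        "target_port":    3260,                                    # iSCSI default port
    }

    def looks_like_iqn(name: str) -> bool:
        """Rough check of the iqn.<yyyy-mm>.<domain>[:<id>] naming convention."""
        parts = name.split(".", 2)
        return (len(parts) == 3 and parts[0] == "iqn"
                and len(parts[1]) == 7 and parts[1][4] == "-")

    assert looks_like_iqn(iscsi_boot["initiator_name"])
    assert looks_like_iqn(iscsi_boot["target_name"])
    assert iscsi_boot["target_port"] == 3260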

2 Details to set up the replaced node with ext. storage

2.1 BIOS and HBA settings

Perform the following setup procedures for the replaced server. Refer to the appropriate PRIMERGY server manual for more details. The speeds of all components used for the I/O connection have to be set to the same fixed speed (if possible).

2.1.1 BX600/900 settings for Fibre Channel daughter cards

Refer to section 2.2 Additional RX200/300 production node settings for additional PRIMERGY RX BIOS settings.

BX920

Switch on the blade and press [F2] to enter the blade BIOS.

Figure 9: BX920 BIOS PCI Configuration

Select Advanced -> PCI Configuration. Enable Remote Boot for Fibre Channel (Mezzanine Card 1), then Save and Exit.

During the next boot, make sure that the disk controller is properly disabled: wait for the console message LSI Corporation and press [Ctrl] + [C] to enter the LSI SAS BIOS.

Figure 10: LSI SAS BIOS

If the Adapter status is Enabled, press [Enter] to change the Adapter properties (see the figure below).

Note: The onboard disk controller should not be disabled if you are running ESX servers in your configuration (Figure 11).

Figure 11: Disable local disk boot

Select Boot Support and choose [Disabled].

BX630-S5

Switch on the blade and press [F2] to enter the blade BIOS. Select Advanced -> Peripheral Configuration (Figure 12).

Figure 12: Advanced BIOS menu

Set Daughter Board OPROM to Enabled to permit installation or remote booting via the Fibre Channel HBA:

Figure 13: Enable Daughter Board OPROM

Press [Esc] -> Exit to save your settings and reboot in order to disable the local disk.

During the next boot, make sure that the disk controller is properly disabled: wait for the console message LSI Corporation and press [Ctrl] + [C] to enter the LSI SAS BIOS.

Figure 14: Disable the LSI SAS adapter

If the Adapter status is Enabled, press [Enter] to change the Adapter properties (see Figure 11, disable local disk boot). The onboard disk controller should not be disabled if you are running ESX servers in your configuration.

2.1.2 QLogic HBA setup

When prompted during the reboot, press [Ctrl] + [Q] to enter the QLogic Fast!UTIL Fibre Channel configuration utility (Figure 15).

Figure 15: Start QLogic Fast!UTIL

Press [Enter] to select Configuration Settings for the first adapter.

Press [Enter] to configure the Adapter Settings (Figure 16). Set the Host Adapter BIOS to Enabled. Set the Connection Options to 1 (point-to-point only). Select a fixed Data Rate, for example 2 Gbit/s as in Figure 16. The setting must match the FC switch port to which the node is connected.

Figure 16: Host Adapter Settings

Press [Esc] to return to the upper menu. If you have configured redundant paths to the storage, press [Esc] until you reach the initial QLogic Fast!UTIL window, choose Select Host Adapter (Figure 17) and configure the Host Adapter Settings of the second HBA in the same way.

Figure 17: Fast!UTIL Options

Reboot the node and enter the QLogic Fast!UTIL again. Because the node rebooted with the HBA BIOS enabled, the WWNs of the HBAs are now visible in the GUI of the connected storage device. Register the WWNs of your HBAs with the storage device and map the boot LUN to them (refer to the section Connecting the nodes to the storage).

Select the Selectable Boot Settings menu option for the first HBA. Enable the Selectable Boot Setting. Go to the next line, (Primary) boot port name/lun, and press [Enter]. Select all the offered storage devices for the SAN boot and press [Enter] (Figure 18). If the node is connected to multiple ports on the storage, you will see multiple entries; select each one, one at a time.

Figure 18: Select Fibre Channel Device

If redundant storage devices are configured for DDM support, also select the offered storage devices for the SAN boot from the second storage. Ensure you use the proper order of SAN boot devices when redundant storage is configured.

Press [Esc] to go up one menu. Press [Esc] again to get back to the main menu. Press [Enter] to Save Changes. Configure the Selectable Boot Settings of the second HBA in the same way. Exit Fast!UTIL (Figure 19).

Figure 19: Fast!UTIL Options

Press [Enter] to reboot. During the reboot, press [F2] to enter the production node BIOS and select Boot -> Boot Device Priority. Further information is provided in the section /

2.1.3 Emulex HBA setup

For the following example we used an RX200 S3 server. Other PRIMERGY models might vary slightly.

2.1.3.1 Configure the adapter's parameters

Enable the HBA in the PRIMERGY BIOS as described in section 2.2.1 Enabling slot for FC controller and reboot.

Entering the Emulex LightPulse BIOS Utility menu

When prompted during the reboot, press [Alt] + [E] to enter the Emulex LightPulse BIOS Utility (Figure 20).

Figure 20: Start Emulex BIOS Utility

Type 1 to configure the first adapter (Figure 21).

Figure 21: Select Adapter

Type 2 to configure the adapter's parameters (Figure 22).

Figure 22: Select Adapter's Parameters

Type 1 to change the HBA's BIOS setting (Figure 23).

Figure 23: Select Enable or Disable BIOS

Type 1 to enable the HBA's BIOS (Figure 24).

Figure 24: Enable BIOS

Press [Esc] to return to the previous menu. Next, type 3 to adjust the PLOGI Retry Timer, then type 2 to set the timer to 50 msec (Figure 25).

Figure 25: Set Retry Timer

Press [Esc] to return to the previous menu. Next, type 4 to adjust the topology (Figure 26).

Figure 26: Select Topology Selection

Enter 4 to select Fabric Point to Point (Figure 27).

Figure 27: Set Topology to Fabric Point to Point

Press [Esc] to return to the previous menu. Next, type 11 to adjust the FC speed. The FC speed must be set to a fixed value of 1, 2 or 4 Gbit/s (Figure 28). The latest HBA versions also support 8 Gbit/s.

Figure 28: Adjusting the FC speed

Press [Esc] to return to the previous menu. Next, type 6 to select Auto Scan Setting, then type 3 to select First LUN 0 device (Figure 29).

Figure 29: Selecting first LUN 0 device

Press [Esc] three times to return to the main menu. Configure the second adapter in the same way as the first one. When you are finished, exit the Emulex BIOS Utility by pressing x. When prompted, enter y to reboot (Figure 30).

Figure 30: Rebooting after exiting the Emulex BIOS utility

When prompted during the reboot, press [Alt] + [E] to re-enter the Emulex BIOS Utility. Because the node rebooted with the HBA BIOS enabled, the WWNs of the HBAs are now visible in the GUI of the connected storage device. Use the storage GUI and follow the procedure described in the section Connecting the nodes to the storage to register the replaced node with the storage device and assign the storage group (CX), map the boot LUN to the PRIMERGY server (SX) or assign the affinity group (DX).

Next, return to the Emulex BIOS Utility to configure the boot devices in the Emulex HBA BIOS as described in the following section.

2.1.3.2 Configuring the boot devices

After entering the Emulex BIOS Utility, select the first adapter. Type 1 to select Configure Boot Devices (Figure 31).

Figure 31: Configuring boot devices in Emulex BIOS utility

Type 1 to select the first boot entry (Figure 32).

Figure 32: Selecting the first boot entry

Depending on your configuration, the available boot paths/devices are offered. Enter 01 to select the first path to the boot device (Figure 33).

Figure 33: Selecting the first path to the boot device

Enter the LUN number 00 for the boot LUN (Figure 34).

Figure 34: Entering the LUN number for the boot LUN

The Emulex BIOS offers the choice of addressing the boot device via the device ID of the disk or via the WWPN. For x10sure you must use booting via WWPN. Press 01 to select the assigned LUN 00 (Figure 35).

Figure 35: Selecting the assigned boot LUN

Type 1 to select Boot this Device via WWPN (Figure 36).

Figure 36: Selecting to boot device via WWPN

In the following screen, you see the selected WWPN of the storage device as the Primary Boot entry. If you have a redundant path configuration (as recommended), press 2 to configure the second path as a boot device (Figure 37).

Figure 37: Configuring the second path as a boot device

Type 02 to select the second device as The Desired Boot Device (Figure 38).

Figure 38: Selecting the second boot device

Again, enter 00 for the boot LUN number (Figure 39).

Figure 39: Entering the boot LUN number for the second device

Press 01 to select the assigned LUN 00 (Figure 40).

Figure 40: Setting the boot method via WWPN

Type 1 to boot this device via WWPN (Figure 41).

Figure 41: Continue setting the boot method via WWPN

Press [Esc] several times until you can select the second adapter, and repeat the procedure. Press x to exit, and then y when prompted to reboot.

2.2 Additional RX200/300 production node settings

x10sure requires that you make the following changes in the system BIOS of the RX200/300 machines:
o Enable the PCI slot for the FC controller
o Change the boot options
o Specify the IPMI settings

2.2.1 Enabling slot for FC controller

Power the server on, and press the [F2] key to enter the BIOS. In the Advanced screen of the BIOS setup, select PCI Configuration (Figure 42).

Figure 42: RX200/300 Advanced BIOS screen

Select PCI SLOTS Configuration (Figure 43).

Figure 43: RX200/300 PCI Configuration BIOS screen

Select the slot that contains the FC controller, and set the Option ROM Scan value to Enabled (Figure 44).

Figure 44: RX200/300 PCI SLOTS Configuration BIOS screen options

Use the [Esc] key to exit the screen.

2.2.2 Configuring IPMI

The initial IPMI configuration consists of the following:
o Configuring the interface in the BIOS
o Changing the password, if desired

2.2.2.1 Configuring IPMI in the BIOS

From the Server menu, select IPMI (Figure 45).

Figure 45: RX200/300 Server BIOS screen

From the IPMI menu, select LAN (Figure 46).

Figure 46: RX200/300 IPMI BIOS screen

Set the Management LAN value to Enabled and set the Management LAN Port value to Management. Specify the IPMI IP address, subnet mask, and gateway (Figure 47).

Figure 47: RX200/300 LAN Settings BIOS screen (IPMI)

From the Exit menu, select Save Changes & Exit (Figure 48).

Figure 48: RX200/300 Save Changes & Exit BIOS screen
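Before saving, it may be worth checking that the IPMI address, subnet mask and gateway you plan to enter are consistent with each other. The sketch below is only an optional convenience and not part of x10sure; the IP address reuses the example from section 2.2.2.2, while the netmask and gateway are invented placeholders.

    # Optional sanity check of the planned IPMI LAN settings (example values only).
    import ipaddress

    ipmi_ip = "10.10.0.160"      # example from this document
    netmask = "255.255.255.0"    # placeholder, use your own value
    gateway = "10.10.0.1"        # placeholder, use your own value

    # Derive the subnet from the IPMI address and netmask, then check the gateway is inside it.
    network = ipaddress.ip_network(f"{ipmi_ip}/{netmask}", strict=False)
    assert ipaddress.ip_address(gateway) in network, "gateway is not in the IPMI subnet"
    print(f"{ipmi_ip} and gateway {gateway} are both in {network}")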

2.2.2.2 Changing the IPMI user name and password

If you have to change the default IPMI user name and password, log into the iRMC Web interface by entering the IPMI IP address into a Web browser, as in the following example: http://10.10.0.160/

Enter the default user name and password into the login window. The default user name and password is admin/admin (Figure 49).

Figure 49: iRMC Web interface login window

Select User Management in the menu on the left side of the window (Figure 50).

Figure 50: Selecting iRMC User Management

Click on New User in the User Management window (Figure 51).

Figure 51: Creating new IPMI user

Specify the following new user configuration details:
o Name/Password/Confirm Password: Enter the new user name and password, and confirm the password.
o LAN Privilege/Serial Privilege: x10sure needs Administrator privileges to function properly; therefore, you must choose either Administrator/Administrator or OEM/OEM from the drop-down selection windows.
o User Shell: Keep the default option, Remote Manager.
o User Description: Change the user description, if desired.

Click on Apply (Figure 52), and close the browser to exit.

Figure 52: Specifying new IPMI user configuration details

2.3 Connecting the nodes to the storage

Go to the sub-section that matches your storage box.

2.3.1 Registering the node on the FibreCAT CX

Switch to the Navisphere GUI window, right-click on the SAN storage device and open the Connectivity Status window. The WWN of the new node will appear automatically (triggered by a scan of the FC adapter) (Figure 53).

Figure 53: Navisphere Connectivity Status window

Click on the WWN (Initiator Name) and click the Register button. Register the WWN to the existing host entry of the damaged node and confirm. Repeat the registration for all HBAs of your replaced node.

Click on the WWN (Initiator Name) of the damaged node (Logged In: No), then click the Deregister button and confirm. Repeat the deregistration for all HBAs of your damaged node.

Do the registration first; otherwise the host entry is lost and you must create a new host entry and assign the host connection. The host names in x10sure have to be the same as the ones registered in the CX.

When using a CX3/CX4, choose the Initiator Type "CLARiiON Open" (Figure 54).

Figure 54: Choose Initiator Type
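Throughout this chapter you repeatedly have to match the WWNs shown by the storage GUIs (CX, SX, DX, NetApp) against the WWNs reported by the HBA BIOS, and the two often use different formatting (with or without colons, upper or lower case). The following helper is a small optional sketch, not part of x10sure; the example WWPN values are invented.

    # Hypothetical helper: normalize WWPN strings so GUI and HBA BIOS notations compare equal.
    import re

    def normalize_wwpn(wwpn: str) -> str:
        """Return the WWPN as 16 lowercase hex digits without separators."""
        digits = re.sub(r"[^0-9a-fA-F]", "", wwpn)
        if len(digits) != 16:
            raise ValueError(f"not a valid 8-byte WWPN: {wwpn!r}")
        return digits.lower()

    def same_wwpn(a: str, b: str) -> bool:
        """True if both strings denote the same WWPN regardless of formatting."""
        return normalize_wwpn(a) == normalize_wwpn(b)

    # Example: both spellings refer to the same port (values invented).
    assert same_wwpn("21:00:00:E0:8B:12:34:56", "210000e08b123456")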

2.3.1.1 Assigning a storage group to the node

Right-click on the storage group to be assigned to this production node (for example, Mailserver) and select Connect hosts. Select the host from the list of available hosts, move it to the list of hosts to be connected and confirm. The host name connected to a storage group is case sensitive and has to match the server name as configured within x10sure.

2.3.1.2 Production node running ESX4

If the production node is running ESX4, the kernel will perform an automatic registration with the CX storage. For x10sure this auto registration has to be disabled. This is done in the advanced software settings of the ESX client configuration (Figure 55).

Figure 55: Enter advanced settings of ESX client

Disable the automatic NaviAgent registration by setting the configuration parameter Disk.EnableNaviReg to 0 (Figure 56).

Figure 56: Disable automatic NaviAgent registration

2.3.2 Registering the node on the FibreCAT SX

Register the replaced node on a Fibre Channel based SX as follows:

Switch to the SX window. In the left window pane, click MANAGE -> VOLUME MANAGEMENT and select volume mapping -> manage host list. The WWN of the HBA now appears in the Global Host Port List window pane. Delete the entries of the damaged host first, to release the nickname, by clicking the Delete button. Enter a nickname into the Port Nickname field and click the Edit Name button (Figure 57). In contrast to the HBA's host name in the CX, the nickname of an HBA/WWN in the SX has to be unique. Using host names as nicknames is allowed; it is not mandatory, but it is recommended because a name is easier for administrators to handle than the WWN.

Figure 57: Registering node on a Fibre Channel based SX

Repeat the process for all HBAs of the replaced node.

Registering at an iSCSI based SX is not necessary because you use the identity of the damaged node.

2.3.2.1 Volume mapping for Fibre Channel SX

After you have registered an HBA, you must assign the volume(s) to it. We recommend that you register all the HBAs of a node first and then assign the volumes to all of them. Assign volumes as follows:

In the SX left-hand window pane, click MANAGE -> VOLUME MANAGEMENT -> volume mapping, and then select the map hosts to volume submenu. Select the volume you want to map by clicking its volume name in the Volume Name column. In the Assign Host Access Privileges window pane, select the replaced host to connect with this volume. In the LUN field, enter the same value as shown for the damaged node. Click the Map it button (Figure 58).

Figure 58: Map hosts to volume

Map all volumes to be used with the replaced node to all of its registered HBAs.

Select the damaged host to disconnect it from this volume. Leave the LUN field empty and click the Unmap it button (Figure 58). Unmap all volumes from all registered HBAs of the damaged node.

2.3.2.2 How to register the SX standby path in the boot device list

When using an SX with two controllers, one path is in active mode while the other path is in standby mode. The Emulex or QLogic BIOS Utility menu (see section 2.1 BIOS and HBA settings) is able to register the active path but not the passive path. We recommend the following approach: define two small dummy volumes, e.g. dummy1 and dummy2. One of them must be part of a vdisk owned by controller A, the other one must be part of a vdisk owned by controller B. Use LUN number 0 and map the dummy LUNs to all HBAs of the replaced node. This makes it possible to register the standby path (WWPN) in the boot device list. At the end of the mapping procedure, the two dummy volumes must be unmapped again.

2.3.3 Configuring the node for NetApp FCP

Configuring an iSCSI based NetApp is not necessary because you use the entry of the damaged node.

You can configure the filer by entering its URL in a browser of your choice. To modify the configuration, navigate to LUNs -> Initiator Groups -> Manage, click on the group for the replaced node, and the modify window appears (Figure 59).

Figure 59: Modify Initiator Group

Delete the WWPN(s) of the damaged node, add the WWPN(s) of the replaced node, click Apply to activate the change, and then reboot the replaced node.

2.3.4 Configuring the node for Eternus DX60 / DX80 / DX90

Connect to the DX with Internet Explorer and select Volume Settings -> Host I/F Management -> Configure LUN Mapping, select the first port and click the Edit button to edit the Affinity Groups (Figure 60).

Figure 60: Edit Affinity Group Setting

Disable all entries of the damaged node (select Disable in the box). Before doing so, write down the Affinity Group(s) of the damaged node, because you will need them later to assign the Affinity Groups to the replaced node. Do not forget to click the Set button once all entries for one port are done, and confirm (Figure 61).

Figure 61: Disable Affinity Groups

Disable the Affinity Groups of the damaged node on all concerned ports. Then select Volume Settings -> Host I/F Management -> Setup FC Host and delete the entries with the WWN(s) of the damaged node by marking them and clicking the Delete button. Do not forget to click the Set button at the end, and confirm (Figure 62).

Figure 62: Delete Registration

Register all HBAs of the replaced node: select Volume Settings -> Host I/F Management -> Setup FC Host and select Add (Figure 63).

Figure 63: Setup FC Port

Select an FC port and select the WWN of the replaced node from the list box. Give this WWN the same name as the damaged node had before and select OK (Figure 64).

Figure 64: Add FC Host

Do this for all different WWNs at all ports. Finally, click Set and confirm to save the values.

Map the Affinity Groups back to the replaced node: select Volume Settings -> Host I/F Management -> Configure LUN Mapping, select the first port and click the Edit button to edit the Affinity Groups. Select the Affinity Group in the box that the damaged node had before. Do not forget to click the Set button once all entries for one port are done, and confirm.

Registering at an iSCSI based DX is not necessary because you use the identity of the damaged node.

3 Related publications

x10sure 3.2 Site Preparation
x10sure 3.2 User Guide