Installing the Microsoft Hyper-V Failover Cluster on N series


Redpaper

Installing the Microsoft Hyper-V Failover Cluster on N series

Alex Osuna
William Luiz de Souza

Introduction

IBM System Storage N series offers a wealth of data protection, high availability, multiprotocol, and performance features. By installing the Microsoft Windows 2008 Hyper-V failover cluster on the N series, you create a configuration that meets today's virtualization and data availability requirements. This IBM Redpaper publication outlines the steps performed to install the Microsoft Windows 2008 Hyper-V failover cluster server using an N series storage system.

Accessing the N series storage system

After planning and implementing the necessary features on the N series storage system, and after creating the aggregates and volumes, it is time to access the storage system and create the logical unit numbers (LUNs) from the host client, which is the Microsoft Hyper-V failover cluster server. The first step is to install and configure the protocol that will be used for this access. At this point you have already planned which protocol you are going to use and have all the infrastructure in place for either Internet SCSI (iSCSI) or Fibre Channel Protocol (FCP). Some configuration must be done on the N series storage system and on the Microsoft Hyper-V failover cluster server as well.

Note: Because of its improved performance and reliability, the recommended protocol is FCP.

Fibre Channel Protocol

Many companies already have a Fibre Channel infrastructure in place, which eases the installation and configuration of FCP on the Microsoft Hyper-V failover cluster server. These companies also already have the knowledge to troubleshoot issues related to FCP. The recommended configuration when using FCP is to have multiple paths configured. In this manner, should any of the host bus adapters (HBAs), Fibre Channel (FC) cables, or FC switches fail, you still have connectivity between the host and the N series storage system, as shown in Figure 1.

Figure 1 Multipathing configuration for Microsoft Hyper-V failover cluster server using FCP
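The FCP license and service can also be enabled from the storage system console instead of FilerView (the FilerView steps are in the note that follows). A minimal sketch, assuming a Data ONTAP 7-mode console; the license code is a placeholder and the exact command set depends on your Data ONTAP release:

   filer> license add <fcp_license_code>    # install the FCP license (placeholder code)
   filer> fcp start                         # start the FCP target service
   filer> fcp status                        # confirm that FCP is running

The iSCSI service described later in this paper is enabled the same way (license add <iscsi_license_code>, then iscsi start).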

Note: To enable the FCP protocol and the FCP adapter on the N series storage system, an FCP license must be installed on the storage system. In FilerView, select Filer → Manage Licenses, scroll down the right pane until you see the FCP license, add the FCP license, and click Apply.

Assuming that all the infrastructure is already in place and working, the first feature to install and enable on the server is the Data ONTAP DSM for Windows MPIO software. This software is part of the IBM System Storage N series solution and requires a license during the installation. DSM for Windows MPIO provides the multipathing driver used by SnapDrive, either for high availability or for load balancing on FCP infrastructures. After installing Data ONTAP DSM for Windows MPIO, install SnapDrive so that the LUNs can be created.

Note: Although the LUNs can be created from the N series storage system, the recommended procedure is to create them from the Microsoft Hyper-V failover cluster host using SnapDrive.

Internet SCSI Protocol

For companies that do not have an FCP infrastructure in place, or for those that want to access storage using their existing Ethernet infrastructure and knowledge, iSCSI can be used as the access protocol for communication between the Microsoft Hyper-V failover cluster server and the N series storage system. Keep in mind that hardware iSCSI initiator adapters offload most of the iSCSI processing from the server's CPU. If there are no iSCSI adapters in your planned environment, the Microsoft iSCSI Software Initiator can be used to provide the same connectivity to the N series storage system. The use of multiple paths is also recommended when using either a hardware-based or a software-based iSCSI solution, as shown in Figure 2.

Figure 2 Multipathing configuration for Microsoft Hyper-V failover cluster server using iSCSI

In the example, there are two interfaces on the server (either iSCSI hardware-based adapters or Gigabit Ethernet cards) that connect to two different LAN switches. For performance and reliability reasons, it is important that these LAN segments and switches are separate from the public ones. The N series storage system also has two of its adapters connected to both switches.

Note: To enable the iSCSI protocol and the iSCSI adapter on the N series storage system, an iSCSI license must be installed on the storage system. In FilerView, select Filer → Manage Licenses, scroll down the right pane until you see the iSCSI license, add the iSCSI license, and click Apply.

Assuming that the infrastructure is already in place and working, the Microsoft iSCSI Software Initiator must be installed on the server. After installing and configuring it, install and configure SnapDrive as well so that the LUNs can be created.

Note: Although the LUNs can be created from the N series storage system, the recommended procedure is to create them from the Microsoft Hyper-V failover cluster host using SnapDrive.

Installing Microsoft iSCSI Software Initiator

Microsoft iSCSI Software Initiator is the software installed on the server that allows SCSI communication over TCP/IP. This software is required if you are using the iSCSI protocol to communicate with the N series storage system but do not have hardware-based iSCSI adapters. Microsoft iSCSI Software Initiator adds layers to the network stack for the iSCSI protocol and the SCSI drivers, so that a regular network interface card (NIC) can be used to communicate with the N series storage system. As a best practice, always use the latest versions of the software and drivers in your environment, unless there are compatibility issues. For information about Microsoft iSCSI Software Initiator and the latest versions, visit the following Web site:

http://www.microsoft.com/windowsserver2003/technologies/storage/iscsi/msfiscsi.mspx

The following steps outline the Microsoft iSCSI Software Initiator installation:

1. In the welcome window (Figure 3), click Next.

Figure 3 Microsoft iSCSI Initiator welcome window

2. In the installation options window (Figure 4), select the following options and click Next:
   - Initiator Service
   - Software Initiator
   - Microsoft MPIO Multipathing Support for iSCSI

Figure 4 Installation options window

3. In the License Agreement window (Figure 5), agree to the terms and click Next.

Figure 5 License Agreement window

4. The installation starts. At the end of the installation, in the finish window (Figure 6), click Finish. If you do not want your server to reboot now, select the Do not restart now check box. Otherwise, your server will reboot immediately.

Figure 6 Finish window

Configuring iSCSI connectivity

To configure iSCSI connectivity:

1. Start Microsoft iSCSI Initiator by using the desktop icon or by selecting Start → All Programs → Microsoft iSCSI Initiator → Microsoft iSCSI Initiator. This brings up the iSCSI Initiator Properties window (Figure 7).

2. Copy the initiator node name from the iSCSI Initiator Properties window (Figure 7).

Figure 7 iSCSI Initiator Properties window
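If you prefer a command prompt, the iscsicli utility that ships with the initiator can be used instead of the GUI to read the initiator node name. A hedged sketch; the banner output varies between initiator versions, and Ctrl+C exits the interactive session:

   C:\> iscsicli
   Microsoft iSCSI Initiator version ...
   [iqn.1991-05.com.microsoft:<servername>]     <- initiator node name to paste into the next step

The iqn value shown here is only the standard Microsoft naming pattern with a placeholder host name; use the value reported on your own server.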

3. In FilerView, select LUNs → Initiator Groups → Add. In the Add Initiator Group window (Figure 8), type a name for the group that you are creating, select iSCSI as the protocol for the group, select Windows as the operating system, and paste in the iSCSI initiator node name that you copied in the previous step.

Figure 8 Add iSCSI Initiator Group window
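The same initiator group can be created from the storage system console instead of FilerView. A minimal sketch, assuming Data ONTAP 7-mode; the group name and the iqn value are placeholders for your own environment:

   filer> igroup create -i -t windows hyperv_iscsi_ig iqn.1991-05.com.microsoft:<node1>
   filer> igroup show hyperv_iscsi_ig        # verify the group and its member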

4. On the Microsoft Hyper-V failover cluster server, click the Discovery tab. The iSCSI Initiator Discovery window opens (Figure 9). Click Add in the Target Portals section.

Figure 9 iSCSI Initiator Discovery tab

5. In the Add Target Portal window (Figure 10), type the IP address or DNS name of the filer and click Advanced.

Figure 10 Add Target Portal window

6. In the Advanced Settings window (Figure 11), select Microsoft iSCSI Initiator as the local adapter and then select one of the IP addresses as the source IP. At this time, CHAP authentication may be configured if it is defined in the N series storage system configuration. For this scenario, we do not use CHAP. Click OK.

Figure 11 Advanced Settings window

7. Repeat steps 4 through 6 to add the additional IP addresses. Click OK.
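Target portal discovery can also be scripted with the iscsicli utility. A hedged sketch; the portal addresses are placeholders for the filer interfaces in your environment, and option syntax differs slightly between initiator versions:

   C:\> iscsicli QAddTargetPortal <filer_ip_1>
   C:\> iscsicli QAddTargetPortal <filer_ip_2>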

8. On the iSCSI Initiator Discovery tab (Figure 12), notice that two paths were created for the same target portal, one for each interface. Depending on your infrastructure configuration, this may change if more target portals are configured on the N series storage system.

Figure 12 iSCSI Initiator Discovery tab
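As a purely optional check, the targets visible through the discovered portals can also be listed from a command prompt before you log on in the next step:

   C:\> iscsicli ListTargets      # prints the target iqn names the initiator has discovered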

9. Click the Targets tab. A list of the targets on the iSCSI storage devices is shown, as seen in Figure 13. Click Log On to configure the paths to the N series storage system.

Figure 13 iSCSI Initiator Targets tab

10. In the Log On to Target panel, select both check boxes, as shown in Figure 14. This configures the path to be persistent and enables multipathing. Click Advanced.

Figure 14 Log On to Target window

11. In the Advanced Settings window (Figure 15), select Microsoft iSCSI Initiator as the local adapter, select the first IP address as the source IP, and select the proper target portal's combination of IP address and port number. Click OK.

Figure 15 Advanced Settings window

12. Repeat steps 9 through 11 for the additional IP address on the Microsoft Hyper-V failover cluster server. Select this additional IP address as the source IP. Click OK.

13. Click OK to close the iSCSI Initiator Properties window.

At this point, the communication between the Microsoft Hyper-V failover cluster server and the N series storage system is established. SnapDrive installation can now take place.

Installing SnapDrive for Windows

Important: Before installing SnapDrive, ensure that you have the latest patches for Microsoft Windows Server 2008 installed, along with .NET Framework 3.0 and the KB950927 hotfix.

To install SnapDrive on the Microsoft Hyper-V failover cluster server:

1. In the SnapDrive installation welcome window (Figure 16), click Next.

Figure 16 SnapDrive installation welcome window

2. In the License Agreement window (Figure 17), accept the terms of the license agreement and click Next.

Figure 17 License Agreement window

3. In the license key window (Figure 18), type the license key for SnapDrive and click Next.

Figure 18 SnapDrive license window

4. In the Customer Information window (Figure 19), type the user name and organization information and click Next.

Figure 19 Customer Information window

5. In the Destination Folder window (Figure 20), confirm or change the destination folder for the installation files and click Next.

Figure 20 SnapDrive Destination Folder window

6. In the SnapDrive Service Credentials window (Figure 21), type the account and password for the user account to be used to start the SnapDrive service and click Next.

Figure 21 SnapDrive Service Credentials window

Note: The SnapDrive service user account should be a member of the Active Directory domain administrators group and a member of the N series local administrators group.

7. In the SnapDrive Web Service Configuration window (Figure 22), type the ports for the Web services connections (leave the defaults unless the ports are already in use) and click Next.

Figure 22 SnapDrive Web Service Configuration window

8. In the SnapDrive Transport Protocol Default Setting window (Figure 23), select the transport protocol (leave the default unless the traffic is blocked) and click Next.

Figure 23 Transport protocol settings window

9. In the SnapDrive DataFabric Manager Configuration window (Figure 24), enable the DataFabric Manager integration if you are going to use this feature and click Next.

Figure 24 DataFabric Manager Configuration window

10. In the SnapDrive installation confirmation window (Figure 25), click Next.

Figure 25 SnapDrive installation confirmation window

11. In the SnapDrive Installation Completed window (Figure 26), click Finish.

Figure 26 SnapDrive Installation Completed window

After the installation is done, no restart is needed for SnapDrive for Windows to work.

Creating disks from SnapDrive

Now that the Data ONTAP DSM for Windows MPIO and SnapDrive are installed, the disk drives can be added using the SnapDrive software. To create the disks from SnapDrive, we assume that (a console sketch of these prerequisites follows this list):

- An aggregate has been created on the N series.
- A volume has been created on the aggregate.
- A Common Internet File System (CIFS) share has been created mapping the path to the volume.
- The Fibre Channel infrastructure is in place and working.
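A minimal sketch of those prerequisites from the Data ONTAP 7-mode console, purely illustrative; the aggregate, volume, and share names, the disk count, and the volume size are placeholders for values from your own sizing:

   filer> aggr create aggr1 16                            # create an aggregate from 16 spare disks
   filer> vol create hyperv_vol aggr1 500g                # create the flexible volume that will hold the LUNs
   filer> cifs shares -add hyperv_share /vol/hyperv_vol   # CIFS share mapping the path to the volume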

To create the LUN from SnapDrive:

1. Access the Computer Management MMC.

2. You can add your N series storage system to the console to manage the filer from the same MMC. Right-click Storage System Management and type your N series storage system IP address (Figure 27).

Figure 27 Add Storage System window

3. Expand Storage, expand SnapDrive, and then expand your server.

4. Right-click Disks and click Create Disk, as shown in Figure 28.

Figure 28 Disk creation using SnapDrive

5. In the Create Disk welcome window (Figure 29), click Next.

Figure 29 Create Disk welcome window

6. In the provide a path and name window (Figure 30), enter the storage system IP address and select the volume previously created. For the name of the new virtual disk, type the name that you want to assign to the LUN that will be created. Click Next.

Figure 30 Provide LUN path and name window

7. In the select a virtual disk type window (Figure 31), select Dedicated if this disk will be accessed by only one server. Select Shared if this disk will be accessed by a cluster service. Click Next.

Figure 31 Select a virtual disk type window

8. In the select virtual disk properties window (Figure 32), select whether you want to assign a drive letter for the disk being created (most system administrators use Q for the quorum disk). The next option affects the size that will be available for the LUN: you must select whether to reserve space on the volume for at least one Snapshot copy of this LUN. Then enter the size that you want for this LUN. Click Next.

Figure 32 Select virtual disk properties window

Note: For the quorum disk, Snapshot copies are not required.

9. In the Storage System Volume Properties window (Figure 33), accept the defaults and click Next.

Figure 33 Storage system volume properties window

10. In the Select Initiators window (Figure 34), select the initiators in the Available Initiators column on the left and click the arrow to move them to the Selected Initiators column on the right. Because we are using FCP, the initiators are listed as the worldwide port names (WWPNs) of the host bus adapters (HBAs) on the system. Click Next.

Figure 34 Select Initiators window
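If you are not sure which WWPNs belong to this host, the storage system console can list the initiators currently logged in to its FC target ports. A hedged check, assuming Data ONTAP 7-mode; the exact subcommand name can vary between Data ONTAP releases:

   filer> fcp show initiators     # list the initiator WWPNs seen by the storage system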

11. In the Select Initiator Group management window (Figure 35), select Automatic if you want SnapDrive to perform igroup management automatically. Select Manual if you want to specify the igroups for the initiators or create new igroups. Click Next.

Figure 35 Select Initiator Group management window

12. In the summary window (Figure 36), verify that all the information is correct and click Finish. This starts the LUN creation process on the filer.

Figure 36 Summary window

Now the disk is available to the server using the drive letter that you selected. However, you must still add the second node's WWPN to the initiator group so that the secondary node can access the disk.
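From the storage system console, adding the second node's WWPN is a single igroup command; a minimal sketch, assuming Data ONTAP 7-mode, with placeholder group name and WWPN (the FilerView equivalent is shown in the steps that follow):

   filer> igroup add viaRPC.<node1_wwpn>.<node1_name> <node2_wwpn>
   filer> igroup show              # confirm that both WWPNs are now members of the group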

After the LUN creation, when you log in to the filer you will notice that an initiator group has been created for the HBA interface on the server. In FilerView, select LUNs → Initiator Groups → Manage for a list of the initiator groups on the filer. A group named viarpc.serverwwpn.servername will have been created for the HBA interface. You can keep this configuration as is, or you can create a new initiator group, add the server WWPNs as members, and map it to the LUN. You will need to add the second node's WWPN to the previously created initiator group. The following steps add a new WWPN to the initiator group.

Add the second cluster node WWPN to the initiator group

To do this:

1. In FilerView, select LUNs → Initiator Groups → Manage.

2. As shown in Figure 37, enter the WWPN for the second node in the Initiators box. Click Apply.

Figure 37 Modify initiator group window

Installing the Microsoft Hyper-V failover cluster

In this section we describe how to install the Microsoft Hyper-V failover cluster.

Install the Hyper-V role

To install the Hyper-V role on both servers:

1. Click Start → Server Manager.

2. In the roles summary area of the Server Manager main window, click Add Roles (Figure 38).

Figure 38 Server Manager window

3. On the Select Server Roles page, click Hyper-V (Figure 39).

Figure 39 Select Server Roles window

4. On the Create Virtual Networks page, if the network adapters are identical on both physical computers, select a physical adapter to create a virtual network that provides access to the physical network. If the network adapters are not identical, do not create a virtual network at this time. You can create the virtual network later by following the instructions in "Create a virtual network" later in this paper.

5. On the Confirm Installation Selections page, click Install.

6. The computer must be restarted to complete the installation. Click Close to finish the wizard, then click Yes to restart the computer.

7. After you restart the computer, log on with the same account you used to install the role. After the Resume Configuration Wizard completes the installation, click Close to finish the wizard.
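On a full installation of Windows Server 2008, the same role can also be added from a command prompt with the Server Manager command-line tool. A hedged sketch, because the role and feature identifiers can differ between builds (on a Server Core installation the rough equivalent is start /w ocsetup Microsoft-Hyper-V):

   C:\> ServerManagerCmd -install Hyper-V -restart

The Failover Clustering feature installed in the next section can be added the same way (ServerManagerCmd -install Failover-Clustering).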

Install the Failover Clustering feature

In this step, you install the Failover Clustering feature on both servers. The servers must be running Windows Server 2008.

1. In the Server Manager window, under the features summary, click Add Features (Figure 40).

Figure 40 Server Manager window

2. In the Add Features Wizard, select Failover Clustering and click Install (Figure 41).

Figure 41 Add Features window

3. Follow the instructions in the wizard to complete the installation of the feature. When the wizard finishes, close it.

4. Repeat the process for the second server.

Create a virtual network

You must perform this step on both physical computers if you did not create the virtual network when you installed the Hyper-V role. This virtual network provides the highly available virtual machine with access to the physical network.

1. Open Hyper-V Manager.

2. From the Actions menu, click Virtual Network Manager.

3. Under Create virtual network, select External.

4. Click Add. The New Virtual Network page appears.

5. Type a name for the new network. Make sure that you use exactly the same name on both servers running Hyper-V.

6. Under Connection Type, click External, select the physical network adapter, and click OK.

Validate the cluster configuration

Before you create the cluster, we strongly recommend that you run a full validation test of your configuration. Validation helps you confirm that the configuration of your servers, network, and storage meets a set of specific requirements for failover clusters.

1. To open the failover cluster snap-in, click Start → Administrative Tools → Failover Cluster Management. (If the User Account Control dialog box appears, confirm that the action that it displays is what you want, and then click Continue.)

2. Confirm that Failover Cluster Management is selected and then, in the center pane under Management, click Validate a Configuration.

3. Follow the instructions in the wizard to specify the two servers. Run all tests to fully validate the cluster before creating it.

4. The Summary page appears after the tests run. To view help topics that will help you interpret the results, click More about cluster validation tests.

5. While still on the Summary page, click View Report and read the test results. Or, to view the results of the tests after you close the wizard, see SystemRoot\Cluster\Reports\Validation Report date and time.htm, where SystemRoot is the folder in which the operating system is installed (for example, C:\Windows).

6. As necessary, make changes to the configuration and rerun the tests.

Create the cluster

To do this:

1. To open the failover cluster snap-in, click Start → Administrative Tools → Failover Cluster Management. (If the User Account Control dialog box appears, confirm that the action that it displays is what you want, and then click Continue.)

2. Confirm that Failover Cluster Management is selected and then, in the center pane under Management, click Create a cluster. Follow the instructions in the wizard to specify:
   - The servers to include in the cluster
   - The name of the cluster
   - Any IP address information that is not automatically supplied by your Dynamic Host Configuration Protocol (DHCP) settings

   After the wizard runs and the Summary page appears, click View Report to view a report of the tasks that the wizard performed.

Create a virtual machine and reconfigure the automatic start action

In this step, you create a virtual machine and reconfigure the automatic action that controls the virtual machine's behavior when the Hyper-V Virtual Machine Management service starts. You must choose the shared storage as the location in which to store the virtual machine and the virtual hard disk. Otherwise, you will not be able to make the virtual machine highly available. To make the shared storage available to the virtual machine, you must create the virtual machine on the physical computer that is the node that owns the storage.

To create a virtual machine:

1. Open Hyper-V Manager: click Start → Administrative Tools → Hyper-V Manager (Figure 42).

Figure 42 Hyper-V Manager window

2. If you are not already connected to the server that owns the shared storage, connect to that server.

3. From the Actions pane (Figure 43), click New → Virtual Machine. You can also use an existing .vhd file if you already have a virtual machine.

4. In the New Virtual Machine Wizard, click Next.

Figure 43 New Virtual Machine Wizard window

5. On the Specify Name and Location page (Figure 44), specify a name for the virtual machine, such as VM. Click Store the virtual machine in a different location, and then type the full path or click Browse and navigate to the shared storage.

Figure 44 Specify Name and Location window

6. On the Assign Memory page (Figure 45), specify the amount of memory required for the operating system that will run on this virtual machine.

Figure 45 Assign Memory window

7. On the Configure Networking page (Figure 46), connect the network adapter to the virtual network that is associated with the physical network adapter.

Figure 46 Configure Networking window

8. On the Connect Virtual Hard Disk page (Figure 47), click Create a virtual hard disk. If you want to change the name, type a new name for the virtual hard disk. This is where you would select your existing .vhd file if you want to use an existing virtual machine. Click Next.

Figure 47 Connect Virtual Hard Disk window

9. On the Installation Options page (Figure 48), click Install an operating system later, then click Finish.

Figure 48 Installation Options window

Note: Do not start the virtual machine at this point. The virtual machine must be turned off so that you can make it highly available.

Reconfigure the automatic start action for the virtual machine

Automatic actions let you automatically manage the state of the virtual machine when the Hyper-V Virtual Machine Management service starts or stops. However, when you make a virtual machine highly available, the state of the virtual machine should be controlled through the cluster service. In this step, you reconfigure the automatic start action for the virtual machine.

Note: Do not intentionally shut down a node while a virtual machine is running on it. If you need to shut down the node, take the virtual machine offline first, and then shut down the node.

1. In Hyper-V Manager, under Virtual Machines, select the virtual machine that you just created and then click Settings (Figure 49).

2. In the left pane, click Automatic Start Action.

3. Under What do you want this virtual machine to do when the physical computer starts?, click Nothing and then click Apply.

Figure 49 Virtual machine settings window

Make the virtual machine highly available

To do this:

1. To open the failover cluster snap-in, click Start → Administrative Tools → Failover Cluster Management. (If the User Account Control dialog box appears, confirm that the action that it displays is what you want, and then click Continue.)

2. Right-click Services and Applications and click Configure a Service or Application (Figure 50).

Figure 50 Failover Cluster Management window

3. The High Availability Wizard opens. Click Next.

Figure 51 Before You Begin window

4. On the Select Service or Application page, select Virtual Machine from the list and then click Next.

Figure 52 Select Service or Application window

5. On the Select Virtual Machine page, check the name of the virtual machine that you want to make highly available and then click Next.

Figure 53 Select Virtual Machine window

6. Confirm your selection and then click Next again.

Figure 54 Confirmation window

7. The wizard configures the virtual machine for high availability and provides a summary. To see the details of the configuration, click View Report. To close the wizard, click Finish.

Figure 55 Summary window

8. To verify that the virtual machine is now highly available, check either of two places in the console tree:
   - Expand Services and Applications. The virtual machine should be listed under Services and Applications.
   - Expand Nodes and select the node on which you created the virtual machine. Under Services and Applications in the Results pane (the center pane), the virtual machine should be listed.

9. To bring the virtual machine online, under Services and Applications, right-click the virtual machine and then click Bring this service or application online. This action brings the virtual machine online and starts it.

After the operating system is set up, you are ready to install the integration services. From the Action menu of Virtual Machine Connection, click Insert Integration Services Setup Disk. If autorun does not start the installation automatically, you can start it manually. From a command prompt, type:

%windir%\support\amd64\setup.exe

Test a planned failover

To test a planned failover, use Failover Cluster Management to move this service or application to another node:

1. From the console tree, select Services and Applications.

2. Right-click the virtual machine, point to Move this service or application to another node, and click the name of the other node.

3. You can verify that the move succeeded by inspecting the details of each node.
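The same planned move, and the cluster-service stop used for the unplanned test in the next section, can also be driven from a command prompt. A hedged sketch using the legacy cluster.exe and net.exe tools; the group name is whatever the High Availability Wizard created for the virtual machine, and the exact cluster.exe switches vary between Windows Server releases:

   C:\> cluster group "<VM group name>" /move:<other_node>    # planned failover of the VM group
   C:\> net stop clussvc                                      # on the owning node: simulate an unplanned failover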

Test an unplanned failover

To do this:

1. To open the failover cluster snap-in, click Start → Administrative Tools → Failover Cluster Management. (If the User Account Control dialog box appears, confirm that the action that it displays is what you want, and then click Continue.)

2. From the console tree, select Nodes and then right-click the node that runs the virtual machine.

3. Select More Actions and then click Stop Cluster Service.

4. Click Stop the cluster service to confirm the action.

5. The virtual machine is moved to the other node.

Remove a virtual machine from a cluster

When you want to remove a virtual machine from a cluster, the procedure that you must use varies depending on whether you want to keep the virtual machine. This step illustrates both scenarios.

Remove a virtual machine from a cluster and retain the virtual machine

To do this:

1. Use the Failover Cluster Management snap-in to take the virtual machine offline. Under Services and Applications, right-click the VM resource name and then click Take this resource offline.

2. In Hyper-V Manager, under Actions, click Delete.

3. Switch to the Failover Cluster Management snap-in. Expand Services and Applications, right-click the VM resource name, and then click Delete. This action removes the virtual machine from the cluster.

Important: The following steps show you how to delete a virtual machine and its files. Perform these steps only if you do not want to keep the virtual machine.

Remove a virtual machine from a cluster and delete the virtual machine

To do this:

1. Use the Failover Cluster Management snap-in to take the virtual machine offline. Under Services and Applications, right-click the VM resource name and then click Take this resource offline.

2. In Hyper-V Manager, under Actions, click Delete.

3. Switch to the Failover Cluster Management snap-in. Expand Services and Applications, right-click the VM resource name, and then click Delete. This action removes the virtual machine from the cluster.

4. Manually delete the virtual machine and virtual hard disk from the shared storage.

The team that wrote this IBM Redpaper publication

This paper was produced by a team of specialists from around the world working at the International Technical Support Organization, Austin Center.

Alex Osuna is a Project Leader at the International Technical Support Organization, Tucson Center. He writes extensively and teaches IBM classes worldwide on all areas of storage. Before joining the ITSO three years ago, Alex worked as a Principal Systems Engineer for the Tivoli Western Region. Alex has over 30 years of experience in the IT industry, focused mainly on hardware and software storage. He holds certifications from IBM, Microsoft, and Red Hat.

William Luiz de Souza is a System Management Engineer on Brazil's Wintel Global Resources Team, Brazil SDC. He provides third-level support for severity one incidents and infrastructure projects. Before joining the BR Wintel GR Team two years ago, he worked as the Wintel primary for Brazil's USF. William has more than eight years of experience in the IT segment, focused on Microsoft technologies. He holds certifications from IBM, Microsoft, Citrix, and ITIL.


Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

© Copyright International Business Machines Corporation 2009. All rights reserved.

Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

This document REDP-4497-00 was created or updated on February 5, 2009.

Send us your comments in one of the following ways:
- Use the online Contact us review Redbooks form found at: ibm.com/redbooks
- Send your comments in an email to: redbooks@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD, Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400 U.S.A.

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: IBM, Redbooks (logo), System Storage, Tivoli.

The following terms are trademarks of other companies:

ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office.

SnapDrive, FilerView, DataFabric, Data ONTAP, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and other countries.

Hyper-V, Microsoft, Windows Server, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Microsoft product screen shots reprinted with permission from Microsoft Corporation.

Other company, product, or service names may be trademarks or service marks of others.