One Stop Virtualization Shop
StarWind Virtual SAN: Compute and Storage Separated with Windows Server 2012 R2
FEBRUARY 2018
TECHNICAL PAPER
Trademarks
StarWind, StarWind Software, and the StarWind and StarWind Software logos are registered trademarks of StarWind Software. StarWind LSFS is a trademark of StarWind Software which may be registered in some jurisdictions. All other trademarks are owned by their respective owners.

Changes
The material in this document is for information only and is subject to change without notice. While reasonable efforts have been made in the preparation of this document to assure its accuracy, StarWind Software assumes no liability resulting from errors or omissions in this document, or from the use of the information contained herein. StarWind Software reserves the right to make changes in the product design without reservation and without notification to its users.

Technical Support and Services
If you have questions about installing or using this software, check this and other documents first - you will find answers to most of your questions on the Technical Papers webpage or in StarWind Forum. If you need further assistance, please contact us.

About StarWind
StarWind is a pioneer in virtualization and a company that participated in the development of this technology from its earliest days. Now the company is among the leading vendors of software and hardware hyper-converged solutions. The company's core product is the years-proven StarWind Virtual SAN, which allows SMB and ROBO to benefit from cost-efficient hyperconverged IT infrastructure. Having earned a reputation for reliability, StarWind created a hardware product line and is actively tapping into the hyperconverged and storage appliance market. In 2016, Gartner named StarWind a Cool Vendor for Cluster Platforms following the success and popularity of StarWind HyperConverged Appliance. StarWind partners with world-known companies: Microsoft, VMware, Veeam, Intel, Dell, Mellanox, Citrix, Western Digital, etc.

Copyright 2009-2018 StarWind Software Inc.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written consent of StarWind Software.
Contents
Introduction
Pre-Configuring the Servers
Enabling Multipath Support
Downloading, Installing, and Registering the Software
Configuring Shared Storage
Discovering Target Portals
Connecting Targets
Creating a Cluster
Adding Cluster Shared Volumes
Conclusion
Introduction
StarWind Virtual SAN supports both hyper-converged and compute-and-storage-separated architectures. Running the compute and storage layers separately makes it possible to scale compute and storage resources independently.
This technical paper provides detailed step-by-step guidance on configuring a 2-node Hyper-V Failover Cluster that uses StarWind Virtual SAN to turn the storage resources of the separated servers into fault-tolerant, fully redundant shared storage for Hyper-V environments. In a Failover Cluster, if one of the cluster nodes fails, the other node automatically takes over its resources and continues serving the applications, so the workflow remains uninterrupted. Adding StarWind disks to CSVs provides efficient use of storage, simplifies its management, enhances availability, and increases resilience. Once this is done, you can start creating highly available virtual machines on them.
This guide is intended for experienced Windows system administrators and IT professionals who would like to configure a Hyper-V cluster using StarWind Virtual SAN to convert the clustered storage space into a fault-tolerant shared storage resource. It also shows how to connect the StarWind HA devices to the Microsoft iSCSI Initiator and configure the StarWind shared storage as Cluster Shared Volumes.
A full set of up-to-date technical documentation can always be found here, or by pressing the Help button in the StarWind Management Console. For any technical inquiries, please visit our online community, the Frequently Asked Questions page, or use the support form to contact our technical support department.
Pre-Configuring the Servers
The image below depicts the network architecture of the configuration described in this guide:
NOTE: Additional network connections may be required depending on the cluster setup and the applications it is running.
1. Make sure that the cluster nodes are added to the domain.
2. Install the Failover Clustering and Multipath I/O features, as well as the Hyper-V role, on both cluster nodes. This can be done through Server Manager (the Add Roles and Features menu item).
3. Configure the network interfaces on each node so that the Synchronization and iSCSI/StarWind heartbeat interfaces are in different subnets and connected as shown in the network diagram above. In this document, the 172.16.10.x and 172.16.20.x subnets are used for iSCSI/StarWind heartbeat traffic, while the 172.16.30.x and 172.16.40.x subnets are used for the Synchronization traffic.
4. To allow iSCSI initiators to discover all StarWind Virtual SAN interfaces, the StarWind configuration file (StarWind.cfg) should be changed after stopping the StarWind Service on the node where it is edited. Locate the StarWind Virtual SAN configuration file (the default path is C:\Program Files\StarWind Software\StarWind\StarWind.cfg) and open it with WordPad as Administrator. Find the string <iScsiDiscoveryListInterfaces value="0"/> and change the value from 0 to 1 (it should look as follows: <iScsiDiscoveryListInterfaces value="1"/>). Save the changes and exit WordPad. Once StarWind.cfg has been changed and saved, the StarWind Service can be started again.
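The feature installation and the StarWind.cfg edit in steps 2 and 4 can also be scripted. The sketch below uses standard Windows Server 2012 R2 PowerShell cmdlets; the service name StarWindService and the default installation path are assumptions - verify both on your system before running.

```powershell
# Step 2: install Failover Clustering, MPIO, and Hyper-V (run on each cluster node).
Install-WindowsFeature -Name Failover-Clustering, Multipath-IO, Hyper-V `
    -IncludeManagementTools -Restart

# Step 4: enable interface discovery in StarWind.cfg (run on each StarWind node).
# Assumes the default install path and service name - adjust if yours differ.
Stop-Service -Name StarWindService
$cfg = 'C:\Program Files\StarWind Software\StarWind\StarWind.cfg'
(Get-Content $cfg) -replace 'iScsiDiscoveryListInterfaces value="0"',
                            'iScsiDiscoveryListInterfaces value="1"' |
    Set-Content $cfg
Start-Service -Name StarWindService
```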
Enabling Multipath Support
5. On the cluster nodes, open the MPIO manager: Start -> Administrative Tools -> MPIO.
6. Go to the Discover Multi-Paths tab.
7. Tick the Add support for iSCSI devices checkbox and click Add.
8. When prompted to restart the server, click Yes to proceed.
NOTE: Repeat the procedure on the second cluster node.
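Steps 5-8 have a short PowerShell equivalent; a sketch, assuming the Multipath I/O feature from the previous section is already installed:

```powershell
# Claim all iSCSI-attached disks for MPIO (equivalent to ticking
# "Add support for iSCSI devices" in the MPIO manager), then reboot.
Enable-MSDSMAutomaticClaim -BusType iSCSI
Restart-Computer
```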
Downloading, Installing, and Registering the Software
9. Download the StarWind setup executable file from our website by following the link below:
https://www.starwind.com/registration-starwind-virtual-san
NOTE: The setup file is the same for x86 and x64 systems, as well as for all Virtual SAN deployment scenarios.
10. Launch the downloaded setup file on the server where you wish to install StarWind Virtual SAN or one of its components. The setup wizard appears.
11. Read and accept the License Agreement. Click Next to continue.
12. Read the information about new features and improvements carefully. The red text indicates warnings for users who are updating existing software installations. Click Next to continue.
13. Click Browse to modify the installation path if necessary. Click Next to continue.
14. Select the following components for the minimum setup:
StarWind Virtual SAN Service. The StarWind Service is the core of the software. It can create iSCSI targets as well as share virtual and physical devices. The service can be managed from StarWind Management Console on any Windows computer or VSA on the network. Alternatively, the service can be managed from the StarWind Web Console, deployed separately.
StarWind Management Console. The Management Console is the Graphical User Interface (GUI) part of the software that controls and monitors all storage-related operations (e.g., it allows users to create targets and devices on StarWind Virtual SAN servers connected to the network).
Click Next to continue.
15. Specify the Start Menu folder. Click Next to continue.
16. Enable the checkbox if you want to create a desktop icon. Click Next to continue.
17. You will be prompted to request a time-limited, fully functional evaluation key or a free version key, or to use the commercial license key sent to you with the purchase of StarWind Virtual SAN. Select the appropriate option. Click Next to continue.
18. Click Browse to locate the license file. Click Next to continue.
19. Review the licensing information. Click Next to apply the license key.
20. Verify the installation settings. Click Back to make any changes or Install to continue.
21. Select the appropriate checkbox to launch the StarWind Management Console immediately after the setup wizard is closed. Click Finish to close the wizard.
22. Repeat the installation steps on the partner node.
Configuring Shared Storage
23. Launch the StarWind Management Console by double-clicking the StarWind tray icon.
NOTE: StarWind Management Console cannot be installed on a GUI-less OS. You can install the Console on any GUI-enabled Windows edition, including desktop versions of Windows.
If StarWind Service and Management Console are installed on the same server, the Management Console automatically adds the local StarWind instance to the Console tree after the first launch. The Management Console then automatically connects to the StarWind Service using the default credentials. To add remote StarWind servers to the Console, use the Add Server button on the control panel.
24. StarWind Management Console asks you to specify a default storage pool on the server you connect to for the first time. Configure the storage pool to use one of the volumes you have prepared earlier. All devices created through the Add Device wizard are stored in the configured storage pool. Should you decide to use an alternative storage path for your StarWind virtual disks, use the Add Device (advanced) menu item.
Press the Yes button to configure the storage pool. If you need to change the storage pool destination, press Choose path and point to the necessary disk in the browser.
NOTE: Each of the arrays that will be used by StarWind Virtual SAN to store virtual disk images should meet the following requirements: initialized as GPT, have a single NTFS-formatted partition, and have a drive letter assigned.
25. Select the StarWind server where you intend to create the device.
26. Press the Add Device (advanced) button on the toolbar.
27. The Add Device Wizard appears. Select Hard Disk Device and click Next.
28. Select Virtual disk and click Next.
29. Specify the virtual disk name, location, and size, and click Next. The screenshots below show how to prepare an HA device for the Witness drive; devices for Cluster Shared Volumes (CSV) should be created in the same way.
30. Specify the virtual disk options. Click Next.
31. Define the caching policy, specify the cache size, and click Next.
NOTE: It is recommended to assign 1 GB of L1 cache in Write-Back mode per 1 TB of storage capacity.
32. Define the Flash Cache Parameters policy and size if necessary. Choose an SSD location in the wizard. Click Next to continue.
NOTE: The recommended size of the L2 cache is 10% of the initial StarWind device size.
33. Specify the target parameters. Select the Target Name checkbox to enter a custom name for the target. Otherwise, the name is generated automatically based on the target alias. Click Next to continue.
34. Click Create to add a new device and attach it to the target. Then click Close to close the wizard.
35. Right-click the servers field and select Add Server. Add a new StarWind server which will be used as the second HA node. Click OK, then click Connect to continue.
36. Right-click the recently created device and select Replication Manager. The Replication Manager window appears. Press the Add Replica button.
37. Select Synchronous two-way replication and click Next to proceed.
38. Specify the partner server IP address. The default StarWind management port is 3261. If you have configured a different port, type it in the Port number field. Click Next.
39. Select Heartbeat as the Failover Strategy and click Next.
40. Choose Create new Partner Device and click Next.
41. Specify the partner device location if necessary. You can also modify the target name of the device. Click Next.
42. On this screen, you can specify the synchronization and heartbeat channels for the HA device. You can also modify the ALUA settings. Click Change network settings.
43. Specify the interfaces for Synchronization and Heartbeat. Click OK. Then click Next.
NOTE: It is recommended to configure the Heartbeat and iSCSI channels on the same interfaces to avoid the split-brain issue.
44. Select Synchronize from existing Device as the partner device initialization mode and click Next.
45. Press the Create Replica button and click Close.
46. The added devices appear in the StarWind Management Console. Repeat the steps above for the remaining virtual disks that will be used as Cluster Shared Volumes. Once all devices are created, the Management Console should look as in the image below:
Discovering Target Portals
In this chapter, we will discover target portals from each StarWind node on each cluster node.
47. Launch Microsoft iSCSI Initiator on Cluster Node 1: Start > Administrative Tools > iSCSI Initiator, or run iscsicpl from the command-line interface. The iSCSI Initiator Properties window appears.
48. Navigate to the Discovery tab. Click the Discover Portal button. In the Discover Target Portal dialog, enter the iSCSI IP address of the first StarWind node (172.16.10.88). Click the Advanced button.
49. Select Microsoft iSCSI Initiator as the Local adapter and select the Cluster Node 1 initiator IP address from the same subnet. Click OK twice to add the portal.
50. Click the Discover Portal button once again.
51. In the Discover Target Portal dialog, enter another iSCSI IP address of the first StarWind node (172.16.30.88) and click the Advanced button.
52. Select Microsoft iSCSI Initiator as the Local adapter and select the Cluster Node 1 initiator IP address from the same subnet. Click OK twice to add the portal.
53. The target portals are now added from the first StarWind node.
54. To discover target portals from the second StarWind node, click the Discover Portal button one more time and enter the iSCSI IP address of the second StarWind node (172.16.10.99). Click the Advanced button.
55. Select Microsoft iSCSI Initiator as the Local adapter and select the Cluster Node 1 initiator IP address from the same subnet. Click OK twice to add the portal.
56. Click the Discover Portal button one more time. In the Discover Target Portal dialog, enter another iSCSI IP address of the second StarWind node (172.16.30.99). Click the Advanced button.
57. Select Microsoft iSCSI Initiator as the Local adapter and select the Cluster Node 1 initiator IP address from the same subnet. Click OK twice to add the portal.
58. All target portals are now successfully added on Cluster Node 1.
59. Perform the steps in this chapter on Cluster Node 2. All target portals added on Cluster Node 2 should look as in the picture below.
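Steps 47-59 can also be performed with the built-in iSCSI PowerShell module. The sketch below uses the portal addresses from this chapter; the initiator addresses (172.16.10.10 and 172.16.30.10) are placeholders for Cluster Node 1's own IP in each subnet - substitute the real values.

```powershell
# Discover both portals of each StarWind node from the matching local interface.
# Replace the -InitiatorPortalAddress values with this node's real IPs.
New-IscsiTargetPortal -TargetPortalAddress 172.16.10.88 -InitiatorPortalAddress 172.16.10.10
New-IscsiTargetPortal -TargetPortalAddress 172.16.30.88 -InitiatorPortalAddress 172.16.30.10
New-IscsiTargetPortal -TargetPortalAddress 172.16.10.99 -InitiatorPortalAddress 172.16.10.10
New-IscsiTargetPortal -TargetPortalAddress 172.16.30.99 -InitiatorPortalAddress 172.16.30.10
Get-IscsiTargetPortal   # verify that all four portals are listed
```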
Connecting Targets
60. Launch Microsoft iSCSI Initiator on Cluster Node 1 and click the Targets tab. The previously created targets should be listed in the Discovered targets section.
NOTE: If the created targets are not listed, check the firewall settings of the StarWind server as well as the list of networks served by the StarWind server (go to StarWind Management Console -> Configuration -> Network).
61. Select a target discovered from the first StarWind node and click Connect.
62. Enable the checkboxes as shown in the image below and click Advanced.
63. Select Microsoft iSCSI Initiator in the Local adapter field.
64. In the Target portal IP field, select the IP address of the first StarWind node and the initiator IP address from the same subnet. Click OK twice to connect the target.
65. To connect the same target via another subnet, select it one more time and click Connect.
66. Enable the checkboxes as shown in the image below and click Advanced.
67. Select Microsoft iSCSI Initiator in the Local adapter field.
68. In the Target portal IP field, select another IP address of the first StarWind node and the initiator IP address from the same subnet. Click OK twice to connect the target.
69. Select the partner target discovered from the second StarWind node and click Connect.
70. Enable the checkboxes as shown in the image below and click Advanced.
71. Select Microsoft iSCSI Initiator in the Local adapter field.
72. In the Target portal IP field, select the IP address of the second StarWind node and the initiator IP address from the same subnet. Click OK twice to connect the target.
73. To connect the same target via another subnet, select it one more time and click Connect.
74. Enable the checkboxes as shown in the image below and click Advanced.
75. In the Target portal IP field, select another IP address of the second StarWind node and the initiator IP address from the same subnet. Click OK twice to connect the target.
76. Repeat the above actions for all HA device targets. After that, repeat the steps described in this section on Cluster Node 2, specifying the corresponding IP addresses. The result should look as in the picture below.
77. Initialize the disks and create partitions on them using the Disk Management snap-in. To create the cluster, the disk devices must be initialized and formatted on both nodes.
NOTE: It is recommended to initialize the disks as GPT.
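The connection and initialization steps above can be sketched in PowerShell as follows. This is a simplified outline, not the exact GUI sequence: it connects each discovered target with MPIO and persistence enabled, then initializes any raw disks as GPT and formats them NTFS (per the NOTE above). To bind each session to a specific path, Connect-IscsiTarget also accepts -TargetPortalAddress and -InitiatorPortalAddress.

```powershell
# Connect every discovered target with MPIO enabled and make the
# sessions persist across reboots.
Get-IscsiTarget | Where-Object { -not $_.IsConnected } | ForEach-Object {
    Connect-IscsiTarget -NodeAddress $_.NodeAddress `
        -IsMultipathEnabled $true -IsPersistent $true
}

# Step 77: initialize, partition, and format the new StarWind disks.
Get-Disk | Where-Object PartitionStyle -eq 'RAW' | ForEach-Object {
    Initialize-Disk -Number $_.Number -PartitionStyle GPT
    New-Partition -DiskNumber $_.Number -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -Confirm:$false
}
```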
Creating a Cluster
NOTE: To avoid issues during cluster validation, it is recommended to install the latest Microsoft updates on each node.
78. Open Server Manager. Select the Failover Cluster Manager item from the Tools menu.
79. Click the Create Cluster item in the Actions section of Failover Cluster Manager. Specify the servers to be added to the cluster. Click Next to continue.
80. Verify that your servers are suitable for building a cluster: select Yes and click Next.
81. Specify a cluster name.
NOTE: If the cluster servers get IP addresses over DHCP, the cluster also gets its IP address over DHCP. If the IP addresses are set statically, you need to set the cluster IP address manually.
Click Next to continue.
82. Make sure that all of the settings are correct. Click Previous to make any changes, or Next to continue.
83. The process of cluster creation starts. Upon completion, the system displays a report with detailed information. Click Finish to close the wizard.
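Steps 78-83 map directly to the FailoverClusters PowerShell module; a sketch with placeholder node names and a placeholder static cluster IP:

```powershell
# Validate the configuration, then create the cluster. "Node1"/"Node2"
# and the static address are examples - substitute your own values.
Test-Cluster -Node Node1, Node2
New-Cluster -Name SW-Cluster -Node Node1, Node2 -StaticAddress 192.168.0.100
```

Omit -StaticAddress when the nodes obtain their addresses over DHCP, matching the NOTE in step 81.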
Adding Cluster Shared Volumes
Follow these steps to add the Cluster Shared Volumes (CSV) that are necessary for working with Hyper-V virtual machines:
84. Open Failover Cluster Manager.
85. Go to Cluster -> Storage -> Disks.
86. Click Add Disk in the Actions panel, choose the disks from the list, and click OK.
87. To configure a Witness drive, right-click the cluster, select More Actions -> Configure Cluster Quorum Settings, follow the wizard, and use the default quorum configuration.
88. Right-click the required disk and select Add to Cluster Shared Volumes.
Once the disks are added to the Cluster Shared Volumes list, you can start creating highly available virtual machines on them.
NOTE: To avoid unnecessary CSV overhead, configure each CSV to be owned by one cluster node. This node should also be the preferred owner of the VMs it runs.
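Steps 84-88 can likewise be scripted. The disk resource names ("Cluster Disk 1", "Cluster Disk 2") and the node name below are placeholders - run Get-ClusterResource first to see the real names in your cluster.

```powershell
# Add all available cluster disks, configure a disk witness, and add a CSV.
Get-ClusterAvailableDisk | Add-ClusterDisk
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 2"      # Witness drive
Add-ClusterSharedVolume -Name "Cluster Disk 1"
Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node Node1  # move CSV ownership
```

Move-ClusterSharedVolume only moves the current ownership; setting the preferred owner per the NOTE above can be done in Failover Cluster Manager.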
Conclusion
The cluster increases the availability of the services and applications running on it. The CSV feature simplifies storage management by allowing multiple VMs to be accessed through a common shared disk. Resilience is provided by creating multiple connections between the StarWind nodes and the shared disk. Thus, if one of the nodes goes down, the other one takes over production operations.
Contacts
US Headquarters: 1-617-449-7717, 1-617-507-5845, 1-866-790-2646
EMEA and APAC: +44 203 769 1857 (UK), +34 629 03 07 17 (Spain and Portugal)
Customer Support Portal: https://www.starwind.com/support
Support Forum: https://www.starwind.com/forums
Sales: sales@starwind.com
General Information: info@starwind.com
StarWind Software, Inc. 35 Village Rd., Suite 100, Middleton, MA 01949 USA www.starwind.com
2017, StarWind Software Inc. All rights reserved.