One Stop Virtualization Shop

StarWind Virtual SAN: Compute and Storage Separated with Windows Server 2016

February 2018 | TECHNICAL PAPER
Trademarks

StarWind, StarWind Software, and the StarWind and StarWind Software logos are registered trademarks of StarWind Software. StarWind LSFS is a trademark of StarWind Software which may be registered in some jurisdictions. All other trademarks are owned by their respective owners.

Changes

The material in this document is for information only and is subject to change without notice. While reasonable efforts have been made in the preparation of this document to assure its accuracy, StarWind Software assumes no liability resulting from errors or omissions in this document, or from the use of the information contained herein. StarWind Software reserves the right to make changes in the product design without reservation and without notification to its users.

Technical Support and Services

If you have questions about installing or using this software, check this and other documents first - you will find answers to most of your questions on the Technical Papers webpage or in StarWind Forum. If you need further assistance, please contact us.

About StarWind

StarWind is a pioneer in virtualization and a company that participated in the development of this technology from its earliest days. Now the company is among the leading vendors of software and hardware hyper-converged solutions. The company's core product is the years-proven StarWind Virtual SAN, which allows SMB and ROBO to benefit from cost-efficient hyperconverged IT infrastructure. Having earned a reputation for reliability, StarWind created a hardware product line and is actively tapping into the hyperconverged and storage appliance market. In 2016, Gartner named StarWind a Cool Vendor for Cluster Platforms following the success and popularity of StarWind HyperConverged Appliance. StarWind partners with world-known companies: Microsoft, VMware, Veeam, Intel, Dell, Mellanox, Citrix, Western Digital, etc.

Copyright 2009-2018 StarWind Software Inc.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written consent of StarWind Software.
Contents

Introduction
Pre-Configuring the Servers
Enabling Multipath Support
Downloading, Installing, and Registering the Software
Configuring Shared Storage
Discovering Target Portals
Connecting Targets
Creating a Cluster
Adding Cluster Shared Volumes
Conclusion
Introduction

StarWind Virtual SAN supports both architectures: hyper-converged, and compute and storage separated. Running the compute and storage layers separately makes it possible to scale compute and storage resources independently.

This technical paper provides detailed step-by-step guidance on configuring a 2-node Hyper-V Failover Cluster using StarWind Virtual SAN to turn the storage resources of the separated servers into fault-tolerant and fully redundant shared storage for Hyper-V environments. The Failover Cluster configuration assumes that if one of the cluster nodes fails, the other node automatically takes over its resources and continues serving the applications, so the workflow remains uninterrupted and secure. Adding StarWind disks to CSVs provides efficient use of storage and simplifies its management, as well as enhances availability and increases resilience. Once this is done, you can start creating highly available virtual machines on them.

This guide is intended for experienced Windows system administrators and IT professionals who would like to configure a Hyper-V cluster using StarWind Virtual SAN to convert the clustered storage space into a fault-tolerant shared storage resource. It also highlights how to connect the StarWind HA devices to the Microsoft iSCSI initiator and configure the StarWind shared storage as Cluster Shared Volumes.

A full set of up-to-date technical documentation can always be found here, or by pressing the Help button in the StarWind Management Console. For any technical inquiries, please visit our online community, Frequently Asked Questions page, or use the support form to contact our technical support department.
Pre-Configuring the Servers

The diagram below depicts the network architecture of the configuration described in this guide.

NOTE: Additional network connections may be required depending on the cluster setup and applications that are running.
1. Make sure that the cluster nodes are added to the domain.

2. Install the Failover Clustering and Multipath I/O features, as well as the Hyper-V role, on both cluster nodes. This can be done through Server Manager (the Add Roles and Features menu item).

3. Configure the network interfaces on each node to make sure that the Synchronization and iSCSI/StarWind Heartbeat interfaces are in different subnets and connected as shown in the network diagram above. In this document, the 172.16.10.x and 172.16.20.x subnets are used for iSCSI/StarWind Heartbeat traffic, while the 172.16.30.x and 172.16.40.x subnets are used for Synchronization traffic.

4. To allow the iSCSI Initiator to discover all StarWind Virtual SAN interfaces, the StarWind configuration file (StarWind.cfg) should be changed after stopping the StarWind service. Locate the StarWind Virtual SAN configuration file (the default path is C:\Program Files\StarWind Software\StarWind\StarWind.cfg) and open it with WordPad as Administrator. Find the string <iScsiDiscoveryListInterfaces value="0"/> and change the value from 0 to 1 (it should look as follows: <iScsiDiscoveryListInterfaces value="1"/>). Save the changes and exit WordPad. Once StarWind.cfg has been changed and saved, the StarWind service can be started. Please note that this tip applies only to Windows-based compute nodes.
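Steps 2 and 4 can also be scripted. The following PowerShell sketch assumes the default installation path and the default service name StarWindService; verify both on your system before running it:

```powershell
# Step 2: install the required features and role on each cluster node
# (the Hyper-V install triggers a restart).
Install-WindowsFeature -Name Failover-Clustering, Multipath-IO -IncludeManagementTools
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# Step 4: with the StarWind service stopped, enable iSCSI discovery
# on all interfaces in StarWind.cfg, then start the service again.
# NOTE: the service name and the exact spelling of the cfg parameter
# are assumptions -- check them in your installation first.
Stop-Service -Name StarWindService
$cfg = 'C:\Program Files\StarWind Software\StarWind\StarWind.cfg'
(Get-Content $cfg) -replace 'iScsiDiscoveryListInterfaces value="0"',
                            'iScsiDiscoveryListInterfaces value="1"' |
    Set-Content $cfg
Start-Service -Name StarWindService
```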
Enabling Multipath Support

1. On the compute nodes, open the MPIO manager: Start -> Administrative Tools -> MPIO.

2. Go to the Discover Multi-Paths tab.

3. Tick the Add support for iSCSI devices checkbox and click Add.

4. When prompted to restart the server, click Yes to proceed.

NOTE: Repeat the procedure on the second cluster node.
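The same result can be achieved from PowerShell; a minimal sketch (the Multipath I/O feature must already be installed):

```powershell
# Claim all iSCSI-attached devices for MPIO (equivalent to ticking
# "Add support for iSCSI devices"), then restart to apply the change.
Enable-MSDSMAutomaticClaim -BusType iSCSI
Restart-Computer
```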
Downloading, Installing, and Registering the Software

5. Download the StarWind setup executable file from the StarWind website by following the link below:
https://www.starwind.com/registration-starwind-virtual-san

NOTE: The setup file is the same for x86 and x64 systems, as well as for all Virtual SAN deployment scenarios.

6. Launch the downloaded setup file on the server where StarWind Virtual SAN or one of its components needs to be installed. The setup wizard appears.

7. Read and accept the License Agreement. Click Next to continue.
8. Read the information about new features and improvements carefully. Red text indicates warnings for users who are updating existing software installations. Click Next to continue.

9. Click Browse to modify the installation path if necessary. Click Next to continue.
10. Select the following components for the minimum setup:

- StarWind Virtual SAN Service. The StarWind service is the core of the software. It can create iSCSI targets as well as share virtual and physical devices. The service can be managed from StarWind Management Console on any Windows computer or VSA connected to the network. Alternatively, the service can be managed from StarWind Web Console, which is deployed separately.

- StarWind Management Console. The Management Console is the Graphic User Interface (GUI) part of the software that controls and monitors all storage-related operations (e.g., it allows users to create targets and devices on StarWind Virtual SAN servers connected to the network).

Click Next to continue.
11. Specify the Start Menu folder. Click Next to continue.

12. Enable the checkbox to create a desktop icon. Click Next to continue.
13. You can request a time-limited, fully functional evaluation key or a free version key, or use the commercial license key sent to you upon purchasing StarWind Virtual SAN. Select the appropriate option. Click Next to continue.

14. Click Browse to locate the license file. Click Next to continue.
15. Review the licensing information. Click Next to apply the license key.

16. Verify the installation settings. Click Back to make any changes or Install to continue.
17. Select the appropriate checkbox to launch the StarWind Management Console right after the setup wizard is closed. Click Finish to close the wizard.

18. Repeat the installation steps on the partner node.
Configuring Shared Storage

19. Launch StarWind Management Console by double-clicking the StarWind tray icon.

NOTE: StarWind Management Console cannot be installed on a GUI-less OS. You can install the Console on any GUI-enabled Windows edition, including desktop versions of Windows.

If StarWind Service and Management Console are installed on the same server, the Management Console automatically adds the local StarWind instance to the Console tree after the first launch. Then, the Management Console automatically connects to the StarWind service using the default credentials. To add remote StarWind servers to the Console, use the Add Server button on the control panel.

20. StarWind Management Console will ask you to specify a default storage pool on the server you connect to for the first time. Configure the storage pool to use one of the volumes you prepared earlier. All devices created through the Add Device wizard are stored in the configured storage pool. Should you decide to use an alternative storage path for your StarWind virtual disks, use the Add Device (advanced) menu item.

Press the Yes button to configure the storage pool. If you need to change the storage pool destination, press Choose path and point to the necessary disk in the browser.
NOTE: Each array that will be used by StarWind Virtual SAN to store virtual disk images should meet the following requirements:

- initialized as GPT;
- NTFS-formatted partition;
- assigned drive letter.

21. Select the StarWind server where the device is to be created.

22. Press the Add Device (advanced) button on the toolbar.

23. Select Hard Disk Device in the Add Device Wizard and click Next.

24. Select Virtual disk and click Next.

25. Specify the virtual disk name, location, and size, and click Next. Below, you can find how to prepare an HA device for a Witness drive. Devices for Cluster Shared Volumes (CSV) should be created the same way.
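The requirements in the note above can also be met from PowerShell; a sketch assuming the new array appears as disk number 1 (check with Get-Disk first):

```powershell
# Prepare an underlying array for StarWind virtual disks:
# GPT partition style, NTFS file system, assigned drive letter.
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'StarWindStorage'
```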
26. Specify the virtual disk options. Click Next.

27. Define the caching policy, specify the cache size, and click Next.

NOTE: It is recommended to assign 1 GB of L1 cache in Write-Back mode per 1 TB of storage capacity.
28. Define the optional Flash Cache Parameters policy and size if necessary. Choose an SSD location in the wizard. Click Next to continue.

NOTE: The recommended size of the L2 cache is 10% of the initial StarWind device size.

29. Specify the target parameters. The Target Name is generated automatically based on the target alias. Alternatively, a custom target name can be assigned. Click Next to continue.
30. Click Create to add a new device and attach it to the target. Then, click Close.

31. Right-click the servers field and select Add Server. Add a new StarWind server which will be used as the second HA node. Click OK and then the Connect button to continue.

32. Right-click the recently created device and select Replication Manager. Press the Add Replica button in the Replication Manager window.
33. Select Synchronous two-way replication and click Next to proceed.

34. Specify the partner server IP address. The default StarWind management port is 3261. If you have configured a different port, please type it in the Port number field. Click Next.
35. Select Heartbeat as the Failover Strategy and click Next.

36. Choose Create new Partner Device and click Next.
37. Specify the partner device location if necessary. You can additionally modify the target name of the device. Click Next.

38. Specify the synchronization and heartbeat channels for the HA device on this screen. You can also modify the ALUA settings. Click Change network settings.
39. Specify the interfaces for Synchronization and Heartbeat. Click OK. Then click Next.

NOTE: It is recommended to configure the Heartbeat and iSCSI channels on the same interfaces to avoid the split-brain issue.
40. Select Synchronize from existing Device as a partner device initialization mode and click Next.

41. Press the Create Replica button and click Close.

42. The added device will appear in StarWind Management Console.
Repeat the steps above for the remaining virtual disks that will be used as Cluster Shared Volumes. Once all devices are created, the Management Console should look as in the image below:
Discovering Target Portals

This part describes how to discover Target Portals from each StarWind node on each cluster node.

43. Launch Microsoft iSCSI Initiator on Cluster Node 1: Start > Administrative Tools > iSCSI Initiator, or run iscsicpl from the command-line interface. The iSCSI Initiator Properties window appears.

44. Navigate to the Discovery tab. Click the Discover Portal button. In the Discover Target Portal dialog, enter the iSCSI IP address of the first StarWind Node. Click the Advanced button.
45. Select Microsoft iSCSI Initiator as the Local adapter and select the Cluster Node 1 initiator IP address from the same subnet. Click OK twice to add the Portal.

46. Click the Discover Portal button once again.

47. In the Discover Target Portal dialog, enter the second iSCSI IP address of the first StarWind Node and click the Advanced button.
48. Select Microsoft iSCSI Initiator as the Local adapter and select the Cluster Node 1 initiator IP address from the same subnet. Click OK twice to add the Portal.

49. Target portals are now added from the first StarWind Node.
50. To discover Target Portals from the second StarWind Node, click the Discover Portal button once more, enter the iSCSI IP address of the second StarWind Node, and click the Advanced button.

51. Select Microsoft iSCSI Initiator as the Local adapter and select the Cluster Node 1 initiator IP address from the same subnet. Click OK twice to add the Portal.
52. Click the Discover Portal button once again. In the Discover Target Portal dialog, enter the second iSCSI IP address of the second StarWind Node. Click the Advanced button.

53. Select Microsoft iSCSI Initiator as the Local adapter and select the Cluster Node 1 initiator IP address from the same subnet. Click OK twice to add the Portal.
54. All target portals are now successfully added to Cluster Node 1.

55. Perform the steps of this part on Cluster Node 2. All target portals added to Cluster Node 2 should look as in the picture below.
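Steps 43-55 can be condensed into a PowerShell sketch. The addresses below follow the subnet plan from the pre-configuration section but are examples only; substitute the actual iSCSI IPs of your StarWind nodes and of the cluster node you run it on:

```powershell
# Discover both iSCSI portals of both StarWind nodes from Cluster Node 1.
$portals = @(
    @{ Portal = '172.16.10.1'; Initiator = '172.16.10.3' },  # StarWind Node 1
    @{ Portal = '172.16.20.1'; Initiator = '172.16.20.3' },  # StarWind Node 1
    @{ Portal = '172.16.10.2'; Initiator = '172.16.10.3' },  # StarWind Node 2
    @{ Portal = '172.16.20.2'; Initiator = '172.16.20.3' }   # StarWind Node 2
)
foreach ($p in $portals) {
    New-IscsiTargetPortal -TargetPortalAddress $p.Portal `
                          -InitiatorPortalAddress $p.Initiator
}
```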
Connecting Targets

56. Launch Microsoft iSCSI Initiator on Cluster Node 1 and click the Targets tab. The previously created targets should be listed in the Discovered Targets section.

NOTE: If the created targets are not listed, check the firewall settings of the StarWind Server as well as the list of networks served by the StarWind Server (go to StarWind Management Console -> Configuration -> Network).

57. Select a target discovered from the first StarWind Node and click Connect.
58. Enable the checkboxes as shown in the image below and click Advanced.

59. Select Microsoft iSCSI Initiator in the Local adapter text field.

60. In the Target portal IP field, select the IP address of the first StarWind Node and the Initiator IP address from the same subnet. Click OK twice to connect the target.
61. To connect the same target via another subnet, select it one more time and click Connect.

62. Enable the checkboxes as shown in the image below and click Advanced.

63. Select Microsoft iSCSI Initiator in the Local adapter text field.

64. In the Target portal IP field, select another IP address of the first StarWind Node and the Initiator IP address from the same subnet. Click OK to connect the target.
65. Select the partner target discovered from the second StarWind Node and click Connect.

66. Enable the checkboxes as shown in the image below and click Advanced.

67. Select Microsoft iSCSI Initiator in the Local adapter text field.
68. In the Target portal IP field, select the IP address of the second StarWind Node and the Initiator IP address from the same subnet. Click OK twice to connect the target.

69. To connect the same target via another subnet, select it one more time and click Connect.

70. Enable the checkboxes as shown in the image below and click Advanced.
71. In the Target portal IP field, select another IP address of the second StarWind Node and the Initiator IP address from the same subnet. Click OK to connect the target.
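The connection steps for one target can be sketched in PowerShell as follows. The target-selection filter and the addresses are assumptions for illustration; note that each target must be connected through the portals of the StarWind node that publishes it:

```powershell
# Connect one discovered target over both iSCSI subnets of its StarWind
# node, with multipath and persistence enabled (mirrors the GUI checkboxes).
$node1Portals = @(
    @{ Portal = '172.16.10.1'; Initiator = '172.16.10.3' },
    @{ Portal = '172.16.20.1'; Initiator = '172.16.20.3' }
)
# Hypothetical filter: pick the witness target by its alias.
$target = Get-IscsiTarget | Where-Object { $_.NodeAddress -like '*witness*' }
foreach ($p in $node1Portals) {
    Connect-IscsiTarget -NodeAddress $target.NodeAddress `
        -TargetPortalAddress $p.Portal `
        -InitiatorPortalAddress $p.Initiator `
        -IsMultipathEnabled $true -IsPersistent $true
}
```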
72. Repeat the above actions for all HA device targets. After that, repeat the steps described in this section on Cluster Node 2, specifying the corresponding IP addresses. The result should look as in the picture below.

73. Initialize the disks and create partitions on them using the Disk Management snap-in. To create the cluster, the disk devices must be initialized and formatted on both nodes.

NOTE: It is recommended to initialize the drives as GPT.
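Step 73 can be done in PowerShell instead of the Disk Management snap-in; a sketch that initializes every raw iSCSI-attached disk as GPT and formats it NTFS:

```powershell
# Initialize and format all raw iSCSI-attached disks (run on one node;
# the partner node then only needs to bring the disks online).
Get-Disk |
    Where-Object { $_.BusType -eq 'iSCSI' -and $_.PartitionStyle -eq 'RAW' } |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS
```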
Creating a Cluster

NOTE: To avoid issues during cluster configuration validation, it is recommended to install the latest Microsoft updates on each node.

74. Open Server Manager. Select the Failover Cluster Manager item from the Tools menu.
75. Click the Create Cluster item in the Actions section of the Failover Cluster Manager. Specify the servers that need to be added to the cluster. Click Next to continue.

76. Verify that your servers are suitable for building a cluster. Select Yes and click Next.
77. Specify the cluster name.

NOTE: If the cluster servers get IP addresses over DHCP, the cluster also gets its IP address over DHCP. If the IP addresses are set statically, you need to set the cluster IP address manually.

Click Next to continue.
78. Make sure that all of the settings are correct. Click Previous to make any changes. Click Next to continue.

79. The cluster creation process starts. Upon completion, the system displays the report with detailed information. Click Finish to close the wizard.
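The cluster-creation wizard has PowerShell equivalents; the node names and the static cluster IP below are examples only:

```powershell
# Validate the configuration, then create the cluster with a static IP
# (omit -StaticAddress if the nodes get their addresses over DHCP).
Test-Cluster -Node 'SW-Node1', 'SW-Node2'
New-Cluster -Name 'StarWind-Cluster' -Node 'SW-Node1', 'SW-Node2' `
            -StaticAddress '192.168.0.100'
```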
Adding Cluster Shared Volumes

Follow these steps to add the Cluster Shared Volumes (CSV) that are necessary for working with Hyper-V virtual machines:

80. Open Failover Cluster Manager.

81. Go to Cluster -> Storage -> Disks.

82. Click Add Disk in the Actions panel, choose the disks from the list, and click OK.

83. To configure a Witness drive, right-click the Cluster -> More Actions -> Configure Cluster Quorum Settings, follow the wizard, and use the default quorum configuration.

84. Right-click the required disk and select Add to Cluster Shared Volumes. Once the disks are added to the Cluster Shared Volumes list, you can start creating highly available virtual machines on them.

NOTE: To avoid unnecessary CSV overhead, configure each CSV to be owned by one cluster node. This node should also be the preferred owner of the VMs it runs.
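Steps 82-84 can be sketched in PowerShell as well; the cluster disk names are examples, so check yours with Get-ClusterResource first:

```powershell
# Add the available disks to the cluster, configure the disk witness,
# and promote a data disk to a Cluster Shared Volume.
Get-ClusterAvailableDisk | Add-ClusterDisk
Set-ClusterQuorum -DiskWitness 'Cluster Disk 1'
Add-ClusterSharedVolume -Name 'Cluster Disk 2'
```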
Conclusion

The Failover Cluster increases the availability of the services and applications running on it. The CSV feature simplifies storage management by allowing multiple VMs to access a common shared disk. Resilience is provided by the multiple connections between the StarWind nodes and the shared disk. Thus, if one of the nodes goes down, the other one takes over production operations.
Contacts

US Headquarters:
1-617-449-7717
1-617-507-5845
1-866-790-2646

EMEA and APAC:
+44 203 769 1857 (UK)
+34 629 03 07 17 (Spain and Portugal)

Customer Support Portal: https://www.starwind.com/support
Support Forum: https://www.starwind.com/forums
Sales: sales@starwind.com
General Information: info@starwind.com

StarWind Software, Inc.
35 Village Rd., Suite 100, Middleton, MA 01949 USA
www.starwind.com

2018, StarWind Software Inc. All rights reserved.