The Host Server VMware ESXi Configuration Guide October 2017 This guide provides configuration settings and considerations for SANsymphony Hosts running VMware ESX/ESXi. Basic VMware administration skills are assumed including how to connect to iscsi and/or Fibre Channel Storage Array target ports as well as the processes of discovering, mounting and formatting a disk device. Also see the official statement from DataCore for any differences between the information in this document and VMware's own Hardware Compatibility List (HCL): http://datacore.custhelp.com/app/answers/detail/a_id/1131 The Data Infrastructure Software Company

Table of contents

Changes made to this document 3
VMware ESXi compatibility lists 4
    ESXi Operating system versions 4
    VMware ESXi Path Selection Policies (PSP) 6
    vStorage API for Array Integration (VAAI) support 7
    VMware VVOL VASA API 2.0 8
    vSphere Metro Storage Clusters (VMSC) 8
The DataCore Server's settings 9
    DataCore Servers in an ESX Virtual Machine 11
The VMware ESXi Host's settings 12
VMware Path Selection Policies 16
    Configuring the Round Robin Path Selection Policy 18
    Configuring the Fixed Path Selection Policy 20
    Configuring the Most Recently Used Path Selection Policy 22
Known issues 24
    ESXi 6.x (includes 6.0.x and 6.5.x) 25
    ESXi 5.x (includes 5.0.x, 5.1.x and 5.5.x) 28
    ESX 4.x (includes 4.0.x, and 4.1.x) 30
Appendix A: Preferred Server & Preferred Path settings 32
Appendix B: Configuring Disk Pools 34
Appendix C: Reclaiming storage 35
Appendix D: Moving from Most Recently Used to either Round Robin or Fixed Path Selection Policy 39
Previous Changes 40

Page 2

Changes made to this document

The most recent version of this document is available from here: http://datacore.custhelp.com/app/answers/detail/a_id/838

All changes since August 2017

VMware 'Fault Tolerant' or 'Highly Available' Clusters
The information has been moved to the 'Known Issues' section instead.

Sharing the same physical connection for Host Front-end and DataCore Mirror Ports may result in unexpected behavior when a failure occurs on that physical connection. When Virtual Disks are served to two or more ESX Hosts, make sure that all Host-to-DataCore Server connections (i.e. Front End Ports) do not also share the same physical connection as any DataCore Server-to-DataCore Server connections (i.e. Mirror Ports); for example, using a single physical Inter-Switch Link between the DataCore Servers across two site locations where a Virtual Disk is also served to an ESX Host over the same Inter-Switch Link. Should a failure occur on that single physical connection, both the Mirror I/O (between the DataCore Servers) and the Host I/O (between the Host and the DataCore Server) will fail at the same time. Even though the DataCore Server sends a correct SCSI notification to the ESX Hosts (LUN_NOT_AVAILABLE), ESX will continue to try to access all Virtual Disks, even though DataCore would normally expect the ESX Host to report either a 'Permanent Device Loss' (PDL) or an 'All-Paths-Down' (APD) event. No attempt to fail over (ESX HA) or move the VM (ESX Fault Tolerance) will be made, and the ESX Host will lose access to the Virtual Disk. DataCore cannot support a configuration where ESX Hosts are served Virtual Disks over the same physical link as the DataCore Servers send their Mirror I/O.

All previous changes
Please see page 40.

Page 3

VMware ESXi compatibility lists

ESXi Operating system versions

                    SANsymphony 9.0 PSP 4 Update 4 (1)    SANsymphony 10.0 (all versions)
ESXi Version        With ALUA       Without ALUA          With ALUA       Without ALUA
3.x and earlier     Not Supported   Not Supported         Not Supported   Not Supported
4.0.x               Not Qualified   Not Qualified         Not Supported   Not Supported
4.1.x               Qualified       Not Qualified         Not Supported   Not Supported
5.x                 Qualified       Not Qualified         Qualified       Not Qualified
6.x                 Qualified       Not Qualified         Qualified       Not Qualified

Notes:

Qualified vs. Not Qualified vs. Not Supported: See the next page for definitions.

DataCore Server Front-End Port connections: Fibre Channel and iSCSI are supported.

VMware VAAI (vStorage API for Array Integration) compatibility: See the table on page 7 for more ESXi version-specific information.

Reclaiming storage from DataCore Disk Pools: See Appendix C: 'Reclaiming Storage' on page 35 for version-specific 'how to' instructions.

1 SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now End of Life. Please see: End of Life Notifications http://datacore.custhelp.com/app/answers/detail/a_id/1329

Page 4

VMware ESXi compatibility lists

Qualified vs. Not Qualified vs. Not Supported

Qualified
This combination has been tested by DataCore, with all the host-specific settings listed in this document applied, using non-mirrored, Mirrored and Dual Virtual Disks.

Not Qualified
This combination has not yet been tested by DataCore using Mirrored or Dual Virtual Disk types. DataCore cannot guarantee 'high availability' (failover/failback, continued access etc.) even if the host-specific settings listed in this document are applied. Self-qualification may be possible; please see Technical Support FAQ #1506. Mirrored or Dual Virtual Disk types are configured at the user's own risk; however, any problems that are encountered while using VMware versions that are 'Not Qualified' will still get root-cause analysis. Non-mirrored Virtual Disks are always considered 'Qualified', even for 'Not Qualified' combinations of VMware/SANsymphony.

Not Supported
This combination has either failed 'high availability' testing by DataCore using Mirrored or Dual Virtual Disk types, or the operating system's own requirements/limitations (e.g. age, specific hardware requirements) make it impractical to test. DataCore will not guarantee 'high availability' (failover/failback, continued access etc.) even if the host-specific settings listed in this document are applied. Self-qualification is not possible. Mirrored or Dual Virtual Disk types are configured at the user's own risk; however, any problems that are encountered while using VMware versions that are 'Not Supported' will get best-effort Technical Support (e.g. to get access to Virtual Disks) but no root-cause analysis will be done. Non-mirrored Virtual Disks are always considered 'Qualified', even for 'Not Supported' combinations of VMware/SANsymphony.

VMware versions that are End of Support Life (EOSL) / Availability (EOA) / Distribution (EOD)
For versions that are listed as 'Not Supported', self-qualification is not possible. For versions that are listed as 'Not Qualified', self-qualification may be possible if there is an agreed support contract with VMware as well. Please contact DataCore Technical Support before attempting any self-qualification. For any problems that are encountered while using VMware versions that are EOSL, EOA or EOD with DataCore Software, only best-effort Technical Support will be performed (e.g. to get access to Virtual Disks). Root-cause analysis will not be done. Non-mirrored Virtual Disks are always considered 'Qualified'.

Page 5

VMware ESXi compatibility lists

VMware ESXi Path Selection Policies (PSP)

                    VMware Path Selection Policy
ESXi Version        Most Recently Used (MRU)        Fixed                       Round Robin (RR)
                    (use without ALUA enabled)      (use with ALUA enabled)     (use with ALUA enabled)
4.x                 Not Tested                      Tested/Works                Tested/Works
5.x                 Not Tested                      Tested/Works                Tested/Works
6.x                 Not Tested                      Tested/Works                Tested/Works

Notes:

Tested/Works vs. Not Tested: This table only applies to combinations that are listed as 'Qualified' on page 4.

ESXi version 6.x: Fixed and RR PSPs are both listed on VMware's Hardware Compatibility List. MRU is not.

ESXi version 5.x: Only the RR PSP is listed on VMware's Hardware Compatibility List. Fixed and MRU are not.

Page 6

VMware ESXi compatibility lists

vStorage API for Array Integration (VAAI) support

                    SANsymphony 9.0 PSP 4 Update 4 (1)    SANsymphony 10.0 (all versions)
ESXi Version        VAAI                                  VAAI
4.x                 Does not work                         Does not work
5.x                 Tested/Works                          Tested/Works
6.x                 Tested/Works                          Tested/Works

Notes:

VAAI-specific commands that are supported by the DataCore Server:
    Atomic Test & Set (ATS)
    Clone Blocks/Full Copy/XCOPY
    Zero Blocks/Write Same
    Block Delete/SCSI UNMAP

Tested/Works vs. Does not Work: This table only applies to combinations that are listed as 'Qualified' on page 4.

ESX 4.x: Even though VAAI is available with ESX 4.x, it is not supported by DataCore for any version of SANsymphony.

Reclaiming storage from DataCore Disk Pools: See Appendix C: 'Reclaiming Storage' on page 35 for version-specific instructions.

1 SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now End of Life. Please see: End of Life Notifications http://datacore.custhelp.com/app/answers/detail/a_id/1329

Page 7

VMware ESXi compatibility lists

VMware VVOL VASA API 2.0

                    SANsymphony 9.0 PSP 4 Update 4    SANsymphony 10.0 PSP 3 and earlier    SANsymphony 10.0 PSP 4 and greater
ESXi Version        VASA support for VVOL             VASA support for VVOL                 VASA support for VVOL
4.x                 VVOL Support Not Available        VVOL Support Not Available            Not Supported
5.x                 VVOL Support Not Available        VVOL Support Not Available            Not Supported
6.x                 VVOL Support Not Available        VVOL Support Not Available            Tested/Works

Notes:

Tested/Works vs. Not Supported: VVOL support only applies to combinations that are listed as 'Qualified' on page 4.

Configuration-specific notes: Please refer to the 'Getting Started with the DataCore VASA Provider' section from the Online Help: http://www.datacore.com/ssv-webhelp/getting_started_with_vasa_provider.htm

vSphere Metro Storage Clusters (VMSC)

Notes:

Qualified vs. Not Supported: VMSC support only applies to combinations that are listed as 'Qualified' on page 4. Self-qualification may be possible for combinations that are listed as 'Not Qualified'; please contact DataCore Technical Support.

Virtual Disks that were formatted using VMFS5 are supported. Virtual Disks formatted with earlier versions of VMFS that have then been upgraded to VMFS5 are not supported.

Page 8

The DataCore Server's settings

Also see: Video: Configuring ESX Hosts in the DataCore Management Console http://datacore.custhelp.com/app/answers/detail/a_id/1637

Operating system type
When registering the Host, choose the 'VMware ESXi' menu option. See the Registering Hosts section from the SANsymphony Help: http://www.datacore.com/ssv-webhelp/hosts.htm

Port roles
Ports used for serving Virtual Disks to Hosts should only have the Front End (FE) role enabled. Mixing other Port Role types may cause unexpected results, as Ports that only have the FE role enabled are turned off when the DataCore Server software is stopped (even if the physical server remains running). This helps to guarantee that Hosts do not still try to access FE Ports, for any reason, once the DataCore Software is stopped but while the DataCore Server remains running. Any Port with the Mirror and/or Back End role enabled does not shut off when the DataCore Server software is stopped but remains active.

Multipathing support
The Multipathing Support option should be enabled so that Mirrored Virtual Disks or Dual Virtual Disks can be served to Hosts from all available DataCore FE Ports. Also see the Multipathing Support section from the SANsymphony Help: http://www.datacore.com/ssv-webhelp/hosts.htm

Non-mirrored Virtual Disks and Multipathing
Non-mirrored Virtual Disks can still be served to multiple Hosts and/or multiple Host Ports from one or more DataCore Server FE Ports if required; in this case the Host can use its own multipathing software to manage the multiple Host paths to the single Virtual Disk as if it were a Mirrored or Dual Virtual Disk. Note: Hosts that have non-mirrored Virtual Disks served to them do not need Multipathing Support enabled unless they also have Mirrored or Dual Virtual Disks served to them.

Page 9

The DataCore Server's settings

Asymmetrical Logical Unit Access (ALUA) support
The ALUA support option should be enabled if required and if Multipathing Support has also been enabled (see above). Please refer to the Operating system compatibility table on page 4 to see which combinations of VMware ESXi and SANsymphony support ALUA. More information on Preferred Servers and Preferred Paths used by the ALUA function can be found in Appendix A on page 32.

Serving Virtual Disks to the Hosts for the first time
DataCore recommends that, before serving Virtual Disks to a Host for the first time, all DataCore Front-End ports on all DataCore Servers are correctly discovered by the Host. Then, from within the SANsymphony Console, verify that the Virtual Disk is marked 'Online, up to date' and that the storage sources have a host access status of Read/Write.

Virtual Disk LUNs and serving to more than one Host or Port
DataCore Virtual Disks always have their own unique Network Address Authority (NAA) identifier that a Host can use to manage the same Virtual Disk being served to multiple Ports on the same Host Server, or the same Virtual Disk being served to multiple Hosts. See the SCSI Standard Inquiry Data section from the online Help for more information: http://www.datacore.com/ssv-webhelp/changing_virtual_disk_settings.htm

While DataCore cannot guarantee that a disk device's NAA is used by a Host's operating system to identify a disk device served to it over different paths, generally we have found that it is. And while there is sometimes a convention that all paths to the same disk device should always use the same LUN 'number' to guarantee consistency for device identification, this may not be technically true. Always refer to the Host Operating System vendor's own documentation for advice on this.

DataCore's Software does, however, always try to create mappings between the Host's Ports and the DataCore Server's Front-end (FE) Ports for a Virtual Disk using the same LUN number (3) where it can. The software will first find the next available (lowest) LUN 'number' for the Host-DataCore FE mapping combination being applied and will then try to apply that same LUN number to all other mappings that are being attempted when the Virtual Disk is being served. If any Host-DataCore FE Port combination being requested at that moment is already using that LUN number (e.g. if a Host already has other Virtual Disks served to it), then the software will find the next available LUN number and apply that to those specific Host-DataCore FE mappings only.

3 The software will also try to match a LUN 'number' for all DataCore Server Mirror Port mappings of a Virtual Disk, although the Host does not 'see' these mirror mappings, so this does not technically need to be the same as the Front End Port mappings (or indeed as other Mirror Path mappings for the same Virtual Disk). Having Mirror mappings using different LUNs has no functional impact on the Host or DataCore Server at all.

Page 10
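The lowest-available-LUN behaviour described above can be sketched as follows. This is an illustration only, not DataCore's actual implementation, and the function name `next_free_lun` is hypothetical:

```shell
# Illustration only (not DataCore's implementation): pick the lowest LUN
# number not already in use for a Host-to-FE-port mapping combination.
# Arguments: the LUN numbers already in use; output: the next free LUN.
next_free_lun() {
  in_use=" $* "
  lun=0
  # Walk upwards from 0 until a number is found that is not in the list.
  while true; do
    case "$in_use" in
      *" $lun "*) lun=$((lun + 1)) ;;
      *) echo "$lun"; return 0 ;;
    esac
  done
}

# A mapping combination already using LUNs 0, 1 and 3 gets LUN 2 next:
next_free_lun 0 1 3
```

A brand-new Host with no existing mappings would simply get LUN 0, matching the behaviour described for the first Virtual Disk served.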

The DataCore Server's settings DataCore Servers in an ESX Virtual Machine See the article Hyperconverged and Virtual SAN Best Practices guide: http://datacore.custhelp.com/app/answers/detail/a_id/1155 Page 11

The VMware ESXi Host's settings

The following are the Host-specific settings that need to be configured directly on the Host Server.

Note: Older versions of VMware ESXi may require different Host settings when compared to newer versions. When a setting or configuration change is listed for one version but not another, it is only required for that specific version of VMware ESXi. If you have upgraded from an older version and a specific setting is no longer documented for your newer version, assume that no further changes are needed and that those settings should be left as they were.

iSCSI Connections

TCP Ports
Make sure TCP Port 3260 is open for all iSCSI communication to the DataCore Server. See the 'TCP and UDP Ports' section from the SANsymphony Help: http://www.datacore.com/ssv-webhelp/windows_security_settings_disclosure.htm

ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server Front-end port are not supported (this also includes ESXi 'Port Binding'). The Front End Port will only accept the first connection from a given IQN that attempts to log in to it, and a unique iSCSI Session ID (ISID) is created for that connection. Any subsequent connection that comes from a different NIC sharing the same IQN as the first login will cause an ISID conflict and will be rejected by the DataCore Server. After that, no further iSCSI logins will be possible for this IQN. This may cause unexpected disconnects between the Host and the DataCore Server for those connections.

It is important to note that if the first successful connection gets disconnected for any reason (e.g. by a SCSI reset), then one of the other NICs sharing the same IQN may re-attempt a login and, if successful, will take the session for itself. This will then block the previously-connected NIC from being able to re-connect and it will remain disconnected.

See the following pages for examples of qualified and not-supported configurations:

Page 12

The VMware ESXi Host's settings

Example 1: A qualified configuration

An ESX Host (ESX1) has four different Network Interfaces, each with its own IP address but all with the same IQN:

192.168.1.1 (iqn.esx1)
192.168.2.1 (iqn.esx1)
192.168.1.2 (iqn.esx1)
192.168.2.2 (iqn.esx1)

There are, in this example, two DataCore Servers, each with two Front-end Ports with their own corresponding IP addresses and IQNs:

192.168.1.101 (iqn.dcs1-1)
192.168.2.101 (iqn.dcs1-2)
192.168.1.102 (iqn.dcs2-1)
192.168.2.102 (iqn.dcs2-2)

Each Network Interface of the ESX Host should connect to a separate Front-end Port on both DataCore Servers:

(iqn.esx1) 192.168.1.1 -> iSCSI Fabric 1 -> 192.168.1.101 (iqn.dcs1-1)
(iqn.esx1) 192.168.2.1 -> iSCSI Fabric 2 -> 192.168.2.101 (iqn.dcs1-2)
(iqn.esx1) 192.168.1.2 -> iSCSI Fabric 1 -> 192.168.1.102 (iqn.dcs2-1)
(iqn.esx1) 192.168.2.2 -> iSCSI Fabric 2 -> 192.168.2.102 (iqn.dcs2-2)

There is no case in the above example where more than one Network Interface on ESX1 is trying to connect to the same Front-end Port on the same DataCore Server (i.e. there are no multiple iSCSI session connections). Also note that this kind of setup makes things simpler to manage and troubleshoot if connection problems occur in the future.

Page 13

The VMware ESXi Host's settings

Example 2: A non-supported configuration

Using the same values as the qualified example above:

(iqn.esx1) 192.168.1.1 -> iSCSI Fabric 1 -> 192.168.1.101 (iqn.dcs1-1)
(iqn.esx1) 192.168.2.1 -> iSCSI Fabric 2 -> 192.168.2.101 (iqn.dcs1-2)
(iqn.esx1) 192.168.1.2 -> iSCSI Fabric 1 -> 192.168.1.101 (iqn.dcs1-1)
(iqn.esx1) 192.168.2.2 -> iSCSI Fabric 2 -> 192.168.2.102 (iqn.dcs2-2)

In this case, two of the Network Interfaces from ESX1 have been configured to connect to the same Front-end Port on DataCore Server 1, in this case iqn.dcs1-1, which will not work as expected. DataCore Server 1 will accept only one of the connections and the other will be rejected; any subsequent interruption of that iSCSI connection may then result in either of the two ESX Network Interfaces being able to (re)connect to iqn.dcs1-1, forcing the other ESX connection to be rejected. In other words, there is no guarantee that the ESX1 connection that was previously logged in to iqn.dcs1-1 will be able to reconnect if it is disconnected for any reason and the other Network Interface logs in before it.

A solution in this case may be teaming the NICs together, as the teamed connections will then have a single IP address and be recognized by the DataCore Server as a single NIC.

Also see the Important Notes section from the SANsymphony Help: http://www.datacore.com/ssv-webhelp/configuring_iscsi_connections.htm

Page 14
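The unsupported pattern in Example 2 can be checked mechanically: any (initiator IQN, target port) pair that appears more than once in the planned connection list would trigger an ISID conflict. A minimal sketch, where the function name and the "initiator target" input format are assumptions for illustration:

```shell
# Illustration only: read "initiator-IQN target-IQN" pairs, one per line, and
# print any pair that appears more than once, i.e. a target Front-end Port
# that would receive a second session from the same initiator IQN.
detect_isid_conflicts() {
  sort | uniq -d
}

# Example 2's connection list flags iqn.dcs1-1 as receiving two sessions
# from iqn.esx1 (prints: iqn.esx1 iqn.dcs1-1):
printf '%s\n' \
  'iqn.esx1 iqn.dcs1-1' \
  'iqn.esx1 iqn.dcs1-2' \
  'iqn.esx1 iqn.dcs1-1' \
  'iqn.esx1 iqn.dcs2-2' | detect_isid_conflicts
```

Running the same check against Example 1's list produces no output, since every initiator/target pairing there is unique.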

The VMware ESXi Host's settings

Advanced Settings

Note: A reboot may not be needed if any of these settings are changed from a previous value; please check with VMware first.

ESX 6.x and 5.x
From within the ESXi Configuration Tab under Advanced Settings, change and/or verify that the following values are set:

Disk.DiskMaxIOSize = 512

ESX 4.1.x
From within the ESXi Configuration Tab under Advanced Settings, change and/or verify that the following values are set:

Disk.DiskMaxIOSize = 512
Disk.QFullSampleSize = 32
Disk.QFullThreshold = 8
Disk.UseLunReset = 1
Disk.UseDeviceReset = 0
SCSI.CRTimeoutDuringBoot = 10000

ESX 4.0.x
From within the ESXi Configuration Tab under Advanced Settings, change and/or verify that the following values are set:

Disk.DiskMaxIOSize = 512
Disk.QFullSampleSize = 32
Disk.QFullThreshold = 8
Disk.UseLunReset = 1
Disk.UseDeviceReset = 0
SCSI.CRTimeoutDuringBoot = 1
SCSI.ConflictRetries = 200

Page 15
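On ESXi 5.x and 6.x the same values can also be set from the command line with esxcli instead of the GUI. A sketch, assuming the option path matches the name shown in the vSphere client (verify the exact option name on your ESXi build first):

```shell
# Set Disk.DiskMaxIOSize from the ESXi shell (ESXi 5.x/6.x).
# The /Disk/DiskMaxIOSize option path is assumed to match the GUI name.
esxcli system settings advanced set --option /Disk/DiskMaxIOSize --int-value 512

# Confirm the running value:
esxcli system settings advanced list --option /Disk/DiskMaxIOSize
```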

VMware Path Selection Policies

Which Path Selection Policies (PSP) are qualified?
Please refer to the VMware ESXi Path Selection Policies (PSP) compatibility list on page 6.

Which PSP does DataCore Software recommend?
DataCore does not recommend one particular policy over another; one user's installation and configuration of SANsymphony will be different to another's.

Note: Some PSPs are not supported by VMware themselves for certain types of Virtual Machine operating systems. DataCore cannot take responsibility for these VMware-unsupported Virtual Machine/PSP combinations should any issues occur. See http://kb.vmware.com/kb/1011340

Changing the PSP type on an already-served Virtual Disk
As long as the Storage Array Type Plug-in (SATP) being used on the Host is the same for the new PSP, nothing needs to be done on the DataCore Server. If the current SATP is different to what the new PSP requires (for example, moving from 'Most Recently Used' to 'Round Robin'), then DataCore recommends that you unserve the Virtual Disks first, delete the old SATP value, add the new SATP, and then serve the Virtual Disks back again.

Note: Changing the SATP type may also require that the ALUA option be changed on the Host, within the SANsymphony Console, from its current setting. In that case see the 'After changing the settings' section of 'Changing multipath or ALUA support settings for hosts' from the SANsymphony Help: http://www.datacore.com/ssv-webhelp/hosts.htm

Using different PSPs for the same Virtual Disk on multiple Hosts
While this is technically possible, it is not supported and DataCore cannot guarantee the behavior of the VMware ESXi Hosts in this case. Always use the same PSP for the same Virtual Disk on all VMware ESXi Hosts that it is served to.

Page 16
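When the SATP stays the same, the PSP can be changed per device with esxcli; a sketch for ESXi 5.x/6.x, where the naa device name below is a placeholder for a real device from 'esxcli storage nmp device list':

```shell
# Change the PSP for a single already-claimed device (same SATP).
# naa.60030d90xxxxxxxxxxxxxxxxxxxxxxxx is a placeholder device name.
esxcli storage nmp device set --device naa.60030d90xxxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR

# If a custom SATP rule has to be replaced, remove the old rule first,
# passing the same parameters that were used to create it:
esxcli storage nmp satp rule remove -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA -c tpgs_on -P VMW_PSP_FIXED
```

A per-device change applies only to that Host; remember the note above about keeping the same PSP on every Host the Virtual Disk is served to.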

VMware Path Selection Policies

Which Storage Array Type Plug-in (SATP) should I use?
Please refer to the following pages to determine which SATP to use, and how to configure it, for the particular PSP you wish to use.

Note: Auto-detection of the correct Path Selection Policy and/or Storage Array Type Plug-in for a given Virtual Disk can be inconsistent in older versions of VMware ESXi; for example, ESXi may default to Most Recently Used for any Virtual Disk mapped to the Host, regardless of whether the ALUA option has been enabled or not. This kind of mismatch will cause unexpected results during any failover event. It is therefore important to always verify manually that both the correct PSP and SATP have been selected. This can be done directly in the VMware vSphere client GUI or by running the following command at the VMware ESXi console:

esxcli storage nmp device list | grep -A 7 "^naa\.60030d9"

This looks for all devices that contain DataCore's unique NAA identifier, which is part of any SANsymphony Virtual Disk's SCSI Standard Inquiry Data. See the SCSI Standard Inquiry Data section from the SANsymphony Help: http://www.datacore.com/ssv-webhelp/changing_virtual_disk_settings.htm

The -A 7 switch for the grep command displays the seven lines of output following each matching line, which should include the Path Selection Policy and the Storage Array Type Plugin. Increase this number to display more of each Virtual Disk's properties as required.

If using SANsymphony's own VMware vCenter integration, searching by the NAA identifier is the only way to list the Virtual Disks on the command line.

Page 17

VMware Path Selection Policies

Configuring the Round Robin Path Selection Policy

Use the SATP 'VMW_SATP_ALUA' with the claim option 'tpgs_on'. Round Robin can be configured using either the 'default' SATP type or by configuring a custom SATP rule.

Using the default SATP type
It is possible to use VMware ESXi's built-in, generic 'VMW_SATP_ALUA' rule:

VMW_SATP_ALUA   system   tpgs_on   Any array with ALUA support

Using a custom SATP rule
To create a custom rule, run the following command on the ESXi Host's console:

esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA -c tpgs_on -P VMW_PSP_RR

Verify the custom rule has been set correctly:

esxcli storage nmp satp rule list -s VMW_SATP_ALUA | grep DataCore

The response should look something like this (1):

VMW_SATP_ALUA   DataCore   Virtual Disk   user   tpgs_on   VMW_PSP_RR

This custom SATP rule can be used for all Virtual Disks from any DataCore Server when using the Round Robin PSP.

Note: Round Robin is only qualified with the ALUA option enabled on the VMware Host from within the DataCore Server's Console. See 'Changing multipath or ALUA support settings for hosts' from the SANsymphony Help: http://www.datacore.com/ssv-webhelp/multipath_support.htm

1 This example is taken from VMware ESXi version 5.5

Page 18

Which Preferred Server setting on the DataCore Server should I use with Round Robin?

DataCore recommends, when using the Round Robin PSP and configuring your Hosts for the first time, either setting an explicit DataCore Server as the Preferred Server or leaving the 'Auto select' setting configured; do not use the 'All' setting.

When the Host's Preferred Server setting is either 'Auto select' or an explicitly named DataCore Server, only the Host paths connected to the first DataCore Server listed in the Virtual Disk's properties (for 'Auto select') or to the named DataCore Server, respectively, are set as 'Active Optimized'. The Host's paths connected to the other DataCore Server are set as 'Active Non-Optimized'. VMware Hosts will only send I/O to 'Active Optimized' paths when there is a choice between those and 'Active Non-Optimized' paths.

Caution is therefore advised when using the 'All' setting, as it allows the VMware Host to send I/O to all paths on all DataCore Servers for any served Virtual Disk. While this may seem preferable, in configurations where there are significant path distances between servers (e.g. across remote sites), or where the speed of links between remote servers is significantly slower than links between local servers on the same site, longer I/O wait times on the paths to remote servers can then delay I/O within the same request sent via paths to local servers, resulting in significant overall I/O latency. Testing is advised.

Please see Appendix A - Notes on Preferred Server and Preferred Path settings on page 32 for a more detailed explanation of the 'All' setting. Also see the Preferred Servers section from the SANsymphony Help: http://www.datacore.com/ssv-webhelp/port_connections_and_paths.htm

Page 19

Configuring the Fixed Path Selection Policy

Use the SATP 'VMW_SATP_ALUA' with the claim option 'tpgs_on'. The Fixed PSP can be configured using either the default SATP type or by configuring a custom SATP rule.

Using the default SATP type

It is possible to use VMware ESXi's built-in, generic 'VMW_SATP_ALUA' rule:

VMW_SATP_ALUA  system  tpgs_on  Any array with ALUA support

Using a custom SATP rule

To create a custom rule, run the following command on the ESXi Host's console:

esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA -c tpgs_on -P VMW_PSP_FIXED

Verify the custom rule has been set correctly:

esxcli storage nmp satp rule list -s VMW_SATP_ALUA | grep DataCore

The response should look something like this (1):

VMW_SATP_ALUA  DataCore  Virtual Disk  user  tpgs_on  VMW_PSP_FIXED

This custom SATP rule can be used for all Virtual Disks from any DataCore Server when using the Fixed PSP.

Note: The Fixed PSP is only qualified with the ALUA option enabled on the VMware Host from within the DataCore Server's Console. See Changing multipath or ALUA support settings for hosts from the SANsymphony Help: http://www.datacore.com/ssv-webhelp/multipath_support.htm

(1) This example is taken from VMware ESXi version 5.5

Page 20
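When using the Fixed PSP, the preferred ('active') path is set on the ESXi Host itself rather than on the DataCore Server. A sketch of the relevant console commands; the device ID and the vmhba path name below are placeholders for your own values:

```shell
# List the available paths for the device (device ID is a placeholder)
esxcli storage nmp path list -d naa.60030d90000000000000000000000000

# Designate one of those paths as the preferred path for the Fixed PSP
# (the vmhba/channel/target/LUN name is an example only)
esxcli storage nmp psp fixed deviceconfig set --device naa.60030d90000000000000000000000000 --path vmhba2:C0:T0:L0
```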

Which Preferred Server setting on the DataCore Server should I use with the Fixed PSP?

Unlike Round Robin, where DataCore recommends (initially) not using the 'All' Preferred Server setting, when using the Fixed PSP the 'All' setting is mandatory and no other Preferred Server setting is qualified. This is because the Fixed PSP always requires an 'Active Optimized' path to fail over or fail back to in order to work as expected.

Note: The Fixed PSP will not send I/O to all 'Active Optimized' paths the way the Round Robin PSP does. The actual 'active' path used by a VMware Host running the Fixed PSP is configured on the ESX Host directly and is not controlled by the DataCore Server. Please refer to VMware's own documentation on how to configure the 'active' path when using the Fixed PSP.

Please see Appendix A - Notes on Preferred Server and Preferred Path settings on page 32 for a more detailed explanation of the 'All' setting. Also see the Preferred Servers section from the SANsymphony Help: http://www.datacore.com/ssv-webhelp/port_connections_and_paths.htm

Page 21

Configuring the Most Recently Used Path Selection Policy

Use the SATP 'VMW_SATP_DEFAULT_AA' with no claim option set. The Most Recently Used PSP can be configured using either the default SATP type or by configuring a custom SATP rule.

Using the default SATP type

It is possible to use VMware ESXi's built-in, generic 'VMW_SATP_DEFAULT_AA' rule:

VMW_SATP_DEFAULT_AA  fc  system  Fibre Channel Devices

Using a custom SATP rule

To create a custom rule, run the following command on the ESXi Host's console:

esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_DEFAULT_AA -P VMW_PSP_MRU

Verify the custom rule has been set correctly:

esxcli storage nmp satp rule list -s VMW_SATP_DEFAULT_AA | grep DataCore

The response should look something like this (6):

VMW_SATP_DEFAULT_AA  DataCore  Virtual Disk  user  VMW_PSP_MRU

This custom SATP rule can be used for all Virtual Disks from any DataCore Server when using the Most Recently Used PSP.

Note: Most Recently Used is only qualified without the ALUA option enabled on the VMware Host from within the DataCore Server's Console. See Changing multipath or ALUA support settings for hosts from the SANsymphony Help: http://www.datacore.com/ssv-webhelp/multipath_support.htm

(6) This example is taken from VMware ESXi version 5.5

Page 22
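If a Host is later switched from Most Recently Used to an ALUA-based PSP, the custom rule can be removed again with the matching 'rule remove' command. A sketch - the parameters must mirror those given when the rule was added:

```shell
# Remove the custom MRU rule; the options must match those used on
# the original 'rule add' command line
esxcli storage nmp satp rule remove -V DataCore -M "Virtual Disk" -s VMW_SATP_DEFAULT_AA -P VMW_PSP_MRU

# Verify the rule is gone (this should return no output)
esxcli storage nmp satp rule list -s VMW_SATP_DEFAULT_AA | grep DataCore
```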

Which Preferred Server setting on the DataCore Server should I use with Most Recently Used?

Because the ALUA option is not supported when using the Most Recently Used PSP, it must never be enabled on the Host. The Preferred Server setting, which controls the ALUA state of a given path to a Host from a DataCore Server, will therefore be ignored by the Host.

Note: The actual 'active' path used by a VMware Host running the Most Recently Used PSP is configured on the ESX Host directly and is not controlled by the DataCore Server. Please refer to VMware's own documentation on how to configure the 'active' path when using the Most Recently Used PSP.

Also see the Preferred Servers section from the SANsymphony Help: http://www.datacore.com/ssv-webhelp/port_connections_and_paths.htm

Page 23

Known issues

The following is intended to make DataCore Software customers aware of any issues that may affect performance or access, or generally give unexpected results under certain conditions, when VMware ESXi is used with SANsymphony.

Some of the issues here were found during DataCore's own testing, but many others were reported by DataCore Software customers, where a specific problem had been identified and then subsequently resolved.

DataCore cannot be held responsible for incorrect information regarding VMware products. No assumption should be made that DataCore has direct communication with VMware regarding the issues listed here, and we always recommend that users contact VMware directly to see if there are any updates or fixes since they were reported to us.

For known issues with DataCore's own software products, please refer to the relevant DataCore Software component's release notes.

Page 24

ESXi 6.x (includes 6.0.x and 6.5.x)

Converged Network Adaptors

When using QLogic's Dual-Port, 10Gbps Ethernet-to-PCIe Converged Network Adaptor, disable both the adaptor's BIOS and the 'Select a LUN to Boot from' option.

When connecting ESXi Hosts to DataCore Servers

After upgrading to VMware ESXi 6.0 Update 3, ESX paths will only report as 'Active'. No paths will report as 'Active (I/O)', regardless of the Path Selection Policy. VMware has verified this as a cosmetic bug in ESXi that does not affect I/O of either the ESX Hosts or the VMs, and their engineering team is currently working on a solution. Also see: https://kb.vmware.com/kb/2149992

Example before ESXi 6.0 Update 3: [screenshot]
Example after ESXi 6.0 Update 3: [screenshot]

Note: Use the ESXi 'esxtop' command (e.g. using the 'd' or 'u' switches) to show actual activity on the expected paths and/or devices.

Page 25

Sharing the same physical connection for Host Front-end Ports and DataCore Mirror Ports may result in unexpected behavior when a failure occurs on that physical connection.

When Virtual Disks are served to two or more ESX Hosts, make sure that the Host-to-DataCore Server connections (i.e. Front End Ports) do not share the same physical connection as any DataCore Server-to-DataCore Server connections (i.e. Mirror Ports) - for example, a single physical Inter-Switch Link between the DataCore Servers across two site locations, where a Virtual Disk is also served to an ESX Host over the same Inter-Switch Link.

Should a failure occur on that single physical connection, both the Mirror I/O (between the DataCore Servers) and the Host I/O (between the Host and the DataCore Server) will fail at the same time. Even though the DataCore Server sends a 'correct' SCSI notification to the ESX Hosts - LUN_NOT_AVAILABLE - ESX will continue to try to access all Virtual Disks, even though DataCore would normally expect the ESX Host to report either a 'Permanent Device Loss' (PDL) or an 'All-Paths-Down' (APD) event. No attempt to fail over (ESX HA) or move the VM (ESX Fault Tolerance) will be made, and the ESX Host will lose access to the Virtual Disk.

DataCore cannot support a configuration where ESX Hosts are served Virtual Disks over the same physical link that the DataCore Servers use for their Mirror I/O.

ESXi hosts experience degraded I/O performance on the iSCSI network when Delayed ACK is enabled on the ESXi software iSCSI initiator. See http://kb.vmware.com/kb/1002598 for more specific information and how to disable the 'Delayed ACK' feature on ESXi Hosts. A reboot of the ESXi Host will be required.
ESX Hosts whose IP addresses share the same IQN connecting to the same DataCore Server Front-end port are not supported (this also includes ESXi 'Port Binding'). Please see the iSCSI Connections section on page 12 for more specific information, with examples.

Storage PDL responses may not trigger path failover in vSphere 6.0.0 and 6.0 Update 1. This has now been fixed by VMware. See http://kb.vmware.com/kb/2144657.

VHBAs and other PCI devices may stop responding when using Interrupt Remapping. See http://kb.vmware.com/kb/1030265.

Page 26

Under heavy load, the VMFS heartbeat may fail with a 'false' ATS miscompare message.

The ESXi VMFS heartbeat used to use normal SCSI reads and writes to perform its function. A change in the heartbeat method, released in ESXi 5.5 Update 2 and ESXi 6.0, uses ESXi's VAAI ATS commands instead, sent directly to the storage array (i.e. the DataCore Server). DataCore Servers do not require (and so do not support) these ATS commands. DataCore therefore recommends disabling the VAAI ATS heartbeat setting; see http://kb.vmware.com/kb/2113956. If your ESXi Hosts are connected to other storage arrays, contact VMware to see if it is safe to disable this setting for those arrays.

Significant numbers of Virtual Machines all running on the same Virtual Disk may result in excessive SCSI reservation requests, leading to reservation conflicts between Hosts sharing the Virtual Disk, which may in turn lead to increased I/O latency. This only affects ESXi Hosts not using VAAI. Reduce the number of running Virtual Machines on a single Virtual Disk, and ensure that the ESX Hosts with the closest I/O path to the DataCore Server all access the same, shared Virtual Disk, as this will also help to reduce the potential for excessive SCSI reservation conflicts. Also see: http://kb.vmware.com/kb/1005009

DataCore Software recommends using VAAI, where the 'Atomic Test and Set (ATS) primitive' is used instead, as this is a much better method for locking VMFS Datastores on Virtual Disks than the normal SCSI reservation process.

When running Microsoft Clusters in a Virtual Machine

Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have more than one Front End mapping to each DataCore Server may cause unexpected loss of access. A fix is available from VMware. See https://kb.vmware.com/kb/2145663 for more information.

Unable to access the filesystem for MSCS cluster nodes after vMotion. This has now been fixed by VMware. See https://kb.vmware.com/kb/2144153.
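The VAAI ATS heartbeat setting referred to in the known issue above can be disabled from the ESXi console as described in VMware KB 2113956; a sketch:

```shell
# Revert the VMFS heartbeat to non-ATS (plain SCSI read/write) mode,
# per VMware KB 2113956; 0 = disabled, 1 = enabled (the default)
esxcli system settings advanced set -i 0 -s /VMFS3/UseATSForHBOnVMFS5

# Verify the current value
esxcli system settings advanced list -o /VMFS3/UseATSForHBOnVMFS5
```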
The SCSI-3 Persistent Reserve tests fail for Windows 2012 Microsoft Clusters running in VMware ESXi Virtual Machines. This is expected. See http://kb.vmware.com/kb/1037959 - specifically the 'additional notes' (under the section 'VMware vSphere support for running Microsoft clustered configurations').

ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes may take a long time to start or to perform a LUN rescan. See http://kb.vmware.com/kb/1016106.

Page 27

ESXi 5.x (includes 5.0.x, 5.1.x and 5.5.x)

Converged Network Adaptors

When using QLogic's Dual-Port, 10Gbps Ethernet-to-PCIe Converged Network Adaptor, disable both the adaptor's BIOS and the 'Select a LUN to Boot from' option.

When connecting ESXi Hosts to DataCore Servers

Sharing the same physical connection for Host Front-end Ports and DataCore Mirror Ports may result in unexpected behavior when a failure occurs on that physical connection.

When Virtual Disks are served to two or more ESX Hosts, make sure that the Host-to-DataCore Server connections (i.e. Front End Ports) do not share the same physical connection as any DataCore Server-to-DataCore Server connections (i.e. Mirror Ports) - for example, a single physical Inter-Switch Link between the DataCore Servers across two site locations, where a Virtual Disk is also served to an ESX Host over the same Inter-Switch Link.

Should a failure occur on that single physical connection, both the Mirror I/O (between the DataCore Servers) and the Host I/O (between the Host and the DataCore Server) will fail at the same time. Even though the DataCore Server sends a 'correct' SCSI notification to the ESX Hosts - LUN_NOT_AVAILABLE - ESX will continue to try to access all Virtual Disks, even though DataCore would normally expect the ESX Host to report either a 'Permanent Device Loss' (PDL) or an 'All-Paths-Down' (APD) event. No attempt to fail over (ESX HA) or move the VM (ESX Fault Tolerance) will be made, and the ESX Host will lose access to the Virtual Disk.

DataCore cannot support a configuration where ESX Hosts are served Virtual Disks over the same physical link that the DataCore Servers use for their Mirror I/O.

ESXi hosts experience degraded I/O performance on the iSCSI network when Delayed ACK is enabled on the ESXi software iSCSI initiator. See http://kb.vmware.com/kb/1002598 for more specific information and how to disable the 'Delayed ACK' feature on ESXi Hosts.
A reboot of the ESXi Host will be required.

ESX Hosts whose IP addresses share the same IQN connecting to the same DataCore Server Front-end port are not supported (this also includes ESXi 'Port Binding'). Please see the iSCSI Connections section on page 12 for more specific information, with examples.

VHBAs and other PCI devices may stop responding when using Interrupt Remapping. See http://kb.vmware.com/kb/1030265.

Page 28

Under heavy load, the VMFS heartbeat may fail with a 'false' ATS miscompare message.

The ESXi VMFS heartbeat used to use normal SCSI reads and writes to perform its function. A change in the heartbeat method, released in ESXi 5.5 Update 2 and ESXi 6.0, uses ESXi's VAAI ATS commands instead, sent directly to the storage array (i.e. the DataCore Server). DataCore Servers do not require (and so do not support) these ATS commands. DataCore therefore recommends disabling the VAAI ATS heartbeat setting; see http://kb.vmware.com/kb/2113956. If your ESXi Hosts are connected to other storage arrays, contact VMware to see if it is safe to disable this setting for those arrays.

Significant numbers of Virtual Machines all running on the same Virtual Disk may result in excessive SCSI reservation requests, leading to reservation conflicts between Hosts sharing the Virtual Disk, which may in turn lead to increased I/O latency. This only affects ESXi Hosts not using VAAI. Reduce the number of running Virtual Machines on a single Virtual Disk, and ensure that the ESX Hosts with the closest I/O path to the DataCore Server all access the same, shared Virtual Disk, as this will also help to reduce the potential for excessive SCSI reservation conflicts. Also see: http://kb.vmware.com/kb/1005009

DataCore Software recommends using VAAI, where the 'Atomic Test and Set (ATS) primitive' is used instead, as this is a much better method for locking VMFS Datastores on Virtual Disks than the normal SCSI reservation process.

When running Microsoft Clusters in a Virtual Machine

Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have more than one Front End mapping to each DataCore Server may cause unexpected loss of access. A fix is available from VMware. See https://kb.vmware.com/kb/2145663 for more information.

The SCSI-3 Persistent Reserve tests fail for Windows 2012 Microsoft Clusters running in VMware ESXi Virtual Machines. This is expected.
See http://kb.vmware.com/kb/1037959 - specifically the 'additional notes' (under the section 'VMware vSphere support for running Microsoft clustered configurations').

ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes may take a long time to start or to perform a LUN rescan. See http://kb.vmware.com/kb/1016106.

Page 29

ESX 4.x (includes 4.0.x and 4.1.x)

Converged Network Adaptors

When using QLogic's Dual-Port, 10Gbps Ethernet-to-PCIe Converged Network Adaptor, disable both the adaptor's BIOS and the 'Select a LUN to Boot from' option.

When connecting ESXi Hosts to DataCore Servers

Sharing the same physical connection for Host Front-end Ports and DataCore Mirror Ports may result in unexpected behavior when a failure occurs on that physical connection.

When Virtual Disks are served to two or more ESX Hosts, make sure that the Host-to-DataCore Server connections (i.e. Front End Ports) do not share the same physical connection as any DataCore Server-to-DataCore Server connections (i.e. Mirror Ports) - for example, a single physical Inter-Switch Link between the DataCore Servers across two site locations, where a Virtual Disk is also served to an ESX Host over the same Inter-Switch Link.

Should a failure occur on that single physical connection, both the Mirror I/O (between the DataCore Servers) and the Host I/O (between the Host and the DataCore Server) will fail at the same time. Even though the DataCore Server sends a 'correct' SCSI notification to the ESX Hosts - LUN_NOT_AVAILABLE - ESX will continue to try to access all Virtual Disks, even though DataCore would normally expect the ESX Host to report either a 'Permanent Device Loss' (PDL) or an 'All-Paths-Down' (APD) event. No attempt to fail over (ESX HA) or move the VM (ESX Fault Tolerance) will be made, and the ESX Host will lose access to the Virtual Disk.

DataCore cannot support a configuration where ESX Hosts are served Virtual Disks over the same physical link that the DataCore Servers use for their Mirror I/O.

ESXi hosts experience degraded I/O performance on the iSCSI network when Delayed ACK is enabled on the ESXi software iSCSI initiator. See http://kb.vmware.com/kb/1002598 for more specific information and how to disable the 'Delayed ACK' feature on ESXi Hosts.
A reboot of the ESXi Host will be required.

iSCSI patches required for ESXi 4.0 Hosts connected to DataCore Servers:

VMware ESXi 4.0, Patch ESXi400-200906413-BG: http://kb.vmware.com/kb/1012232
VMware ESXi 4.0, Patch ESXi400-201003401-BG: http://kb.vmware.com/kb/1019492

ESXi does not support LUNs (i.e. SANsymphony Virtual Disks) greater than 2 TB. See: http://kb.vmware.com/kb/3371739

ESX Hosts whose IP addresses share the same IQN connecting to the same DataCore Server Front-end port are not supported (this also includes ESXi 'Port Binding'). Please see the iSCSI Connections section on page 12 for more specific information, with examples.

Page 30

VHBAs and other PCI devices may stop responding when using Interrupt Remapping. See http://kb.vmware.com/kb/1030265.

When running Microsoft Clusters in a Virtual Machine

ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes may take a long time to start or to perform a LUN rescan. See http://kb.vmware.com/kb/1016106.

Significant numbers of Virtual Machines all running on the same Virtual Disk may result in excessive SCSI reservation requests, leading to reservation conflicts between Hosts sharing the Virtual Disk, which may in turn lead to increased I/O latency. Reduce the number of running Virtual Machines on a single Virtual Disk, and ensure that the ESX Hosts with the closest I/O path to the DataCore Server all access the same, shared Virtual Disk, as this will also help to reduce the potential for excessive SCSI reservation conflicts. Also see: http://kb.vmware.com/kb/1005009

Page 31

Appendix A - Preferred Server & Preferred Path settings

See the Preferred Servers and Preferred Paths sections from the SANsymphony Help: http://www.datacore.com/ssv-webhelp/port_connections_and_paths.htm

Without ALUA enabled

If Hosts are registered without ALUA support, the Preferred Server and Preferred Path settings serve no function. All DataCore Servers and their respective Front End (FE) paths are considered equal, and it is up to the Host's own operating system or failover software to determine which DataCore Server is its preferred server.

With ALUA enabled

Setting the Preferred Server to Auto (or an explicit DataCore Server) determines the DataCore Server that is designated Active Optimized for Host I/O. The other DataCore Server is designated Active Non-Optimized.

If, for any reason, the Storage Source on the preferred DataCore Server becomes unavailable and the Host Access for the Virtual Disk is set to Offline or Disabled, then the other DataCore Server will be designated the Active Optimized side. The Host will be notified by both DataCore Servers that there has been an ALUA state change, forcing the Host to re-check the ALUA state of both DataCore Servers and act accordingly.

If the Storage Source on the preferred DataCore Server becomes unavailable but the Host Access for the Virtual Disk remains Read/Write - for example, if only the storage behind the DataCore Server is unavailable but the FE and MR paths are all connected, or if the Host physically becomes disconnected from the preferred DataCore Server (e.g. a Fibre Channel or iSCSI cable failure) - then the ALUA state will not change for the remaining, Active Non-Optimized side. In this case the DataCore Server will not prevent access to the Host, nor will it change the way READ or WRITE I/O is handled compared to the Active Optimized side, but the Host will still register this DataCore Server's paths as Active Non-Optimized, which may (or may not) affect how the Host behaves generally.

Page 32
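The ALUA state that each DataCore Server's paths report to a given ESXi Host can be checked from the Host's console; a sketch, using a placeholder device ID:

```shell
# Show every path to one Virtual Disk together with its ALUA group state
# ('active' = Active Optimized, 'active unoptimized' = Active Non-Optimized;
# the device ID below is a placeholder for your own Virtual Disk)
esxcli storage nmp path list -d naa.60030d90000000000000000000000000
```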

In the case where the Preferred Server is set to All, both DataCore Servers are designated Active Optimized for Host I/O. All I/O requests from a Host will use all paths to all DataCore Servers equally, regardless of the distance that the I/O has to travel to the DataCore Server. For this reason, the All setting is not normally recommended.

If a Host has to send a WRITE I/O to a remote DataCore Server (where the I/O path is significantly distant compared to the other, local DataCore Server), the accrued WAIT times can be significant: the I/O must travel across the SAN to the remote DataCore Server, the remote DataCore Server must mirror it back to the local DataCore Server, the mirror write must be acknowledged from the local DataCore Server to the remote DataCore Server, and finally the acknowledgement must be sent back across the SAN to the Host. The benefits of being able to use all paths to all DataCore Servers for all Virtual Disks are therefore not always clear-cut. Testing is advised.

For Preferred Path settings, the SANsymphony Help states:

'A preferred front-end path setting can also be set manually for a particular virtual disk. In this case, the manual setting for a virtual disk overrides the preferred path created by the preferred server setting for the host.'

So, for example, if the Preferred Server is designated as DataCore Server A and the Preferred Paths are designated as DataCore Server B, then DataCore Server B will be the Active Optimized side, not DataCore Server A.

In a two-node Server Group there is usually nothing to be gained by making the Preferred Path setting different from the Preferred Server setting, and doing so may also cause confusion when trying to diagnose path problems, or when redesigning your DataCore SAN with regard to Host I/O paths.
For Server Groups that have three or more DataCore Servers, where one (or more) of the DataCore Servers shares Mirror Paths with other DataCore Servers, setting the Preferred Path makes more sense. For example, if DataCore Server A has two mirrored Virtual Disks - one with DataCore Server B and one with DataCore Server C - and DataCore Server B also has a mirrored Virtual Disk with DataCore Server C, then using just the Preferred Server setting to designate the Active Optimized side for the Host's Virtual Disks becomes more complicated. In this case the Preferred Path setting can be used to override the Preferred Server setting for a much more granular level of control.

Page 33