The Host Server. Linux Configuration Guide. October 2017


1 The Host Server Linux Configuration Guide October 2017 This guide provides configuration settings and considerations for SANsymphony Hosts running Linux. Basic Linux administration skills are assumed, including how to connect to iSCSI and/or Fibre Channel Storage Array target ports, as well as the processes of discovering, mounting and formatting a disk device.

2 Table of contents
Changes made to this document 3
Compatibility lists 4
  Oracle VM Server 4
  RedHat Enterprise Linux 5
  SUSE Linux Enterprise Server 6
  Ubuntu 7
  Other Linux distributions 9
The DataCore Server's settings 10
The Linux Host's settings 13
  Operating system settings 13
  Multipath configuration settings 14
SAP HANA 18
Known issues 19
  All Linux distributions 20
  Ubuntu 20
Appendix A 21
  Preferred Server & Preferred Path settings 21
Appendix B 23
  Configuring Disk Pools 23
Appendix C 24
  Reclaiming storage 24
Previous changes 27
Page 2

3 Changes made to this document
The most recent version of this document is available from here:

All changes since August 2017

Added
Compatibility lists - Red Hat Enterprise Linux
RedHat Enterprise Linux 7.4 - This version is currently considered 'Not Qualified' for SANsymphony 10.x versions and 'Not Supported' for SANsymphony-V 9.0 PSP 4 Update 4 (and earlier).
SUSE Linux Enterprise Server 12.0 SP 3 - This version is currently considered 'Not Qualified' for SANsymphony 10.x versions and 'Not Supported' for SANsymphony-V 9.0 PSP 4 Update 4 (and earlier).
Ubuntu 17.x - This version is currently considered 'Not Qualified' for SANsymphony 10.x versions and 'Not Supported' for SANsymphony-V 9.0 PSP 4 Update 4 (and earlier).

All previous changes
Please see page 27
Page 3

4 Compatibility lists

Oracle VM Server

                    SANsymphony 9.0 PSP 4 Update 4 (1)       SANsymphony 10.0 (all versions)
Version             With ALUA        Without ALUA            With ALUA        Without ALUA
3.3 and earlier     Not Supported    Not Supported           Not Supported    Not Supported
3.4.x               Not Supported    Not Supported           Not Qualified    Qualified (2)

Notes:

Qualified vs. Not Qualified vs. Not Supported
See page 9 for definitions.

DataCore Server Front-End Port connections
Fibre Channel is supported, but only with SANsymphony versions 10.0 PSP 6 or greater; iSCSI connections are considered 'Not Qualified'.

Multipath Tools
Use Multipathing Tools version el6.x86_64 or later; earlier versions are considered 'Not Qualified'.

Oracle VM Manager
Qualification by DataCore was done without using Oracle's VM Manager.

SCSI UNMAP
SCSI UNMAP is supported.

Reclaiming storage from DataCore Disk Pools
See Appendix C: 'Reclaiming Storage' on page 24 for version-specific 'how to' instructions.

1 SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now End of Life. Please see: End of Life Notifications
2 SANsymphony 10.0 PSP 6 or greater with Fibre Channel connections only. iSCSI is considered not qualified.
Page 4

5 RedHat Enterprise Linux

                    SANsymphony 9.0 PSP 4 Update 4 (1)       SANsymphony 10.0 (all versions)
Version             With ALUA        Without ALUA            With ALUA        Without ALUA
5.10 or earlier     Not Supported    Not Supported           Not Supported    Not Supported
                    Qualified        Qualified               Not Qualified    Not Qualified
                    Not Qualified    Not Qualified           Qualified        Not Qualified
                    Not Supported    Not Supported           Not Qualified    Not Qualified
7.0                 Not Supported    Not Supported           Qualified        Not Qualified
7.1                 Not Supported    Not Supported           Not Qualified    Not Qualified
                    Not Supported    Not Supported           Qualified (2)    Not Qualified
7.4                 Not Supported    Not Supported           Not Qualified    Not Qualified

Notes:

Qualified vs. Not Qualified vs. Not Supported
See page 9 for definitions.

DataCore Server Front-End Port connections
Fibre Channel and iSCSI are supported.

Multipath Tools
Always use the most current version of Multipathing Tools available for your version of RHEL.

SCSI UNMAP
SCSI UNMAP is supported.

Reclaiming storage from DataCore Disk Pools
See Appendix C: 'Reclaiming Storage' on page 24 for version-specific 'how to' instructions.

1 SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now End of Life. Please see: End of Life Notifications
2 Qualified with SANsymphony version 10.0 PSP 6 or greater; earlier versions are still considered 'not qualified'.
Page 5

6 Linux compatibility lists

SUSE Linux Enterprise Server

                    SANsymphony 9.0 PSP 4 Update 4 (1)       SANsymphony 10.0 (all versions)
Version             With ALUA        Without ALUA            With ALUA        Without ALUA
10.x or earlier     Not Supported    Not Supported           Not Supported    Not Supported
11.0 (no SP)        Not Supported    Qualified               Not Supported    Not Supported
11.0 SP 1           Not Supported    Not Supported           Not Supported    Not Supported
11.0 SP 2           Qualified        Qualified               Not Qualified    Not Qualified
11.0 SP 3           Not Qualified    Not Qualified           Qualified        Not Qualified
11.0 SP 4           Not Supported    Not Supported           Qualified (2)    Not Qualified
12.0                Not Supported    Not Supported           Not Qualified    Not Qualified
12.0 SP 1           Not Supported    Not Supported           Qualified (2)    Not Qualified
12.0 SP 2           Not Supported    Not Supported           Not Qualified    Not Qualified
12.0 SP 3           Not Supported    Not Supported           Not Qualified    Not Qualified

Notes:

Qualified vs. Not Qualified vs. Not Supported
See page 9 for definitions.

DataCore Server Front-End Port connections
Fibre Channel and iSCSI are supported, except for SLES 11.0 SP 4 and 12.0 SP 1 where iSCSI connections are 'Not Qualified'.

Multipath Tools
For SLES 12.x, use Multipathing Tools version git1.656f8865 or later; earlier versions are considered 'Not Qualified'. For all other SLES versions before 12.x, use the most current version available for your distribution.

SCSI UNMAP
SCSI UNMAP is not supported.

Reclaiming storage from DataCore Disk Pools
See Appendix C: 'Reclaiming Storage' on page 24 for version-specific 'how to' instructions.

1 SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now End of Life. Please see: End of Life Notifications
2 Fibre Channel only; iSCSI FE connections are still considered 'Not Qualified'.
Page 6

7 Linux compatibility lists

Ubuntu
The following table applies only to the Linux distribution provided by Canonical Ltd.

                    SANsymphony 9.0 PSP 4 Update 4 (3)       SANsymphony 10.0 (all versions)
Version             With ALUA        Without ALUA            With ALUA        Without ALUA
13.x and earlier    Not Supported    Not Supported           Not Supported    Not Supported
LTS                 Not Supported    Not Qualified           Not Supported    Qualified
                    Not Supported    Not Qualified           Not Supported    Not Supported
15.x                Not Supported    Not Supported           Not Supported    Not Qualified
16.x                Not Supported    Not Supported           Not Qualified    Not Qualified
17.x                Not Supported    Not Supported           Not Qualified    Not Qualified

Notes:

Qualified vs. Not Qualified vs. Not Supported
See page 9 for definitions.

DataCore Server Front-End Port connections
Fibre Channel and iSCSI connections are both supported.

Multipath Tools
Use Multipath Tools version git1.656f8865 or later; earlier versions are considered 'Not Qualified'.

Reclaiming storage from DataCore Disk Pools
See Appendix C: 'Reclaiming Storage' on page 24 for version-specific 'how to' instructions.

3 SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now End of Life. Please see: End of Life Notifications
Page 7

8 Linux compatibility lists

Qualified vs. Not Qualified vs. Not Supported

Qualified
This combination has been tested by DataCore with all the host-specific settings listed in this document applied, using non-mirrored, mirrored and Dual Virtual Disks.

Not Qualified
This combination has not yet been tested by DataCore using Mirrored or Dual Virtual Disk types. DataCore cannot guarantee 'high availability' (failover/failback, continued access etc.) even if the host-specific settings listed in this document are applied. Self-qualification may be possible; please see Technical Support FAQ #1506. Mirrored or Dual Virtual Disk types are configured at the user's own risk; however, any problems that are encountered while using Linux versions that are 'Not Qualified' will still get root-cause analysis. Non-mirrored Virtual Disks are always considered 'Qualified', even for 'Not Qualified' combinations of Linux/SANsymphony.

Not Supported
This combination has either failed 'high availability' testing by DataCore using Mirrored or Dual Virtual Disk types, or the operating system's own requirements/limitations (e.g. age, specific hardware requirements) make it impractical to test. DataCore will not guarantee 'high availability' (failover/failback, continued access etc.) even if the host-specific settings listed in this document are applied. Self-qualification is not possible. Mirrored or Dual Virtual Disk types are configured at the user's own risk; however, any problems that are encountered while using Linux versions that are 'Not Supported' will get best-effort Technical Support (e.g. to get access to Virtual Disks) but no root-cause analysis will be done. Non-mirrored Virtual Disks are always considered 'Qualified', even for 'Not Supported' combinations of Linux/SANsymphony.

Linux versions that are End of Life
For versions that are listed as 'Not Supported', self-qualification is not possible. For versions that are listed as 'Not Qualified', self-qualification may be possible if there is an agreed support contract with your Linux vendor as well. Please contact DataCore Technical Support before attempting any self-qualification. For any problems that are encountered while using Linux versions that are EOL with DataCore Software, only best-effort Technical Support will be performed (e.g. to get access to Virtual Disks). Root-cause analysis will not be done. Non-mirrored Virtual Disks are always considered 'Qualified'.
Page 8

9 Other Linux distributions

Linux distributions that are not listed in this document are considered 'Not Qualified' for SANsymphony versions 10.x and 'Not Supported' for SANsymphony-V versions 9.0 PSP 4 Update 4 or earlier. Self-qualification may be possible; please see Technical Support FAQ #1506. Non-mirrored Virtual Disks are always considered 'Qualified', even for 'Not Qualified' or 'Not Supported' combinations of Linux/SANsymphony.
Page 9

10 The DataCore Server's settings

These are the Host-specific settings that need to be configured directly on the DataCore Server. Also see: Video: Configuring Linux Hosts to use DataCore Virtual Disks

Operating system type
See the Registering Hosts section from the SANsymphony Help:
Oracle VM Server - When registering the Host, choose the 'Linux (all other distributions)' menu option.
RedHat Enterprise Linux - When registering the Host, choose the 'Linux (all other distributions)' menu option.
SUSE Linux Enterprise Server - When registering the Host, choose the 'Linux SUSE Enterprise Server 11' menu option.
Ubuntu Linux - When registering the Host, choose the 'Linux (all other distributions)' menu option.

Port roles
Ports used for serving Virtual Disks to Hosts should only have the Front End (FE) role enabled. Mixing other Port Role types may cause unexpected results, as Ports that only have the FE role enabled will be turned off when the DataCore Server software is stopped (even if the physical server remains running). This helps to guarantee that Hosts do not continue to try to access FE Ports, for any reason, once the DataCore Server software is stopped but the physical server remains running. Any Port with the Mirror and/or Back End role enabled does not shut off when the DataCore Server software is stopped but remains active.
Page 10

11 The DataCore Server's settings

Multipathing support
The Multipathing Support option should be enabled so that Mirrored Virtual Disks or Dual Virtual Disks can be served to Hosts from all available DataCore FE ports. Also see the Multipathing Support section from the SANsymphony Help: Webhelp/Hosts.htm

Non-mirrored Virtual Disks and Multipathing
Non-mirrored Virtual Disks can still be served to multiple Hosts and/or multiple Host Ports from one or more DataCore Server FE Ports if required; in this case the Host can use its own multipathing software to manage the multiple Host paths to the single Virtual Disk as if it was a Mirrored or Dual Virtual Disk. Note: Hosts that have non-mirrored Virtual Disks served to them do not need Multipathing Support enabled unless they have other Mirrored or Dual Virtual Disks served as well.

Asymmetrical Logical Unit Access (ALUA) support
The ALUA support option should be enabled if required and if Multipathing Support has also been enabled (see above). Please refer to the compatibility list for your distribution of Linux, starting on page 4, to see which combinations of Linux and SANsymphony support ALUA. More information on 'Preferred Servers' and 'Preferred Paths' used by the ALUA function can be found in Appendix A on page 21.

Serving Virtual Disks to the Hosts for the first time
DataCore recommends that, before serving Virtual Disks to a Host for the first time, all DataCore Front-End ports on all DataCore Servers are correctly discovered by the Host first. Then, from within the SANsymphony Console, verify that the Virtual Disk is marked 'Online, up to date' and that the storage sources have a host access status of 'Read/Write'.

Virtual Disk LUNs and serving to more than one Host or Port
DataCore Virtual Disks always have their own unique Network Address Authority (NAA) identifier that a Host can use to manage the same Virtual Disk being served to multiple Ports on the same Host Server, or the same Virtual Disk being served to multiple Hosts. See the SCSI Standard Inquiry Data section from the online Help for more information on this:
While DataCore cannot guarantee that a disk device's NAA is used by a Host's operating system to identify a disk device served to it over different paths, generally we have found that it is. And while there is sometimes a convention that all paths to the same disk device should always use the same LUN 'number' to guarantee consistency for device identification, this may not be technically required. Always refer to the Host operating system vendor's own documentation for advice on this.
Page 11

12 The DataCore Server's settings

DataCore's software does, however, always try to create mappings between the Host's ports and the DataCore Server's Front-end (FE) ports for a Virtual Disk using the same LUN number (1) where it can. The software will first find the next available (lowest) LUN 'number' for the Host-DataCore FE mapping combination being applied and will then try to apply that same LUN number to all other mappings that are being attempted when the Virtual Disk is being served. If any Host-DataCore FE port combination being requested at that moment is already using that same LUN number (e.g. if a Host already has other Virtual Disks served to it), then the software will find the next available LUN number and apply that to those specific Host-DataCore FE mappings only.

1 The software will also try to match a LUN 'number' for all DataCore Server Mirror Port mappings of a Virtual Disk, although the Host does not 'see' these mirror mappings and so this does not technically need to be the same as the Front End port mappings (or indeed as other Mirror Path mappings for the same Virtual Disk). Having Mirror mappings using different LUNs has no functional impact on the Host or DataCore Server at all.
Page 12

13 The Linux Host's settings

Operating system settings

Fibre Channel device timeouts
Do not specify any Fibre Channel-specific device timeout values (i.e. lpfc_devloss_tmo for Emulex or qlport_down_retry for QLogic); use the values in the multipath configuration file instead (see the next page).

Disk Timeouts
Set the SCSI Disk timeout value to 80 seconds for any DataCore Virtual Disk devices:

echo 80 > /sys/block/[disk_device]/device/timeout

For example, if the two DataCore Virtual Disks are using /dev/sda and /dev/sdb respectively:

echo 80 > /sys/block/sda/device/timeout
echo 80 > /sys/block/sdb/device/timeout

This will change the SCSI Disk timeout value to 80 seconds for those particular disk devices. The SCSI Disk timeout can then be verified by running the cat command on the disk device directly:

cat /sys/block/sda/device/timeout

Important note on setting the SCSI Disk Device Timeout
The method used here to set the SCSI Disk timeout may revert back to the system's default value after a reboot. For RHEL, Oracle and Ubuntu Linux users, please contact your vendor directly or consult their documentation for details on creating a rules file in /etc/udev/ for DataCore Virtual Disks (an illustrative sketch is given below). For SLES users please see:
Note that this link (as of publication of this document) refers to 'SANsymphony-V 9' but applies to all versions of SANsymphony 10.x as well.
Page 13
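A minimal sketch of such a udev rule is shown below. The file name and the vendor/model match strings are assumptions to be verified against your own devices and your vendor's documentation; this is not the only possible form of the rule.

# /etc/udev/rules.d/99-datacore-disk-timeout.rules  (illustrative file name)
# Re-apply the 80-second SCSI disk timeout to DataCore Virtual Disks at boot or hotplug.
ACTION=="add|change", SUBSYSTEM=="block", ATTRS{vendor}=="DataCore*", ATTRS{model}=="Virtual Disk*", RUN+="/bin/sh -c 'echo 80 > /sys/block/%k/device/timeout'"

The rule can be exercised with 'udevadm trigger' (or checked with 'udevadm test' against a single device) and the result verified with the cat command shown above.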

14 The Linux Host's settings

Multipath configuration settings

Multipath Tools versions
Always refer to the compatibility lists - starting on page 4 - for your distribution's minimum qualified version of multipath tools.

Polling Interval
In the defaults section of the multipath.conf file, the polling_interval must be set to 60:

defaults {
    polling_interval 60
}

This is a DataCore-required value which helps prevent excessive Host attempts to check whether a Virtual Disk Storage Source's Host Access value, after a failure, is set to Offline. Smaller interval settings will interfere with overall Host performance. Do not add this parameter to the 'device' section (discussed on the next page) by mistake, or the setting will not work as expected.

Blacklist exceptions
Usually all storage vendor devices are specified under a separate device sub-section within the blacklist_exceptions section:

blacklist_exceptions {
    device {
        vendor  "DataCore"
        product "Virtual Disk"
    }
}

Page 14

15 The Linux Host's settings

The 'device' section

ALUA-enabled Hosts
Please refer to the compatibility list for your distribution of Linux, starting on page 4, to see which combinations of Linux and SANsymphony support ALUA.

device {
    vendor                 "DataCore"
    product                "Virtual Disk"
    path_checker           tur
    prio                   alua
    failback               10
    no_path_retry          fail
    dev_loss_tmo           infinity
    fast_io_fail_tmo       5
    rr_min_io_rq           100
    # Alternative option - see notes below
    # rr_min_io            100
    path_grouping_policy   group_by_prio
    # Alternative policy - see notes below
    # path_grouping_policy failover
    # Optional - see notes below
    # user_friendly_names  yes
}

Without ALUA enabled
Please refer to the compatibility list for your distribution of Linux, starting on page 4, to see which combinations of Linux and SANsymphony support ALUA.

device {
    vendor                 "DataCore"
    product                "Virtual Disk"
    path_checker           tur
    failback               10
    dev_loss_tmo           infinity
    fast_io_fail_tmo       5
    no_path_retry          fail
    # Optional - see notes below
    # user_friendly_names  yes
}

Page 15
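Note that in a complete /etc/multipath.conf the 'device' sub-sections shown above normally sit inside a top-level devices section. As a minimal sketch of how the pieces from this and the previous page fit together (the settings themselves are unchanged from above):

defaults {
    polling_interval 60
}

blacklist_exceptions {
    device {
        vendor  "DataCore"
        product "Virtual Disk"
    }
}

devices {
    device {
        vendor  "DataCore"
        product "Virtual Disk"
        # ...remaining DataCore-required settings as listed above...
    }
}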

16 The Linux Host's settings

Device section notes
Note: All entries listed are required by SANsymphony and are case sensitive.

dev_loss_tmo infinity
Also requires the fast_io_fail_tmo 5 setting (see next). The dev_loss_tmo setting controls the length of time (normally indicated in seconds) before a PATH to a Virtual Disk, that has since become unavailable to the Linux Host, is removed from the Linux operating system; for example, when a DataCore Server is stopped or a PATH to a DataCore Server's Virtual Disk is set to Host Access: Not Allowed. Once a PATH to a Virtual Disk has been removed by the Linux operating system it can usually only be re-established by manual intervention (e.g. a user-initiated rescan on the Linux Host). The infinity value prevents the PATH from being removed. If the fast_io_fail_tmo 5 setting is not present in the multipath.conf file, the infinity setting is ignored and the dev_loss_tmo value defaults to 600 (10 minutes).

Note: Some older kernels of RedHat Enterprise Linux, and SUSE Linux Enterprise Server 11 SP2 and earlier, do not support the infinity value and it will be ignored (and may also post an error in syslog). In that case, a default value - usually 600 seconds - will be applied instead.

Use the cat command to verify that any DataCore Virtual Disks detected by the Linux Host are using the infinity value correctly. A simple example:

sleshost3:~ # cat /sys/class/fc_remote_ports/rport-*\:*-*/dev_loss_tmo

Note: The value returned indicates whether the device is using the infinity setting.

fast_io_fail_tmo 5
This is required by the dev_loss_tmo infinity setting (see previous). Do not use any value other than 5, otherwise the dev_loss_tmo setting will use a larger, default value (usually 600 seconds).

failback 10
This adds an extra wait period (10 seconds) that helps to prevent unnecessary failback attempts to a Virtual Disk whose Storage Source's Host Access value is still set to Not Allowed.
Page 16

17 The Linux Host's settings

no_path_retry fail
Required for any Linux Hosts that are configured for ALUA and that want to use the Preferred Server setting of ALL. See Appendix A on page 21 for information regarding the ALL setting.

path_checker tur
This is a DataCore-required value. No other value should be used.

path_grouping_policy group_by_prio or path_grouping_policy failover
One of two possible values is required - group_by_prio or failover - to prevent any other, non-DataCore, setting in the multipath.conf (i.e. from other storage arrays attached to the Linux Host) from taking precedence. Note: The failover value is unqualified for RHEL 6.5 and greater. In either case, make sure the Preferred Server setting on the DataCore Server is either set to 'Auto Select' or an explicit Server Name. See Appendix A on page 21 for information regarding the 'Auto Select' setting.

rr_min_io_rq 100
This is a DataCore-required value for Linux Hosts that are running newer kernels; older kernels must use the option rr_min_io 100 instead (see next). Check with your vendor which option applies to your kernel version.

rr_min_io 100
This is a DataCore-required value for Linux Hosts that are running older kernels; newer kernels must use the option rr_min_io_rq 100 instead (see previous).

user_friendly_names yes
This is an optional setting and simply specifies that the operating system should use the /etc/multipath/bindings file to assign a persistent and unique alias to the multipath device, in the form of mpathn. If this value is set to 'no' (or omitted completely) the operating system will use the WWID as the alias for the multipath device instead.
Page 17
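After editing /etc/multipath.conf, the new settings need to be loaded before they take effect. The exact procedure varies by distribution and multipath-tools version; as a general sketch, something along these lines is typical:

multipath -r      # rebuild the multipath maps using the new settings
multipath -ll     # list the DataCore Virtual Disk paths, path groups and priorities to verify

On systemd-based distributions the multipathd service can also be restarted (for example with 'systemctl restart multipathd'); check your vendor's documentation for the recommended method.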

18 SAP HANA SAP HANA has been certified with SANsymphony. Please see the document 'SANsymphony with SAP HANA - Sizing Guidelines' for more information: Page 18

19 Known issues

The following is intended to make DataCore Software customers aware of any issues that may affect performance, access or generally give unexpected results under certain conditions when Linux Hosts are used with SANsymphony. Some of the issues here have been found during DataCore's own testing, but many others are issues reported by DataCore Software customers, where a specific problem had been identified and then subsequently resolved.

DataCore cannot be held responsible for incorrect information regarding Linux products. No assumption should be made that DataCore has direct communication with any of the Linux vendors regarding the issues listed here, and we always recommend that users contact their own Linux vendor directly to see if there are any updates or fixes since they were reported to us.

For known issues with DataCore's own software products, please refer to the relevant DataCore Software component's release notes.
Page 19

20 Known issues

All Linux distributions

Formatting a Virtual Disk may take longer than expected
Since SANsymphony-V 9.0 PSP 4, Linux Hosts have been able to take advantage of SANsymphony's SCSI UNMAP support - see Appendix C - 'Reclaiming Storage' on page 24. However, for RHEL and SLES Hosts, this will result in the mkfs command sending additional discard commands during the format process, resulting in longer format times. Ubuntu has not been tested with SCSI UNMAP and so is, at this time, considered unqualified for SCSI UNMAP operations. Use the -K option while formatting to disable the discard command during formatting (an illustrative example is given at the end of this page). Refer to your own man page to be sure your installation supports this option.

Ext3 filesystems will use excessive Disk Pool storage allocations
The Ext3 filesystem will use significant amounts of Storage Allocation Units (SAU) during the 'Writing superblocks and filesystem accounting information' phase of the filesystem's creation. Care must therefore be taken so as not to completely use all the SAUs from the Disk Pool; if Ext3 is required, use a small (i.e. 4MB) SAU size. Other filesystem types do not seem to exhibit this behavior and use only a few SAUs during filesystem creation. Also refer to Appendix B - Configuring Disk Pools on page 23.

Ubuntu

Manual rescans are needed to update previously failed paths for Ubuntu Hosts
DataCore have not been able to get Ubuntu to automatically re-detect paths to mirrored Virtual Disks that have failed or have been removed (e.g. after stopping a DataCore Server) and are then subsequently made available again. Manual intervention is required. Use the 'multipath' command to establish which paths previously failed (and are now available from the DataCore Server):

multipath -ll

Then use the 'echo' command to send an IOCTL to the disk device and force the operating system to update the path's status properly. For example:

echo 1 > /sys/block/sdc/device/rescan

Alternatively, download and install the 'scsitools' package and use the 'rescan-scsi-bus' command to re-establish the connection to the previously failed path.
Page 20
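Referring back to the formatting note above for RHEL and SLES, a minimal illustration of the -K option follows; the multipath device name /dev/mapper/mpatha is only an example, and support for -K varies by filesystem and version, so check your own mkfs man page first:

mkfs.ext4 -K /dev/mapper/mpatha
mkfs.xfs  -K /dev/mapper/mpatha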

21 Appendix A

Preferred Server & Preferred Path settings
See the Preferred Servers and Preferred Paths sections from the SANsymphony Help:

Without ALUA enabled
If Hosts are registered without ALUA support, the Preferred Server and Preferred Path settings serve no function. All DataCore Servers and their respective Front End (FE) paths are considered equal. It is up to the Host's own operating system or failover software to determine which DataCore Server is its preferred server.

With ALUA enabled
Setting the Preferred Server to Auto (or an explicit DataCore Server) determines the DataCore Server that is designated Active Optimized for Host IO; the other DataCore Server is designated Active Non-Optimized. If for any reason the Storage Source on the preferred DataCore Server becomes unavailable, and the Host Access for the Virtual Disk is set to Offline or Disabled, then the other DataCore Server will be designated the Active Optimized side. The Host will be notified by both DataCore Servers that there has been an ALUA state change, forcing the Host to re-check the ALUA state of both DataCore Servers and act accordingly.

If the Storage Source on the preferred DataCore Server becomes unavailable but the Host Access for the Virtual Disk remains Read/Write - for example, if only the Storage behind the DataCore Server is unavailable but the FE and MR paths are all connected, or if the Host physically becomes disconnected from the preferred DataCore Server (e.g. Fibre Channel or iSCSI cable failure) - then the ALUA state will not change for the remaining, Active Non-Optimized side. However, in this case, the DataCore Server will not prevent access to the Host nor will it change the way READ or WRITE IO is handled compared to the Active Optimized side, but the Host will still register this DataCore Server's Paths as Active Non-Optimized, which may (or may not) affect how the Host behaves generally.
Page 21

22 Appendix A - Preferred Server & Preferred Path settings

In the case where the Preferred Server is set to All, both DataCore Servers are designated Active Optimized for Host IO. All IO requests from a Host will use all Paths to all DataCore Servers equally, regardless of the distance that the IO has to travel to the DataCore Server. For this reason, the All setting is not normally recommended. If a Host has to send a WRITE IO to a remote DataCore Server (where the IO Path is significantly distant compared to the other, local DataCore Server), then the WAIT times can be significant: the IO has to be sent across the SAN to the remote DataCore Server, the remote DataCore Server has to mirror it back to the local DataCore Server, the mirror write has to be acknowledged from the local DataCore Server to the remote DataCore Server, and finally the acknowledgement has to be sent back to the Host across the SAN. The benefits of being able to use all Paths to all DataCore Servers for all Virtual Disks are not always clear cut. Testing is advised.

For Preferred Path settings, the SANsymphony Help states: 'A preferred front-end path setting can also be set manually for a particular virtual disk. In this case, the manual setting for a virtual disk overrides the preferred path created by the preferred server setting for the host.' So, for example, if the Preferred Server is designated as DataCore Server A and the Preferred Paths are designated as DataCore Server B, then DataCore Server B will be the Active Optimized side, not DataCore Server A.

In a two-node Server Group there is usually nothing to be gained by making the Preferred Path setting different from the Preferred Server setting, and it may also cause confusion when trying to diagnose path problems, or when redesigning your DataCore SAN with regard to Host IO Paths. For Server Groups that have three or more DataCore Servers, and where one (or more) of these DataCore Servers shares Mirror Paths with other DataCore Servers, setting the Preferred Path makes more sense. So, for example, if DataCore Server A has two mirrored Virtual Disks - one with DataCore Server B and one with DataCore Server C - and DataCore Server B also has a mirrored Virtual Disk with DataCore Server C, then using just the Preferred Server setting to designate the Active Optimized side for the Host's Virtual Disks becomes more complicated. In this case the Preferred Path setting can be used to override the Preferred Server setting for a much more granular level of control.
Page 22

23 Appendix B

Configuring Disk Pools
See Creating Disk Pools and Adding Physical Disks from the SANsymphony Help:

The smaller the SAU size, the larger the number of indexes required by the Disk Pool driver to keep track of the equivalent amount of allocated storage compared to a Disk Pool with a larger SAU size; e.g. there are potentially four times as many indexes required in a Disk Pool using a 32MB SAU size compared to one using the default 128MB SAU size. As SAUs are allocated for the very first time, the Disk Pool needs to update these indexes and this may cause a slight delay in IO completion, which might be noticeable on the Host. However, this will depend on a number of factors such as the speed of the physical disks, the number of Hosts accessing the Disk Pool and their IO READ/WRITE patterns, the number of Virtual Disks in the Disk Pool and their corresponding Storage Profiles.

Therefore, DataCore usually recommends using the default SAU size (128MB) as it is a good compromise between physical storage allocation and IO overhead during the initial SAU allocation index update. Should a smaller SAU size be preferred, the configuration should be tested to make sure that a potentially increased number of initial SAU allocations does not impact the overall Host performance.
Page 23
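As a rough worked example of the scaling described above (the 10 TB pool size is purely illustrative): a 10 TB Disk Pool is 10,485,760 MB, so it needs roughly 10,485,760 / 128 = 81,920 indexes with the default 128MB SAU size, but roughly 10,485,760 / 32 = 327,680 indexes with a 32MB SAU size - the same four-fold difference noted above.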

24 Appendix C

Reclaiming storage

Using SCSI UNMAP commands
SANsymphony's support for SCSI UNMAP, when used in conjunction with the Linux fstrim command or the mount -o discard option on certain file system types, allows Hosts to send 'all-zero' write I/O to a Virtual Disk and trigger SANsymphony's Automatic Reclamation feature. See below.

Also see the following online resources:
The Linux man-pages project
RedHat's Storage Administration Guide: Section: 2.5. Discard unused blocks

Always refer to your Linux vendor to determine which file systems are supported for either the fstrim command and/or the mount -o discard option for your version of RHEL or SLES. Note that using the mount -o discard option may affect Host performance; again, refer to your Linux vendor for their recommendations.

SANsymphony's Automatic Reclamation feature
DataCore Servers keep track of any 'all-zero' write I/O requests sent to Storage Allocation Units (SAU) in all Disk Pools. When enough 'all-zero' writes have been detected to have been passed down to an entire SAU's logical address space, that SAU will be immediately assigned as 'free' (as if it had been manually reclaimed) and made available to the entire Disk Pool for future (re)use. No additional 'zeroing' of the Physical Disk or 'scanning' of the Disk Pool is required.

Important technical notes on Automatic Reclamation
The Disk Pool driver has a small amount of system memory that it uses to keep a list of all address spaces in a Disk Pool that are sent 'all-zero' writes; all other (non-zero) write requests are ignored by the Automatic Reclamation feature and not included in the in-memory list. Where all-zero write addresses are detected to be physically 'adjacent' to each other from a block address point of view, the Disk Pool driver will 'merge' these requests together in the list so as to keep its size as small as possible. Also, as entire 'all-zeroed' SAUs are re-assigned
Page 24

25 Appendix C - Reclaiming storage

back to the Disk Pool, the record of all of their address spaces is removed from the in-memory list, making space available for future all-zero writes to other SAUs that are still allocated. However, if the write I/O pattern of the Hosts means that the Disk Pool receives all-zero writes to many non-adjacent block addresses, the list will require more space to keep track of them compared to all-adjacent block addresses. In extreme cases, where the in-memory list can no longer hold any more new all-zero writes (because all the allocated system memory for the Automatic Reclamation feature has been used), the Disk Pool driver will discard the oldest records of all-zero writes to accommodate newer records of all-zero write I/O. Likewise, if a DataCore Server is rebooted for any reason, then the in-memory list is completely lost and any knowledge of SAUs that were already partially detected as having been written with all-zeroes will no longer be remembered.

In both of these cases this can mean that, over time, even though technically an SAU may have been completely overwritten with all-zero writes, the Disk Pool driver does not have a record that covers the entire address space of that SAU in its in-memory list, and so the SAU will not be made available to the Disk Pool but will remain allocated to the Virtual Disk until any future all-zero writes happen to re-write the same address spaces that were previously forgotten by the Disk Pool driver. In these scenarios, a Manual Reclamation will force the Disk Pool to re-read all SAUs and detect those now-missing all-zero address spaces. See the section 'Manual Reclamation' on the next page for more information.

Reclaiming storage without using fstrim
For Linux Hosts that either do not support the fstrim command or do not have the mount -o discard option set, a suggestion would be to create a sparse file of an appropriate size (if there is enough free space available in the file system) and then zero-fill it using the dd command.

This example creates an empty file (called 'my_file') of 2GB in size:

fallocate --length 2G my_file

Then use the dd command to fill all of the unused file system space with 'all-zero' write I/O:

dd if=/dev/zero of=my_file bs=1024 count=2097152

This I/O will then be detected by SANsymphony's Automatic Reclamation function (see the previous page for more details).
Page 25
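Where the file system does support it, the fstrim approach described at the start of this appendix can be run directly instead of the sparse-file method above. A minimal illustration (the mount point /mnt/datacore_vd is only an example) might be:

fstrim -v /mnt/datacore_vd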

26 Appendix C - Reclaiming storage

Also see the Performing Reclamation section from the SANsymphony Help:

SANsymphony's Manual Reclamation feature
Manual reclamation forces the Disk Pool driver to 'read' all SAUs currently assigned to a Virtual Disk, looking for SAUs that contain only all-zero data. Once detected, that SAU will be immediately assigned as 'free' and made available to the entire Disk Pool for future (re)use. No additional 'zeroing' of the Physical Disk is required.

Note that manual reclamation will create additional 'read' I/O on the Storage Array used by the Disk Pool; as this process runs at 'low priority' it should not interfere with normal I/O operations. However, caution is advised, especially when scripting the manual reclamation process. Manual Reclamation may still be required even when Automatic Reclamation has taken place (see the 'Automatic Reclamation' section on the previous page for more information).

How much storage will be reclaimed?
It is impossible to predict exactly how many Storage Allocation Units (SAUs) will be reclaimed. For reclamation of an SAU to take place, it must contain only all-zero block data over the entire SAU, otherwise it will remain allocated; and this is entirely dependent on how and where the Host has written its data on the DataCore LUN. For example, if the Host has written the data in such a way that every allocated SAU contains a small amount of non-zero block data, then no (or very few) SAUs can be reclaimed, even if the total amount of data is much less than the total amount of assigned SAUs.

It may be possible to use the Host operating system's own defragmentation tools to move any data that is spread out over the DataCore LUN so that it ends up as one or more large areas of contiguous non-zero block addresses. This might then leave the DataCore LUN with SAUs that now only have all-zero data on them and that can then be reclaimed. However, care should be taken that the act of defragmenting the data itself does not cause more SAU allocation as the block data is moved around (i.e. re-written to new areas on the DataCore LUN) during the re-organization.
Page 26

27 Previous changes

2017

August
Compatibility lists - Red Hat Enterprise Linux
RHEL - This version is 'Not Supported' for SANsymphony-V 9.x and earlier and 'Not Qualified' for SANsymphony versions 10.x.
RHEL - This version is now qualified for SANsymphony 10.0 PSP 6 and greater when using ALUA (earlier versions of SANsymphony are still considered 'not qualified').

May
Compatibility lists - Red Hat Enterprise Linux
RHEL - This version is now qualified for SANsymphony 10.0 PSP 6 and greater when using ALUA.

April
Compatibility lists - All Linux distributions
A note about Multipathing Support has been added with regard to expected versions of 'Multipathing Tools' that should be installed.
Compatibility lists - Red Hat Enterprise Linux
RHEL 7.3 - This version is 'not supported' for SANsymphony-V 9.x and 'not qualified' for SANsymphony 10.x.
Compatibility lists - SUSE Linux Enterprise Server
SUSE Linux Enterprise Server 12.0 SP 2 - This version is 'not supported' for SANsymphony-V 9.x and 'not qualified' for SANsymphony 10.x.
Compatibility lists - Ubuntu
Ubuntu 15.x - This version is 'not supported' for SANsymphony-V 9.0 PSP 4 Update 4 'Without ALUA'. Previously it was, incorrectly, marked as 'not qualified'.
Removed SAP HANA certified configuration settings. The information has been moved to the 'SANsymphony with SAP HANA - Sizing Guidelines' document:

February
The Linux Host's settings - Operating system settings - Disk Timeouts
Expanded the note under this section that refers to a URL from SUSE's own Knowledgebase that explains how to make the required SCSI Disk Device timeout setting permanent over a reboot.

January
Added
Page 27

28 Previous changes

Linux compatibility list - Oracle VM Server
Version 3.4 has now been qualified with SANsymphony-V 10.0 PSP 6 or greater.

December
Added
The DataCore Server's settings
Added link: Video: Configuring Linux Hosts to use DataCore Virtual Disks

November
Appendix C - Reclaiming storage
Automatic and Manual reclamation - These two sections have been re-written with more detailed explanations and technical notes.

October
Linux compatibility lists
SUSE Linux Enterprise Server 11.0 SP 4
SUSE Linux Enterprise Server 12.0 SP 1
Both of these versions are now 'Supported' using ALUA with Fibre Channel Front-end connections. Note: iSCSI Front End connections are still considered 'Not Qualified'.

July
This document has been reviewed for SANsymphony 10.0 PSP 5. No other updates were required.

April
Added
Linux compatibility list
Red Hat Enterprise Linux Versions
Red Hat Enterprise Linux Versions
SUSE Linux Enterprise Server 12 (no Service Pack) and 12 Service Pack 1

January
Linux compatibility list
Red Hat Enterprise Linux Version - Previously this version was listed as 'not qualified' with ALUA enabled Hosts and SANsymphony-V 9.x. This has now been changed to 'Not Supported with Mirrored or Dual Virtual Disks'.
Red Hat Enterprise Linux Version 6.6 - Previously this version was listed as 'Not Qualified' with ALUA enabled Hosts and SANsymphony-V 10.x. This version is now 'Qualified' with ALUA enabled Hosts and SANsymphony-V 10.x.

November 2015
Added
Linux Applications - SAP HANA
Page 28

29 Previous changes

This includes an example of a 3-node SAP HANA configuration.
SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now End of Life. Please see: End of Life Notifications

September
Added
Linux Applications - SAP HANA
A new section has been added with settings specific to SAP HANA.
Linux compatibility list
SUSE Linux Enterprise Server 11.0 Service Pack 3 using ALUA is certified with SAP HANA versions SPS09 and SPS10.
Multipath Configuration Settings
The multipath.conf settings for all qualified Linux distributions are now identical, so there is only one section for all Linux distributions listed here.
The setting dev_loss_tmo infinity requires the additional setting fast_io_fail_tmo to be present in the multipath.conf file (this was omitted previously from the SUSE Linux Enterprise Server requirements) and the fast_io_fail_tmo value is set to 5 (this was previously set to 30 for the Red Hat Enterprise Linux requirements).

June
Added
Linux compatibility list
Ubuntu LTS has now been qualified with SANsymphony-V 10.x.
Known Issues
Ubuntu requires manual intervention to redetect failed paths to a DataCore Server.

April
Added
Known Issues
All RHEL or SUSE Linux specific known issues with SANsymphony-V will now be documented here.

March
Linux compatibility lists
SUSE Linux Enterprise Server 11.0 with Service Pack 3 is now qualified with SANsymphony-V 10.x using ALUA-only settings.

February
Linux compatibility lists
Red Hat Enterprise Linux 7.0 is now qualified with SANsymphony-V 10.x using ALUA-only settings.
Multipath Configuration Settings - Red Hat Enterprise Linux
Added a new entry in the multipath.conf file: no_path_retry fail
Page 29

30 Previous changes

This is required for:
Oracle RAC/GFS configured Hosts
Hosts using ALUA with the Preferred Server setting configured for ALL

November
Linux compatibility lists
the table for all Red Hat Enterprise Linux and SUSE Linux Enterprise Server Versions - Single Virtual Disks are now always considered supported.

July
Linux compatibility lists
the table for all Red Hat Enterprise Linux Versions - It was previously listing versions only.
Host Settings - Disk Timeouts
Added an example for use of the cat command to determine the current SCSI Disk Timeout, and a note that some versions of Linux may revert to the default settings after a reboot and how to resolve this.

June
Linux compatibility lists
the table for SANsymphony-V 10.x

May
Added
Linux compatibility lists
Red Hat Enterprise Linux 6.5 with ALUA is now qualified for use with SANsymphony-V 9.x.
Appendix A - Reclaiming Storage from Linux Hosts
Added information for 'ATA Trim' commands and Automatic Reclamation from Disk Pools.
Multipath Configuration Settings - Red Hat Enterprise Linux only
Added a new Multipath.conf entry requirement - fast_io_fail_tmo - which is required to support the dev_loss_tmo value of infinity, else the value may default back to 10 minutes instead. NB: Check with your Vendor to make sure your specific Linux Kernel version supports these options.
Red Hat Enterprise Linux and SUSE Enterprise Linux
DataCore recommends that the Multipath.conf entry path_grouping_policy be group_by_prio instead of failover. NB: The failover option is considered unqualified for RHEL version 6.5.
Added an optional Multipath.conf entry user_friendly_names which will create simpler, easier-to-read disk device names for users to work with. NB: Check with your Vendor to make sure your specific Linux Kernel version supports this option.

April
Page 30

31 Previous changes

This document combines all of DataCore's Linux-related information from older Technical Bulletins into a single document, including:
Technical Bulletin 2a: "Other Linux Hosts"
Technical Bulletin 2b: "Redhat Enterprise Linux 6.x Hosts"
Technical Bulletin 2c: "SUSE Linux Enterprise Server 11.x Hosts"
Technical Bulletin 8: "Formatting Host's File Systems on Virtual Disks created from Disk Pools"
Technical Bulletin 11: "Disk Timeout Settings on Hosts"
Technical Bulletin 16: "Reclaiming Space in Disk Pools"

Added
Linux compatibility lists
Added new tables to show which versions are explicitly qualified, unqualified and not supported with either SANsymphony-V 8.x or 9.x, and if the configuration is with or without ALUA enabled Hosts. Note that the minimum requirement for SANsymphony-V 8.x is now 8.1 PSP1 Update 4 to guarantee expected behavior for qualified versions of Linux.
Appendix A
This section gives more detail on the Preferred Server and Preferred Path settings with regard to how they may affect a Host.
Appendix B
This section incorporates information regarding "Reclaiming Space in Disk Pools" (from Technical Bulletin 16) that is specific to Linux Hosts.
Host Settings - SUSE Linux Enterprise Server with/without ALUA enabled
dev_loss_tmo - For SLES 11 SP2 or greater, please make sure that multipath-tools or greater is installed for the infinity setting to work properly.
Improved explanations to most of the required Host Settings and DataCore Server Settings generally.

Earlier changes, from the original Technical Bulletins:

Technical Bulletin 2a: "Other Linux Hosts"
July 2013
Removed all references to SANmelody as this is now End of Life as of December. Completely updated what is considered qualified, not qualified and not supported in this document.
January 2013
the section for the multipath.conf sections, specifically for the polling_interval line.
July 2012
Added SANsymphony-V 9.x. the Notes section for the multipath.conf sections, specifically for the getuid_callout line.
May 2012
DataCore Server and Host minimum requirements. Removed all references to End of Life SANsymphony and SANmelody versions that are no longer supported as of December. the configuration settings and entries for /etc/multipath.conf (with additional values and explanations) for all DataCore products.
Page 31

32 Previous changes

Users should re-check their existing configurations and make any appropriate changes. Removed all (out of date) step-by-step instructions on how to manage and configure/format disk devices on the Host.
March 2011
Added SANsymphony-V
December 2009
Initial Publication

Technical Bulletin 2b: "Redhat Enterprise Linux 6.x Hosts"
July 2013
Removed all references to RHEL version 5.3 as this was never tested with ALUA-enabled hosts and so may cause confusion. Please use Technical Bulletin 2a for this earlier version. This Bulletin is now only for RHEL versions 6.x. Host Requirements. Added additional information in the Notes section of the multipath.conf section.
June 2013
Improved blacklist_exceptions example for multipath.conf.
April 2013
Removed all references to SANmelody as this is now End of Life as of December. settings for multipath.conf including additional settings and explanations.
March 2013
RHEL versions 6.0, 6.1, 6.2 and 6.3 explicitly stated.
January 2013
the section for the multipath.conf sections, specifically for the polling_interval line. This was previously stated to go in the devices section. This was incorrect, and it should be added to the defaults section instead.

Technical Bulletin 2c: "SUSE Linux Enterprise Server 11.x Hosts"
July 2013
This Bulletin is now only for SLES versions 11.x. Host Requirements. Completely updated what is considered qualified, not qualified and not supported in this document. Added additional information in the Notes section of the multipath.conf section.
June 2013
Improved blacklist_exceptions example for multipath.conf.
April 2013
Removed all references to SANmelody as this is now End of Life as of December. settings for multipath.conf including additional settings and explanations.
February 2013
Added notes for SLES 11 SP2, including an additional multipath.conf setting. that SLES 11 SP1 is no longer supported as this was found not to work correctly in all failure conditions. Users on SLES 11 SP1 should update to SP2.
January 2013
the section for the multipath.conf sections, specifically for the polling_interval line. This was previously stated to go in the devices section. This was incorrect, and it should be added to the defaults section instead.
Page 32


More information

Red Hat Enterprise Linux 7 DM Multipath

Red Hat Enterprise Linux 7 DM Multipath Red Hat Enterprise Linux 7 DM Multipath DM Multipath Configuration and Administration Steven Levine Red Hat Enterprise Linux 7 DM Multipath DM Multipath Configuration and Administration Steven Levine

More information

SANmelody TRIAL Quick Start Guide

SANmelody TRIAL Quick Start Guide Page 1 SANmelody TRIAL Quick Start Guide From Installation to Presenting Virtual Storage Download the trial software at: SANmelody Trial Software SANmelody TRIAL Quick Start Guide Change Summary August,

More information

Using Device-Mapper Multipath. Configuration and Administration 5.2

Using Device-Mapper Multipath. Configuration and Administration 5.2 Using Device-Mapper Multipath Configuration and Administration 5.2 DM_Multipath ISBN: N/A Publication date: May 2008 Using Device-Mapper Multipath This book provides information on using the Device-Mapper

More information

HP StorageWorks Emulex fibre channel host bus adapters for ProLiant and Integrity servers using Linux and VMware operating systems release notes

HP StorageWorks Emulex fibre channel host bus adapters for ProLiant and Integrity servers using Linux and VMware operating systems release notes HP StorageWorks Emulex fibre channel host bus adapters for ProLiant and Integrity servers using Linux and VMware operating systems release notes Part number: AA-RWF7N-TE Thirteenth edition: April 2009

More information

OSIG Change History Article

OSIG Change History Article OSIG Change History Article Change history The OSIG has moved The OSIG is now available as a web application. See http://lenovopress.com/osig 21 September 2016 Windows Server 2016 is Certified on x3850

More information

The Contents and Structure of this Manual. This document is composed of the following four chapters.

The Contents and Structure of this Manual. This document is composed of the following four chapters. Preface This document briefly explains the operations that need to be performed by the user in order to connect an ETERNUS2000 model 100 or 200, ETERNUS4000 model 300, 400, 500, or 600, or ETERNUS8000

More information

FUJITSU Storage ETERNUS DX Configuration Guide -Server Connection-

FUJITSU Storage ETERNUS DX Configuration Guide -Server Connection- FUJITSU Storage ETERNUS DX Configuration Guide -Server Connection- (SAS) for Citrix XenServer This page is intentionally left blank. Preface This manual briefly explains the operations that need to be

More information

The DataCore Server. Best Practice Guidelines. August The Data Infrastructure Software Company

The DataCore Server. Best Practice Guidelines. August The Data Infrastructure Software Company The DataCore Server August 2017 The Data Infrastructure Software Company Table of contents Changes made to this document 4 Overview 5 Which versions of SANsymphony does this apply to? 6 High level design

More information

Device Mapper Multipath Enablement Kit for HP StorageWorks Disk Arrays v4.4.1 release notes

Device Mapper Multipath Enablement Kit for HP StorageWorks Disk Arrays v4.4.1 release notes Device Mapper Multipath Enablement Kit for HP StorageWorks Disk Arrays v4.4.1 release notes April 2010 H Legal and notice information Copyright 2009-2010 Hewlett-Packard Development Company, L.P. Overview

More information

HP StorageWorks Emulex Fibre Channel host bus adapters for ProLiant and Integrity servers using Linux, VMware and Citrix operating systems release

HP StorageWorks Emulex Fibre Channel host bus adapters for ProLiant and Integrity servers using Linux, VMware and Citrix operating systems release HP StorageWorks Emulex Fibre Channel host bus adapters for ProLiant and Integrity servers using Linux, VMware and Citrix operating systems release notes Part number: AA-RWF7R-TE Fifteen edition: November

More information

Red Hat Enterprise Linux 7

Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 7 DM Multipath DM Multipath Configuration and Administration Last Updated: 2018-02-08 Red Hat Enterprise Linux 7 DM Multipath DM Multipath Configuration and Administration Steven

More information

Configuration Guide -Server Connection-

Configuration Guide -Server Connection- FUJITSU Storage ETERNUS DX, ETERNUS AF Configuration Guide -Server Connection- (Fibre Channel) for Citrix XenServer This page is intentionally left blank. Preface This manual briefly explains the operations

More information

SRP Update. Bart Van Assche,

SRP Update. Bart Van Assche, SRP Update Bart Van Assche, Overview Involvement With SRP SRP Protocol Overview Recent SRP Driver Changes Possible Future Directions March 30 April 2, 2014 #OFADevWorkshop 2 Involvement with SRP Maintainer

More information

IBM XIV Host Attachment Kit for Linux. Version Release Notes. First Edition (December 2011)

IBM XIV Host Attachment Kit for Linux. Version Release Notes. First Edition (December 2011) Version 1.7.1 Release Notes First Edition (December 2011) First Edition (December 2011) This document edition applies to version 1.7.1 of the IBM XIV Host Attachment Kit for Linux software package. Newer

More information

HP StorageWorks QLogic Fibre Channel host bus adapters for ProLiant and Integrity servers using Linux and VMware operating systems release notes

HP StorageWorks QLogic Fibre Channel host bus adapters for ProLiant and Integrity servers using Linux and VMware operating systems release notes HP StorageWorks QLogic Fibre Channel host bus adapters for ProLiant and Integrity servers using Linux and VMware operating systems release notes Part number: AA-RWFNF-TE Fourteenth edition: April 2009

More information

Device Mapper Multipath Enablement Kit for HP StorageWorks Disk Arrays v4.4.0 release notes

Device Mapper Multipath Enablement Kit for HP StorageWorks Disk Arrays v4.4.0 release notes Device Mapper Multipath Enablement Kit for HP StorageWorks Disk Arrays v4.4.0 release notes Part number: AA-RWF9K-TE First edition: February 2010 Legal and notice information Copyright 2009-2010 Hewlett-Packard

More information

Configuring Server Boot

Configuring Server Boot This chapter includes the following sections: Boot Policy, page 1 UEFI Boot Mode, page 2 UEFI Secure Boot, page 3 CIMC Secure Boot, page 3 Creating a Boot Policy, page 5 SAN Boot, page 6 iscsi Boot, page

More information

Z-Drive R4 and 4500 Linux Device Driver

Z-Drive R4 and 4500 Linux Device Driver User Guide Driver Version 4.2 Contents Introduction............................................................................ 3 New features of driver version 4.2...........................................................

More information

Red Hat Enterprise Linux 4 DM Multipath. DM Multipath Configuration and Administration

Red Hat Enterprise Linux 4 DM Multipath. DM Multipath Configuration and Administration Red Hat Enterprise Linux 4 DM Multipath DM Multipath Configuration and Administration DM Multipath Red Hat Enterprise Linux 4 DM Multipath DM Multipath Configuration and Administration Edition 1.0 Copyright

More information

Windows Host Utilities Installation and Setup Guide

Windows Host Utilities Installation and Setup Guide IBM System Storage N series Windows Host Utilities 6.0.1 Installation and Setup Guide GC52-1295-06 Table of Contents 3 Contents Preface... 7 Supported features... 7 Websites... 7 Getting information,

More information

Digitizer operating system support

Digitizer operating system support Digitizer operating system support Author(s): Teledyne SP Devices Document ID: 15-1494 Classification: General release Revision: J Print date: 2018-08-08 1 Windows operating systems We deliver a Windows

More information

Lenovo SAN Manager Rapid RAID Rebuilds and Performance Volume LUNs

Lenovo SAN Manager Rapid RAID Rebuilds and Performance Volume LUNs Lenovo SAN Manager Rapid RAID Rebuilds and Performance Volume LUNs Lenovo ThinkSystem DS2200, DS4200, DS6200 June 2017 David Vestal, WW Product Marketing Lenovo.com/systems Table of Contents Introduction...

More information

HP StorageWorks Emulex fibre channel host bus adapters for ProLiant and Integrity servers using Linux and VMware operating systems release notes

HP StorageWorks Emulex fibre channel host bus adapters for ProLiant and Integrity servers using Linux and VMware operating systems release notes HP StorageWorks Emulex fibre channel host bus adapters for ProLiant and Integrity servers using Linux and VMware operating systems release notes Part number: AA-RWF7J-TE Ninth edition: January 2009 Description

More information

USING ISCSI AND VERITAS BACKUP EXEC 9.0 FOR WINDOWS SERVERS BENEFITS AND TEST CONFIGURATION

USING ISCSI AND VERITAS BACKUP EXEC 9.0 FOR WINDOWS SERVERS BENEFITS AND TEST CONFIGURATION WHITE PAPER Maximize Storage Networks with iscsi USING ISCSI AND VERITAS BACKUP EXEC 9.0 FOR WINDOWS SERVERS BENEFITS AND TEST CONFIGURATION For use with Windows 2000 VERITAS Software Corporation 03/05/2003

More information

Dell EMC SAN Storage with Video Management Systems

Dell EMC SAN Storage with Video Management Systems Dell EMC SAN Storage with Video Management Systems Surveillance October 2018 H14824.3 Configuration Best Practices Guide Abstract The purpose of this guide is to provide configuration instructions for

More information

Virtual Iron Software Release Notes

Virtual Iron Software Release Notes Virtual Iron Software Release Notes Virtual Iron Version 4.2 Copyright (c) 2007 Virtual Iron Software, Inc. 00122407R1 This information is the intellectual property of Virtual Iron Software, Inc. This

More information

Release Notes and Installation/Upgrade Guide (Release 10.0 PSP6 Update5)

Release Notes and Installation/Upgrade Guide (Release 10.0 PSP6 Update5) Release Notes and Installation/Upgrade Guide (Release 10.0 PSP6 Update5) Cumulative Change Summary Date 10.0 release May 28, 2014 Minor edits to Online Help and Memory Utilization Notes May 30, 2014 Added

More information

Before Reading This Manual This section explains the notes for your safety and conventions used in this manual.

Before Reading This Manual This section explains the notes for your safety and conventions used in this manual. Integrated Mirroring SAS User s Guide Areas Covered Before Reading This Manual Chapter 1 Chapter 2 Chapter 3 This section explains the notes for your safety and conventions used in this manual. Overview

More information

Overview. Implementing Fibre Channel SAN Boot with Oracle's Sun ZFS Storage Appliance. August By Tom Hanvey; update by Peter Brouwer

Overview. Implementing Fibre Channel SAN Boot with Oracle's Sun ZFS Storage Appliance. August By Tom Hanvey; update by Peter Brouwer Implementing Fibre Channel SAN Boot with Oracle's Sun ZFS Storage Appliance August 2012 By Tom Hanvey; update by Peter Brouwer This paper describes how to implement a Fibre Channel (FC) SAN boot solution

More information

End of Life Announcement for HP EVA P6350 and P6550 Storage

End of Life Announcement for HP EVA P6350 and P6550 Storage September 12, 2013 End of Life Announcement for HP EVA P6350 and P6550 Storage Dear Valued HP EVA Storage Customer: HP appreciates and values your business. We are writing to inform you of an upcoming

More information

VMware VMFS Volume Management VMware Infrastructure 3

VMware VMFS Volume Management VMware Infrastructure 3 Information Guide VMware VMFS Volume Management VMware Infrastructure 3 The VMware Virtual Machine File System (VMFS) is a powerful automated file system that simplifies storage management for virtual

More information

HP StorageWorks Emulex fibre channel host bus adapters for ProLiant and Integrity servers using Linux and VMware operating systems release notes

HP StorageWorks Emulex fibre channel host bus adapters for ProLiant and Integrity servers using Linux and VMware operating systems release notes HP StorageWorks Emulex fibre channel host bus adapters for ProLiant and Integrity servers using Linux and VMware operating systems release notes Part number: AA-RWF7L-TE Eleventh edition: March 2009 Description

More information

HP Serviceguard for Linux Certification Matrix

HP Serviceguard for Linux Certification Matrix Technical Support Matrix HP Serviceguard for Linux Certification Matrix Version 04.05, April 10 th, 2015 How to use this document This document describes OS, Server and Storage support with the listed

More information

December 2011 vsp-patch noarch.rpm Avaya Aura System Platform R6.0 June 2010 vsp iso

December 2011 vsp-patch noarch.rpm Avaya Aura System Platform R6.0 June 2010 vsp iso AVAYA Avaya Aura Release Notes Issue 1.1 INTRODUCTION This document introduces the Avaya Aura and describes new features, known issues and the issues resolved in this release. WHAT S NEW IN SYSTEM PLATFORM

More information

Overview. Implementing Fibre Channel SAN Boot with the Oracle ZFS Storage Appliance. January 2014 By Tom Hanvey; update by Peter Brouwer Version: 2.

Overview. Implementing Fibre Channel SAN Boot with the Oracle ZFS Storage Appliance. January 2014 By Tom Hanvey; update by Peter Brouwer Version: 2. Implementing Fibre Channel SAN Boot with the Oracle ZFS Storage Appliance January 2014 By Tom Hanvey; update by Peter Brouwer Version: 2.0 This paper describes how to implement a Fibre Channel (FC) SAN

More information

Shared Multi Port Array (SMPA)

Shared Multi Port Array (SMPA) Shared Multi Port Array (SMPA) Tested Configurations October 2018 This document lists all storage arrays that have been tested using DataCore s own SMPA validation tests. Entries that are marked as passed

More information

Shared Multi Port Array (SMPA)

Shared Multi Port Array (SMPA) Shared Multi Port Array (SMPA) Tested Configurations July 2018 This document lists all storage arrays that have been tested using DataCore s own SMPA validation tests. Entries that are marked as passed

More information

Veritas NetBackup Enterprise Server and Server 6.x OS Software Compatibility List

Veritas NetBackup Enterprise Server and Server 6.x OS Software Compatibility List Veritas NetBackup Enterprise Server and Server 6.x OS Software Compatibility List Created on July 21, 2010 Copyright 2010 Symantec Corporation. All rights reserved. Symantec, the Symantec Logo, and Backup

More information

Setup for Microsoft Cluster Service Update 1 Release for ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5

Setup for Microsoft Cluster Service Update 1 Release for ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5 Setup for Microsoft Cluster Service Update 1 Release for ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5 Setup for Microsoft Cluster Service Setup for Microsoft Cluster Service Revision: 041108

More information

Fibre Channel Specialist Lab

Fibre Channel Specialist Lab Fibre Channel Specialist Lab 203.21 Enterprise Fabric Suite Zoning Operation Objective: This lab will demonstrate the configuration and management of zoning in a QLogic fabric. Unless noted, all operations

More information

Linux Host Utilities 6.2 Quick Start Guide

Linux Host Utilities 6.2 Quick Start Guide Linux Host Utilities 6.2 Quick Start Guide This guide is for experienced Linux users. It provides the basic information required to get the Linux Host Utilities installed and set up on a Linux host. The

More information

Brocade Fabric OS DATA CENTER. Target Path Selection Guide October 17, 2017

Brocade Fabric OS DATA CENTER. Target Path Selection Guide October 17, 2017 October 17, 2017 DATA CENTER Brocade Fabric OS Target Path Selection Guide Brocade Fabric OS (Brocade FOS) Target Path releases are recommended code levels for Brocade Fibre Channel switch platforms. Use

More information

NetBackup SAN Client and Fibre Transport Troubleshooting Guide. 2 What are the components of the SAN Client feature?

NetBackup SAN Client and Fibre Transport Troubleshooting Guide. 2 What are the components of the SAN Client feature? Symantec TechNote 288437 NetBackup SAN Client and Fibre Transport Troubleshooting Guide 1 Introduction Revision F This document explains how to troubleshoot different failures that may occur while using

More information

Emulex Universal Multichannel

Emulex Universal Multichannel Emulex Universal Multichannel Reference Manual Versions 11.2 UMC-OCA-RM112 Emulex Universal Multichannel Reference Manual Corporate Headquarters San Jose, CA Website www.broadcom.com Broadcom, the pulse

More information

A3800 & A3600. Service Release SR2.1 Release Notes. A-Class. VTrak A-Class firmware version Clients

A3800 & A3600. Service Release SR2.1 Release Notes. A-Class. VTrak A-Class firmware version Clients VTrak A-Class A3800 & A3600 Service Release SR2.1 Release Notes A-Class VTrak A-Class firmware version 1.11.0000.00 Clients VTrak Mac Client Package 1.3.1 42009 VTrak Windows Client Package 1.3.0-40692

More information

3.1. Storage. Direct Attached Storage (DAS)

3.1. Storage. Direct Attached Storage (DAS) 3.1. Storage Data storage and access is a primary function of a network and selection of the right storage strategy is critical. The following table describes the options for server and network storage.

More information

High performance Oracle database workloads with the Dell Acceleration Appliance for Databases 2.0

High performance Oracle database workloads with the Dell Acceleration Appliance for Databases 2.0 High performance Oracle database workloads with the Dell Acceleration Appliance for Databases 2.0 A Dell Reference Architecture Dell Database Solutions Engineering June 2015 A Dell Reference Architecture

More information

Implementing Software RAID

Implementing Software RAID Implementing Software RAID on Dell PowerEdge Servers Software RAID is an inexpensive storage method offering fault tolerance and enhanced disk read-write performance. This article defines and compares

More information

DtS Data Migration to the MSA1000

DtS Data Migration to the MSA1000 White Paper September 2002 Document Number Prepared by: Network Storage Solutions Hewlett Packard Company Contents Migrating Data from Smart Array controllers and RA4100 controllers...3 Installation Notes

More information

OnCommand Unified Manager 7.2: Best Practices Guide

OnCommand Unified Manager 7.2: Best Practices Guide Technical Report OnCommand Unified : Best Practices Guide Dhiman Chakraborty August 2017 TR-4621 Version 1.0 Abstract NetApp OnCommand Unified is the most comprehensive product for managing and monitoring

More information

StorTrends - Citrix. Introduction. Getting Started: Setup Guide

StorTrends - Citrix. Introduction. Getting Started: Setup Guide StorTrends - Citrix Setup Guide Introduction This guide is to assist in configuring a Citrix virtualization environment with a StorTrends SAN array. It is intended for the virtualization and SAN administrator

More information

Configuring Cisco UCS Server Pools and Policies

Configuring Cisco UCS Server Pools and Policies This chapter contains the following sections: Global Equipment Policies, page 1 UUID Pools, page 4 Server Pools, page 5 Management IP Pool, page 7 Boot Policy, page 8 Local Disk Configuration Policy, page

More information

ETERNUS Disk storage systems Server Connection Guide (FCoE) for Linux

ETERNUS Disk storage systems Server Connection Guide (FCoE) for Linux Preface This document briefly explains the operations that need to be performed by the user in order to connect an ETERNUS2000 model 100 or 200, ETERNUS4000 model 300, 400, 500, or 600, or ETERNUS8000

More information

HPE Security ArcSight. ArcSight Data Platform Support Matrix

HPE Security ArcSight. ArcSight Data Platform Support Matrix HPE Security ArcSight ArcSight Data Platform Support Matrix November 28, 2016 Legal Notices Warranty The only warranties for Hewlett Packard Enterprise products and services are set forth in the express

More information

The Btrfs Filesystem. Chris Mason

The Btrfs Filesystem. Chris Mason The Btrfs Filesystem Chris Mason The Btrfs Filesystem Jointly developed by a number of companies Oracle, Redhat, Fujitsu, Intel, SUSE, many others All data and metadata is written via copy-on-write CRCs

More information

IBM Tivoli Storage Manager for HP-UX Version Installation Guide IBM

IBM Tivoli Storage Manager for HP-UX Version Installation Guide IBM IBM Tivoli Storage Manager for HP-UX Version 7.1.4 Installation Guide IBM IBM Tivoli Storage Manager for HP-UX Version 7.1.4 Installation Guide IBM Note: Before you use this information and the product

More information

SVC VOLUME MIGRATION

SVC VOLUME MIGRATION The information, tools and documentation ( Materials ) are being provided to IBM customers to assist them with customer installations. Such Materials are provided by IBM on an as-is basis. IBM makes no

More information

Whitepaper: Back Up SAP HANA and SUSE Linux Enterprise Server with SEP sesam. Copyright 2014 SEP

Whitepaper: Back Up SAP HANA and SUSE Linux Enterprise Server with SEP sesam.  Copyright 2014 SEP Whitepaper: Back Up SAP HANA and SUSE Linux Enterprise Server with SEP sesam info@sepusa.com www.sepusa.com Table of Contents INTRODUCTION AND OVERVIEW... 3 SOLUTION COMPONENTS... 4-5 SAP HANA... 6 SEP

More information

Surveillance Dell EMC Storage with FLIR Latitude

Surveillance Dell EMC Storage with FLIR Latitude Surveillance Dell EMC Storage with FLIR Latitude Configuration Guide H15106 REV 1.1 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published June 2016 Dell believes the information

More information

Microsoft Service Pack and Security Bulletin Support Addendum

Microsoft Service Pack and Security Bulletin Support Addendum Microsoft Service Pack and Security Bulletin Support Addendum to the Avid Security Guidelines and Best Practices document (Last updated 02/28/17) What s New? 1. Support announced for February s security

More information

Multipath with Virtual Iron and Open-E DSS Configured and verified by Massimo Strina, Share Distribuzione SRL (Italy)

Multipath with Virtual Iron and Open-E DSS Configured and verified by Massimo Strina, Share Distribuzione SRL (Italy) Multipath with Virtual Iron and Open-E DSS Configured and verified by Massimo Strina, Share Distribuzione SRL (Italy) December 2008 TO SET UP MULTIPATH WITH VIRTUAL IRON AND OPEN-EE DSS, PERFORM THE FOLLOWING

More information

Setup for Failover Clustering and Microsoft Cluster Service. 17 APR 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6.7

Setup for Failover Clustering and Microsoft Cluster Service. 17 APR 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6.7 Setup for Failover Clustering and Microsoft Cluster Service 17 APR 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6.7 You can find the most up-to-date technical documentation on the VMware website

More information

Setup for Failover Clustering and Microsoft Cluster Service. Update 1 16 OCT 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6.

Setup for Failover Clustering and Microsoft Cluster Service. Update 1 16 OCT 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6. Setup for Failover Clustering and Microsoft Cluster Service Update 1 16 OCT 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6.7 You can find the most up-to-date technical documentation on the VMware

More information

EXPRESSCLUSTER X 4.0 for Linux

EXPRESSCLUSTER X 4.0 for Linux EXPRESSCLUSTER X 4.0 for Linux Installation and Configuration Guide April 17, 2018 1st Edition Revision History Edition Revised Date Description 1st Apr 17, 2018 New manual. Copyright NEC Corporation 2018.

More information

Configuring and Managing Virtual Storage

Configuring and Managing Virtual Storage Configuring and Managing Virtual Storage Module 6 You Are Here Course Introduction Introduction to Virtualization Creating Virtual Machines VMware vcenter Server Configuring and Managing Virtual Networks

More information

EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE

EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE Applied Technology Abstract This white paper is an overview of the tested features and performance enhancing technologies of EMC PowerPath

More information

QuickSpecs. HP Integrity Virtual Machines (Integrity VM) Overview. Currently shipping versions:

QuickSpecs. HP Integrity Virtual Machines (Integrity VM) Overview. Currently shipping versions: Currently shipping versions: HP Integrity VM (HP-UX 11i v2 VM Host) v3.5 HP Integrity VM (HP-UX 11i v3 VM Host) v4.1 Integrity Virtual Machines (Integrity VM) is a soft partitioning and virtualization

More information

HP StorageWorks QLogic host bus adapters for x86 and x64 Linux and Windows and x86 NetWare release notes

HP StorageWorks QLogic host bus adapters for x86 and x64 Linux and Windows and x86 NetWare release notes HP StorageWorks QLogic host bus adapters for x86 and x64 Linux and Windows and x86 NetWare release notes Part number: AV-RSBNV-TE Twentieth edition: January 2007 Description These release notes contain

More information