The Host Server. AIX Configuration Guide. August 2017. The Data Infrastructure Software Company


The Host Server
AIX Configuration Guide
August 2017

This guide provides configuration settings and considerations for SANsymphony Hosts running IBM's AIX. Basic AIX administration skills are assumed, including how to connect to iSCSI and/or Fibre Channel Storage Array target ports, as well as the processes of discovering, mounting and formatting a disk device.

The Data Infrastructure Software Company

Table of contents

Changes made to this document 3
AIX compatibility list 4
The DataCore Server's Settings 6
The AIX Host's Settings 8
Known Issues 9
Appendix A 11
  Preferred Server & Preferred Path settings 11
Appendix B 12
  Configuring Disk Pools 12
Appendix C 13
  Reclaiming Storage 13
Previous changes 16

Changes made to this document

The most recent version of this document is available from here:
http://datacore.custhelp.com/app/answers/detail/a_id/838

All changes since April 2017

Added
AIX compatibility list. A note has been added regarding the qualification status of IBM PowerHA SystemMirror (formerly known as IBM PowerHA and HACMP).

All previous changes
Please see page 16.

AIX compatibility list

SANsymphony      9.0 PSP 4 Update 4 (1)            10.0 (all versions)
AIX version      With ALUA      Without ALUA       With ALUA      Without ALUA
5.2              Not Supported  Not Qualified      Not Supported  Not Supported
5.3              Not Supported  Not Qualified      Not Supported  Not Supported
6.1              Not Supported  Qualified          Not Supported  Not Qualified
7.1              Not Supported  Not Qualified      Not Supported  Qualified
7.2              Not Supported  Not Supported      Not Qualified  Not Qualified

Notes:

Qualified vs. Not Qualified vs. Not Supported
See the next page for definitions.

DataCore Server Front-End Port connections
Fibre Channel connections are supported. iSCSI connections are not.

SCSI UNMAP
SCSI UNMAP is not supported.

Reclaiming storage from DataCore Disk Pools
See Appendix C: 'Reclaiming Storage' on page 13 for version-specific 'how to' instructions.

IBM PowerHA SystemMirror (formerly IBM PowerHA and HACMP)
DataCore currently considers this product 'Not Qualified'. Self-qualification may be possible, but only for combinations of AIX and SANsymphony that are already listed as 'Qualified' in the table above.

(1) SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now End of Life. Please see: End of Life Notifications http://datacore.custhelp.com/app/answers/detail/a_id/1329

Qualified vs. Not Qualified vs. Not Supported

Qualified
This combination has been tested by DataCore, with all the host-specific settings listed in this document applied, using non-mirrored, Mirrored and Dual Virtual Disks.

Not Qualified
This combination has not yet been tested by DataCore using Mirrored or Dual Virtual Disk types. DataCore cannot guarantee 'high availability' (failover/failback, continued access etc.) even if the host-specific settings listed in this document are applied. Self-qualification may be possible; please see Technical Support FAQ #1506. Mirrored or Dual Virtual Disk types are configured at the user's own risk; however, any problems that are encountered while using AIX versions that are 'Not Qualified' will still get root-cause analysis. Non-mirrored Virtual Disks are always considered 'Qualified', even for 'Not Qualified' combinations of AIX/SANsymphony.

Not Supported
This combination has either failed 'high availability' testing by DataCore using Mirrored or Dual Virtual Disk types, or the operating system's own requirements/limitations (e.g. age, specific hardware requirements) make it impractical to test. DataCore will not guarantee 'high availability' (failover/failback, continued access etc.) even if the host-specific settings listed in this document are applied. Self-qualification is not possible. Mirrored or Dual Virtual Disk types are configured at the user's own risk; any problems that are encountered while using AIX versions that are 'Not Supported' will get best-effort Technical Support (e.g. to get access to Virtual Disks), but no root-cause analysis will be done. Non-mirrored Virtual Disks are always considered 'Qualified', even for 'Not Supported' combinations of AIX/SANsymphony.

AIX versions that are End of Life (EOL)
For versions that are listed as 'Not Supported', self-qualification is not possible. For versions that are listed as 'Not Qualified', self-qualification may be possible if there is an agreed support contract with IBM as well. Please contact DataCore Technical Support before attempting any self-qualification. For any problems that are encountered while using AIX versions that are EOL with DataCore Software, only best-effort Technical Support will be performed (e.g. to get access to Virtual Disks); root-cause analysis will not be done. Non-mirrored Virtual Disks are always considered 'Qualified'.

The DataCore Server's Settings

Operating System Type
When registering the Host, choose the appropriate menu option:

AIX 5.2 with ML9 or earlier - IBM AIX Native MPIO Legacy
AIX 5.2 with TL10 - IBM AIX
AIX 5.3 with ML5 or earlier - IBM AIX Native MPIO Legacy
AIX 5.3 with TL6 or greater - IBM AIX
AIX 6.1 and AIX 7.x - IBM AIX

Also see the Registering Hosts section from the SANsymphony Help:
http://www.datacore.com/ssv-webhelp/hosts.htm

Port roles
Ports used for serving Virtual Disks to Hosts should only have the Front End (FE) role enabled. Mixing other Port Role types may cause unexpected results, as Ports that only have the FE role enabled will be turned off when the DataCore Server software is stopped (even if the physical server remains running). This helps to guarantee that Hosts do not still try to access FE Ports, for any reason, once the DataCore software is stopped but the DataCore Server remains running. Any Port with the Mirror and/or Back End role enabled does not shut off when the DataCore Server software is stopped but remains active.

Multipathing support
The Multipathing Support option should be enabled so that Mirrored Virtual Disks or Dual Virtual Disks can be served to Hosts from all available DataCore FE ports. Also see the Multipathing Support section from the SANsymphony Help:
http://www.datacore.com/ssv-webhelp/hosts.htm

Non-mirrored Virtual Disks and Multipathing
Non-mirrored Virtual Disks can still be served to multiple Hosts and/or multiple Host Ports from one or more DataCore Server FE Ports if required; in this case the Host can use its own multipathing software to manage the multiple Host paths to the single Virtual Disk as if it were a Mirrored or Dual Virtual Disk.

Note: Hosts that have non-mirrored Virtual Disks served to them do not need Multipathing Support enabled unless they have other Mirrored or Dual Virtual Disks served as well.

Asymmetrical Logical Unit Access (ALUA) support
ALUA is not supported.

Serving Virtual Disks to the Hosts for the first time

DataCore recommends that, before serving Virtual Disks for the first time to a Host, all DataCore Front-End ports on all DataCore Servers are correctly discovered by the Host first. Then, from within the SANsymphony Console, verify that the Virtual Disk is marked Online, up to date, and that the storage sources have a host access status of Read/Write.

Virtual Disk LUNs and serving to more than one Host or Port

DataCore Virtual Disks always have their own unique Network Address Authority (NAA) identifier that a Host can use to manage the same Virtual Disk being served to multiple Ports on the same Host Server, or the same Virtual Disk being served to multiple Hosts. See the SCSI Standard Inquiry Data section from the online Help for more information on this:
http://www.datacore.com/ssv-webhelp/changing_virtual_disk_settings.htm

While DataCore cannot guarantee that a disk device's NAA is used by a Host's operating system to identify a disk device served to it over different paths, generally we have found that it is. And while there is sometimes a convention that all paths to the same disk device should always use the same LUN 'number' to guarantee consistency for device identification, this may not be technically true. Always refer to the Host operating system vendor's own documentation for advice on this.

DataCore's software does, however, always try to create mappings between the Host's ports and the DataCore Server's Front-End (FE) ports for a Virtual Disk using the same LUN number (1) where it can. The software will first find the next available (lowest) LUN 'number' for the Host-DataCore FE mapping combination being applied, and will then try to apply that same LUN number for all other mappings that are being attempted when the Virtual Disk is being served. If any Host-DataCore FE port combination being requested at that moment is already using that same LUN number (e.g. if a Host has other Virtual Disks served to it from previously), then the software will find the next available LUN number and apply that to those specific Host-DataCore FE mappings only.

(1) The software will also try to match a LUN 'number' for all DataCore Server Mirror Port mappings of a Virtual Disk too, although the Host does not 'see' these mirror mappings, so this does not technically need to be the same as the Front End port mappings (or indeed as other Mirror Path mappings for the same Virtual Disk). Having Mirror mappings use different LUNs has no functional impact on the Host or DataCore Server at all.
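The lowest-available-LUN behaviour described above can be illustrated with a small shell sketch. The helper name `next_free_lun` is hypothetical; the real assignment is done internally by SANsymphony, not by any user-facing command:

```shell
#!/bin/sh
# next_free_lun: given a space-separated list of LUN numbers already in
# use on a Host-DataCore FE port mapping, print the lowest free number.
# LUN 0 is skipped here because this guide reserves it for the dummy device.
next_free_lun() {
    used=" $1 "
    n=1
    while :; do
        case "$used" in
            *" $n "*) n=$((n + 1)) ;;   # already taken, try the next
            *) echo "$n"; return ;;     # first gap found
        esac
    done
}

# Example: LUNs 1, 2, 3 and 5 are taken, so the next Virtual Disk gets 4.
next_free_lun "1 2 3 5"
```

Note how the search restarts only for the specific mapping that clashes: in the text above, mappings that can use the common number keep it, and only the clashing Host-FE combination falls back to the next free value.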

The AIX Host's Settings

Operating system settings

The 'DataCore Support for AIX MPIO' software
For DataCore Virtual Disks to be recognized as MPIO-capable disk devices by the AIX operating system, download and install the DataCore Support for AIX MPIO software package on the AIX Host.

DataCore Software Downloads
http://datacore.custhelp.com/app/answers/detail/a_id/1419

All installation and configuration instructions can be found in the release notes.

Disk Timeouts
The disk rw_timeout must be changed to 60 seconds. To determine the current rw_timeout value, run the command:

lsattr -El hdiskX

Note: X is the number of the DataCore Virtual Disk device as discovered on the AIX Host.

Then use the chdev command to change the rw_timeout to 60:

chdev -l hdiskX -a rw_timeout=60

Configure a dummy LUN 0 device
AIX's MPIO requires a Disk Device to always be available at LUN 0, as this allows AIX to detect and use additional LUNs served to the same port on the Host. So, on each DataCore Server, create one very small, non-mirrored Virtual Disk. Then serve this Virtual Disk to all AIX Host Ports as a dummy LUN 0. Do not mirror this Virtual Disk (to avoid situations where the DataCore Server mirror partner sets the Virtual Disk as 'unavailable' to the AIX Host, which would then prevent access to or discovery of other LUNs already served to the same Host port). There is also no need to format this 'dummy' LUN; it is enough to just discover it on the AIX Host.
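When several DataCore Virtual Disks are present, the chdev step above has to be repeated per device. A minimal sketch that generates the commands for a given list of hdisk names, as a dry run only; on a real AIX Host the names would come from lsdev and the output would be reviewed before execution (the device names here are illustrative):

```shell
#!/bin/sh
# Emit one chdev command per DataCore hdisk to set rw_timeout to 60.
# The device list is passed in explicitly here; on AIX it would be taken
# from 'lsdev -Cc disk' output, filtered for the DataCore Virtual Disks.
emit_rw_timeout_cmds() {
    for d in "$@"; do
        echo "chdev -l $d -a rw_timeout=60"
    done
}

# Example with two hypothetical DataCore devices:
emit_rw_timeout_cmds hdisk2 hdisk3
```

Printing the commands rather than running them keeps the sketch safe to test anywhere and lets an administrator inspect the list before applying it.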

Known Issues

The following is intended to make DataCore Software customers aware of any issues that may affect performance or access, or generally give unexpected results under certain conditions, when AIX Hosts are used with SANsymphony. Some of the issues here have been found during DataCore's own testing, but many others are issues reported by DataCore Software customers where a specific problem had been identified and then subsequently resolved.

DataCore cannot be held responsible for incorrect information regarding IBM products. No assumption should be made that DataCore has direct communication with IBM regarding the issues listed here, and we always recommend that users contact IBM directly to see if there are any updates or fixes since they were reported to us.

For known issues with DataCore's own software products, please refer to the relevant DataCore Software component's release notes.

AIX can set the Device Queue Depth to 1 for DataCore Virtual Disks

It has been found in testing that Disk Devices that are not marked as being from IBM storage may have their LUN Queue Depths set to 1. This will significantly affect Host performance when using DataCore Virtual Disks.

How to identify the Device Queue Depth of a DataCore Virtual Disk:

# lsdev -Cc disk
hdisk0 Available 10-80-00-4,0 16 Bit SCSI Disk Drive
hdisk1 Available 10-80-00-5,0 16 Bit SCSI Disk Drive
hdisk2 Available 10-90-01     Other FC SCSI Disk Drive

Note: In the above, 'hdisk2' is the DataCore Virtual Disk. Once you have the 'hdisk' number, use this information to get the disk device's attributes:

# lsattr -El hdisk2
location                        Location Label             True
ww_name     0x210100e08b23fb22  FC World Wide Name         False
pvid        none                Physical volume identifier False
queue_depth 1                   Queue DEPTH                True
...

In this example the 'queue_depth' value has been identified as '1'. Use the chdev command to set the value to '16':

# chdev -l hdisk2 -a queue_depth=16

Note: A queue_depth value of '16' is the value that DataCore uses when qualifying AIX. Larger values are possible, but any that are greater than '32' are not supported by DataCore. Smaller values can also be used instead, if preferred. Please refer to IBM's own documentation for more information.

Re-run the 'lsattr' command above to verify the change.
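The check-and-fix above can be scripted across many devices. A hedged sketch that parses lsattr-style output and emits the chdev command only when the queue depth is 1; it is shown here against a canned sample line, since it is written for illustration rather than to run on a live AIX Host:

```shell
#!/bin/sh
# fix_queue_depth: read 'lsattr -El hdiskN' output on stdin and, if the
# queue_depth attribute is 1, print the chdev command that raises it to 16
# (the value DataCore uses when qualifying AIX, per the note above).
fix_queue_depth() {
    dev="$1"
    depth=$(awk '$1 == "queue_depth" { print $2 }')
    if [ "$depth" = "1" ]; then
        echo "chdev -l $dev -a queue_depth=16"
    fi
}

# Example with a sample attribute line as lsattr would print it:
echo "queue_depth 1 Queue DEPTH True" | fix_queue_depth hdisk2
```

Devices whose queue depth is already acceptable produce no output, so only the devices that actually need the change are listed.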

Appendix A

Preferred Server & Preferred Path settings

See the Preferred Servers and Preferred Paths sections from the SANsymphony Help:
http://www.datacore.com/ssv-webhelp/port_connections_and_paths.htm

Without ALUA enabled
If Hosts are registered without ALUA support, the Preferred Server and Preferred Path settings serve no function. All DataCore Servers and their respective Front End (FE) paths are considered equal. It is up to the Host's own operating system or failover software to determine which DataCore Server is its preferred server.

With ALUA enabled
ALUA is not supported for AIX Hosts.

Appendix B

Configuring Disk Pools

See Creating Disk Pools and Adding Physical Disks from the SANsymphony Help:
http://www.datacore.com/ssv-webhelp/about_disk_pools.htm

The smaller the SAU size, the larger the number of indexes required by the Disk Pool driver to keep track of the equivalent amount of allocated storage compared to a Disk Pool with a larger SAU size; e.g. there are potentially four times as many indexes required in a Disk Pool using a 32MB SAU size compared to one using 128MB, the default SAU size.

As SAUs are allocated for the very first time, the Disk Pool needs to update these indexes, and this may cause a slight delay in I/O completion that might be noticeable on the Host. However, this will depend on a number of factors such as the speed of the physical disks, the number of Hosts accessing the Disk Pool and their I/O read/write patterns, and the number of Virtual Disks in the Disk Pool and their corresponding Storage Profiles.

Therefore, DataCore usually recommends using the default SAU size (128MB), as it is a good compromise between physical storage allocation and I/O overhead during the initial SAU allocation index update. Should a smaller SAU size be preferred, the configuration should be tested to make sure that a potentially increased number of initial SAU allocations does not impact overall Host performance.
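The four-times figure above is simple arithmetic: the index count is the pool capacity divided by the SAU size. A small sketch, using a hypothetical 10 TB pool (the pool size is an assumption for illustration only):

```shell
#!/bin/sh
# Number of SAU indexes needed to track a fully allocated pool:
#   indexes = pool capacity / SAU size
pool_mb=$((10 * 1024 * 1024))      # hypothetical 10 TB pool, in MB

echo "128MB SAUs: $((pool_mb / 128)) indexes"   # default SAU size
echo "32MB SAUs:  $((pool_mb / 32)) indexes"    # four times as many
```

This prints 81920 indexes at the default 128MB SAU size versus 327680 at 32MB, which is where the four-fold figure comes from.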

Appendix C

Reclaiming Storage

Using SCSI UNMAP commands
SCSI UNMAP cannot be used, as it is not currently supported for AIX Hosts.

SANsymphony's Automatic Reclamation feature
DataCore Servers keep track of any 'all-zero' write I/O requests sent to Storage Allocation Units (SAUs) in all Disk Pools. When enough 'all-zero' writes have been detected to have been passed down to an entire SAU's logical address space, that SAU will be immediately assigned as 'free' (as if it had been manually reclaimed) and made available to the entire Disk Pool for future (re)use. No additional 'zeroing' of the Physical Disk or 'scanning' of the Disk Pool is required.

Important technical notes on Automatic Reclamation
The Disk Pool driver has a small amount of system memory that it uses to keep a list of all address spaces in a Disk Pool that are sent 'all-zero' writes; all other (non-zero) write requests are ignored by the Automatic Reclamation feature and not included in the in-memory list. Where all-zero write addresses are detected to be physically 'adjacent' to each other from a block address point of view, the Disk Pool driver will 'merge' these requests together in the list so as to keep its size as small as possible. Also, as entire 'all-zeroed' SAUs are re-assigned back to the Disk Pool, the record of all their address spaces is removed from the in-memory list, making space available for future all-zero writes to other SAUs that are still allocated.

However, if the write I/O pattern of the Hosts means that the Disk Pool receives all-zero writes to many non-adjacent block addresses, the list will require more space to keep track of them compared to all-adjacent block addresses. In extreme cases, where the in-memory list can no longer hold any new all-zero writes (because all the allocated system memory for the Automatic Reclamation feature has been used), the Disk Pool driver will discard the oldest records of all-zero writes to accommodate newer records of all-zero write I/O. Likewise, if a DataCore Server is rebooted for any reason, the in-memory list is completely lost, and any knowledge of SAUs that were already partially detected as having been written with all-zeroes will no longer be remembered.

In both of these cases this can mean that, over time, even though technically an SAU may have been completely overwritten with all-zero writes, the Disk Pool driver does not have a record covering the entire address space of that SAU in its in-memory list, and so the SAU will not be made available to the Disk Pool but will remain allocated to the Virtual Disk until future all-zero writes happen to re-write the same address spaces that were previously forgotten by the Disk Pool driver. In these scenarios, a Manual Reclamation will force the Disk Pool to re-read all SAUs and detect those now-missing all-zero address spaces. See the 'Manual Reclamation' section below for more information.

Reclaiming storage by sending all-zero writes to a Host's own filesystem
For AIX Hosts, a suggestion would be to create a sparse file of an appropriate size (if there is enough free space available in the file system) and then zero-fill it using the dd command:

dd if=/dev/zero of=my_file bs=1024 count=2097152

This I/O will then be detected by SANsymphony's Automatic Reclamation function (see above for more details). Also see the Performing Reclamation section from the SANsymphony Help:
http://www.datacore.com/ssv-webhelp/reclaiming_virtual_disk_space.htm

SANsymphony's Manual Reclamation feature
Manual reclamation forces the Disk Pool driver to 'read' all SAUs currently assigned to a Virtual Disk, looking for SAUs that contain only all-zero data. Once detected, such an SAU will be immediately assigned as 'free' and made available to the entire Disk Pool for future (re)use. No additional 'zeroing' of the Physical Disk is required.

Note that manual reclamation will create additional 'read' I/O on the Storage Array used by the Disk Pool; as this process runs at 'low priority', it should not interfere with normal I/O operations. However, caution is advised, especially when scripting the manual reclamation process. Manual Reclamation may still be required even when Automatic Reclamation has taken place (see the 'Automatic Reclamation' section above for more information).

How much storage will be reclaimed?
It is impossible to predict exactly how many Storage Allocation Units (SAUs) will be reclaimed. For reclamation of an SAU to take place, it must contain only all-zero block data over the entire SAU, else it will remain allocated; this is entirely dependent on how and where the Host has written its data on the DataCore LUN.

For example, if the Host has written its data in such a way that every allocated SAU contains a small amount of non-zero block data, then no (or very few) SAUs can be reclaimed, even if the total amount of data is much less than the total amount of assigned SAUs. It may be possible to use the Host operating system's own defragmentation tools to move any data that is spread out over the DataCore LUN so that it ends up as one or more large areas of contiguous non-zero block addresses. This might then leave the DataCore LUN with SAUs that now hold only all-zero data and that can then be reclaimed. However, care should be taken that the act of defragmenting the data does not itself cause more SAU allocation as the block data is moved around (i.e. re-written to new areas on the DataCore LUN) during the re-organization.

Previous changes

2017
April - Updated
General: This document has been reviewed for SANsymphony 10.0 PSP 6 Update 4. No additional settings or configurations are required.

2016
November - Updated
Appendix C - Reclaiming storage: Automatic and Manual reclamation. These two sections have been re-written with more detailed explanations and technical notes.

August - Updated
Known Issues: AIX can set the Device Queue Depth to 1 for DataCore Virtual Disks. The example showed how to set, using the chdev command, the queue_depth parameter for an hdisk to '16', and the footnote for the example merely stated that values larger than 32 should not be used, without any context. The footnote now reads: "'16' is the value that DataCore uses when qualifying AIX. Larger values are possible but any that are greater than '32' are not supported by DataCore. Smaller values can also be used instead, if preferred. Please refer to IBM's own documentation for more information."

July - Updated
This document has been reviewed for SANsymphony 10.0 PSP 5. No additional updates were required.
Added
AIX compatibility lists: AIX Version 7.2.

2015
December - Added
A new section: 'Known Issues'. The information added was previously documented in DataCore Support's FAQ 981.

November - Updated
SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now End of Life. Please see: End of Life Notifications http://datacore.custhelp.com/app/answers/detail/a_id/1329

June - Added
List of qualified AIX Versions - Notes on qualification: This section has been updated and new information added regarding the definitions of the qualified, unqualified and not supported labels. A new section on AIX versions that are no longer in development has also been added at the end of this section.

February - Updated
List of qualified AIX Versions: AIX 7.1 is now qualified with SANsymphony-V 10.x using non-ALUA settings. iSCSI is still not considered qualified at this time; all qualified versions are with Fibre Channel only.

2014 and earlier
December
No new technical information has been added, but this document now combines all of DataCore's AIX-related information from older Technical Bulletins into a single document, including:
Technical Bulletin 6: AIX Hosts
Technical Bulletin 8: Formatting Hosts' File Systems on Virtual Disks created from Disk Pools
Technical Bulletin 11: Disk Timeout Settings on Hosts
Technical Bulletin 16: Reclaiming Space in Disk Pools

Added
Which Distributions are qualified? New tables show which versions are explicitly qualified, unqualified and not supported with SANsymphony-V 8.1 PSP1 Update 4, 9.x and 10.x, and whether the configuration is with or without ALUA-enabled Hosts. Note that the minimum requirement for SANsymphony-V 8.x is now 8.1 PSP1 Update 4.
Appendix A: This section gives more detail on the Preferred Server and Preferred Path settings with regard to how they may affect a Host.
Appendix B: This section incorporates information regarding Reclaiming Space in Disk Pools (from Technical Bulletin 16) that is specific to AIX Hosts.

Updated
Host Settings: Improved explanations for most of the required Host Settings and DataCore Server Settings generally.

Technical Bulletin 6: AIX Hosts
April 2013 - Removed all references to SANmelody, as it reached End of Life on December 31 2012. Removed all references to iSCSI as this is not supported with AIX.
July 2012 - Updated for SANsymphony-V 9.x. No new technical information.

January 2012
Updated DataCore Server and Host minimum requirements. Removed all references to End of Life SANsymphony and SANmelody versions that are no longer supported as of December 31 2011.

June 2011
Added AIX 7.1.

November 2011
Removed all references to End of Life SANsymphony and SANmelody versions that are no longer supported as of July 31 2011.

October 2011
Added SANsymphony-V 8.x.

July 2009
Added AIX 6.1.x.

March 2009
Initial publication of Technical Bulletin. Added AIX 5.2 TL10.

COPYRIGHT

Copyright 2017 by DataCore Software Corporation. All rights reserved.

DataCore, the DataCore logo and SANsymphony are trademarks of DataCore Software Corporation. Other DataCore product or service names or logos referenced herein are trademarks of DataCore Software Corporation. All other products, services and company names mentioned herein may be trademarks of their respective owners.

ALTHOUGH THE MATERIAL PRESENTED IN THIS DOCUMENT IS BELIEVED TO BE ACCURATE, IT IS PROVIDED AS IS AND USERS MUST TAKE ALL RESPONSIBILITY FOR THE USE OR APPLICATION OF THE PRODUCTS DESCRIBED AND THE INFORMATION CONTAINED IN THIS DOCUMENT. NEITHER DATACORE NOR ITS SUPPLIERS MAKE ANY EXPRESS OR IMPLIED REPRESENTATION, WARRANTY OR ENDORSEMENT REGARDING, AND SHALL HAVE NO LIABILITY FOR, THE USE OR APPLICATION OF ANY DATACORE OR THIRD PARTY PRODUCTS OR THE OTHER INFORMATION REFERRED TO IN THIS DOCUMENT. ALL SUCH WARRANTIES (INCLUDING ANY IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, FITNESS FOR A PARTICULAR PURPOSE AND AGAINST HIDDEN DEFECTS) AND LIABILITY ARE HEREBY DISCLAIMED TO THE FULLEST EXTENT PERMITTED BY LAW.

No part of this document may be copied, reproduced, translated or reduced to any electronic medium or machine-readable form without the prior written consent of DataCore Software Corporation.