IBM SONAS Storage Intermix


The best practices guide

Jason Auvenshine, Storage Architect
Tom Beglin, Product Architect
IBM Systems and Technology Group
November 2013

Copyright IBM Corporation, 2013

Table of Contents

Abstract
Introduction
Understanding Appliance and Gateway Storage
    Introduction
    Intermix Configurations
High Level Description of Intermix Approaches and Process
    Introduction
Intermixing different disk systems within a SONAS system
Adding SONAS storage nodes (2851-SS2) and quorum node considerations
Matching drive types in the same GPFS storage pool
    Specific recommendations for DDN and DCS3700 Spinning Drive Intermix
    Specific recommendations for DDN and DCS3700 SSD Intermix
    SSD Considerations for all storage types
Matching NSD sizes and number of NSDs in the same GPFS storage pool
    Introduction
    Impacts with varied number and performance of NSDs from different storage devices in the same GPFS storage pool
    Impacts with varied capacity NSDs in the same GPFS storage pool
Using different GPFS storage pools
    When should I use different GPFS storage pools?
    Placement and migration policies in multi-pool systems
    Monitoring metadata space utilization
    GUI navigation
    Metadata utilization notifications
Separate GPFS file systems for intermixed storage
Setting optimal storage device settings
Setting optimal GPFS parameters
    Failure Groups
Understanding limitations and restrictions in an intermixed system
Implementation and post-implementation considerations
    Warnings seen during the RXC MES
    Storage Building Block Numbers
    Rebalancing file storage (restripe)
    Customer Questionnaire
Summary
Acronyms
Resources
About the authors
Trademarks and special notices

Abstract

This white paper explains the best practices for implementing the IBM SONAS storage system using a mixture of back-end storage technologies ("intermix"). Currently, intermix is supported between the DDN storage (IBM 2851-DR1) and the DCS3700 storage (IBM 2851-DR2 or IBM 1818-80C) back ends.

Introduction

This paper explains the recommended best practices for mixing SONAS back-end storage systems. For the initial intermix release, the following configurations are supported: mixing DDN and DCS3700 back-end storage of the same drive type in the same GPFS storage pool and file system; a specific use case of SSDs for metadata in the system pool; and mixing DDN and DCS3700 back-end storage of the same or different drive types in different GPFS storage pools within the same file system. No other intermix configurations are supported at this time. Migration from and removal of DDN storage is also not supported at this time.

The primary considerations for storage intermix covered in this guide include:

    Understanding Appliance and Gateway storage
    High-level description of intermix approaches
    Matching drive types in the same GPFS storage pool
        SSD considerations
    Matching NSD sizes and number of NSDs in the same GPFS storage pool
        What performance impacts to expect with a varied number of NSDs from different storage devices in the same GPFS storage pool
        What performance impacts to expect with varied capacity disks in the same storage pool
    Choosing when to use different GPFS storage pools
        Placement and migration policies in multi-pool systems
        Monitoring metadata space utilization
    Setting optimal storage device settings
        Segment size
    Setting optimal GPFS parameters
        Failure groups
        Block size
        Matching existing parameters in the same pool
    Understanding limitations and restrictions in an intermixed system
    Implementation and post-implementation considerations
        Warnings in the RXC MES
        Rebalancing file storage (restripe)
        Customer questionnaire

Topics in this guide are presented in conceptual groups rather than in chronological implementation order. This guide does not take the place of following the steps in the SONAS Gateway Installation Guide and the SONAS MES Installation Instructions. Therefore, it is recommended to look through this best practices guide and determine what all of your setup parameters should be before using the aforementioned guides to perform the actual setup steps.

Terms: The SONAS storage controller (IBM MTM 2851-DR1) and SONAS disk storage expansion unit (IBM MTM 2851-DE1) are generally known as the DDN storage products because they are OEM'd from a company named DataDirect Networks (DDN).

In this document:

    All references to the DCS3700 storage controller should be taken to mean both the DCS3700 storage controller sold under IBM Machine Type/Model 1818-80C and the rebranded SONAS version of this product under IBM SONAS Machine Type/Model 2851-DR2.
    All references to the DCS3700 expansion unit should be taken to mean both the DCS3700 expansion unit sold under IBM Machine Type/Model 1818-80E and the rebranded SONAS version of this product under IBM SONAS Machine Type/Model 2851-DE2.

Understanding Appliance and Gateway Storage

Introduction

What is the difference between a SONAS Appliance and a SONAS Gateway?

A SONAS Appliance is a SONAS that was originally sold with internal DDN storage, MTM 2851-DR1. This storage is managed by the SONAS GUI and SONAS RAS package. Any SONAS with only MTM 2851-DR1 storage is a SONAS Appliance.

A SONAS Gateway is a SONAS that was originally sold with no internal DDN storage and which connects to external storage. External storage is not managed by the SONAS GUI and SONAS RAS package. Instead, external storage is managed by the storage's own GUI and RAS package. SONAS Gateway external storage types include XIV, Storwize V7000, and DCS3700 in both MTM 1818-80C and MTM 2851-DR2.

What is an intermixed system?

Intermixed systems are systems with more than one type of storage. Different types of storage must always be connected to separate storage pods (pairs of 2851-SSx storage nodes). In an intermixed system, the system itself may no longer be entirely appliance or entirely gateway. A storage pod connected to DDN (2851-DR1) storage is an appliance storage pod. Storage in appliance storage pods in an intermixed system continues to be managed by the SONAS GUI and SONAS RAS package. A storage pod connected to any other storage is a gateway storage pod. Storage in gateway storage pods in an intermixed system continues to be managed by the external storage GUI and RAS package.

Intermix Configurations

SONAS storage pods

Different types of storage, meaning storage with different machine types, must always be placed in separate storage pods in an intermixed system. There is no exception to this rule. When adding storage pods, all the rules for ordering that storage in a uniform system (number of devices, number of expansions, drive types, device options, and so on) also apply to an intermixed system.

GPFS storage pools

Storage from different devices may be placed in different GPFS storage pools. Certain intermix configurations also allow storage from different devices to be placed in the same GPFS storage pool. This is only recommended where the drive types and NSD sizes are similar, when SSDs are added to the system pool for metadata, or for migration purposes. The following sections describe the considerations when using different devices in the same GPFS storage pool. Consider these carefully before deciding which configuration to use. If possible, placing different devices in different GPFS storage pools presents less performance and availability risk. Currently, only DDN (2851-DR1) storage and DCS3700 (2851-DR2 / 1818-80C) storage may be intermixed in the same GPFS storage pool.

GPFS file systems

Storage from different devices may be placed in different GPFS file systems. Certain intermix configurations also allow storage from different devices to be placed in the same GPFS file system. This is only recommended where the file system parameters desired for the two devices are similar, or for migration purposes. The following sections describe the considerations when using different devices in the same GPFS file system. Consider these carefully before deciding which configuration to use. If possible, placing different devices in different GPFS file systems presents less risk to performance and availability. Currently, only DDN (2851-DR1) storage and DCS3700 (2851-DR2 / 1818-80C) storage may be intermixed in the same GPFS file system.

High Level Description of Intermix Approaches and Process

Introduction

The system context for most intermixed SONAS systems is an existing IBM customer that has a SONAS appliance system with integrated DDN storage products and wishes to extend the storage capacity of that SONAS system with IBM DCS3700 storage. Because that is the most common case, it will be used in the examples in this guide. Where you see references to existing storage, that can be presumed to mean DDN. Where you see references to new storage, that can be presumed to mean IBM DCS3700.

The customer has a SONAS Base Rack (MTM 2851-RXA) with one or more SONAS storage controllers (MTM 2851-DR1) and optionally one or more SONAS disk storage expansion units (MTM 2851-DE1). The existing SONAS system may also have one or more SONAS Storage Expansion Racks (MTM 2851-RXB) with additional SONAS storage nodes, SONAS storage controllers (2851-DR1), and SONAS disk storage expansion units (2851-DE1).

To extend the storage capacity of the SONAS appliance system with IBM DCS3700 storage, the customer, with the help of their IBM representative or business partner, performs the following high-level activities:

1. The customer purchases a SONAS Interface Expansion Rack (2851-RXC) with one or more pairs of SONAS storage nodes (MTM 2851-SS2).
2. The customer purchases one or more DCS3700 storage systems. The customer's IBM Service Support Representative (SSR) installs the DCS3700 storage systems into rack(s) provided by the customer.
3. The customer connects one or two DCS3700 storage systems to each pair of SONAS storage nodes (2851-SS2) in the SONAS Interface Expansion Rack.
4. The customer configures the storage in the DCS3700 using the DS Storage Manager, which is the standard IBM software for configuring, monitoring, and managing IBM DS storage systems. The high-level tasks for configuring the storage on the DCS3700 involve:
    Creating a host group of the SONAS storage nodes
    Configuring global hot spare disks
    Creating RAID arrays
    Creating logical drives out of the available space in the RAID arrays
5. After the logical drives are configured on the DCS3700 storage system(s), the customer can use the mkdisk command to discover the DCS3700 logical drives (LUNs) visible to the SONAS storage nodes and to create the corresponding NSDs. There are some outstanding defects with the mkdisk command that are scheduled to be fixed in a future SONAS release; in the interim, LUNs may also be discovered using the cnaddstorage command. Note that the cnaddstorage command requires root access. The customer can query the NSDs created using the lsdisk command regardless of how they were added to SONAS.

NOTE: Whenever executing GPFS NSD or other configuration commands, always make sure one configuration change completes in the broadcast to all nodes before running another one. If two change commands execute out of order on different nodes, it is possible for GPFS to assert and cause an outage.

6. The customer can use the mmchdisk command to change the attributes of the newly created NSDs, such as the file system pool to which they belong, the usage type (dataandmetadata, dataonly, metadataonly), and the failure group to associate with the NSD.
7. The customer can add the NSDs to a new GPFS file system using the mkfs command, to a new storage pool(s) in an existing GPFS file system, or to existing storage pools in an existing GPFS file system using the chfs command with the --add or -pool options (a brief command sketch appears at the end of this section).
8. The following diagram depicts the scenario of:
    An existing SONAS appliance system with a SONAS Base Rack (2851-RXA) containing SONAS interface nodes, storage nodes, storage controllers, and disk storage expansion units.
    A SONAS Interface Expansion Rack (2851-RXC) added to this SONAS appliance system, containing a single pair of SONAS storage nodes (2851-SS2).
    Another (IBM or customer supplied) rack containing two DCS3700 storage systems, which are direct Fibre Channel (FC) attached to the SONAS storage nodes in the SONAS Interface Expansion Rack (2851-RXC). In this diagram, each DCS3700 system contains a DCS3700 controller (1818-80C) and two DCS3700 expansion units (1818-80E).

Figure 1 - SONAS Appliance extended with external DCS3700 storage ("IBM System Storage SONAS Appliance with embedded DDN storage intermixed with external DCS3700 storage"; rack elevation diagram not reproduced here)

Notes on the diagram:
1. SONAS supports storage nodes in the SONAS Interface Expansion Rack (2851-RXC).
2. Existing SONAS Appliance customers can order the Interface Expansion Rack (2851-RXC) with pairs of storage nodes.
3. Each storage node pair can be attached to one or two external disk storage systems (for example, DCS3700).
4. Each pair of SONAS storage nodes (2851-SS2) has only one type of storage (V7000, DCS3700, XIV) attached to it.
5. External disk systems are not configured, managed, or monitored by SONAS RAS code.
6. External disk systems are configured, serviced, and supported using their normal configuration tools and processes.
7. A SONAS Appliance configuration may have one or more SONAS Storage Expansion Racks (2851-RXB) attached to it.

The diagram above is just a starting example. The system may continue to be expanded by:

    Adding additional pairs of SONAS storage nodes (2851-SS2) to the SONAS Interface Expansion Rack (2851-RXC). Additional pairs of SONAS storage nodes may be added until the maximum of 32 nodes (this includes all node types in a SONAS system) is reached. This limit of 32 nodes is for a SONAS system ordered with the SONAS 36-port InfiniBand switch (MTM 2851-I36).
    Direct Fibre Channel attaching one or two DCS3700 storage systems to each additional pair of SONAS storage nodes.
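As referenced in step 7 above, the command-level flow for bringing newly created NSDs into an existing file system can be sketched as follows. This is a minimal illustration only: the file system name and NSD names are hypothetical, the --add form is the one shown in this guide, and mkfs would be used instead when creating a brand-new file system. Run one change at a time and let it complete on all nodes before issuing the next command.

    lsdisk                                           # confirm the NSDs created from the DCS3700 LUNs are visible
    chfs gpfs0 --add dcs3700_nsd_01,dcs3700_nsd_02   # add the new NSDs to the existing file system gpfs0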

Intermixing different disk systems within a SONAS system

Within the context of a SONAS system, DDN storage and DCS3700 storage may be intermixed in a variety of different ways. The following intermix scenarios are supported and described in the following sections. The different intermix capabilities are listed in their order of preference, with Approach #1 being the most desirable and Approach #3 the least desirable. However, each customer needs to weigh the advantages and disadvantages of each approach and their desired goals to decide on the best approach for them.

Approach #1 - Using separate GPFS file systems for storage residing on DCS3700 storage systems
Approach #2 - Placing DCS3700 storage in separate file system pools within the same GPFS file system containing file system pools having DDN storage
Approach #3 - Placing DCS3700 storage in the same file system pools as DDN storage in the same file system

The following outlines some of the high-level advantages and disadvantages of each approach:

Approach #1 - Separate GPFS file systems for DCS3700 storage
    Advantages:
    - Complete separation from a performance and availability perspective
    Disadvantages:
    - Requires creation and management of separate GPFS file systems and NAS exports
    - Storage space in DDN systems and DCS3700 systems is not shared within a single file system

Approach #2 - Separate file system storage pools for DCS3700 storage in the same file system with file system pools with DDN storage
    Advantages:
    - Ability to utilize existing GPFS file systems and NAS exports
    - Separation of storage in different disk storage systems to different file system pools
    - File placement policies may be used to steer certain files to DCS3700 file system pools
    - ILM policies may be used to perform file system pool migration to migrate files from DDN file system pools to DCS3700 file system pools
    - This approach provides for the eventual migration of files/directories from DDN storage to DCS3700 storage without having to copy the data from one file system to another file system and redirect NAS clients to different NAS exports
    Disadvantages:
    - Storage space in the different file system pools must be monitored and managed separately
    - Requires a new file placement policy to steer specific files to specific storage pools and/or migration policies to move files between storage pools
    - Expansion storage is only for non-system pools; you cannot create a new system pool on the new storage and relegate the old storage to data only without a complex migration process, which is not covered in this guide

Approach #3 - DCS3700 storage and DDN storage intermixed in the same GPFS file system pools in the same file system
    Advantages:
    - The storage space in the file system pool can be managed as a single entity even though it contains both DDN storage and DCS3700 storage
    - This approach provides for the eventual migration of files/directories from DDN storage to DCS3700 storage without having to copy the data from one file system to another file system and redirect NAS clients to different NAS exports
    Disadvantages:
    - No separation of storage from a performance and availability perspective
    - An individual file/directory may be spread across both DDN storage systems and DCS3700 storage systems

Adding SONAS storage nodes (2851-SS2) and quorum node considerations

When adding new storage nodes to a SONAS system, you have the option during the installation process to assign the storage nodes as GPFS quorum nodes. The recommendations regarding the number of GPFS quorum nodes in a SONAS system are the following:

    There must always be an odd number of quorum nodes in the system.
    There must be a minimum of three (3) quorum nodes in the system.
    One-half of the nodes in the system (rounded up to an odd number) should be assigned as quorum nodes.
    There may be a maximum of seven (7) quorum nodes in the system.

The lsnode CLI command can be used to list which nodes are currently assigned as quorum nodes. If the SONAS system currently has fewer than seven quorum nodes (such as 3 or 5), then newly added storage nodes (added as part of expanding the SONAS system with DCS3700 storage) can be added as quorum nodes. Storage nodes are preferred as GPFS quorum nodes, as opposed to SONAS interface nodes. If necessary, existing interface nodes that are currently assigned as quorum nodes can be changed to non-quorum nodes. The chnode CLI command can be used to change a quorum node to a non-quorum node.

Matching drive types in the same GPFS storage pool

With the exception of the SSD use case described below, only drives of the same type (SSD vs. SAS vs. NL-SAS) should be placed into the same GPFS storage pool. GPFS storage pools are intended to group storage with nearly identical performance characteristics. If a GPFS storage pool contains storage with significantly different performance characteristics, the overall performance of the system can suffer severe degradation. The following matrix describes which types of drives may generally be used together in the same GPFS storage pool:

Existing drives: 1 TB, 2 TB, or 3 TB 7200 RPM NL-SAS
New drives: 2 TB, 3 TB, or 4 TB 7200 RPM NL-SAS
OK to add new drives to the same GPFS storage pool? YES, but be sure to consider the information in the section "Matching NSD sizes and number of NSDs in the same GPFS storage pool."

Existing drives: 1 TB, 2 TB, or 3 TB 7200 RPM NL-SAS
New drives: 300 GB, 600 GB, 900 GB, or 1.2 TB 10K or 15K RPM SAS
OK to add new drives to the same GPFS storage pool? NO, except that existing systems where SAS drives are metadata-only and NL-SAS drives are data-only in the system pool may add like drives in the system pool the same way.

Existing drives: 300 GB, 600 GB, or 900 GB 10K RPM or 15K RPM SAS
New drives: 2 TB, 3 TB, or 4 TB 7200 RPM NL-SAS
OK to add new drives to the same GPFS storage pool? NO, except that existing systems where SAS drives are metadata-only and NL-SAS drives are data-only in the system pool may add like drives in the system pool the same way.

Existing drives: 300 GB, 600 GB, or 900 GB 10K RPM or 15K RPM SAS
New drives: 300 GB, 600 GB, 900 GB, or 1.2 TB 10K or 15K RPM SAS
OK to add new drives to the same GPFS storage pool? YES, but it is best to also match rotational speed (10K with 10K, 15K with 15K), and also be sure to consider the information in the section "Matching NSD sizes and number of NSDs in the same GPFS storage pool."

Existing drives: Any type of spinning disk drive
New drives: Any type of SSD
OK to add new drives to the same GPFS storage pool? NO, except for the use case in the "SSD Considerations" section below.

Specific recommendations for DDN and DCS3700 Spinning Drive Intermix

As a general rule of thumb, GPFS performs best when all NSDs in the same file system pool have roughly equivalent performance characteristics (response time, throughput). If you are going to mix NSDs residing on DDN storage with NSDs residing on DCS3700 storage in the same GPFS file system pool, with the same usage type, then you are strongly advised to follow these recommendations:

    The NSDs corresponding to the LUNs in different storage systems should be backed by physical disk drives having the same rotational speed (7.2K RPM, 10K RPM, or 15K RPM).
    The NSDs corresponding to the LUNs in different storage systems should be backed by RAID arrays utilizing the same RAID technology (RAID-6).
    The NSDs corresponding to the LUNs in different storage systems should be backed by RAID arrays using the same RAID array width (8+P+Q), meaning 8 data drives and two parity drives.

Based on these guidelines, IBM believes it is possible to intermix the following types of storage technology within the same GPFS file system pool. Each row in the table represents a configuration utilizing the same disk drive technology, RAID level, and RAID array width that can be mixed in the same GPFS file system pool.

SONAS Storage Controller (2851-DR1) & SONAS Disk Storage Expansion Unit (2851-DE1)  |  DCS3700 Storage Controller (1818-80C) & DCS3700 Expansion Unit (1818-80E)
1 TB*, 2 TB, or 3 TB 7.2K NL-SAS, RAID-6, 8+P+Q  |  2 TB, 3 TB, or 4 TB* 7.2K NL-SAS, RAID-6, 8+P+Q
...GB 15K SAS*, 600 GB 15K SAS, RAID-6, 8+P+Q  |  ...GB 15K SAS*, RAID-6, 8+P+Q
600 GB 10K SAS, ...GB 10K SAS, RAID-6, 8+P+Q  |  ...GB 10K SAS, ...GB 10K SAS, 1.2 TB* 10K SAS, RAID-6, 8+P+Q

* There is not an exact size match for this type of drive in an intermixed system. Please consider the recommendations in the section "Impacts with varied capacity NSDs in the same GPFS storage pool" carefully before planning to intermix these drives in the same GPFS storage pool.

Specific recommendations for DDN and DCS3700 SSD Intermix

One common use case for attaching DCS3700 storage systems to an existing SONAS appliance system is to use solid state disks in the DCS3700 storage system(s) for GPFS file system metadata. This use case is desirable for SONAS customers having very large file systems containing hundreds of millions of files. As the number of files within the file system grows very large, the time it takes to perform GPFS policy engine scans increases such that it impacts the performance of certain SONAS advanced functions. Some SONAS advanced functions, such as asynchronous replication and TSM backup, perform GPFS policy engine scans of the file system metadata to determine the set of files on which to operate.

For customers wishing to pursue this option, the following high-level steps can be taken with the help of their IBM representative or business partner:

1. The customer purchases a SONAS Interface Expansion Rack (2851-RXC) with one or more pairs of SONAS storage nodes (MTM 2851-SS2).
2. The customer purchases and installs one or more DCS3700 storage systems, with solid state disks in the DCS3700 system(s).
3. The customer connects one or two DCS3700 storage systems to each pair of SONAS storage nodes (2851-SS2) in the SONAS Interface Expansion Rack.
4. The customer configures the storage in the DCS3700 using the DS Storage Manager, which is the standard IBM software for configuring, monitoring, and managing IBM DS storage systems.
5. After the logical drives are configured on the DCS3700 storage system(s), the customer can use the mkdisk command to discover the DCS3700 logical drives (LUNs) visible to the SONAS storage nodes and to create the corresponding NSDs. Alternatively, cnaddstorage can be used to create the NSDs; at the time of this writing, cnaddstorage is the preferred method of adding NSDs. The customer can query the newly created NSDs using the lsdisk command.

NOTE: Whenever executing GPFS NSD or other configuration commands, always

make sure one configuration change completes in the broadcast to all nodes before running another one. If two change commands execute out of order on different nodes, it is possible for GPFS to assert and cause an outage.

6. The customer can change the storage pool to which the newly created NSDs belong to the system pool, and the usage type to metadata only, using the -pool system and -usagetype metadataonly options on the chdisk command.
7. The customer can then add the newly created NSDs (as metadata-only disks) to an existing file system using the chfs command.

If the customer wishes to have all new file system metadata directed to the NSDs corresponding to the DCS3700 logical drives on the SSD RAID arrays, then the customer can change the usage type of all existing NSDs (corresponding to DDN LUNs) in the system pool to dataonly. At the time of this writing, the usagetype of an NSD that is already part of a file system cannot be changed using the chdisk command. Instead, mmchdisk must be used as follows:

    mmchdisk <file system> change -F Descfile

where Descfile contains the description of the spinning NSDs that must be changed, one per line, as follows:

    nsdname:::dataonly

Once this is done, all new file system metadata will be written only to the NSDs corresponding to DCS3700 logical drives on the SSD RAID arrays.

The full performance benefit of having GPFS metadata on RAID arrays on solid state disks will not be achieved if the system pool within the file system contains a mixture of NSDs containing GPFS file system metadata: some NSDs corresponding to DDN LUNs (on RAID arrays on spinning hard disk drives) and some NSDs corresponding to DCS3700 LUNs on RAID arrays on solid state disks. To achieve the full performance benefit of having GPFS metadata on RAID arrays on solid state disks, you should eventually migrate existing GPFS file system metadata from DDN LUNs to DCS3700 LUNs on solid state disks.

To migrate all of the existing GPFS file system metadata from the NSDs residing on DDN LUNs to the NSDs corresponding to the DCS3700 LUNs on the SSD arrays, follow this procedure (this assumes that you have already changed the usagetype of the DDN LUNs to dataonly as described above): issue the mmrestripefs command with the -r option, or restripefs with --balance. The mmrestripefs command is very I/O intensive and should only be run in periods of low system activity, as it is very likely to impact the performance of network file serving to NAS clients. It will also block certain file system data management commands, such as snapshot creation and deletion. If it is necessary to stop and restart the restriping task, the mmrestripefs command with the -r option should be used. You may need to stop and restart the restriping task several times in order to run it during off-peak hours, as well as to allow blocked tasks to proceed. See the "Rebalancing file storage (restripe)" section of this guide for more details.

Prior to migrating GPFS file system metadata from DDN LUNs to DCS3700 logical drives on SSD arrays, the customer should do careful planning and ensure that they have sufficient space for the current GPFS file system metadata and future GPFS file system metadata on

the DCS3700 logical drives on the SSD arrays. The NSDs must also be spread across failure groups such that the file replication setting can be maintained. In summary, the new SSD-based metadata NSDs:

    Must have equivalent or better performance than the NSDs they are replacing, as seen by the file system. Consider the number of physical disks, the rotational speed of the disks being logically replaced, the RAID levels used (for example, RAID 10 or RAID 5), and the storage controllers that are hosting them.
    Must have enough capacity to hold the metadata, including the space needed for replication if it is set.
    Must have failure groups that are equivalent to those of the NSDs they are replacing.

If you are going to use solid state disks inside the DCS3700 storage system(s) specifically for GPFS file system metadata, you need to estimate the amount of space that will be needed for GPFS file system metadata. The amount of space needed for GPFS file system metadata depends on a number of factors, including:

    The total number of files and directories that will exist within the file system
    The length of file names and directory names
    Whether files have GPFS extended attributes associated with them. In a SONAS system, files that have been scanned for viruses using the anti-virus capability of SONAS have GPFS extended attributes associated with them. In addition, files that are managed via the Hierarchical Space Management (HSM) feature have extended attributes associated with them.
    The number of snapshots within the file system and the number of changed blocks in each snapshot
    Whether metadata replication is enabled. Metadata replication is the default for GPFS file systems inside a SONAS system.

As a general rule of thumb, the amount of space needed for file system metadata ranges between 2% and 5% of the total file system space. We recommend using 5% of total file system space as a general rule of thumb for the amount of space needed for file system metadata in a SONAS system, for the following reasons:

    Metadata replication is the default setting when creating a file system in SONAS
    Some SONAS functions (anti-virus and HSM) store GPFS extended attributes for managed files
    Customers frequently use snapshots with GPFS file systems in a SONAS system

For a more detailed discussion about the space needed for GPFS file system metadata, see the IBM developerWorks article "Data and Metadata - Separate or mixed".

Summary of SSD Considerations for DDN and DCS3700 Intermix

The task of adding SSD metadata storage to the file system is essentially composed of two key steps:

a. Add NSDs to the file system and change the NSD usage of the existing spinning disks.

b. Perform a file system restripe task to move the metadata from the spinning NSDs to the SSD NSDs. See the "Rebalancing file storage (restripe)" section of this guide for more details.

1. When adding SSDs to the system pool, it is recommended that they be added as metadataonly.
2. The RAID level of the SSD NSDs should be set to RAID 1 or RAID 10 and organized such that drawer failure protection is maintained. In workloads where the metadata load is lower, a RAID 5 organization can be considered. Note that RAID 5 is supported for SSD drives only.
3. SSDs are fast, but not infinitely so. A large number of spinning disks can still outperform a small number of SSDs. Therefore, the number of SSDs used should not be less than 10% of the number of spinning disks they are replacing for metadata; but also pay attention to the total capacity recommendation below, as it is more likely to determine the number of SSDs. In metadata-intensive workloads, the number of SSDs should be at least 20% of the number of spinning disks they are replacing for metadata. The number of NSDs used for the metadata should not be far less than 16 when it is lower than the number of NSDs being replaced. For example, if you are replacing 24 15K SAS NSDs with 32 SSDs, the number of RAID 10 NSDs would be 16. In a less metadata-intensive application, 24 SSDs (12 RAID 10 NSDs) will still be acceptable.
4. The failure groups of the NSDs added should be sufficient to support the file system replication setting. For example, if the file system replication is set to meta, then at least two failure groups must be available. NSDs in the multiple failure groups that are participating in the metadata replication should be as identical as practical. The number of NSDs in one failure group should be very close to the number of NSDs in the other failure group participating in the replication.
5. The total capacity of the NSDs for metadata should be close to 2% of the total file system space for low to medium metadata-intensive workloads with medium-size files and few data management operations such as snapshots, HSM, and NDMP backup. In a highly metadata-intensive environment, the metadata allocation should be raised to 5%. The current usage of metadata can be found in the output of mmlspool:

    [root@mercury03.mgmt001st001 ~]# mmlspool gpfs0
    Storage pools in file system at '/ibm/gpfs0':
    Name     Id  BlkSize  Data  Meta  Total Data in (KB)  Free Data in (KB)  Total Meta in (KB)  Free Meta in (KB)
    system   ..  ... KB   yes   yes   ...                 ... ( 31%)         ...                 ... ( 33%)
    data     ..  ... KB   yes   no    ...                 ... ( 79%)         0                   0 (  0%)
    silver   ..  ... KB   yes   no    ...                 ... ( 90%)         0                   0 (  0%)

6. SSD-based metadata NSDs should be created using the cnaddstorage procedure. At the time of this writing, the mkdisk command is not suitable for creating these NSDs.
7. Change the disk usage and failure group using the mmchdisk command:

    mmchdisk <file system> change -F Descfile

where Descfile contains the description of the NSDs that must be changed, one per line, as follows:

    nsdname:::metadataonly -failuregroup <failure group>

8. Make sure all ongoing data management tasks (such as creation or deletion of snapshots, backup of the file system, async replication, and so on) are either completed or stopped. Request a maintenance window during which the overall system load will be minimal. Before attempting to add the NSDs to the existing file system, perform both hardware and software health checks (you may want to disable call home during the maintenance window):

    [mgmt]# lshealth
    [mgmt]# cnrssccheck --nodes=all --checks=all
    [mgmt]# tail -f /var/log/messages   <-- verify no problems; log should be stable

9. Add the new SSD NSDs or spinning disk NSDs to the file system using:

    chfs <file system> --add <NSD list>

Consider arranging the list in a round-robin order so that the storage nodes as well as the controllers supporting the NSDs will be alternately accessed. It may take one to three minutes to add an NSD to the current file system, so if you are adding 30 NSDs, it may take up to one and a half hours to complete the task of adding the NSDs to the file system. Note that as soon as NSDs are added to the file system, metadata space will be allocated on them and they will start to receive I/O load.

10. Change the NSD usage type of the existing spinning NSDs to dataonly. At the time of this writing, chdisk cannot change the usagetype of an NSD that is already part of a file system. Instead, the following command should be used:

    mmchdisk <file system> change -F Descfile

where Descfile contains the description of the spinning NSDs that must be changed, one per line, as follows:

    nsdname:::dataonly

11. Once the usagetype of the spinning disks is changed, the task of NSD addition is complete. It may be a good idea to perform a health check to make sure the system is stable and has no issues. You then need to either continue in the same maintenance window or request an additional maintenance window for the file system restripe task.

12. Restripe the file system. Note that the restriping task may take several days to weeks to complete, and it may impact performance as well as block other tasks while it is running. Before starting a restriping task, make sure all data management tasks are either completed or stopped. Use mmrestripefs <file system> -b to initiate a complete file system rebalance. If only migration of the metadata is required, you can use the -p option to move the ill-placed metadata from the spinning disks to the newly designated metadata NSDs. You may need to stop (Ctrl+C) and restart the restriping task several times in order to run it during off-peak hours, as well as to allow blocked tasks to proceed. The mmrestripefs task is composed of four phases. When stopped and restarted, it will again start from the beginning; however, mmrestripefs maintains internal checkpoints indicating the progress made in the last restripe task and will therefore skip metadata that is already moved. In net, even though it starts from the beginning, the steps that are

already completed will move fast when the task is restarted again. Plan for several of these sessions. Work closely with the customer and end users so that, when the restripe is stopped, the tasks that are normally blocked during mmrestripefs can be completed. See the "Rebalancing file storage (restripe)" section of this guide for more details.

13. After successful completion of the metadata migration operation, monitor performance for several weeks to make sure the system is stable. Watch for long disk I/O times to the metadata disks. Collect performance data if any anomalies are observed.

SSD Considerations for all storage types

Solid State Drives (SSDs) may be added to the same GPFS storage pool as spinning disks only if they are limited exclusively to storing metadata. If you wish to use SSDs to store metadata and data, or data only, then they must be placed into a different GPFS storage pool from spinning disks of any type. To add SSDs to the system pool for the storage of metadata, the following high-level steps are required:

1. Add SSD-based NSDs to the system pool as metadata only.
2. Use the mkdisk command to add the new SSD-based NSDs. You can also use cnaddstorage to add NSDs. NOTE: Whenever executing GPFS NSD or other configuration commands, always make sure one configuration change completes in the broadcast to all nodes before running another one. If two change commands execute out of order on different nodes, it is possible for GPFS to assert and cause an outage.
3. Use the mmchdisk command against the new NSDs:

    mmchdisk <file system> change -F Descfile

where Descfile contains the description of the NSDs that must be changed, one per line, as follows:

    nsdname:::metadataonly

Note that the NSDs added should span at least two failure groups if the current file system replication setting is meta. The added NSDs should have enough capacity to accommodate the current and future metadata of the file system. Note that the command used to change the disk usagetype does not take much time and is almost instantaneous. However, as a best practice, in most cases this should only be done for the addition of NSDs to a file system. Such an operation should be scheduled when no other data management operations are ongoing, particularly restriping, draining of disks, or changing the replication policy (for example, none to meta, or meta to none).

4. Change the existing spinning disk based NSDs to data only once the SSD-based NSDs are successfully added to the file system.
5. Add any new spinning disk based NSDs as data only. Add the new data disks before setting disks that were metadata + data to data only; otherwise this may result in too few data disks. Add new metadata disks, too, to make sure you do not run out of metadata disks.
6. Use the mkdisk command to add any new spinning disk based NSDs. You can also use cnaddstorage to add the newly added spinning disks.
7. Use the mmchdisk command against all newly added spinning disk NSDs:

    mmchdisk <file system> change -F Descfile

where Descfile contains the description of the spinning NSDs that must be changed, one per line, as follows:

    nsdname:::dataonly

8. Add the new SSD NSDs or spinning disk NSDs to the file system using:

    chfs <file system> --add <NSD list>

Consider arranging the list in a round-robin order so that the storage nodes as well as the controllers supporting the NSDs will be alternately accessed.

9. Change the NSD usage type of the existing spinning NSDs to dataonly using:

    mmchdisk <file system> change -F Descfile

where Descfile contains the description of the spinning NSDs that must be changed, one per line, as follows:

    nsdname:::dataonly

10. Restripe the file system. Note that the restriping task may take several days to weeks to complete, and it may impact performance as well as block other tasks while it is running. Use the mmrestripefs command with the -r option to restripe the file system; if it is necessary to stop and restart the restriping task, the same command and option should be used. You may need to stop and restart the restriping task several times in order to run it during off-peak hours, as well as to allow blocked tasks to proceed. Other data management and reconfiguration functions cannot be done concurrently; in most cases they will not be allowed anyway. From a best practice point of view, other data management tasks should not be run while the restripe is running. Although the SONAS CLI restripefs command cannot be interrupted, the underlying mmrestripefs command can be interrupted and restarted. If a restripe must be interrupted, it needs to be restarted from the beginning; however, it remembers how much work was successfully done from its internal checkpoints, and the steps that were already completed are moved through quickly. In net, it does not repeat the movement of data or metadata that is already done, although it may have to rebuild some index tables when restarted. See the "Rebalancing file storage (restripe)" section of this guide for more details.
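For illustration, the command-level flow of steps 8 through 10 above might look like the following minimal sketch. The file system name, NSD names, and descriptor file path are hypothetical, and the descriptor file format is the one shown in this guide; run each step only after the previous one has completed on all nodes, and perform the restripe in a maintenance window.

    # Step 8: add the new SSD NSDs to the existing file system gpfs0
    chfs gpfs0 --add ssd_nsd_01,ssd_nsd_02,ssd_nsd_03,ssd_nsd_04

    # Step 9: mark the existing spinning NSDs as dataonly via a descriptor file;
    #   /tmp/spinning_dataonly.desc contains one line per NSD, for example:
    #   ddn_nsd_01:::dataonly
    #   ddn_nsd_02:::dataonly
    mmchdisk gpfs0 change -F /tmp/spinning_dataonly.desc

    # Step 10: restripe so existing metadata moves off the spinning NSDs
    mmrestripefs gpfs0 -r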

Matching NSD sizes and number of NSDs in the same GPFS storage pool

Introduction

The GPFS file system used by SONAS is designed to have optimal performance when all of the NSDs used for the same usage type (metadata, data, or both) in the same GPFS storage pool have identical storage capacity and performance characteristics. GPFS accomplishes optimal performance by striping all data for a given pool and usage type across all available NSDs. Intermix scenarios where NSDs from different types of storage devices are placed into the same GPFS storage pool will all necessarily violate this ideal to some degree. If some NSDs in the pool are slower than others, the performance of the pool will suffer significant degradation. If some NSDs in the pool fill up before others, writes cannot be striped across all NSDs and, again, performance will suffer significant degradation.

The easiest and surest way to avoid these problems is to place storage from different devices in different GPFS storage pools. However, if you choose to place NSDs from different devices in the same GPFS storage pool, the worst of the impacts can be mitigated by carefully following the instructions in the section "Matching drive types in the same GPFS storage pool." Even so, just because you can intermix different devices in the same GPFS storage pool does not mean that the performance of such a configuration will be free of significant impacts. The remaining impacts to be considered fall into two main areas:

    Varying the number and performance of NSDs from different devices in the same pool
    Varying the capacity of NSDs in the same GPFS storage pool (a function of the underlying drive size)

Impacts with varied number and performance of NSDs from different storage devices in the same GPFS storage pool

When some NSDs perform substantially differently from other NSDs in the same pool with the same usage type, the overall performance of the pool is significantly degraded. The performance of NSDs is a function of the underlying drive type, RAID type, controller type, and controller + server loading. We control for drive type and RAID type in the section above titled "Matching drive types in the same GPFS storage pool." Controller type is, by definition, going to be different in intermix; we cannot control that.

However, controller + server loading is something that can be controlled to some extent and should be considered. As nearly as possible, the number of drives behind a controller and the number of controllers behind a pair of SONAS storage nodes should be matched across device types.

Suppose you are starting with a SONAS system that has a single pair of storage nodes, two 2851-DR1 controllers behind those, and one 2851-DE1 expansion behind each controller. Each enclosure has 60 NL-SAS drives (240 drives total). This results in 24 NSDs from the 2851-DR1 based storage pod. Now you wish to expand this system using intermixed 2851-DR2 based storage and place the new drives in the same GPFS storage pool. A new pair of SONAS storage nodes is added for this purpose.

If a single 2851-DR2 controller only (60 NL-SAS drives) is added to the new pod, this results in 6 NSDs from the new storage pod vs. 24 from the old pod. The old controllers each have twice the number of disks to service, and the old storage nodes have twice as many controllers to service. This system is unbalanced and GPFS performance will suffer.

Similarly, if two 2851-DR2 controllers are added with performance module controllers, and each of those has five 2851-DE2 expansion units behind it, we now have a total of 720 NL-SAS drives behind the new storage pod. That results in 72 NSDs from the new pod vs. 24 from the old pod. Each of the new controllers has three times as many disks to service as the old controllers. This system is unbalanced and GPFS performance will suffer.

To avoid this, try to keep storage pod configurations roughly balanced between the intermixed storage types being used in the same GPFS storage pool. In the above example, two 2851-DR2 controllers, each with one 2851-DE2 expansion, all filled with NL-SAS drives (240 in total), would match the configuration in the old pod and give the best GPFS performance.

Impacts with varied capacity NSDs in the same GPFS storage pool

As mentioned above, GPFS attempts to stripe data across all NSDs in a GPFS storage pool with the same usage type (metadata, data, or both). When NSDs in a GPFS storage pool have significantly different capacities, smaller NSDs can become completely filled while larger NSDs continue to have plenty of free space. That means writes (and subsequent reads) will not be striped across all NSDs in the pool, which impacts performance significantly. As nearly as possible, NSDs of a given usage type in a GPFS storage pool should be created the same size. This can be difficult to achieve if different sized drives are used. Typically, NSDs are created with one NSD per array, with each array corresponding to the space of 8 physical drives. Matching NSD sizes may require modifying this directive if different drive sizes are present in the different storage devices. Therefore we add the following recommendation:

If possible, the drives underlying NSDs of a given usage type in a GPFS storage pool should all have the same capacity.

Suppose you are starting with 2851-DR1 storage filled with 2 TB NL-SAS drives. If you add 2851-DR2 storage with 2 TB NL-SAS drives, then the new NSDs come out to be the same size as the old ones, as recommended.

If you add 2851-DR2 storage with 3 TB NL-SAS drives, then the new NSDs come out to be 50% larger than the old ones if standard NSD creation practices are followed. Such a configuration is not recommended. If 3 TB NL-SAS drives are intermixed with 2 TB NL-SAS drives in the same GPFS storage pool, the only way to match NSD sizes would be to use disk pools on the DCS3700 and create 50% more NSDs than is normally recommended. Since those LUNs come from the same physical disks, this would create 50% greater load on the same set of disks, which should be avoided. Therefore, there is no ideal answer to this configuration and it should be avoided if possible.

If you add 2851-DR2 storage with 4 TB NL-SAS drives, then the new NSDs come out to be 100% larger than the old ones if standard NSD creation practices are followed. Such a configuration is not recommended. The new NSDs could be created two per RAID array instead of one per RAID array, resulting in NSDs the same size as the existing ones, to mitigate this problem. However, that creates an underlying performance mismatch in that two of the new NSDs are now being serviced by the same number of disk drives as one of the old NSDs. If faced with this choice, however, it is better to create NSDs of the same size than NSDs backed by the same number of disk drives.

The exact impact on the performance of the system depends on detailed information about the overall access workload and file usage, and may be difficult to estimate. We recommend that, as long as the added NSDs' performance profile (drive type, RPM, and RAID level) closely matches that of the existing NSDs, they can be added. As long as the performance profiles are equivalent, any remaining performance impact arises from unbalanced access of the underlying NSDs and disks. Such unbalanced access patterns can be detected from performance monitoring of the NSDs (lsperfdata number-of-operations data as well as the cmmp collector in perfcol). The GPFS file system typically balances access naturally over time. If the access imbalance still remains, you should schedule a file system rebalance task (restripefs with --balance, or mmrestripefs -b).
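If monitoring does show a persistent imbalance, the full rebalance can be initiated with the underlying GPFS command during a maintenance window, using the rebalance form shown in the restripe steps earlier in this guide (the file system name here is an example):

    mmrestripefs gpfs0 -b    # full data rebalance across all NSDs in each pool; very I/O
                             # intensive, so run it only during periods of low system activity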

Using different GPFS storage pools

NSDs corresponding to DCS3700 logical drives may be added to new GPFS file system pools within the same GPFS file systems containing file system pools with DDN NSDs. Each GPFS file system may have up to eight file system pools.

When should I use different GPFS storage pools?

Using different GPFS storage pools provides a level of separation, since NSDs corresponding to DCS3700 logical drives reside in different GPFS file system pools than the file system pools containing NSDs residing on DDN LUNs. The concerns outlined above about matching drive types, capacities, and NSD sizes and quantities do not apply to storage in different GPFS storage pools. This scenario does not require the creation of new GPFS file systems and NAS exports. In this approach, the following capabilities of the SONAS system may be used:

    File placement policies may be used to steer certain files at file creation time to GPFS file system pools containing NSDs residing on DCS3700 storage.
    In addition, ILM policies may be used to migrate files from file system pools containing NSDs on DDN storage to file system pools containing NSDs residing on DCS3700 storage.

Using the policy capabilities mentioned above, this approach provides a controlled mechanism to decide what data is stored on DCS3700 file system pools, but without the need to create new file systems and NAS exports.

Placement and migration policies in multi-pool systems

For additional information about policies, consult the following section of the online SONAS Information Center: Administering -> Managing -> Managing Policies.

File Placement Policy Rule

As mentioned above, if DCS3700 NSDs are placed in separate GPFS storage pools, then a file placement policy and/or a file migration policy must be implemented to either steer certain files at creation time (file placement policy) or move files from storage pools containing DDN NSDs to storage pools containing DCS3700 NSDs (file migration policy). A variety of different file placement policies could be implemented depending on the customer's environment and intentions:

    All newly created files could be steered to a storage pool comprised of NSDs on DCS3700 storage.
    All newly created files within a specific file set or file sets could be steered to a storage pool comprised of NSDs on DCS3700 storage using the FOR FILESET (filesetname[,filesetname]) clause on the file placement policy rule.
    All newly created files within specific directories could be steered to a storage pool comprised of NSDs on DCS3700 storage using the WHERE SqlExpression clause on the file placement policy and evaluating the PATH_NAME in the SqlExpression.
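As an illustration of the file-set based option above, a placement rule scoped to a single file set might look like the following sketch; the rule name, pool name, and file set name are hypothetical examples, not values from an actual configuration:

    RULE 'engineeringToDCS3700' SET POOL 'DCS370015KSAS' FOR FILESET ('engineering')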

To create a policy named default with a rule named todcs3700 that is the default file placement rule to steer all newly created files to the storage pool named DCS370015KSAS, and to assign this policy to the file system gpfs0, the following commands can be used:

    mkpolicy default -R "RULE 'todcs3700' SET POOL 'DCS370015KSAS'" -D
    setpolicy -D gpfs0 default

File Migration Policy Rule

In addition to a file placement policy, customers may use file migration rules within the policy assigned to the file system to migrate files from a DDN storage pool to a DCS3700 storage pool. A large number of file migration policy rules are possible, and the customer should decide which ones might be best for them based on their environment and intentions. Some general examples of file migration policy rules include:

    Migrating files from a DDN storage pool to a DCS3700 storage pool based on the threshold of occupancy of the DDN storage pool, using the THRESHOLD(HighPercentage[,LowPercentage]) clause on the file migration rule. NOTE: Do not set the HighPercentage and LowPercentage to 0% in an attempt to completely clear a file system. Overhead items and snapshots count in the percentages, and setting them to 0% will cause migration to fail because the target can never actually be reached.
    Migrating files from a DDN storage pool to a DCS3700 storage pool based on the time since the file was last accessed, using the WHERE SqlExpression clause on the file migration rule and evaluating the number of days since last accessed in the SqlExpression.
    Migrating files in a specific file set (or file sets) from a DDN storage pool to a DCS3700 storage pool using the FOR FILESET (filesetname[,filesetname]) clause on the file migration policy rule.

The following example is a file migration rule named NotAccessed180Days that migrates files from the storage pool silver to the storage pool 4TBNLSAS if the file has not been accessed in the last 180 days:

    RULE 'NotAccessed180Days' MIGRATE FROM POOL 'silver' TO POOL '4TBNLSAS' WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 180

This rule could be added to an existing policy using the chpolicy command. The customer should validate that the rules they specify in the policy are valid using the chkpolicy command.

Monitoring metadata space utilization

It is important to monitor metadata space utilization in any SONAS system, but especially in intermixed systems where new storage is added in a new storage pool as data-only. If more space is added to store files, but the space to store information about those files (metadata) is not expanded, the system can run out of metadata space.

The active management node collects file system utilization data regarding metadata and data for all of the shared file systems of the system. The following measurement variables are displayed:

    Total space
    Used space
    Free space

You can choose to view the file system utilization data by percentage or by time.

GUI navigation

1. Log in to the GUI.
2. Click Monitoring > Capacity.
3. Ensure that the File System tab is selected.
4. Select the file system for which you want to view the system utilization. A chart showing the system capacity in percentage displays. Note: The total selected capacity displays the highest daily amount of storage for the selected file system. If the amount of storage is decreased, the deleted amount will not display until the following day.
5. To view the file system utilization by time, in the Display by list, select Time, and then in the Time frame list, select whether you want to view the system utilization for the last year or for the last 30 days.

Attention: For best overall performance, the file system and its associated pool space utilization should remain below the 90% threshold. It is a good practice to take action when any pool's utilization goes above 85%.

Metadata utilization notifications

If the data or metadata usage for a file system's storage pool exceeds 80%, IBM SONAS Health Center notifies you with a Warning level event notification. If the data or metadata usage exceeds 90%, IBM SONAS Health Center notifies you with a Critical level event notification. Once the data or metadata usage returns to an acceptable level below 80%, IBM SONAS Health Center notifies you with an Info level event notification.
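Where CLI access is available, the same data and metadata utilization figures can also be checked with mmlspool, as shown in the SSD considerations section earlier in this guide; the file system name below is an example:

    mmlspool gpfs0    # review the "Free Meta in (KB)" column (and its percentage) for each pool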

Separate GPFS file systems for intermixed storage
NSDs corresponding to new intermixed storage logical drives may be added to completely new GPFS file systems, separate from the existing GPFS file systems containing NSDs residing on existing LUNs. New NAS exports can then be created on the new GPFS file systems residing on the new logical drives and exported to NAS clients. This approach provides the highest level of separation, since separate GPFS file systems are created from the logical drives on the new storage system(s).

NOTE: Currently SONAS ties node status to the status of the file system(s) on those nodes. This results in nodes being marked down if one file system on a node fails, even if other file systems are still up. This behavior may be improved at a future date, but in the interim customers using multiple file systems should be aware of it. For example, in a system that intermixes DDN and DCS3700 storage in separate GPFS file systems, if there is a storage failure on the DDN side we can expect:

All nodes to be marked unhealthy:
[root@furby.mgmt001st001 ras]# ctdb scriptstatus
7 scripts were executed last monitor cycle
00.ctdb           Status:OK    Duration:0.012 Wed Oct 30 15:56:
reclock           Status:OK    Duration:0.016 Wed Oct 30 15:56:
interface         Status:OK    Duration:0.016 Wed Oct 30 15:56:
natgw             Status:OK    Duration:0.014 Wed Oct 30 15:56:
routing           Status:OK    Duration:0.010 Wed Oct 30 15:56:
per_ip_routing    Status:OK    Duration:0.012 Wed Oct 30 15:56:
cnscmd            Status:ERROR Duration:0.114 Wed Oct 30 15:56:
   OUTPUT:GPFS filesystem /dev/ddn_fs0 required but not mounted
[root@furby.mgmt001st001 ras]# ctdb status
Number of nodes:4
pnn: UNHEALTHY
pnn: UNHEALTHY
pnn: UNHEALTHY (THIS NODE)
pnn: UNHEALTHY
Generation:
Size:4
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
hash:3 lmaster:3
Recovery mode:NORMAL (0)
Recovery master:2

CIFS accesses to both file systems to hit NETNAME_DELETED errors.

NFS and CIFS clients can still mount all customer network IPs and run I/O to the DCS3700 file system, even with unhealthy status on all nodes. The DCS3700 file system is not actually unhealthy; SONAS marks the entire node unhealthy because the DDN file system is unhealthy.

An NFS client running against the DCS3700 file system can continue to run without interruption. NFS does not break the connection simply because the node is marked unhealthy.
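During such an event it can be useful to confirm that the unaffected file system is still mounted across the nodes. A minimal sketch using the underlying GPFS mmlsmount command, assuming the DCS3700 file system is named dcs3700_fs0 (a hypothetical name) and that the command is run with root access on a SONAS node:

mmlsmount dcs3700_fs0 -L

The -L option lists the nodes on which the file system is mounted; as long as the interface and storage nodes still appear in the list, client I/O to that file system can continue even though the nodes are reported as unhealthy.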

Setting optimal storage device settings
The following DCS3700 controller settings are recommended for intermixed environments:

Write back caching should always be enabled on the DCS3700 controllers. If write back caching is disabled on the DCS3700, performance will significantly degrade because all write I/Os will be done in write-through mode, meaning all write I/Os will go directly to the physical disk drives instead of to the cache memory of the DCS3700.

Write cache mirroring should always be enabled on the DCS3700 controllers. Write cache mirroring ensures that the contents of one RAID controller's write cache are always mirrored to the other RAID controller.

Write caching should always be enabled on all logical drives (LUNs). Again, for performance reasons, write caching should always be enabled on all logical drives.

If you are creating new GPFS file systems using the logical drives/LUNs on the DCS3700 storage system(s), use a DCS3700 segment size of 128KB when creating all logical drives on the DCS3700 using the DS Storage Manager. When using this segment size, along with eight (8) data drives in each RAID array, the RAID stripe size is 1MB, which matches the recommended GPFS file system block size and is optimal for performance in the general use case. You may need a different setting if your use cases are different. For example, if your application creates a very large number of small files, or a few very large files, you may get better performance with a different file system block size and DCS3700 segment size.

If you are adding DCS3700 logical drives/LUNs to an existing GPFS file system, then you should use a DCS3700 segment size that best matches the GPFS file system block size of the existing GPFS file system. General recommendations are as follows (these combinations are summarized at the end of this section):
o If your existing GPFS file system has a GPFS file system block size of 256KB, then use a DCS3700 segment size of 32KB. A segment size of 32KB with eight data drives (RAID-6 8+P+Q) creates a RAID stripe size of 256KB, matching the GPFS file system block size of 256KB.
o If your existing GPFS file system has a GPFS file system block size of 1MB, then use a DCS3700 segment size of 128KB. A segment size of 128KB with eight data drives (RAID-6 8+P+Q) creates a RAID stripe size of 1MB, matching the GPFS file system block size of 1MB.
o If your existing GPFS file system has a GPFS file system block size of 4MB, then use a DCS3700 segment size of 512KB. A segment size of 512KB with eight data drives (RAID-6 8+P+Q) creates a RAID stripe size of 4MB, matching the GPFS file system block size of 4MB.

DCS3700 dynamic cache read prefetching can be enabled, which is the default in the DS Storage Manager. In general, the data caching performed by the GPFS file system makes dynamic cache read prefetching by the DCS3700 controllers ineffective.

Both SONAS storage nodes should be placed in the default host group and assigned a host type of Linux Cluster in the DS Storage Manager GUI.

The logical drives (LUNs) within the DCS3700 that have been created from the RAID arrays should be split between the two RAID controllers inside the DCS3700, such that approximately one-half of the logical drives (LUNs) have a preferred owner of RAID controller A (upper) and the other half have a preferred owner of RAID controller B (lower).
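To summarize the relationship behind the segment size recommendations above: the RAID stripe size equals the segment size multiplied by the number of data drives in the array, and the goal is for the stripe size to equal the GPFS file system block size. With RAID-6 8+P+Q (eight data drives):

GPFS block size 256KB  ->  DCS3700 segment size 32KB   (32KB x 8 = 256KB)
GPFS block size 1MB    ->  DCS3700 segment size 128KB  (128KB x 8 = 1MB)
GPFS block size 4MB    ->  DCS3700 segment size 512KB  (512KB x 8 = 4MB)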

As a general rule you should accept the recommended defaults in the DS Storage Manager GUI for providing certain levels of protection when creating RAID arrays. For example, if you have a single DCS3700 controller (with no DCS3700 expansion enclosures), you should accept the default in the DS Storage Manager GUI for drawer level protection when creating RAID arrays. This spreads the physical disk drives in a RAID array across the drawers contained in the DCS3700 enclosure (also called a tray). There are five individual drawers in each DCS3700 enclosure; each drawer holds twelve hard disk drives or solid state drives in a 4 x 3 arrangement (four drives across from left to right and three drives deep from front to back).
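Each DCS3700 enclosure therefore holds 5 x 12 = 60 drives, and drawer level protection spreads each RAID array so that the loss of any single drawer does not take the array offline.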

Setting optimal GPFS parameters

Failure Groups
GPFS supports replicating file system metadata, or replicating both metadata and file data, for added redundancy and resiliency. File system metadata replication is enabled by specifying the -R meta option on the mkfs or chfs commands. File system metadata and data replication is enabled by specifying the -R all option on the mkfs or chfs commands. The SONAS default is to replicate file system metadata if two failure groups are available. To support GPFS metadata replication you need to use two failure groups.

A failure group is intended to represent a group of entities (GPFS network shared disks, or NSDs) that share a common failure boundary. Examples of possible failure boundaries include:
1. RAID array failures. All logical disks (or virtual disks or LUNs) that are created out of a single RAID array, such that if the RAID array were to fail, all disks created from the space in that RAID array are impacted by the failure of the RAID array.
2. Storage system failures. All logical disks (or virtual disks or LUNs) that are created from a single storage system, such that if that storage system were to fail, all disks belonging to that storage system are impacted by the failure of the storage system.
3. Rack level failures. All logical disks (or virtual disks or LUNs) created from one or more disk systems contained within the same rack/cabinet, such that if all power sources supplying power to the rack/cabinet fail, all disks belonging to all disk systems in that rack/cabinet are impacted by the failure to supply power to the disk systems in the rack/cabinet.

With GPFS replication the objective is to store two redundant copies of the data on two different failure groups. With GPFS replication enabled:
one set of NSDs is assigned to one failure group and used for one copy of the data,
another set of NSDs is assigned to a different failure group and used for the other copy of the data, and
file system metadata (and optionally data) is replicated between the NSDs in the two different failure groups.
With GPFS replication you have two redundant copies of file system metadata (and optionally file data) stored on different failure boundaries.

If you are going to extend your SONAS system with DCS3700 storage and you plan on replicating file system metadata or both metadata and data, then it is recommended that you start with two DCS3700 storage systems, that is, two DCS3700 storage controllers (IBM MTM C), and that you:
create logical drives/LUNs from one DCS3700 storage system and assign them to one failure group, and
create another set of logical drives/LUNs from the other DCS3700 storage system and assign them to another failure group.

By starting with two DCS3700 storage systems you are using GPFS replication to protect against a storage system failure: if one of the DCS3700 storage systems were to completely fail (both redundant RAID controllers inside the DCS3700 fail and all data access is lost), you have a replicated copy of the metadata (or both metadata and data) on the other DCS3700 storage system.

It is recommended that the GPFS NSDs corresponding to the LUNs in a single DCS3700 storage system be assigned to a single unique failure group. This recommendation uses the failure group concept to protect against a storage system failure. A single DCS3700 storage system is the DCS3700 controller (IBM MTM C) and all of its attached disk expansion units (IBM MTM E).

When assigning DCS3700 LUNs to failure groups for the purposes of GPFS replication, you should follow these general rules:

An approximately equal number of DCS3700 LUNs (representing an approximately equal storage capacity) should be assigned to each failure group within the storage pool.

The DCS3700 LUNs assigned to each failure group within the storage pool should be backed by the same type of disk drive technology and RAID technology (rotational speed, RAID level and RAID width). This is true even when metadata-only NSDs are being added to the system pool within a file system. The purpose of having metadata-only NSDs and data-only NSDs in the system pool within a file system is to isolate GPFS file system metadata onto higher performance storage devices and the data onto lower performance storage devices. If the file system is set to replicate either metadata or data, the NSDs in the different failure groups with the same usage type should be identical. It is sometimes difficult to maintain the same number of NSDs in both failure groups, but care should be taken to make the numbers as close as possible.

For the group of logical drives (LUNs) in a single DCS3700 system which are being assigned to a given GPFS storage pool within a file system:
o The logical drives (LUNs) should be split between the two RAID controllers inside the DCS3700, such that approximately one-half of the logical drives (LUNs) have a preferred owner of RAID controller A (upper) and the other half of the logical drives (LUNs) have a preferred owner of RAID controller B (lower).

File System Settings
The following GPFS settings are recommended when configuring new file systems using logical drives (NSDs) configured on one or more DCS3700 systems:
1MB file system block size for new file systems. This block size is recommended because, when using a DCS3700 segment size of 128KB and 8 data drives in each RAID array, the RAID stripe size is 1MB, matching the GPFS file system block size, which is optimal for performance.
Scatter block allocation.
2 failure groups to support GPFS metadata replication.
GPFS metadata replication enabled for overall system availability.
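On a SONAS appliance these settings are normally applied through the SONAS management interfaces rather than with native GPFS commands, but it can help to see how the recommendations map onto the underlying GPFS parameters. The following is a minimal sketch only, with hypothetical NSD, device, and server names; it shows one NSD per DCS3700 assigned to its own failure group, and a file system created with a 1MB block size, scatter block allocation, and two-way metadata replication:

# NSD stanza file (dcs_nsd_stanzas.txt): one stanza per LUN;
# failureGroup=1 for the first DCS3700, failureGroup=2 for the second
%nsd: nsd=dcs1_lun01 device=/dev/mapper/dcs1_lun01 servers=strg001st001,strg002st001 usage=dataAndMetadata failureGroup=1 pool=system
%nsd: nsd=dcs2_lun01 device=/dev/mapper/dcs2_lun01 servers=strg001st001,strg002st001 usage=dataAndMetadata failureGroup=2 pool=system

# Create the NSDs, then the file system with 1MB blocks, scatter allocation,
# and metadata replicated across the two failure groups
mmcrnsd -F dcs_nsd_stanzas.txt
mmcrfs dcs_fs0 -F dcs_nsd_stanzas.txt -B 1M -j scatter -m 2 -M 2 -r 1 -R 2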

Understanding limitations and restrictions in an intermixed system
The limitations and restrictions of each storage system used in an intermixed system apply to the intermixed system as a whole in the following way:

Feature / function limitations and restrictions apply to their respective storage systems only.
Availability limitations and restrictions apply to shared GPFS storage pools and file systems.

Consult the latest product documentation for limitations and restrictions related to the storage in your intermixed system. The following limitations and restrictions are currently in place with regard to adding DCS3700 storage to a DDN appliance:

Each DCS3700 storage system must be dedicated for use by the SONAS system only. This means that attaching the DCS3700 storage system to any other host systems for block I/O access (via Fibre Channel, SAS, iSCSI or any other mechanism) is NOT supported.

A maximum of two DCS3700 storage systems may be attached to each pair of SONAS storage nodes (IBM MTM 2851-SS2). Pairs of SONAS storage nodes may be added to the SONAS system until the maximum of 32 nodes in a SONAS system (interface nodes, storage nodes and management node) is reached.

Each DCS3700 storage system must be directly Fibre Channel attached to a pair of SONAS storage nodes (2851-SS2). SAN-fabric attachment of the DCS3700 to the SONAS storage nodes using Fibre Channel SAN switches is NOT supported. No Fibre Channel SAN interoperability testing will be performed with any FC SAN switches or infrastructure.

A pair of SONAS storage nodes may only have one type of disk storage system attached to them (either DDN storage or DCS3700 storage).

The following features of the DCS3700 storage system are NOT supported when used in conjunction with the IBM SONAS product:
o FlashCopy / Volume Copy
o Thin Provisioning
o Compressed Volumes
o Remote Volume Mirroring (RVM)
o RAID 0 and 3

The DCS3700 storage system is configured, monitored, managed, serviced and supported completely independently of the SONAS appliance system. There is no integration into the SONAS graphical user interface (GUI), command line interface (CLI), RAS package, or Call Home interface of any capability to configure, monitor, manage, service or support the DCS3700 system. All configuration, management, service and support of the DCS3700 storage system is done using the normal features and facilities provided by that storage system. The standard IBM software for managing and configuring a DCS3700 storage system is the DS Storage Manager software.

Restrictions

During IBM's testing of the SONAS system with IBM DCS3700 storage systems attached, the following concerns and problems have been encountered, and the customer should be aware of the following restrictions.

Concurrently upgrading the DCS3700 firmware, using the DS Storage Manager software, while the SONAS system is actively performing I/O to the DCS3700 system has resulted in Windows clients that are actively accessing the SONAS system using the CIFS protocol timing out and receiving a Windows error code (Windows system error code 64, ERROR_NETNAME_DELETED). NFS I/O can also pause for up to 10 minutes. As a result, if you plan to upgrade your DCS3700 firmware, you should plan to do so during periods of low NAS client I/O activity and you should quiesce I/O to the greatest extent possible before initiating a DCS3700 firmware upgrade (the quiesce and restart steps below are summarized at the end of this section). It is common to see errors in the event log associated with performing a firmware upgrade, including GPFS long waiters and cnscm data collection. These should be ignored for the duration of the firmware upgrade process.

1. Before initiating the DCS3700 firmware upgrade, async replication must be stopped using the stoprepl CLI command. Refer to the SONAS InfoCenter "Upgrading" -> "Before you start" -> "Stopping asynchronous replication" section for the detailed procedure. If the auto-scheduler is configured, verify the schedule with the lsrepltask CLI command and make sure no replication will be invoked during the firmware upgrade. If the DCS3700 of the replication target SONAS is being upgraded, follow the same steps on the source SONAS to stop the replication.

2. It is best to deactivate NDMP before upgrading firmware on the DCS3700, similar to the way NDMP is deactivated before performing a full code upgrade. Please refer to the following section of the InfoCenter for information on how to deactivate NDMP: Upgrading -> Upgrade provider information -> Before you start -> Stopping network data management protocol (NDMP). After all DCS3700 disk subsystems have had their firmware upgraded, NDMP can be restarted. Please refer to the following section of the InfoCenter for information on how to activate NDMP: Upgrading -> Upgrade provider information -> After you upgrade -> Restarting network data management protocol (NDMP).

3. It is best to halt TSM backup sessions before upgrading firmware on the DCS3700, similar to the way TSM backup sessions are stopped before performing a full code upgrade. Please refer to the following section of the InfoCenter for information on how to stop TSM backups: Upgrading -> Upgrade provider information -> Before you start -> Checking and halting a Tivoli Storage Manager backup session. After all DCS3700 disk subsystems have had their firmware upgraded, TSM backups can be restarted. Please refer to the following section of the InfoCenter for information on how to restart TSM backups: Upgrading -> Upgrade provider information -> After you upgrade -> Restarting a Tivoli Storage Manager backup session.

4. It is best to stop HSM migrations before upgrading firmware on the DCS3700, similar to the way HSM migrations are stopped before performing a full code upgrade. Please refer to the following section of the InfoCenter for information on how to stop HSM migrations: Upgrading -> Upgrade provider information -> Before you start -> Stopping HSM migrations. After all DCS3700 disk subsystems have had their firmware upgraded, HSM migrations can be restarted. Please refer to the following section of the InfoCenter for information on how to restart HSM migrations: Upgrading -> Upgrade provider information -> After you upgrade -> Starting HSM migrations.

5. Anti-Virus may remain on during the DCS3700 firmware upgrade provided that I/O from clients has been fully quiesced.

Customers should follow normal data center HA design practices and connect the two power supplies that are in each DCS3700 controller and expansion enclosure to two different power distribution units (PDUs) that are connected to two different power sources.

There is a known and documented restriction in the DCS3700 product publications that updating the firmware on the hard disk drives inside the DCS3700 controller enclosure and any attached expansion enclosure is a non-concurrent activity and cannot be performed while the SONAS system is accessing the DCS3700 (reference defect S ).

Failure events, including controller or drive failures, may cause CIFS connections to time out depending on client load and the specific failure event. If this happens the clients should retry the I/Os which timed out, and in rare instances may need to be rebooted. To ensure 100% data integrity in such situations, clients using the CIFS protocol should run with oplocks turned off. See the SONAS documentation for the mkexport command for instructions on how to turn oplocks off. See also the data integrity section in the Info Center titled "CIFS and NFS Data Integrity Options" (.../com.ibm.sonas.doc/mng_t_export_locking.html).

While it cannot be predicted when such failure events will occur, whenever a controller canister is to be brought back online (such as following the repair of a failed controller canister), if possible the same quiesce activities recommended for a controller firmware upgrade should be implemented. This includes quiescing client I/O and halting advanced functions such as replication, backup, and HSM migration.
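The quiesce and restart sequence around a DCS3700 firmware upgrade can be summarized as follows (a sketch only; command arguments depend on your configuration, so follow the InfoCenter procedures referenced in the numbered items for the exact steps):

1. Check the replication schedule with the lsrepltask CLI command and stop any running asynchronous replication with the stoprepl CLI command (on both the source and target SONAS systems if the target's DCS3700 is being upgraded).
2. Deactivate NDMP, halt TSM backup sessions, and stop HSM migrations.
3. Quiesce client I/O as far as possible and perform the DCS3700 firmware upgrade with the DS Storage Manager.
4. After all DCS3700 subsystems are upgraded, restart HSM migrations, TSM backups, NDMP, and asynchronous replication.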

Implementation and post-implementation considerations

Warnings seen during the RXC MES
When the PFE / SSR / CSR is performing the RXC MES, the system health will become yellow due to warnings on all of the switches. These warnings are harmless and should be cleared as part of the end-of-call procedure.

Storage Building Block Numbers
Do not be alarmed if the system displays the new storage with a lower storage building block number than the old storage, for example with Storage Nodes 3 and 4 shown as storage building block 1 and Storage Nodes 1 and 2 shown as storage building block 2.

The storage building block number has no effect on system operation. In time, the system will reorder the storage building blocks.

Rebalancing file storage (restripe)
In an all-spinning-disk configuration, you may want to restripe after adding the new disks. Whether or not you should do this depends on the workload. If the workload creates and destroys most files on a short-term basis, it is probably not worth the overhead to restripe. If a significant portion of the files are long-lived, a restripe will be useful.

Depending on the size of the file system, how full it is, and the performance of the underlying storage, a restripe can take a few days to several weeks. Restripe is I/O intensive and can interfere with other operations (for example, snapshots). Long disk I/Os may also occur. If you have a large file system that you need to restripe, you should plan to do this in a period of as low activity as possible. Restripe will also block certain file system data management tasks, such as snapshot creation and deletion, as well as async replication, since async replication needs to create a snapshot.
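At the GPFS level, a rebalancing restripe corresponds to the mmrestripefs command with the -b (rebalance) option; on SONAS this is typically run from the management node by, or under the direction of, IBM support. A minimal sketch, assuming the file system is named gpfs0:

mmrestripefs gpfs0 -b

Because of the impact described above, schedule the run for a low-activity window and avoid overlapping it with snapshot schedules or async replication.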
