Migrating from SONAS to IBM Spectrum Scale


Naren Rajasingam
IBM Spectrum Scale, IBM Corporation
June 2015

Contents

Migrating from SONAS to IBM Spectrum Scale platforms
Data Migration Overview
    1. Preparation for Migration
    2. Data Transfer
    3. System Cutover
    4. Parallel Standby of Source System
    5. Signoff and Decommission
Migrating from a single SONAS to an IBM Spectrum Scale platform
    Overview
    Migration Use Cases
    RSYNC, ACE, AFM and AFM DR
    HSM and TSM
    Key Context and Assumptions that Affect / Facilitate Migration
    Description of the Various Migration Methods
        Use Case #1 - Traditional data copy using RSYNC / Robocopy / FTP / SCP tools
        Use Case #2 - Data migration using Active File Management (AFM) capability (single site)
            AFM Overview
            Migration steps using AFM:
                Collect relevant configuration from the SONAS to be used in IBM Spectrum Scale
                Set up SONAS (SA) exports as AFM homes for migration phase
                Maintaining recovery points
                Set up AFM cache filesets and exports on IBM Spectrum Scale (EA) using Independent Writer (IW) mode
                Disable cache eviction and afmPrefetchThreshold
                Create/set IBM Spectrum Scale exports and IPs to match the source exports
                Ensure authentication is compatible with the source environment
                Repoint users and applications to IBM Spectrum Scale exports
                Use AFM control tools to pull in all file metadata from the home fileset to the cache fileset
                Verify all data has been migrated to cache
                Convert the AFM cache to an ordinary fileset (if necessary)
        Use Case #3a - Customer has HSM and is able to recall all HSM data to SONAS prior to migration using one of Use Cases #1 or #2 (single site)
        Use Case #3b - Customer has HSM and is unable to recall all HSM data due to size/complexity; requires IBM Lab Based Services (LBS)
        Use Case #5 - Data migration using forklift method
Migrating from SONAS to IBM Spectrum Scale platforms - Dual Site Coexistence
    Overview
    Migration from SONAS to IBM Spectrum Scale
    Key Dependencies (Assets, Technical Capability, Know-how, Timeline)
    Setting up EA Cache Filesets to use EB Filesets over AFM DR
    Feedback and Process Improvement Steps
Appendix A - Sample Scanning Policy for Prefetch
Appendix B - Migration Proof of Concept Playbook

Migrating from SONAS to IBM Spectrum Scale platforms

Data and systems migration can be simple or complex, depending on your current environment. Generally, a single site migration is considered less complex than a dual site migration. By dual site, we mean implementations where you have both a NAS environment for production (active/live) data and another environment that is typically only accessed in the event of a disaster (a disaster recovery site). In a dual site implementation, we assume that you already have a method to periodically replicate your production data to your DR site so that, in the event of a failover (actual disaster, or a planned or unplanned outage at your production site), you can repoint your users and applications to the DR site/system and they can continue working with a (possibly older) version of the production data. When we consider migration of such a system to IBM Spectrum Scale, we also keep in mind your need to ensure that your organization can continue to implement its DR strategy, even during the migration period. For some customers the migration period may take a few days, while for others it may span months, depending on the complexity of their current implementations. We begin with an overview of data migration, then review single site migration scenarios, and finally progress to special considerations for dual site implementations where you, the customer, need to maintain the ability to fail over to DR during the migration process. The topics covered in dual site migration build upon those covered in single site migration, so we recommend that both parts of this document be thoroughly reviewed. We welcome any feedback and suggestions to improve the discussion points or add more points to the topics.
For now, the topics we will cover are categorized as follows:
- Data Migration Overview
- Migrating data from a single SONAS to an IBM Spectrum Scale platform
- Migrating data from a DR enabled SONAS to a DR enabled IBM Spectrum Scale platform (dual SONAS to dual IBM Spectrum Scale cluster)

Data Migration Overview

A successful migration depends on the coexistence of the new IBM Spectrum Scale environment with your current environment, and on making it possible for data to be migrated on a schedule that is compatible with your business requirements. A key outcome of a successful migration is that your users and applications must be able to access all their data in the IBM Spectrum Scale environment in a manner identical to, or compatible with, their access on the SONAS. This section provides an overview of general data migration steps and will be the basis for describing the differences between the various migration methods / use cases. You might already find the information here familiar; however, for the sake of completeness, we have endeavored to detail these steps. The basic logic for this type of transfer is as follows:
1. Preparation for migration
2. Data transfer
3. System cutover
4. Parallel standby of source system
5. Signoff and decommission

Figure 3 - Single site data migration (per fileset) to IBM Spectrum Scale

Figure 3 represents a simple view of migrating data from SONAS to IBM Spectrum Scale one fileset at a time. We assume that the IBM Spectrum Scale environment is already properly configured to match the SONAS with respect to authentication, data protection, export services (SMB/NFS) and data placement policies. Please note, HSM based migration (movement of HSM stub files) is currently not supported, but work is underway to enable that capability in a future release of IBM Spectrum Scale.

1. Preparation for Migration

- Set up the new environment (IBM Spectrum Scale) as per your company's high availability and resiliency requirements

- Set up the new environment with all the necessary configuration, including client access IPs, exports, authentication, backup service, snapshot rules, snapshot retention periods and data placement policies
- Ensure your target environment has the appropriate capacity and performance to store the source data (and is able to accommodate the required data transfer rate during migration)
- If your target environment will have a Disaster Recovery (DR) partner environment, it is often good practice to also set this up and ensure you are able to fail over successfully from your new production environment to the new DR environment, and then fail back from the new DR environment to the new production environment
- Work with end users and application owners to identify data sets (source exports) that need to be migrated as a group
- Work with end users and application owners to schedule a cutover time and determine how much outage time can be afforded (this will be necessary for the final cutover stage)
- Conduct performance analysis to understand current source NAS I/O overheads and demands
- Conduct performance analysis to understand target IBM Spectrum Scale environment I/O overheads and demands
- Calculate the daily data change (churn) for each data set to be migrated. For migration purposes the churn is not the total number of KB written over a 24-hour period but the sum of the sizes of every file that had changed by the end of that 24-hour window. For example, if you have 1000 files in the source and 50 files were changed (some multiple times) during the day, at the end of the monitoring period we count that 50 files have changed, and you have the total size of those 50 files in MB to transfer. Derive a % change from this; for example, if we had 1000 x 1MB files and 50 files had changed, we would have 50MB to transfer (5% churn).
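The churn calculation above, and the transfer-rate sizing discussed next, amount to simple arithmetic. The following is a minimal Python sketch of that arithmetic; the function names and the 30% network overhead factor mirror the text, and none of this is SONAS or IBM Spectrum Scale tooling:

```python
# Hedged sketch of the churn and required-transfer-rate arithmetic.
# All figures are illustrative; substitute your own measurements.

def churn_percent(total_files: int, changed_files: int) -> float:
    """Daily churn as a percentage of the data set, per the count-based example."""
    return 100.0 * changed_files / total_files

def required_rate_mb_s(churn_mb: float, window_seconds: float) -> float:
    """Required transfer rate = daily churn (MB) / transfer window (s)."""
    return churn_mb / window_seconds

# Worked example from the text: 400 TB fileset, 5% daily churn = 20 TB,
# 4-hour transfer window.
churn_mb = 20 * 1024 * 1024           # 20 TB expressed in MB
window_s = 4 * 3600                   # 4-hour window in seconds
rate = required_rate_mb_s(churn_mb, window_s)
print(f"required rate: {rate:.0f} MB/s")      # ~1456 MB/s, i.e. ~1.4 GB/s

# Network sizing: add ~30% protocol overhead on top of the raw line rate.
line_rate_gbps = rate * 8 / 1024              # MB/s -> Gbit/s
print(f"network needed: {line_rate_gbps * 1.3:.1f} Gbps")
```

If the required rate exceeds what your source, target or network can sustain, that is the early signal (per the text) that the migration method or window needs to change.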
As a guide, review your daily backup logs going back a month or two for this data source, as they often provide a good indication of what your average churn will be.

- Calculate the current data transfer rate for each data group being transferred. This rate represents the MB/s achieved end to end: reading the file from the source, transferring it over the network and writing it to the target. To keep things simple, the data transfer rate X MB/s should be calculated as (net data to be transferred in MB) / (transfer window in seconds).
- Calculate the required data transfer rate for each data group being transferred. This rate represents the MB/s required to move the daily churn (MB) from source to target: reading from the source, transferring over the network and writing to the target. To keep things simple, the required data transfer rate X MB/s should be calculated as (daily churn to be transferred in MB) / (transfer window in seconds).

For example, for a 400 TB fileset with a 5% daily churn, you may need to transfer 20 TB of data and may only have a 4-hour window to do that in. Therefore, you will need a transfer rate of (20*1024*1024) MB / (4*3600) seconds = 1456 MB/s (or about 1.4 GB/s). A transfer rate of 1.4 GB/s (11.2 Gbps) implies that your SONAS must also be able to offer reads at that rate. The target IBM Spectrum Scale environment will also need to be able to accept writes at 1.4 GB/s (11.2 Gbps). Finally, the network / WAN link must also be able to accommodate that transfer rate plus the data packet overheads, which are normally assumed to add 30% (e.g. 11.2 Gbps * 1.3 = 14.6 Gbps). This calculation is on top of the additional effects of latency and network congestion. If you can achieve this sustained rate of transfer and write to the target environment, then you can assume that you will be able to transfer the net data change accumulated within each 24-hour period. This capability becomes important as you get closer to your cutover phase. If your required transfer rate is greater than your actual transfer rate, it is unlikely that you will be able to successfully transfer the data to the new environment without a change to your migration method. Consider that your network may have an upper limit on transfer rate per stream, and you might find that the network can accommodate a much higher aggregate rate if you increase the number of transfer threads. This might be possible by running the transfer jobs with multiple threads or multiple processes, or even across multiple nodes. IBM Spectrum Scale is designed to accommodate parallel transfers.

2. Data Transfer

- Select the data transfer method you will use (rsync, Robocopy, ftp, scp, etc.) and ensure that you are able to successfully transfer representative or sample datasets
- For each data set, take a snapshot of the source data, begin the data transfer of that point-in-time copy according to your schedule, and monitor the transfer process for any problems
- At the end of the transfer process, ensure that the data copied is a faithful representation of the source data. Presumably you have a method to compare the source snapshot with the target view.
As an example, your transfer tool might already have that capability built in. Another method is to generate a checksum on every file that you have copied and compare it with the copy on the target environment.

- Repeat the transfer process with a fresh snapshot and ensure that the transfer will complete within the transfer window you have planned, or at least within a 24-hour period. If your transfer time is greater than the 24-hour window, this implies that more data changes at your source each day than you can transfer; you will never get ahead. You may need to rethink the migration process to see if you can get a large enough lock-out window in which to transfer the data, or increase the number of parallel threads involved in the transfer, while ensuring you do not degrade the source's performance to the point it is unusable by your users.
- If your transfer time is less than the 24-hour window but greater than the desired transfer/outage window for final cutover, this implies that you will need to repeat the transfers until you have reduced the transfer time to less than the desired final outage window needed for cutover
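The checksum-based verification just described can be sketched in a few lines of Python. This is a hedged illustration, not IBM tooling: the SHA-256 choice, chunk size and helper names are our own assumptions, and for very large trees you would want to parallelize or checksum only a sample.

```python
# Hedged sketch: verify a copied tree against its source by comparing
# per-file SHA-256 checksums. Algorithm choice and chunk size are illustrative.
import hashlib
import os

def file_sha256(path: str, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunk)
            if not buf:
                break
            h.update(buf)
    return h.hexdigest()

def tree_checksums(root: str) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    sums = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            sums[os.path.relpath(full, root)] = file_sha256(full)
    return sums

def compare_trees(source_root: str, target_root: str) -> list:
    """Return relative paths that are missing or differ on the target."""
    src, dst = tree_checksums(source_root), tree_checksums(target_root)
    return sorted(p for p in src if dst.get(p) != src[p])
```

Running `compare_trees()` against the source snapshot and the target export after each transfer pass gives you the "faithful representation" check the text calls for; an empty result means every source file is present and identical on the target.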

3. System Cutover

Prior to system cutover, you will need to conduct some limited tests to confirm that your users are able to connect to the new IBM Spectrum Scale environment and access their data via the exports. Successful tests should imply the following:
- The new environment is up and running with valid IP addresses and can be referenced via DNS
- The CES nodes have been properly configured to enable end users to access their data
- The CES nodes have been configured with a compatible or identical authentication mechanism to the SONAS, so that user identification and access control are effectively the same as on the SONAS
- Additionally, if your SONAS is configured with data protection mechanisms such as snapshots, backup or rsync replication, you will also need to ensure that the IBM Spectrum Scale environment is protected to meet your current or new business protection requirements

If data transfers are proceeding to plan and you are at the point where you are regularly transferring the daily churn within the expected outage window, you should be ready to initiate cutover to the IBM Spectrum Scale environment. In most cases, you will need to coordinate the cutover to ensure that the process of cutting over any data or export being migrated is coupled with the step of blocking writes to the source. This ensures that you have a clean fallback option, should you need to roll back your changes. In some cases, you might not be able to do this (for example with the use of ACE in Single Writer or Independent Writer mode). In such cases, you might want to take a snapshot on the source that will not be deleted until you are certain that you no longer need the source.

4. Parallel Standby of Source System

It is generally good practice to maintain the source system for a period of time (often months), just in case you need to refer to the data on the source (in case something was missed), or in case there is a catastrophic loss of access or services on the new environment (such as a significant hardware failure) and you need to roll back and start again. To ensure that users do not access the data on the old system, you can remove the reference to the old platform from DNS as well as disable all its exports (or at the very least set them to read-only). As another example, depending on your migration policy, it might be that only new/recent data was migrated to the IBM Spectrum Scale environment, leaving historical and reference data on the old system. You may also need to retain this data for some months or years, depending on your organization's data and records retention policy. Sometimes it is sufficient to be able to restore from backups. You will need to test and ensure that your backups can be restored to the IBM Spectrum Scale environment successfully. If that is not possible, you may have to retain the old (or a representative) system for longer than the anticipated period of time.

5. Signoff and Decommission

Once your users and applications have successfully cut over to the new IBM Spectrum Scale environment, and you have confirmed that all data protection policies are in place and working (and you have also tested your restores and failure/recovery steps), you are finally at the point where the old system can be signed off and fully decommissioned, pending other business policies that govern this type of activity.

Migrating from a single SONAS to an IBM Spectrum Scale platform

Overview

Many current IBM SONAS customers are considering migrating to IBM's latest storage technology based on IBM Spectrum Scale. IBM Spectrum Scale is a collection of capabilities based on IBM General Parallel File System (GPFS) technology. Data on IBM Spectrum Scale can be accessed natively via GPFS clients, the OpenStack Swift object store, Ganesha NFSv3 or NFSv4, and SMB file access protocols. In this document we assume that the IBM Spectrum Scale platform will coexist with the SONAS platforms for a period of time. Some customers may be able to take an extended outage and cut over to IBM Spectrum Scale in one step (big bang approach), while other customers may need to progressively migrate their data and manage access to data residing on both platforms simultaneously over an extended period of time.

Figure 1 - Coexisting IBM Spectrum Scale with IBM SONAS

As the IBM Spectrum Scale environment is largely defined by your requirements, you have the flexibility to design the environment to suit your current and future needs without the constraints you previously experienced with IBM SONAS. For a successful migration, we will assume, though, that at a minimum the following is true for the IBM Spectrum Scale environment you have configured:
1. The IBM Spectrum Scale environment is configured with the same or a compatible authentication method
2. It can deliver files over the same file access protocols (SMB, NFS)
3. It has capacity and performance that match or better your SONAS
4. It will have the same file system and fileset names (it will preserve/duplicate the existing name space on the SONAS)
5. All DNS names and aliases that are currently part of accessing your SONAS environment can be extended/modified to accommodate your IBM Spectrum Scale environment (if needed)
6. If you use a TSM server (IBM Spectrum Protect) on your SONAS, you also plan to use that capability on the IBM Spectrum Scale environment, and we assume that it is desirable to retain the current backup policies and data after you migrate to IBM Spectrum Scale. If this is not the case, then we will assume that you wish to set up new/separate backup policies for the IBM Spectrum Scale environment.

IBM Spectrum Scale does not directly support NDMP as a service, so NDMP agents installed on the nodes may not be able to hook into the GPFS API to access snapshots, ACLs and so on, although they should still be able to see and read/write the filesystem. While we will not be able to assist with that aspect of the IBM Spectrum Scale implementation, you may still find the discussions on TSM interesting and insightful.

We recognize that it might not be possible to transfer all data to IBM Spectrum Scale in one move, as the complexity of coordinating user groups and application teams to cut over might be high. It is also likely that the volume of data to be transferred will require a more progressive/staged migration strategy. Data migration is a complex undertaking, as such a move often affects the data access name space from the client or application server's point of view. In some cases significant failures can occur during data migration which impact the integrity of the data being migrated. Many customers also require that their ability to recover from a disaster is not diminished during migration. For many customers, migration will require careful coordination with their local user base to facilitate not only the movement of the data from the SONAS to the IBM Spectrum Scale platform, but also the repointing of applications, sequencing equipment, servers and client desktops to this data at the new location. In order to minimize complexity, we will try to ensure that the name space of the data in the SONAS is preserved during migration. This document will provide the user with the information necessary to conduct a migration of their data from the SONAS to the new IBM Spectrum Scale environment on a per-fileset basis.
The method employed here takes into account the need to continue to maintain DR capability for the customer for the duration of data and systems migration (which could take an extended period of time to complete, depending on the complexity of the customer environment).

Migration Use Cases

- Use Case #1 - Customer is able to use rsync / Robocopy / ftp / scp to transfer data
- Use Case #2 - Customer is able to use Active Cloud Engine (ACE) between IBM Spectrum Scale and SONAS and re-point users only to the IBM Spectrum Scale side (small systems up to 200TB)
- Use Case #3a - Customer has HSM and is able to recall all HSM data to SONAS prior to migration using one of Use Cases #1 or #2
- Use Case #3b - Customer has HSM and is unable to recall all HSM data due to size/complexity. This use case will require the services of IBM Lab Based Services (LBS) and is not described in this document at this time.
- Use Case #4 - DR recovery methods as documented in the Advanced Administration Guide are employed to recreate and restore data to a new IBM Spectrum Scale cluster. This option is not discussed in this document.

- Use Case #5 - Customer is a candidate for the forklift method of transitioning the data/file system on SONAS to IBM Spectrum Scale in one outage

RSYNC, ACE, AFM and AFM DR

SONAS and V7000 Unified systems are able to replicate their data between systems using the built-in remote replication tool (rsync, modified to be GPFS-aware) and via the remote caching capability, Active Cloud Engine (ACE). On IBM Spectrum Scale, the preferred method of remotely replicating data is the Active File Management (AFM) capability. For datasets (filesets) that require disaster protection, a specialized version of AFM called Active File Management for Disaster Recovery (AFM DR) is available. AFM is compatible with the ACE remote caching technology on SONAS and V7000 Unified systems. AFM DR is specific to IBM Spectrum Scale v4.1.1 and above.

HSM and TSM

Customers may have multiple file systems on their SONAS to reflect the different workloads they manage and their data protection and replication policies. Some customers use NDMP for data backup, while others use IBM Spectrum Protect (formerly called Tivoli Storage Manager, or TSM). Many customers who back up with TSM also actively use Hierarchical Storage Management (HSM) as a tertiary form of data storage, leaving file stubs in the SONAS file system in place of the actual file while the data for these files is stored on tape. Access to these stubs automatically triggers a recall of the actual file from HSM-managed tape back into the SONAS, enabling the user to access the file's contents. Most customers already have experience migrating their data to NAS platforms (such as when they first migrated to the SONAS platform). In this document, we have developed a methodology to facilitate both complete and staged migration of data from the SONAS to IBM Spectrum Scale.
In this document we will address the following:
- Data migration overview
- Key context / assumptions that affect / facilitate migration
- Description of the various migration methods
- Key dependencies (assets, technical capability, know-how, timeline)
- Configuration settings required on the SONAS platform
- Configuration settings required on the IBM Spectrum Scale platform
- Impact to TSM/HSM infrastructure
- Networking considerations
- Authentication synchronization
- Use of DR environments during extended cut-over periods and risk exposure
- Identifying key failure points for migration and mitigation factors/steps
- Identifying recovery procedures in the event of component or system failure during migration
- Feedback and process improvement steps


Key Context and assumptions that affect / facilitate migration

- Ensure data is accessed only via the migrated path or from the SONAS, but not both, so there is no risk of inadvertent data corruption. Data migration tools generally do not maintain simultaneous file/record locking between systems.
- Customers have potentially billions of files to be migrated. This impacts the time to migrate the original data and the daily churn.
- Some migrations may take months to complete, depending on customer constraints and complexity. For example, some applications cannot endure an outage except at fixed periods of the calendar year.
- There are different categories of customer configurations:
  o Customers who do not use any form of HSM
  o Customers who employ HSM and can recall to SONAS first, because they have the space on the SONAS and can afford the time to do so
  o Customers who have HSM but cannot recall to SONAS first due to space and time constraints
  o Customers who have current AFM implementations which need to be migrated to IBM Spectrum Scale
  o Customers who have GPFS implementations with cNFS/kNFS who also want to migrate to IBM Spectrum Scale
- The IBM Spectrum Scale environment will be able to present data to customers in the same manner as SONAS (CES protocol nodes with SMB, NFS, etc.)
- The IBM Spectrum Scale environment must use the same authentication environment as the SONAS, or arrive at a compatible user ID (UID) and group ID (GID) mapping mechanism. If the SONAS is using an authentication method that is not compatible with IBM Spectrum Scale, the SONAS must first be converted to a compatible mechanism prior to commencing migration. This avoids the significant risk of incorrectly mapping users under the new authentication mechanism.
- The IBM Spectrum Scale environment will have the necessary space to accommodate all the data to be migrated from external sources
- The IBM Spectrum Scale environment can use CES protocol nodes as AFM gateways. If you wish to use non-CES nodes as AFM gateways, you will need to ensure they are properly configured for the task. Either way, the gateway nodes must belong to the same GPFS cluster as the CES protocol nodes.
- The migration plan assumes that we want to migrate source data (exports) to independent filesets on the IBM Spectrum Scale environment, which can subsequently be converted into AFM DR capable filesets for DR protection
- This migration plan can also be used to migrate data into filesets that will then be converted to AFM use (non AFM DR) if needed

Description of the various migration methods

In this document we review the five previously mentioned migration methods:
- Use Case #1 - Customer is able to use rsync/ndmpcopy + Robocopy / ftp / scp to transfer data
- Use Case #2 - Customer is able to use ACE between IBM Spectrum Scale and SONAS and re-point users only to the IBM Spectrum Scale side (small systems up to 200TB)
- Use Case #3a - Customer has HSM and is able to recall all HSM data to SONAS prior to migration using one of Use Cases #1 or #2
- Use Case #3b - Customer has HSM and is unable to recall all HSM data due to size/complexity. This use case will require the services of IBM Lab Based Services (LBS)
- Use Case #5 - Data migration using the forklift method

Data migration using GPFS policy based migration, and via restoration of backed up data, is not discussed in this section.

Use Case #1 - Traditional data copy using RSYNC / Robocopy / FTP / SCP tools

Most customers who have carried out some form of migration previously are already familiar with this approach to data transfer. At a high level, the approach is:
1. Identify data groups that need to be migrated together
2. Set up exports on the target platform
3. Initiate periodic copies of data from the source to the target environment until the transfer of the churn from source to destination can be completed within the required application outage window (for final cutover)

Assuming you are able to transfer the data at a rate and frequency that is higher than the change rate of the data in the source environment, it is possible for all the data to eventually be copied over. While IBM SONAS has a GPFS-aware rsync service, IBM Spectrum Scale does not. Therefore, you will not be able to transfer ACLs from the SONAS to IBM Spectrum Scale via the rsync program and will need to use another tool such as Robocopy.
Note: At this time the IBM Spectrum Scale environment does not support NDMP for backups or migration, so NDMP is not available as a means of data migration. For CIFS/SMB data, the Microsoft Robocopy tool will be able to retrieve the AD-compatible ACLs from your SMB source and write them to the IBM Spectrum Scale environment via the Samba exports on the CES protocol nodes within IBM Spectrum Scale.

Note: If your source data contains ACLs with references to old SIDs, you will also need to use IBM-provided tools to convert all of these ACLs to current SIDs for the users/groups. This is because, at the time of writing, IBM Spectrum Scale does not support SID History.
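Later in this use case, the text recommends running multiple transfer sessions in parallel over non-overlapping partitions of the source. One way to produce such partitions is a simple greedy split of the top-level source directories by size; the sketch below is our own illustration (the helper names and the largest-first heuristic are assumptions, not SONAS or Robocopy tooling):

```python
# Hedged sketch: split top-level source directories into N non-overlapping
# groups of roughly equal total size, one group per parallel transfer session.
import os

def dir_size(path: str) -> int:
    """Total bytes of the regular files under path (symlinks not followed)."""
    total = 0
    for dirpath, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass                      # file vanished mid-scan; skip it
    return total

def partition(dirs_with_sizes, sessions: int):
    """Greedy bin-packing: place each directory (largest first) into the
    currently least-loaded session, so no two sessions share a directory."""
    bins = [{"size": 0, "dirs": []} for _ in range(sessions)]
    for name, size in sorted(dirs_with_sizes, key=lambda x: -x[1]):
        target = min(bins, key=lambda b: b["size"])
        target["dirs"].append(name)
        target["size"] += size
    return bins
```

Each resulting group can then be fed to its own rsync or Robocopy job; because the groups never share a directory, the copies cannot overlap.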

It is essential that you ensure the authentication configuration on the IBM Spectrum Scale environment is compatible with your source environment, so that users are correctly identified and access is properly granted to their data only. Please refer to the Robocopy documentation for details on how to transfer data from your sources to IBM Spectrum Scale. At a high level you will need to do the following:
1. Set up the IBM Spectrum Scale environment for capacity and performance
2. Install and configure the protocol nodes
3. Configure the SMB protocol
4. Configure authentication on the protocol nodes to match the source
5. Verify users and groups can be resolved
6. Create the SMB exports and set permissions appropriately
7. Begin the Robocopy process from intermediary Windows nodes

You can run multiple transfer sessions in parallel as long as you ensure that you have adequately partitioned the data from the source so that your copies do not overlap. Once the data is transferred, you will be able to verify access before granting users and applications access to the data in the new location.

Note: if you intend to use AFM DR to protect your data between the production IBM Spectrum Scale cluster and the DR IBM Spectrum Scale cluster, you might find it convenient to set it up first, prior to beginning the Robocopy tasks. If transferring NFS data only, you could use cp --preserve=xattr to copy any ACLs, or use the AFM method built into GPFS. Please refer to the next section for more details.

Use Case #2 - Data migration using Active File Management (AFM) capability (Single Site)

Note: ACE and AFM refer to the same underlying remote file caching technology (Panache) and are compatible with each other.

AFM Overview

AFM (Active File Management) is a technology primarily suited to remote caching. However, it also has attributes that make it possible to exploit for the purposes of migrating data.
An AFM cache is primarily an independent fileset located on a IBM Spectrum Scale filesystem which can hold a cached copy of files located on an AFM home (essentially any NFS export can be a data source). Users on the IBM Spectrum Scale environment can see the filenames of every file on the home via this cache fileset. If a user requests to read a file, the cache will automatically read the file from cache, and, if the file is not yet cached, it will retrieve the file from home and copy it to cache before fulfilling the read request from the user. For write requests, writes are allowed to the cached file and any updates may be pushed back to the home, depending on the type of cache you created. Once the file is copied to the

cache, the user is able to enjoy local file access speeds, with updates being pushed asynchronously to the home.

There are 4 types of AFM caches:

1. Read Only (RO): not updateable; only reflects the home
2. Local Update (LU): updates are not pushed back to the home
3. Single Writer (SW): only one cache for a home may write and push back updates; other caches to this home may be in RO mode only
4. Independent Writer (IW): multiple caches may independently write to the home export. Simultaneous writes to the same file from different caches will damage the file. Updates from one cache are pushed to the home export, and these updates will be seen by other IW caches for that export.

Generally, for AFM, the cache fileset does not need to be the same size as the home: the cache works within the space you set by evicting older or less recently used files to make room for newer files requested from the home. For migration purposes, however, we want to configure the cache to retain a copy of every file in the home, because eventually we wish to break the AFM relationship with the home and transform the cache fileset into a normal data fileset for user access.

For migration purposes, we can use the RO, LU, or IW modes of AFM to match specific needs. In order to ensure we can get a full copy of the home data in the cache, we also need to ensure that the cache is at least as large as the home export and that cache eviction is disabled. Additionally, we want to ensure that the prefetch threshold on the cache is set to 0, implying that all (100%) of each file must be cached. Finally, we can run a command on the cache to prepopulate it with the metadata (filenames and attributes) for each file in the home export, as well as tell the cache to pull the data for these files over to the cache.
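The file list used to drive such a prefetch can be produced with standard tools; a minimal sketch follows, where the mount point and output path are hypothetical:

```shell
# Hypothetical path where the home export is visible on a gateway node.
HOME_MOUNT=/mnt/sonas_fileset_001

# List every file and directory under the export, one full path per line.
# This list can later be fed to the AFM prefetch tooling.
find "$HOME_MOUNT" -print > /tmp/fileset_001.files

wc -l /tmp/fileset_001.files   # quick sanity check of the entry count
```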
By using a file list, we can target and trim the set of files to be pulled, to our benefit. It must be noted, though, that each of the cache modes has specific behaviors which may or may not meet a given export's migration needs. Therefore, it is important that you properly familiarize yourself with each of the caching modes and their behavior so that you can choose the appropriate mode when you migrate an export. You may use a combination of cache types to match the needs of your different exports.

Please note: a GPFS cluster (IBM Spectrum Scale or older GPFS environments) can have both AFM homes and AFM caches at the same time. AFM technology is based on a home-cache relationship at the fileset level; there is no concept of a caching relationship at the cluster level. Therefore, it is entirely possible for one cluster to have relationships to many caches and homes on many other clusters. This is especially useful for migration purposes in dual cluster environments, as you will see in the section on dual site migration.

Migration Steps using AFM

The following is an overview of what needs to be done:

1. Collect relevant configuration from the SONAS to be used in IBM Spectrum Scale
2. Set up SONAS for the migration phase using AFM
3. Create recovery points / snapshots on the SONAS (if roll back is required)
4. Set up AFM cache filesets and exports on IBM Spectrum Scale (EA) using Independent Writer (IW) mode in this example
5. Disable cache eviction and set the prefetch threshold on each cache set up on the IBM Spectrum Scale environment
6. Create IBM Spectrum Scale exports to match the source exports
7. Ensure authentication is compatible with the source environment
8. Repoint users and applications to IBM Spectrum Scale exports
9. Use AFM control tools to pull in all data from the home fileset to the cache fileset
10. Verify all data has been migrated to cache
11. Disable the AFM cache relationship with the source exports
12. Convert the AFM cache to an ordinary fileset (if necessary)

1. Collect relevant configuration from the SONAS to be used in IBM Spectrum Scale

Note: the assumption here is that you have already upgraded your SONAS or Storwize V7000 Unified environment to v1.5.3.x or higher. We will use the notations SA, EA, TA etc. as described in the legend associated with Figure 2.
On the SONAS management node, log in as root, gather IP and export information on SA, and store it in a location such as /ftdc/migration:

mkdir -p /ftdc/migration
lsnwinterface > /ftdc/migration/lsnwinterface.txt
lsexport -v >> /ftdc/migration/lsexport_v.txt
lsfset {filesystemname} >> /ftdc/migration/lsfset_{filesystemname}.txt
lsauth >> /ftdc/migration/lsauth.txt
net conf list >> /ftdc/migration/net_conf_list.txt
lsidmap >> /ftdc/migration/lsidmap.txt
lshsmcfg >> /ftdc/migration/lshsmcfg.txt
lshsm >> /ftdc/migration/lshsm.txt

Identify any TSM and HSM configuration on SA:

lstsmnode >> /ftdc/migration/lstsmnode.txt
lshsm >> /ftdc/migration/lshsm.txt
cp /opt/tivoli/tsm/client/ba/bin/dsm.sys /ftdc/migration/mgmt001st001_dsm.sys
cp /opt/tivoli/tsm/client/ba/bin/dsm.opt /ftdc/migration/mgmt001st001_dsm.opt

If your system is configured for TSM and you wish to preserve the namespace on the target environment so that you can continue to use your TSM backups without first needing a new full backup, then locate and record the node and

proxy definitions for the system on your TSM server. You will need to consider extending the proxy definition to include the nodes in the IBM Spectrum Scale environment that will be doing backup. Your ability to keep the original namespace after migration will be key to continuing to use the existing TSM backups. You will also need to recreate the TSM shadow databases that mmbackup uses. This can be done via the dsmc resync command, which should be run on the IBM Spectrum Scale cluster. For more details, please refer to the mmbackup man page as well as the TSM documentation.

In the June release of IBM Spectrum Scale, HSM stubs cannot be migrated and are not supported. Therefore, if you have an HSM-enabled environment and you wish to migrate to IBM Spectrum Scale, you will need to recall all the data from HSM storage back onto the SONAS prior to commencing migration. This will help ensure that the migration process does not get stuck waiting for HSM tape recalls and does not risk generating a recall storm.

2. Set up SONAS (SA) exports as AFM homes for migration phase

Setting up SONAS as home for AFM: identify each export that needs to be migrated, then configure each export as an AFM home (if it is not already one):

1. Ensure that every export to be migrated is also configured for NFS
   a. Use chexport or mkexport to configure for NFS with no_root_squash
2. Delete the existing export and create a new one for NFS only. Update the NFS export definitions to limit client IP addresses to the IBM Spectrum Scale gateways (AFM gateway nodes). This step is necessary to ensure that access to the home is controlled and managed, avoiding the risk of data corruption from multiple sources; it ensures that all updates to the exports will come only from the EA AFM clients
3. Set the export as an AFM home by running the GPFS CLI tool: mmafmconfig {enable | disable} <exportpath>
4.
Stop any further HSM migration on the SA fileset(s) being migrated. If that is not possible, then you must at least update the HSM policies to exclude the migrating filesets from consideration. The typical triggers for a migration policy are GPFS callbacks tied to filesystem or fileset space utilization, and the periodic running of migration policies. You will need to ensure that all the possible triggers are identified and the associated policies are modified to exclude the migrating filesets from scope.

3. Maintaining Recovery Points

If you wish to maintain a recovery point to roll back to in case there is an issue with the migration, please ensure you create a snapshot to that effect for the home

SA→Fileset_001 before you set up the IW cache on the IBM Spectrum Scale environment (EA). Ensure this snapshot has a name that reflects this usage. The command to use for taking snapshots on the home is mmcrsnapshot:

Usage: mmcrsnapshot Device SnapshotName [-j Fileset]

During migration you may also create additional snapshots from the cache, if you are using SW mode. An AFM snapshot created on the AFM cache will trigger the following actions:

1. Flush local pending IO on the cache fileset to disk
2. Take a snapshot on the cache fileset
3. Flush pending changes to the home fileset
4. Create a snapshot on the home fileset

The command to use for taking snapshots on the cache is mmpsnap:

Usage: mmpsnap Device create -j FilesetName [{[--comment Comment] [--uid ClusterUID]} | --rpo] [--wait]
   or mmpsnap Device delete -s SnapshotName -j FilesetName
   or mmpsnap Device status -j FilesetName

mmpsnap does not function for AFM IW mode but will work for AFM SW mode. By using this type of snapshot you will have the ability to roll back changes (or refer to previous versions of a change) if you need to. If you are using AFM IW mode, you can still take periodic fileset-level snapshots of the home.

Note: the attributes [--uid ClusterUID], [--rpo] and [--wait] are not available for standard AFM and cannot be used during the migration phase. They become available later, should you choose to set up AFM DR for dual site failover capability.

4. Set up AFM Cache filesets and exports on IBM Spectrum Scale (EA) using Independent Writer (IW) mode

Note: we will use Fileset_001 as the sample fileset being migrated. SA→Fileset_001 will refer to Fileset_001 on the SONAS in site A, while EA→Fileset_001 will refer to Fileset_001 on the IBM Spectrum Scale environment in site A. For the purpose of our migration scenario, we will assume that we will be using caches in IW mode.
To create an AFM cache fileset you will need the following information:

- One or more AFM gateway nodes on the cache cluster that can access the home export
- IP address of the home export

- Name and path of the home export
- Name of the AFM cache (ideally this will match the home name in order to preserve the namespace post-migration)
- The mode you want this cache to operate in (RO/LU/SW/IW); we will assume IW in this scenario
- The protocol you will use over AFM (for migration from SONAS we will use the NFS v3 protocol)

An AFM gateway is required on the cache cluster. This gateway is the node that communicates with the home cluster and queues reads and updates to it on behalf of the rest of the cache cluster. To specify an AFM gateway, use the following command:

mmchnode {--gateway | --nogateway} -N node1,node2,...

For example, to create EA→Fileset_001, run:

mmcrfileset fs1 Fileset_001 -p afmtarget=nfs://SONAS_IP1/ibm/fs1/fileset_001 -p afmmode=iw --inode-space new

This command will create a new Independent Writer (afmmode=iw) cache fileset on the IBM Spectrum Scale environment, which will use the /ibm/fs1/fileset_001 export on SONAS_IP1 as the home over the NFS protocol (afmtarget). Once this command is executed, you will be able to link the fileset to the relevant part of the namespace within fs1 and access it:

mmlinkfileset fs1 Fileset_001 -J /ibm/fs1/data/fileset_001

A directory listing in that location will then show you the contents (file names) of the home export this AFM cache is pointing to:

cd /ibm/fs1/data/fileset_001
ls

You will see that initially the file names and directories are listed, but no storage space has been locally allocated for their data. This is normal. Our next step is to disable cache eviction before we attempt to pull the data from the source to the cache.

5. Disable cache eviction and afmprefetchthreshold

As mentioned before, for migration purposes it is important that we disable the cache eviction feature and set afmprefetchthreshold=0. This will help ensure that you have a full copy of every file on the cache.
mmchfileset Device {FilesetName | -J JunctionPath} [-j NewFilesetName] [-t NewComment] [-p afmAttribute=Value...] [--inode-limit MaxNumInodes[:NumInodesToPreallocate]] [--allow-permission-change PermissionChangeMode]

-p afmenableautoeviction
This AFM configuration attribute enables eviction on a given fileset. A yes value specifies that eviction is allowed on the

fileset. A no value specifies that eviction is not allowed on the fileset.

-p afmprefetchthreshold
This AFM configuration attribute controls partial file caching and prefetching. Valid values range from 0 through 100; for migration purposes we use 0, which ensures that each file is cached in its entirety.

For example, to disable automatic cache eviction on the IBM Spectrum Scale AFM cache fileset EA→Fileset_001 on filesystem fs1, run the following command on EA:

mmchfileset fs1 Fileset_001 -p afmenableautoeviction=no -p afmprefetchthreshold=0

Note: this step alone is not enough; you must also ensure you have sufficient disk space on the cache to host all the data on the home.

6. Create/Set IBM Spectrum Scale exports and IPs to match the source exports

The purpose of these exports is to enable your users and applications currently using the exports on the SONAS to be moved over / repointed to these new exports as soon as you have completed creating the corresponding AFM cache. Some commands that you can use to view services and set up exports:

mmces service list -a --verbose
mmsmb export list
mmnfs export list

Please refer to the IBM Spectrum Scale documentation for creating the necessary exports on the CES nodes to match your requirements.

7. Ensure authentication is compatible with the source environment

The AFM prefetch command provides a means for a cache to retrieve and cache data located on a home export. The AFM cache client connects to the home export over the NFS protocol using UID 0 (root). If the home is a GPFS cluster, then the AFM client uses special primitives over the NFS protocol to retrieve all the metadata information, including owner, group, ACLs and extended attributes, for each file and directory object as well. The AFM client also retrieves a file's data in response to a read request. The owner, group and ACL information is passed from home to cache unmodified.
Therefore, it is imperative that both the home and the cache use the same authentication reference point, to ensure that the owner, group and ACL entries on the home map to the same owner, group and ACL on the cache. Failure to do so risks the wrong users getting access to the files. Steps to validate and ensure you have the correct mapping:

- Run lsauth on the SONAS to get information on the authentication mapping method

- Run mmuserauth service list on the IBM Spectrum Scale environment to obtain the current authentication information
- Ensure they both refer to the same authentication provider
- To verify the mapping, for a selection of users on the SONAS, ensure that these users resolve to the same UIDs on the IBM Spectrum Scale environment (if possible, do an exhaustive comparison)
- Likewise, for a sample selection of groups on the SONAS, ensure that these groups resolve to the same GIDs on the IBM Spectrum Scale environment (if possible, do an exhaustive comparison)
- If the results differ, it is essential that you resolve the discrepancies prior to beginning the migration
- If the user IDs are different, please open a PMR and place a support call to IBM to arrange for a specialist to make the necessary changes to the Spectrum Scale configuration so that the IDs align with your SONAS

8. Repoint users and applications to IBM Spectrum Scale exports

A convenient feature of AFM IW mode is that it makes it possible for users and applications to be repointed to the cache immediately. If there are a lot of files in the home export, it is better to ensure that, at the least, all metadata information is prefetched from the home, as this will enhance the performance and behavior of the cache for the users and applications.

9. Use AFM control tools to pull in all file metadata from the home fileset to the cache fileset

To prefetch file metadata, the AFM client on the cache is provided with a file containing the list of files in the home export that need to be cached locally.
Usage: mmafmctl Device prefetch -j FilesetName [--metadata-only] [{--list-file ListFile} | {--home-list-file HomeListFile} | {--home-inode-file PolicyListFile}] [--home-fs-path HomeFilesystemPath] [-s LocalWorkDirectory]

You have the option of specifying --metadata-only in the prefetch command so that only each file's metadata (name, size, creation date, modified date, last accessed date, extended user attributes and ACL information) is cached locally first. To obtain a list of files on the source export, you can use the Linux find command or, if the source is a GPFS cluster, the GPFS mmapplypolicy command. This list represents the files that need to be cached. Because we are doing a migration, we need to ensure that all files in the export are listed, so that all of them are migrated.
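Such lists can also be partitioned with standard tools, for example by modification age, so that different portions can be prefetched at different times. A sketch follows; the cache path and the 14-day threshold are illustrative only:

```shell
# Hypothetical cache-side path of the AFM fileset.
CACHE=/ibm/fs1/data/fileset_001

# Files modified within the last 14 days: prefetch these first.
find "$CACHE" -type f -mtime -14 > /tmp/fileset_001.active

# Everything else (14 days or older): defer to an off-peak prefetch run.
find "$CACHE" -type f -mtime +13 > /tmp/fileset_001.inactive
```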

It is important that you take note of the relative position of the file names (full path). If the file list is based on the view from the cache, use the --list-file parameter; if the file list is based on the view from the home export, use the --home-list-file parameter; and if you use the output of mmapplypolicy to generate the list, use the --home-inode-file parameter. Please refer to Appendix A for a sample of the scanning policy.

AFM prefetch will create the directory entries on the cache fileset to match the file names and directories, as well as all the metadata information. Additionally, as each of the cached file names will have local inode numbers, additional AFM extended attributes are automatically created for each file to help maintain a relationship between the cached file entry and the corresponding file entry in the home export.

Once the metadata information is prepopulated on the cache, you can repoint users and applications to the IBM Spectrum Scale export. Initial loading of file data will occur as users and applications attempt to read or update any file in the cache. If the file data does not yet exist on the cache, AFM will retrieve the file from home. This implies that in some circumstances your applications will experience some performance degradation for the initial access of a file, until that file is cached locally. Any updates to the cached files translate to writes to the home occurring in the background.

To pull all file data from the home, you can use the AFM prefetch tool again with a refreshed file list, without specifying the --metadata-only parameter. You can also be selective about when you pull non-active data. For example, you can create one file list representing all active data (say, data that has been accessed over the past 2 weeks) and another file list for all non-active data.
Then you could defer the caching of non-active data to an off-peak window at night, when the performance impact of transferring all this data is less important.

10. Verify all data has been migrated to cache

This process involves the following:

- Conduct a full scan of filenames on the AFM home, obtain the checksums for each of these files, and save the results as a sorted output
- Conduct a full scan of filenames on the AFM cache, obtain the checksums for each of these files, and save the results as a sorted output
- The minimum fields in each scan file should be ctime, mtime, atime, checksum and full path name
- Run diff (or another program of your choice) to compare the two outputs and list the differences
- The differences should only refer to files that are new or have recently been accessed, modified or deleted
- The list of deleted files may appear on both sides of the AFM relationship

- The list of new or modified files should be on the cache side only. If you have such files on the home side, something has allowed updates to the home other than via the cache; please rectify this as soon as possible.

Note: we are not really interested in the file size, as the checksum program will have triggered a full cache recall where necessary.

Note: do not run these tests on filesets that have HSM-migrated files, as this action will trigger a recall storm. If you have HSM-migrated files, you will need to ensure that your file scan process excludes migrated files from the checksum process.

11. Convert the AFM cache to an ordinary fileset (if necessary)

It is important that you carefully consider the future / intended use of the cache fileset once all the file data has been fully cached from the home export. If the fileset will not serve any special function other than as an ordinary home for users' data, then you might want to convert the AFM cache back to a regular fileset. However, if you plan on involving this fileset in further AFM activity, including AFM DR, then you must defer converting it, because converting an AFM fileset to an ordinary fileset is currently a one-way step and cannot be reversed.

When you are satisfied that your users are able to access all your data on the fileset via the cache, and you have verified that all the data has been successfully cached, you can suspend AFM cache writes back to the home as your next migration step. This is essentially the last step in the migration process for this export in SA.

Caution: if your current AFM home is being replicated to another system, this action will cease updates to the AFM home; your remote system will no longer see any changes occurring on the cache.
To disable AFM on the cache, run the following commands:

mmunlinkfileset <filesystem> <fileset>
mmchfileset <filesystem> <fileset> -p afmtarget=disable

Note: your AFM cache can be reconfigured to use a different home. However, in a dual site configuration, we do not use this capability to establish a new AFM home for this fileset. Instead, we use the AFM DR methods to set the fileset up for AFM DR. Please refer to the section on dual site migration for more information.
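The scan-and-compare verification described in step 10 can be sketched with standard tools. In this sketch, md5sum stands in for your checksum program, the mount points are hypothetical, and only checksum and path are recorded (your scan should also capture ctime, mtime and atime as noted above):

```shell
# Hypothetical mount points for the home export and the cache fileset.
HOME_MNT=/mnt/sonas_fileset_001
CACHE_MNT=/ibm/fs1/data/fileset_001

# Checksum every regular file relative to its root, so the two reports
# are directly comparable, then sort by path for a stable diff.
scan() {
    local root=$1 out=$2
    ( cd "$root" && find . -type f -print0 | xargs -0 -r md5sum | sort -k2 ) > "$out"
}

scan "$HOME_MNT"  /tmp/home.scan
scan "$CACHE_MNT" /tmp/cache.scan

# Any lines reported here should correspond only to files changed
# between the two scans; investigate anything else before cutover.
diff /tmp/home.scan /tmp/cache.scan
```

Remember that checksumming forces a full read, so on the cache side this itself completes any outstanding recalls; do not run it against HSM-migrated filesets.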

Use Case #3a - Customer has HSM and is able to recall all HSM data to SONAS prior to migration using one of use cases #1 or #2 (Single Site)

Until IBM Spectrum Scale has formal support for HSM on AFM and AFM DR filesets, you will need to recall all files from tape before you can migrate those filesets. This is because current AFM logic will not appropriately handle HSM-migrated stub files as special cases on the cache. If you plan to conduct a migration after recalling all your files back from tape, you should split the recall process from the migration process. It is not wise to use the migration AFM prefetch process as the means of triggering recalls, as this process can become stuck waiting for files to recall, rendering the cache hung for the duration. For some customers, a recall of 100TB of data from tape can take months, depending on their hardware infrastructure for SONAS and the tape environment.

If you are able to recall your HSM data back from tape onto your home/source exports first, prior to migration, you should do this as a separate activity. Then the migration steps will be the same as in Use Case #2 above. Therefore, this use case is essentially identical to Use Case #2, except that a recall of all HSM data is needed for the migration to succeed.

Use Case #3b - Customer has HSM and is unable to recall all HSM data due to size/complexity. This use case will require the services of IBM Lab Based Services (LBS)

This option is currently being developed further and is expected to become available in a future release of IBM Spectrum Scale.

Use Case #5 - Data migration using Forklift method

This option is currently being developed further and is expected to become available in a future release of IBM Spectrum Scale.

Migrating from SONAS to IBM Spectrum Scale platforms - Dual site coexistence

OVERVIEW

The majority of SONAS customers have 2 or more SONAS installations and use them in an active/passive or production/DR manner. Some customers use their SONAS in an active/active manner, wherein each SONAS carries active data for users in its local geography and also maintains a replicated (rsync) copy of file data in a replicated filesystem on the other SONAS. The replicated copies are treated as read-only/passive unless the source SONAS is unavailable and the customer has initiated a failover to the remaining SONAS.

For the purposes of this migration discussion, we will assume that we have 2 SONAS implemented in a manner where SA is active (production) and SB is passive (DR). For customers who also use TSM, we will use the terms TA (production site TSM server) and TB (DR site TSM server). We do not imply that TB is a read-only replica of TA; we assume that TA and TB are independent systems, each tasked with backing up SA and SB respectively. If HSM is configured, then TA and TB are also the HSM servers (see Figure 2).

Filesets often hold hundreds of TB of capacity and many millions of files. It is often very difficult to re-copy all the data from the new production environment (EA) to the new DR environment (EB) because of available WAN bandwidth, among other factors. In this discussion, we assume that the customer is already regularly replicating their active filesystems from SA to SB using the SONAS async replication service (cnreplicate).

Figure 2 overleaf depicts a typical SONAS (SA) to SONAS (SB) relationship and suggests how new replacement IBM Spectrum Scale clusters may be placed. TA and TB represent TSM backup servers in production and DR respectively.

Figure 2 - Dual site SONAS with corresponding IBM Spectrum Scale coexisting

Legend:
Site A (Production site): SA = SONAS in Site A, EA = IBM Spectrum Scale in Site A, TA = TSM/HSM Server in Site A
Site B (Disaster Recovery site): SB = SONAS in Site B, EB = IBM Spectrum Scale in Site B, TB = TSM/HSM Server in Site B

Key dependencies (assets, technical capability, know-how, timeline): it is essential that proper care is taken when migrating data from a legacy dual-site environment to a new dual site IBM Spectrum Scale environment. There are a number of ways data migration can be carried out. For example, you can:

- Migrate production data to IBM Spectrum Scale and have the new (production) IBM Spectrum Scale replicate all its data to the new DR IBM Spectrum Scale cluster. This method introduces an exposure window wherein changed data residing in the new production environment is not yet copied to the DR environment. Should a disaster occur during this exposure period, you will have lost this data.
- Migrate production data from SONAS to IBM Spectrum Scale while also migrating data from the DR SONAS to IBM Spectrum Scale. AFM in IW mode on the production IBM Spectrum Scale will replicate data back to the SONAS, which in turn will replicate that data to the SONAS in DR via the rsync replication process. On the DR IBM Spectrum Scale cluster, you can also set up an AFM IW

cache that is periodically (and frequently) prefetching the data from the SONAS in DR. This way you are able to eliminate the exposure window should you have a disaster. Eliminating the exposure window does not imply a reduction in any risk you may already have incurred with your method of replicating data from the SONAS in production to the SONAS in DR; rather, it merely implies that your risk exposure is not expected to be any greater.

In this design, the data update flow (for each fileset), until you are ready to cut over, will be:

Users/Apps → EA (AFM IW) → SA (async repl) → SB (AFM prefetch) → EB

Eventually, when all the data is available on the cache (EA), we will be able to do the following:

- Pause user and application IO to EA
- Flush all pending changes to SA
- Break the relationship between EA and SA
- Run rsync to push changes to SB
- Prefetch data on SB to EB
- Break the relationship between EB and SB
- Create a new AFM DR relationship EA→EB
- Resume IO to EA and verify that updates are being pushed to EB

At the end of the migration of a fileset from SA, the data will be on EA with an AFM DR relationship to EB (EA→EB).

As previously mentioned, one of the current limitations with AFM is that you cannot convert an existing GPFS fileset to an AFM cache fileset. An existing fileset can become an AFM home but not an AFM cache. Therefore, on the SONAS (SB), any existing fileset can only become a home, not a cache. Our only option for migrating data over AFM to the EB cluster is to create IW caches on that cluster with the home on the SONAS (SB) and run periodic prefetch actions to fully populate the cache.
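The per-fileset cutover sequence above can be sketched using the commands introduced earlier in this document. The filesystem and fileset names are hypothetical, and the steps outside the scope of this section (quiescing IO, the final rsync cycle, and the AFM DR setup) are shown only as placeholder comments:

```shell
# 1. Pause user and application IO to EA (quiesce exports and applications).

# 2. On EA, flush all pending cache changes back to the home on SA.
mmafmctl fs1 flushPending -j Fileset_001

# 3. On EA, break the relationship between EA and SA.
mmunlinkfileset fs1 Fileset_001
mmchfileset fs1 Fileset_001 -p afmtarget=disable

# 4. On SA, run the final async replication (rsync) cycle to SB.

# 5. On EB, pull the last changes from SB with a refreshed file list.
mmafmctl fs1 prefetch -j Fileset_001 --home-list-file /tmp/fileset_001.files

# 6. On EB, break the relationship between EB and SB (as in step 3).

# 7. Create the AFM DR relationship EA -> EB (see the AFM DR documentation).

# 8. Resume IO to EA and verify that updates are being pushed to EB.
```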

Figure 3 - Dual site data migration (per fileset) to IBM Spectrum Scale

Figure 3 shows a dual-site SONAS implementation on the left with new replacement IBM Spectrum Scale clusters on the right. The two SONAS are configured with periodic replication from SA to SB (green arrow). In this setup, all user data is located on the filesystem in SA, and the changed files in the entire filesystem are copied over to the target filesystem in SB via the async replication service. The target SONAS (SB) serves as a DR failover for SA. In the event of a site outage or loss of SA, users are repointed to their data on SB. The age of the data (RPO) will be the replication period plus the time to replicate.

Again, we will migrate data from SONAS to IBM Spectrum Scale one fileset at a time. We assume that the IBM Spectrum Scale environments at both sites are already properly configured to match the respective SONAS implementations with respect to authentication, protocol export services (SMB/NFS) and data placement policies. The end goal with this type of migration is to eventually have the IBM Spectrum Scale environment in site A directly replicate its data to its corresponding fileset in site B without the aid of the SONAS. Eventually, when all data sets have been migrated and are independently replicating directly to IBM Spectrum Scale in site B, you will be in a position to decommission both SONAS systems in accordance with your company's policies.

As we are not able to convert an existing fileset into an AFM fileset, we will need to use the migration method described for single-site migration earlier in this document on each of the SA→EA and SB→EB migrations. For more details on single-site migration, please refer to that section of this document.


More information

A Thorough Introduction to 64-Bit Aggregates

A Thorough Introduction to 64-Bit Aggregates Technical Report A Thorough Introduction to 64-Bit Aggregates Shree Reddy, NetApp September 2011 TR-3786 CREATING AND MANAGING LARGER-SIZED AGGREGATES The NetApp Data ONTAP 8.0 operating system operating

More information

Insights into TSM/HSM for UNIX and Windows

Insights into TSM/HSM for UNIX and Windows IBM Software Group Insights into TSM/HSM for UNIX and Windows Oxford University TSM Symposium 2005 Jens-Peter Akelbein (akelbein@de.ibm.com) IBM Tivoli Storage SW Development 1 IBM Software Group Tivoli

More information

AFM Migration: The Road To Perdition

AFM Migration: The Road To Perdition AFM Migration: The Road To Perdition Spectrum Scale Users Group UK Meeting 9 th -10 th May 2017 Mark Roberts (AWE) Laurence Horrocks-Barlow (OCF) British Crown Owned Copyright [2017]/AWE GPFS Systems Legacy

More information

ZYNSTRA TECHNICAL BRIEFING NOTE

ZYNSTRA TECHNICAL BRIEFING NOTE ZYNSTRA TECHNICAL BRIEFING NOTE Backup What is Backup? Backup is a service that forms an integral part of each Cloud Managed Server. Its purpose is to regularly store an additional copy of your data and

More information

IBM Spectrum Scale Archiving Policies

IBM Spectrum Scale Archiving Policies IBM Spectrum Scale Archiving Policies An introduction to GPFS policies for file archiving with Linear Tape File System Enterprise Edition Version 4 Nils Haustein Executive IT Specialist EMEA Storage Competence

More information

XenData Product Brief: SX-550 Series Servers for LTO Archives

XenData Product Brief: SX-550 Series Servers for LTO Archives XenData Product Brief: SX-550 Series Servers for LTO Archives The SX-550 Series of Archive Servers creates highly scalable LTO Digital Video Archives that are optimized for broadcasters, video production

More information

IBM V7000 Unified R1.4.2 Asynchronous Replication Performance Reference Guide

IBM V7000 Unified R1.4.2 Asynchronous Replication Performance Reference Guide V7 Unified Asynchronous Replication Performance Reference Guide IBM V7 Unified R1.4.2 Asynchronous Replication Performance Reference Guide Document Version 1. SONAS / V7 Unified Asynchronous Replication

More information

Proof of Concept TRANSPARENT CLOUD TIERING WITH IBM SPECTRUM SCALE

Proof of Concept TRANSPARENT CLOUD TIERING WITH IBM SPECTRUM SCALE Proof of Concept TRANSPARENT CLOUD TIERING WITH IBM SPECTRUM SCALE ATS Innovation Center, Malvern PA Joshua Kwedar The ATS Group October November 2017 INTRODUCTION With the release of IBM Spectrum Scale

More information

TSM Paper Replicating TSM

TSM Paper Replicating TSM TSM Paper Replicating TSM (Primarily to enable faster time to recoverability using an alternative instance) Deon George, 23/02/2015 Index INDEX 2 PREFACE 3 BACKGROUND 3 OBJECTIVE 4 AVAILABLE COPY DATA

More information

IBM Spectrum Scale Archiving Policies

IBM Spectrum Scale Archiving Policies IBM Spectrum Scale Archiving Policies An introduction to GPFS policies for file archiving with Spectrum Archive Enterprise Edition Version 8 (07/31/2017) Nils Haustein Executive IT Specialist EMEA Storage

More information

Side Load Feature Nasuni Corporation Boston, MA

Side Load Feature Nasuni Corporation Boston, MA Feature Nasuni Corporation Boston, MA Overview When Nasuni first supported the Disaster Recovery (DR) process, it was intended to be used to recover from true disasters such as hardware failure or buildings

More information

Exam Name: Midrange Storage Technical Support V2

Exam Name: Midrange Storage Technical Support V2 Vendor: IBM Exam Code: 000-118 Exam Name: Midrange Storage Technical Support V2 Version: 12.39 QUESTION 1 A customer has an IBM System Storage DS5000 and needs to add more disk drives to the unit. There

More information

IBM Storwize V7000 Unified

IBM Storwize V7000 Unified IBM Storwize V7000 Unified Pavel Müller IBM Systems and Technology Group Storwize V7000 Position Enterprise Block DS8000 For clients requiring: Advanced disaster recovery with 3-way mirroring and System

More information

A Thorough Introduction to 64-Bit Aggregates

A Thorough Introduction to 64-Bit Aggregates TECHNICAL REPORT A Thorough Introduction to 64-Bit egates Uday Boppana, NetApp March 2010 TR-3786 CREATING AND MANAGING LARGER-SIZED AGGREGATES NetApp Data ONTAP 8.0 7-Mode supports a new aggregate type

More information

Migration. 22 AUG 2017 VMware Validated Design 4.1 VMware Validated Design for Software-Defined Data Center 4.1

Migration. 22 AUG 2017 VMware Validated Design 4.1 VMware Validated Design for Software-Defined Data Center 4.1 22 AUG 2017 VMware Validated Design 4.1 VMware Validated Design for Software-Defined Data Center 4.1 You can find the most up-to-date technical documentation on the VMware Web site at: https://docs.vmware.com/

More information

An Introduction to GPFS

An Introduction to GPFS IBM High Performance Computing July 2006 An Introduction to GPFS gpfsintro072506.doc Page 2 Contents Overview 2 What is GPFS? 3 The file system 3 Application interfaces 4 Performance and scalability 4

More information

TS7700 Technical Update TS7720 Tape Attach Deep Dive

TS7700 Technical Update TS7720 Tape Attach Deep Dive TS7700 Technical Update TS7720 Tape Attach Deep Dive Ralph Beeston TS7700 Architecture IBM Session objectives Brief Overview TS7700 Quick background of TS7700 TS7720T Overview TS7720T Deep Dive TS7720T

More information

Experiences in Clustering CIFS for IBM Scale Out Network Attached Storage (SONAS)

Experiences in Clustering CIFS for IBM Scale Out Network Attached Storage (SONAS) Experiences in Clustering CIFS for IBM Scale Out Network Attached Storage (SONAS) Dr. Jens-Peter Akelbein Mathias Dietz, Christian Ambach IBM Germany R&D 2011 Storage Developer Conference. Insert Your

More information

IBM Tivoli Storage Manager for HP-UX Version Installation Guide IBM

IBM Tivoli Storage Manager for HP-UX Version Installation Guide IBM IBM Tivoli Storage Manager for HP-UX Version 7.1.4 Installation Guide IBM IBM Tivoli Storage Manager for HP-UX Version 7.1.4 Installation Guide IBM Note: Before you use this information and the product

More information

The Google File System

The Google File System The Google File System Sanjay Ghemawat, Howard Gobioff and Shun Tak Leung Google* Shivesh Kumar Sharma fl4164@wayne.edu Fall 2015 004395771 Overview Google file system is a scalable distributed file system

More information

Storage for HPC, HPDA and Machine Learning (ML)

Storage for HPC, HPDA and Machine Learning (ML) for HPC, HPDA and Machine Learning (ML) Frank Kraemer, IBM Systems Architect mailto:kraemerf@de.ibm.com IBM Data Management for Autonomous Driving (AD) significantly increase development efficiency by

More information

From an open storage solution to a clustered NAS appliance

From an open storage solution to a clustered NAS appliance From an open storage solution to a clustered NAS appliance Dr.-Ing. Jens-Peter Akelbein Manager Storage Systems Architecture IBM Deutschland R&D GmbH 1 IBM SONAS Overview Enterprise class network attached

More information

The Google File System

The Google File System The Google File System Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung December 2003 ACM symposium on Operating systems principles Publisher: ACM Nov. 26, 2008 OUTLINE INTRODUCTION DESIGN OVERVIEW

More information

AUTOMATED RESTORE TESTING FOR TIVOLI STORAGE MANAGER

AUTOMATED RESTORE TESTING FOR TIVOLI STORAGE MANAGER AUTOMATED RESTORE TESTING FOR TIVOLI STORAGE MANAGER TSMworks, Inc. Based in Research Triangle area, NC, USA IBM Advanced Business Partner Big fans of Tivoli Storage Manager Broad experience with Fortune

More information

The Google File System

The Google File System October 13, 2010 Based on: S. Ghemawat, H. Gobioff, and S.-T. Leung: The Google file system, in Proceedings ACM SOSP 2003, Lake George, NY, USA, October 2003. 1 Assumptions Interface Architecture Single

More information

IBM Tivoli Storage Manager HSM for Windows Version 7.1. Administration Guide

IBM Tivoli Storage Manager HSM for Windows Version 7.1. Administration Guide IBM Tivoli Storage Manager HSM for Windows Version 7.1 Administration Guide IBM Tivoli Storage Manager HSM for Windows Version 7.1 Administration Guide Note: Before using this information and the product

More information

An introduction to IBM Spectrum Scale

An introduction to IBM Spectrum Scale IBM Platform Computing Thought Leadership White Paper February 2015 An introduction to IBM Spectrum Scale A fast, simple, scalable and complete storage solution for today s data-intensive enterprise 2

More information

IBM Spectrum Scale Strategy Days

IBM Spectrum Scale Strategy Days IBM Spectrum Scale Strategy Days Backup of IBM Spectrum Scale file systems Dominic Müller-Wicke IBM Development IBM s statements regarding its plans, directions, and intent are subject to change or withdrawal

More information

Infinite Volumes Management Guide

Infinite Volumes Management Guide ONTAP 9 Infinite Volumes Management Guide September 2016 215-11160_B0 doccomments@netapp.com Visit the new ONTAP 9 Documentation Center: docs.netapp.com/ontap-9/index.jsp Table of Contents 3 Contents

More information

IBM Spectrum Scale vs EMC Isilon for IBM Spectrum Protect Workloads

IBM Spectrum Scale vs EMC Isilon for IBM Spectrum Protect Workloads 89 Fifth Avenue, 7th Floor New York, NY 10003 www.theedison.com @EdisonGroupInc 212.367.7400 IBM Spectrum Scale vs EMC Isilon for IBM Spectrum Protect Workloads A Competitive Test and Evaluation Report

More information

AUTOMATING IBM SPECTRUM SCALE CLUSTER BUILDS IN AWS PROOF OF CONCEPT

AUTOMATING IBM SPECTRUM SCALE CLUSTER BUILDS IN AWS PROOF OF CONCEPT AUTOMATING IBM SPECTRUM SCALE CLUSTER BUILDS IN AWS PROOF OF CONCEPT By Joshua Kwedar Sr. Systems Engineer By Steve Horan Cloud Architect ATS Innovation Center, Malvern, PA Dates: Oct December 2017 INTRODUCTION

More information

Isilon OneFS. Version Built-In Migration Tools Guide

Isilon OneFS. Version Built-In Migration Tools Guide Isilon OneFS Version 7.2.1 Built-In Migration Tools Guide Copyright 2015-2016 EMC Corporation. All rights reserved. Published in the USA. Published June, 2016 EMC believes the information in this publication

More information

StorageCraft OneXafe and Veeam 9.5

StorageCraft OneXafe and Veeam 9.5 TECHNICAL DEPLOYMENT GUIDE NOV 2018 StorageCraft OneXafe and Veeam 9.5 Expert Deployment Guide Overview StorageCraft, with its scale-out storage solution OneXafe, compliments Veeam to create a differentiated

More information

DocAve 6 High Availability

DocAve 6 High Availability DocAve 6 High Availability User Guide Service Pack 10, Cumulative Update 1 Issued April 2018 The Enterprise-Class Management Platform for SharePoint Governance Table of Contents What s New in This Guide...

More information

Google File System. Arun Sundaram Operating Systems

Google File System. Arun Sundaram Operating Systems Arun Sundaram Operating Systems 1 Assumptions GFS built with commodity hardware GFS stores a modest number of large files A few million files, each typically 100MB or larger (Multi-GB files are common)

More information

Data Movement & Tiering with DMF 7

Data Movement & Tiering with DMF 7 Data Movement & Tiering with DMF 7 Kirill Malkin Director of Engineering April 2019 Why Move or Tier Data? We wish we could keep everything in DRAM, but It s volatile It s expensive Data in Memory 2 Why

More information

Administrator s Guide. StorageX 7.8

Administrator s Guide. StorageX 7.8 Administrator s Guide StorageX 7.8 August 2016 Copyright 2016 Data Dynamics, Inc. All Rights Reserved. The trademark Data Dynamics is the property of Data Dynamics, Inc. StorageX is a registered trademark

More information

Opendedupe & Veritas NetBackup ARCHITECTURE OVERVIEW AND USE CASES

Opendedupe & Veritas NetBackup ARCHITECTURE OVERVIEW AND USE CASES Opendedupe & Veritas NetBackup ARCHITECTURE OVERVIEW AND USE CASES May, 2017 Contents Introduction... 2 Overview... 2 Architecture... 2 SDFS File System Service... 3 Data Writes... 3 Data Reads... 3 De-duplication

More information

DELL EMC UNITY: COMPRESSION FOR FILE Achieving Savings In Existing File Resources A How-To Guide

DELL EMC UNITY: COMPRESSION FOR FILE Achieving Savings In Existing File Resources A How-To Guide DELL EMC UNITY: COMPRESSION FOR FILE Achieving Savings In Existing File Resources A How-To Guide ABSTRACT In Dell EMC Unity OE version 4.2 and later, compression support was added for Thin File storage

More information

GPFS 3.5 enhancements to Panache/ pcache snapshots and LifeCycleManagement

GPFS 3.5 enhancements to Panache/ pcache snapshots and LifeCycleManagement IBM Systems Lab Services and GTS / Technical Support GPFS 3.5 enhancements to Panache/ pcache snapshots and LifeCycleManagement GPFS pcache (Panache/AFM) cluster - terminology Gateway node Gateway node

More information

Asigra Cloud Backup Provides Comprehensive Virtual Machine Data Protection Including Replication

Asigra Cloud Backup Provides Comprehensive Virtual Machine Data Protection Including Replication Datasheet Asigra Cloud Backup Provides Comprehensive Virtual Machine Data Protection Including Replication Virtual Machines (VMs) have become a staple of the modern enterprise data center, but as the usage

More information

XenData SX-10. LTO Video Archive Appliance. Managed by XenData6 Server Software. Overview. Compatibility

XenData SX-10. LTO Video Archive Appliance. Managed by XenData6 Server Software. Overview. Compatibility XenData SX-10 LTO Video Archive Appliance Managed by XenData6 Server Software Overview The XenData SX-10 appliance manages a robotic LTO tape library or stand-alone LTO tape drives and creates a cost effective

More information

White paper ETERNUS CS800 Data Deduplication Background

White paper ETERNUS CS800 Data Deduplication Background White paper ETERNUS CS800 - Data Deduplication Background This paper describes the process of Data Deduplication inside of ETERNUS CS800 in detail. The target group consists of presales, administrators,

More information

OpenStack SwiftOnFile: User Identity for Cross Protocol Access Demystified Dean Hildebrand, Sasikanth Eda Sandeep Patil, Bill Owen IBM

OpenStack SwiftOnFile: User Identity for Cross Protocol Access Demystified Dean Hildebrand, Sasikanth Eda Sandeep Patil, Bill Owen IBM OpenStack SwiftOnFile: User Identity for Cross Protocol Access Demystified Dean Hildebrand, Sasikanth Eda Sandeep Patil, Bill Owen IBM 2015 Storage Developer Conference. Insert Your Company Name. All Rights

More information

TSM HSM Explained. Agenda. Oxford University TSM Symposium Introduction. How it works. Best Practices. Futures. References. IBM Software Group

TSM HSM Explained. Agenda. Oxford University TSM Symposium Introduction. How it works. Best Practices. Futures. References. IBM Software Group IBM Software Group TSM HSM Explained Oxford University TSM Symposium 2003 Christian Bolik (bolik@de.ibm.com) IBM Tivoli Storage SW Development 1 Agenda Introduction How it works Best Practices Futures

More information

Eliminating the Pain of Migrating Your Unstructured Data

Eliminating the Pain of Migrating Your Unstructured Data This white paper documents a user-transparent pull methodology that minimizes unstructured data movement while maximizing its value. Introduction Unstructured data ranges from 60 to 80% of most organizations

More information

Administrator s Guide. StorageX 8.0

Administrator s Guide. StorageX 8.0 Administrator s Guide StorageX 8.0 March 2018 Copyright 2018 Data Dynamics, Inc. All Rights Reserved. The trademark Data Dynamics is the property of Data Dynamics, Inc. StorageX is a registered trademark

More information

SAP HANA Disaster Recovery with Asynchronous Storage Replication

SAP HANA Disaster Recovery with Asynchronous Storage Replication Technical Report SAP HANA Disaster Recovery with Asynchronous Storage Replication Using SnapCenter 4.0 SAP HANA Plug-In Nils Bauer, Bernd Herth, NetApp April 2018 TR-4646 Abstract This document provides

More information

Tintri Cloud Connector

Tintri Cloud Connector TECHNICAL WHITE PAPER Tintri Cloud Connector Technology Primer & Deployment Guide www.tintri.com Revision History Version Date Description Author 1.0 12/15/2017 Initial Release Bill Roth Table 1 - Revision

More information

ECE Engineering Robust Server Software. Spring 2018

ECE Engineering Robust Server Software. Spring 2018 ECE590-02 Engineering Robust Server Software Spring 2018 Business Continuity: Disaster Recovery Tyler Bletsch Duke University Includes material adapted from the course Information Storage and Management

More information

Overcoming Obstacles to Petabyte Archives

Overcoming Obstacles to Petabyte Archives Overcoming Obstacles to Petabyte Archives Mike Holland Grau Data Storage, Inc. 609 S. Taylor Ave., Unit E, Louisville CO 80027-3091 Phone: +1-303-664-0060 FAX: +1-303-664-1680 E-mail: Mike@GrauData.com

More information

Troubleshooting and Monitoring ARX v6.1.1

Troubleshooting and Monitoring ARX v6.1.1 Troubleshooting and Monitoring ARX v6.1.1 Table of Contents Module1: Introduction COURSE OBJECTIVES... 1 COURSE OVERVIEW... 1 PREREQUISITES... 2 COURSE AGENDA... 2 F5 PRODUCT SUITE OVERVIEW... 4 BIG-IP

More information

IBM Tivoli Storage Manager for AIX Version Installation Guide IBM

IBM Tivoli Storage Manager for AIX Version Installation Guide IBM IBM Tivoli Storage Manager for AIX Version 7.1.3 Installation Guide IBM IBM Tivoli Storage Manager for AIX Version 7.1.3 Installation Guide IBM Note: Before you use this information and the product it

More information

Administration 1. DLM Administration. Date of Publish:

Administration 1. DLM Administration. Date of Publish: 1 DLM Administration Date of Publish: 2018-07-03 http://docs.hortonworks.com Contents ii Contents Replication Concepts... 4 HDFS cloud replication...4 Hive cloud replication... 4 Cloud replication guidelines

More information

DocAve 6 High Availability

DocAve 6 High Availability DocAve 6 High Availability User Guide Service Pack 8, Cumulative Update 1 Issued December 2016 1 Table of Contents What s New in This Guide...6 About DocAve High Availability...7 Submitting Documentation

More information

Online Demo Guide. Barracuda PST Enterprise. Introduction (Start of Demo) Logging into the PST Enterprise

Online Demo Guide. Barracuda PST Enterprise. Introduction (Start of Demo) Logging into the PST Enterprise Online Demo Guide Barracuda PST Enterprise This script provides an overview of the main features of PST Enterprise, covering: 1. Logging in to PST Enterprise 2. Client Configuration 3. Global Configuration

More information

INTEROPERABILITY OF AVAMAR AND DISKXTENDER FOR WINDOWS

INTEROPERABILITY OF AVAMAR AND DISKXTENDER FOR WINDOWS TECHNICAL NOTES INTEROPERABILITY OF AVAMAR AND DISKXTENDER FOR WINDOWS ALL PRODUCT VERSIONS TECHNICAL NOTE P/N 300-007-585 REV A03 AUGUST 24, 2009 Table of Contents Introduction......................................................

More information

Zadara Enterprise Storage in

Zadara Enterprise Storage in Zadara Enterprise Storage in Google Cloud Platform (GCP) Deployment Guide March 2017 Revision A 2011 2017 ZADARA Storage, Inc. All rights reserved. Zadara Storage / GCP - Deployment Guide Page 1 Contents

More information

Hitachi HQT-4210 Exam

Hitachi HQT-4210 Exam Volume: 120 Questions Question No: 1 A large movie production studio approaches an HDS sales team with a request to build a large rendering farm. Their environment consists of UNIX and Linux operating

More information

EMC DiskXtender for Windows and EMC RecoverPoint Interoperability

EMC DiskXtender for Windows and EMC RecoverPoint Interoperability Applied Technology Abstract This white paper explains how the combination of EMC DiskXtender for Windows and EMC RecoverPoint can be used to implement a solution that offers efficient storage management,

More information

XenData Product Brief: XenData6 Server Software

XenData Product Brief: XenData6 Server Software XenData Product Brief: XenData6 Server Software XenData6 Server is the software that runs the XenData SX-10 Archive Appliance and the range of SX-520 Archive Servers, creating powerful solutions for archiving

More information

Module 4 STORAGE NETWORK BACKUP & RECOVERY

Module 4 STORAGE NETWORK BACKUP & RECOVERY Module 4 STORAGE NETWORK BACKUP & RECOVERY BC Terminology, BC Planning Lifecycle General Conditions for Backup, Recovery Considerations Network Backup, Services Performance Bottlenecks of Network Backup,

More information

Database Management. Understanding Failure Resiliency CHAPTER

Database Management. Understanding Failure Resiliency CHAPTER CHAPTER 10 This chapter contains information on RDU database management and maintenance. The RDU database is the Broadband Access Center (BAC) central database. The Cisco BAC RDU requires virtually no

More information

StorageCraft OneBlox and Veeam 9.5 Expert Deployment Guide

StorageCraft OneBlox and Veeam 9.5 Expert Deployment Guide TECHNICAL DEPLOYMENT GUIDE StorageCraft OneBlox and Veeam 9.5 Expert Deployment Guide Overview StorageCraft, with its scale-out storage solution OneBlox, compliments Veeam to create a differentiated diskbased

More information

Client Installation and User's Guide

Client Installation and User's Guide IBM Tivoli Storage Manager FastBack for Workstations Version 7.1.1 Client Installation and User's Guide SC27-2809-04 IBM Tivoli Storage Manager FastBack for Workstations Version 7.1.1 Client Installation

More information

NetVault Backup Client and Server Sizing Guide 2.1

NetVault Backup Client and Server Sizing Guide 2.1 NetVault Backup Client and Server Sizing Guide 2.1 Recommended hardware and storage configurations for NetVault Backup 10.x and 11.x September, 2017 Page 1 Table of Contents 1. Abstract... 3 2. Introduction...

More information

Upgrading to UrbanCode Deploy 7

Upgrading to UrbanCode Deploy 7 Upgrading to UrbanCode Deploy 7 Published: February 19 th, 2019 {Contents} Introduction 2 Phase 1: Planning 3 1.1 Review help available from the UrbanCode team 3 1.2 Open a preemptive support ticket 3

More information

Migration and Building of Data Centers in IBM SoftLayer

Migration and Building of Data Centers in IBM SoftLayer Migration and Building of Data Centers in IBM SoftLayer Advantages of IBM SoftLayer and RackWare Together IBM SoftLayer offers customers the advantage of migrating and building complex environments into

More information

EMC Solutions for Backup to Disk EMC Celerra LAN Backup to Disk with IBM Tivoli Storage Manager Best Practices Planning

EMC Solutions for Backup to Disk EMC Celerra LAN Backup to Disk with IBM Tivoli Storage Manager Best Practices Planning EMC Solutions for Backup to Disk EMC Celerra LAN Backup to Disk with IBM Tivoli Storage Manager Best Practices Planning Abstract This white paper describes how to configure the Celerra IP storage system

More information

Dell FluidFS Version 6.0. FS8600 Appliance. Firmware Update Guide

Dell FluidFS Version 6.0. FS8600 Appliance. Firmware Update Guide Dell FluidFS Version 6.0 FS8600 Appliance Firmware Update Guide Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your product. CAUTION: A CAUTION

More information

IBM Spectrum Protect for Virtual Environments Version Data Protection for Microsoft Hyper-V Installation and User's Guide IBM

IBM Spectrum Protect for Virtual Environments Version Data Protection for Microsoft Hyper-V Installation and User's Guide IBM IBM Spectrum Protect for Virtual Environments Version 8.1.4 Data Protection for Microsoft Hyper-V Installation and User's Guide IBM IBM Spectrum Protect for Virtual Environments Version 8.1.4 Data Protection

More information

Client Installation and User's Guide

Client Installation and User's Guide IBM Tivoli Storage Manager FastBack for Workstations Version 7.1 Client Installation and User's Guide SC27-2809-03 IBM Tivoli Storage Manager FastBack for Workstations Version 7.1 Client Installation

More information

DocAve 6 Lotus Notes Migrator

DocAve 6 Lotus Notes Migrator DocAve 6 Lotus Notes Migrator User Guide Service Pack 5 Cumulative Update 1 Issued May 2015 1 Table of Contents What s New in this Guide... 5 About Lotus Notes Migrator... 6 Complementary Products... 6

More information

DELL POWERVAULT NX3500 INTEGRATION WITHIN A MICROSOFT WINDOWS ENVIRONMENT

DELL POWERVAULT NX3500 INTEGRATION WITHIN A MICROSOFT WINDOWS ENVIRONMENT DELL POWERVAULT NX3500 INTEGRATION WITHIN A MICROSOFT WINDOWS ENVIRONMENT A Dell Technology White Paper Version 1.0 THIS TECHNOLOGY WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL

More information

DocAve 6 Lotus Notes Migrator

DocAve 6 Lotus Notes Migrator DocAve 6 Lotus Notes Migrator User Guide Service Pack 9 Cumulative Update 1 Issued January 2018 1 Table of Contents What s New in this Guide... 5 About Lotus Notes Migrator... 6 Complementary Products...

More information

Where s Your Third Copy?

Where s Your Third Copy? Where s Your Third Copy? Protecting Unstructured Data 1 Protecting Unstructured Data Introduction As more organizations put their critical data on NAS systems, a complete, well thought out data protection

More information

Configuring EMC Isilon

Configuring EMC Isilon This chapter contains the following sections: System, page 1 Configuring SMB Shares, page 3 Creating an NFS Export, page 5 Configuring Quotas, page 6 Creating a Group for the Isilon Cluster, page 8 Creating

More information

XenData Product Brief: SX-550 Series Servers for Sony Optical Disc Archives

XenData Product Brief: SX-550 Series Servers for Sony Optical Disc Archives XenData Product Brief: SX-550 Series Servers for Sony Optical Disc Archives The SX-550 Series of Archive Servers creates highly scalable Optical Disc Digital Video Archives that are optimized for broadcasters,

More information

Exam : Implementing Microsoft Azure Infrastructure Solutions

Exam : Implementing Microsoft Azure Infrastructure Solutions Exam 70-533: Implementing Microsoft Azure Infrastructure Solutions Objective Domain Note: This document shows tracked changes that are effective as of January 18, 2018. Design and Implement Azure App Service

More information

Chapter 11. SnapProtect Technology

Chapter 11. SnapProtect Technology Chapter 11 SnapProtect Technology Hardware based snapshot technology provides the ability to use optimized hardware and disk appliances to snap data on disk arrays providing quick recovery by reverting

More information

Database Management. Understanding Failure Resiliency. Database Files CHAPTER

Database Management. Understanding Failure Resiliency. Database Files CHAPTER CHAPTER 7 This chapter contains information on RDU database management and maintenance. The RDU database is the Broadband Access Center for Cable (BACC) central database. As with any database, it is essential

More information

Availability Implementing high availability

Availability Implementing high availability System i Availability Implementing high availability Version 6 Release 1 System i Availability Implementing high availability Version 6 Release 1 Note Before using this information and the product it

More information

Copyright 2010 EMC Corporation. Do not Copy - All Rights Reserved.

Copyright 2010 EMC Corporation. Do not Copy - All Rights Reserved. 1 Using patented high-speed inline deduplication technology, Data Domain systems identify redundant data as they are being stored, creating a storage foot print that is 10X 30X smaller on average than

More information

Technology Insight Series

Technology Insight Series EMC Avamar for NAS - Accelerating NDMP Backup Performance John Webster June, 2011 Technology Insight Series Evaluator Group Copyright 2011 Evaluator Group, Inc. All rights reserved. Page 1 of 7 Introduction/Executive

More information

Simple And Reliable End-To-End DR Testing With Virtual Tape

Simple And Reliable End-To-End DR Testing With Virtual Tape Simple And Reliable End-To-End DR Testing With Virtual Tape Jim Stout EMC Corporation August 9, 2012 Session Number 11769 Agenda Why Tape For Disaster Recovery The Evolution Of Disaster Recovery Testing

More information

Veritas NetBackup Copilot for Oracle Configuration Guide. Release 3.1 and 3.1.1

Veritas NetBackup Copilot for Oracle Configuration Guide. Release 3.1 and 3.1.1 Veritas NetBackup Copilot for Oracle Configuration Guide Release 3.1 and 3.1.1 Veritas NetBackup Copilot for Oracle Configuration Guide Legal Notice Copyright 2018 Veritas Technologies LLC. All rights

More information

File Archiving Whitepaper

File Archiving Whitepaper Whitepaper Contents 1. Introduction... 2 Documentation... 2 Licensing... 2 requirements... 2 2. product overview... 3 features... 3 Advantages of BackupAssist... 4 limitations... 4 3. Backup considerations...

More information

CA485 Ray Walshe Google File System

CA485 Ray Walshe Google File System Google File System Overview Google File System is scalable, distributed file system on inexpensive commodity hardware that provides: Fault Tolerance File system runs on hundreds or thousands of storage

More information

Disaster Recovery Solutions Guide

Disaster Recovery Solutions Guide Disaster Recovery Solutions Guide Dell FS8600 Network-Attached Storage (NAS) FluidFS System Engineering January 2015 Revisions Revision Date Description A October 2013 Initial Release B January 2015 Updated

More information

DELL EMC ISILON F800 AND H600 I/O PERFORMANCE

DELL EMC ISILON F800 AND H600 I/O PERFORMANCE DELL EMC ISILON F800 AND H600 I/O PERFORMANCE ABSTRACT This white paper provides F800 and H600 performance data. It is intended for performance-minded administrators of large compute clusters that access

More information

EMC Celerra CNS with CLARiiON Storage

EMC Celerra CNS with CLARiiON Storage DATA SHEET EMC Celerra CNS with CLARiiON Storage Reach new heights of availability and scalability with EMC Celerra Clustered Network Server (CNS) and CLARiiON storage Consolidating and sharing information

More information

Nový IBM Storwize V7000 Unified block-file storage system Simon Podepřel Storage Sales 2011 IBM Corporation

Nový IBM Storwize V7000 Unified block-file storage system Simon Podepřel Storage Sales 2011 IBM Corporation Nový IBM Storwize V7000 Unified block-file storage system Simon Podepřel Storage Sales simon_podeprel@cz.ibm.com Agenda V7000 Unified Overview IBM Active Cloud Engine for V7kU 2 Overview V7000 Unified

More information