Best Practices Guide for using IBM Spectrum Protect with Cohesity


Abstract

This white paper outlines the best practices for using Cohesity as target storage by IBM Spectrum Protect.

December 2017

Table of Contents

About This Guide
Intended Audience
Terminology
    Cohesity View Box / Namespaces
    Cohesity View
Abbreviations
Solution Components
    Cohesity Overview
    IBM Spectrum Protect Overview
Logical Data Flow
Best Practices
    Provisioning Storage for IBM Spectrum Protect
    Creating the View Box and View/Share
        Cohesity View Box
        Cohesity View
        Protocols
    IBM Spectrum Protect Storage Pools
        Storage Pool Types
        Storage Pool Performance Comparison when using Cohesity
    Mounting the NFS File Systems and Configuring IBM Spectrum Protect Storage Pools
        Sequential-access storage pool (Device Class FILE)
            IBM Spectrum Protect Configuration Option
            Mounting Options
            Creating Mount Points
            Creating the Device Class and Storage Pool
        Directory-container Storage Pool
            Mounting Options
            Creating Mount Points
            Creating the Directory Container Storage Pool
Backups and Restores
About the Author
Version History
References

About This Guide

Hyperconvergence is becoming the norm in data centers today. Companies adopting this next-generation infrastructure have realized significant savings in TCO/ROI. These savings are the result of vastly simplified architectures, lower power and cooling needs, workload consolidation, a smaller hardware footprint, and a pay-as-you-grow consumption model.

SpanFS is a completely new file system designed specifically for secondary storage consolidation. At the topmost layer, SpanFS exposes industry-standard, globally distributed NFS, SMB, and S3 interfaces. Cohesity is unique in its ability to support unlimited, frequent snapshots with no performance degradation. SpanFS has QoS controls built into all layers of the stack to support various workloads, and it can replicate, archive, and tier data to another Cohesity cluster or to the cloud. What ties all these benefits together is the simplicity of managing these web-scale platforms from a single UI. The design principles of distributed control and data planes eliminate complexities in infrastructure and management, making hyperconverged architectures attractive and bringing overall value to end customers.

Cohesity along with IBM Spectrum Protect can provide a robust, scalable, and simple-to-administer solution while also allowing for seamless growth. Cohesity provides a globally deduplicated, scale-out storage target that is natively integrated with the public cloud. It interoperates with IBM Spectrum Protect to provide a very robust, scale-out data protection solution. This document describes how to configure and use Cohesity as a target for IBM Spectrum Protect.

Intended Audience

This paper is written for System and IBM Spectrum Protect Administrators who plan to configure and use Cohesity as target storage for IBM Spectrum Protect.

Cohesity uses floating/virtual IPs (VIPs) to provide the highest availability and load balancing. Each Cohesity cluster should have an equal number of VIPs per physical node. Always mount views (shares) using the VIPs. In the event of a node failure, the VIP on that node will automatically move to another Cohesity node, thus staying available to serve requests. Once the node failure is resolved, the VIP moves back. This happens automatically.

Cohesity recommends having familiarity with the following:

- Cohesity DataPlatform
- IBM Spectrum Protect Server Administration

Terminology

Cohesity View Box / Namespaces

A Cohesity View Box is a separate shared namespace with common data reduction, availability, or archive policies. For the purposes of this document, a View Box will contain the Views (NFS, SMB, etc.). If de-dup is enabled for the View Box, all data will be de-duped both within a View and across all other Views within the View Box.

Cohesity View

A View is simply a file share, or a logical grouping of files within a View Box.

Abbreviations

Abbreviation   Description
NFS            Network File System
SMB            Server Message Block
S3             Simple Storage Service
VIP            Virtual IP
QoS            Quality of Service

Solution Components

The following components were used for interoperability testing:

Component                     Component Version   OS Version
IBM Spectrum Protect Server   8.1.3               SUSE Linux Enterprise Server 11 SP4
IBM Spectrum Protect Client   8.1.2               CentOS Linux 7.2
Cohesity DataProtect          4.1.2               -

Cohesity Overview

Cohesity introduced the world's first scale-out data management platform to enable organizations to standardize secondary workflows on a unified and fully distributed solution. Cohesity's scale-out distributed file system, SpanFS™, was built from the ground up to ensure complete scalability, enabling organizations to flexibly grow their environment by adding nodes to a cluster. With this scalability, organizations can eliminate the costs of data migrations and forklift upgrades, while benefiting from the simplicity of a homogeneous solution. SpanFS also provides global, variable-length deduplication and unlimited snapshots and clones, making it the ideal storage target for enterprise environments.

Cohesity cluster nodes have a shared-nothing topology, and there is no single point of failure or inherent bottleneck. Consequently, both performance and capacity can scale linearly as more physical nodes are added to the cluster. The distributed file system spans all nodes in the cluster and natively provides global deduplication, compression, and encryption.

Cohesity is well suited as target storage for IBM Spectrum Protect because it provides:

- A single and unified interface for provisioning, managing, and monitoring (low management overhead) target storage for IBM Spectrum Protect
- Variable-length, post-process or in-line, global deduplication; Cohesity even dedupes between multiple separate IBM Spectrum Protect servers/instances
- Multiple protocols to choose from
- Non-disruptive Cohesity hardware refresh and expansion without downtime
- Unlimited snapshots and clones on the Cohesity platform

[Figure: Cohesity DataPlatform and DataProtect alongside hypervisor + virtual SAN, data protection, RMAN, and cloud]

IBM Spectrum Protect Overview [1]

IBM Spectrum Protect provides centralized, automated data protection that helps to reduce data loss and manage compliance with data retention and availability requirements.

Data protection components: The data protection solutions that IBM Spectrum Protect provides consist of a server, client systems and applications, and storage media. IBM Spectrum Protect provides management interfaces for monitoring and reporting the data protection status.

Data protection services: IBM Spectrum Protect provides data protection services to store and recover data from various types of clients. The data protection services are implemented through policies that are defined on the server. You can use client scheduling to automate the data protection services.

Processes for managing data protection: The IBM Spectrum Protect server inventory has a key role in the processes for data protection. You define policies that the server uses to manage data storage.

User interfaces for the IBM Spectrum Protect environment: For monitoring and configuration tasks, IBM Spectrum Protect provides various interfaces, including the Operations Center, a command-line interface, and an SQL administrative interface.

Note: IBM Tivoli Storage Manager (TSM), starting with version 7.1.3, is marketed as IBM Spectrum Protect.

Logical Data Flow

[Figure: Logical data flow — IBM Spectrum Protect Clients send data to IBM Spectrum Protect Servers (Linux / AIX / Windows), which back up to and restore from the Cohesity cluster over NFS / SMB / S3]

The diagram above shows the logical data flow and relationship between IBM Spectrum Protect clients, servers, and the Cohesity cluster.

Best Practices

Provisioning Storage for IBM Spectrum Protect

In order for IBM Spectrum Protect to leverage Cohesity as target storage, we'll need to provision and present storage for IBM Spectrum Protect to use. Once the storage is presented to the IBM Spectrum Protect server OS, storage pools can then be created and used for storing backups from clients.

To obtain the greatest throughput, all that is needed is to spread the reads and writes across all nodes in the Cohesity cluster. As a Cohesity cluster is grown, which is done simply by adding as many new nodes as needed (as few as one, or several), available raw and usable storage capacity as well as total available throughput increases. IBM Spectrum Protect is able to leverage the power of the Cohesity platform by spreading its reads and writes among all the Cohesity nodes. Although IBM Spectrum Protect does not have an explicit load-balancing algorithm, the volumes or files it stores on Cohesity will be spread across multiple mount points, one for each node via a VIP. So in the case of an IBM Spectrum Protect server with 4,000 volumes and a Cohesity cluster with 4 nodes, roughly 1,000 volumes will be read or written per Cohesity node. This prevents the bottlenecks that may present themselves with more traditional single- or dual-controller-based storage.

Creating the View Box and View/Share

Cohesity View Box

Create a suitable View Box. If the IBM Spectrum Protect server/instance is not doing de-dup and/or compression, enable de-dup and compression on the Cohesity View Box for the greatest space savings. Directory-container storage pools enable de-dup and compression by default; however, Cohesity can still further de-dup and compress data already de-duped and compressed by a directory-container storage pool to gain maximum space efficiency. This is especially true where multiple IBM Spectrum Protect servers/instances are storing to the same View Box. If de-dup between multiple IBM Spectrum Protect servers/instances is desired, all the Views should be created within a single View Box with de-dup enabled.

De-dup and compression do add some overhead and thus will reduce the throughput of reads and writes to Cohesity. If the highest throughput is desired, at the expense of space usage, both de-dup and compression can be disabled. However, if space efficiency is of higher importance, it is recommended to enable both compression and de-dup.

Cohesity View

Create a suitable View. If IBM Spectrum Protect is running on Linux or AIX, choose NFS; if IBM Spectrum Protect is running on Windows, choose SMB. Set appropriate white-lists: for security reasons, only the IBM Spectrum Protect servers that will be reading and writing to a view should be added to the view or global white-list. One or more Views can be created for IBM Spectrum Protect servers/instances. Although not required, it may make sense to create one View per IBM Spectrum Protect storage pool, or at the very least one per IBM Spectrum Protect server instance. Set the QoS as appropriate, for example Backup Target High. There are several QoS options; please refer to the Cohesity DataProtect User Guide (Cohesity DataProtect Documentation) for details on creating View Boxes and Views, setting white-lists, and understanding the different QoS settings.

Protocols

Available protocols include NFSv3, SMB [7], and S3 [7].

IBM Spectrum Protect Storage Pools

Storage pools are the logical groups used for storing backups, archives, or space-managed files within IBM Spectrum Protect. There are several types of storage pools; for the purposes of this document, we'll focus on primary storage pools backed by the NFS/SMB/S3 storage protocols. IBM Spectrum Protect has several types of primary storage pools; the following are suitable for use with Cohesity via NFS/SMB/S3.

Storage Pool Types [2]

Below are a few storage pool types, as described and documented by IBM.

Directory-container storage pool
Description: A primary storage pool that a server uses to store data. Data that is stored in directory-container storage pools uses both inline data deduplication and client-side data deduplication.
Uses: Use when you want to deduplicate data inline. By using directory-container storage pools, you remove the need for volume reclamation, which improves server performance and reduces the cost of storage hardware. You cannot use this type of storage pool for storage pool backup, migration, reclamation, import, or export operations.

Cloud-container storage pool
Description: A primary storage pool that a server uses to store data. Use cloud-container storage pools to store data to an object-store-based cloud storage provider. Data that is stored in cloud-container storage pools uses both inline data deduplication and client-side data deduplication.
Uses: By storing data in cloud-container storage pools, you can exploit the cost-per-unit advantages that clouds offer along with the scaling capabilities that cloud storage provides. You cannot use this type of storage pool for storage pool backup, migration, reclamation, encryption, import, or export operations.

Sequential-access storage pool
Description: A set of volumes that the server uses to store backup versions of files, files that are archive copies, and files that are migrated from client nodes. Files are stored on tape or FILE devices. Data that is stored in sequential-access storage pools uses both postprocess and client-side data deduplication.
Uses: Use this type of storage pool to keep a copy of your data on TAPE devices. You can migrate data into this type of storage pool.

Container storage pools were first introduced in IBM Spectrum Protect 7.1.3 and provide in-line server-side deduplication with significant improvements in performance and scalability. The container storage pool was further enhanced in 7.1.5 to provide in-line storage pool compression, which further enhances data reduction capabilities. [3]

Container storage pools have several advantages over the traditional storage pools. It is recommended to use directory-container storage pool(s) when using Cohesity as a target because of de-dup between multiple IBM Spectrum Protect instances. Sequential-access storage pools do appear to have a performance advantage when it comes to backup throughput, provided the IBM Spectrum Protect server and NFS mount points are configured correctly, as described in the sequential-access storage pool section below.

Storage Pool Performance Comparison when using Cohesity

Storage pool type                    Backup Speed/Throughput   Restore Speed/Throughput   Total De-Dup/Compression
Directory-container storage pool     GOOD                      VERY GOOD                  VERY GOOD [5]
Sequential-access storage pool [6]   VERY GOOD                 VERY GOOD                  GOOD

Mounting the NFS File Systems and Configuring IBM Spectrum Protect Storage Pools

Once the desired storage pool type is determined, follow the appropriate section below to mount the NFS file system(s) and create the storage pool(s).

Sequential-access storage pool (Device Class FILE)

The section below walks through an example of creating a new device class and sequential-access storage pool. Storage pools defined as sequential-access storage pools (device class type FILE) that write to volumes over NFSv3 can do so without the file system being mounted with the sync option, per IBM [4]. According to IBM's support document, this is possible because IBM Spectrum Protect issues a standard sync() call to the OS before the metadata is committed to the IBM Spectrum Protect database. Additionally, DIRECTIO needs to be set to NO within the IBM Spectrum Protect server configuration file. If this is not done, write performance to Cohesity will be slow, and as a result backups will be slow as well.

[Figure: Direct IO vs. Buffered IO — relative throughput of backup and restore to a 4-node Cohesity cluster with IDD, comparing DIRECTIO=YES with DIRECTIO=NO]

The graph above shows the relative performance difference when the NFSv3 shares are mounted with sync and DIRECTIO=YES. In that configuration, IBM Spectrum Protect writes directly to Cohesity without any buffering, in 256 KB block sizes, which ends up being very inefficient and causes significant latency and thus lower throughput. Cohesity recommends that the share be mounted without the sync option and with DIRECTIO set to NO when using a device class of FILE.

IBM Spectrum Protect Configuration Option

Add DIRECTIO NO to the dsmserv.opt file and restart the IBM Spectrum Protect instance. The option can simply be added at the very bottom of dsmserv.opt; the instance must be restarted for it to take effect.

dsmserv.opt (Example Only)

$ cat dsmserv.opt
COMMmethod TCPIP
TCPport 1500
DEVCONFIG devconf.dat
VOLUMEHISTORY volhist.dat
...
DIRECTIO NO
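Setting the option can also be scripted so that repeated runs do not append duplicate lines. The following is a minimal sketch, assuming a Linux server with GNU sed; the path to dsmserv.opt is instance-specific and the function name is purely illustrative:

```shell
# ensure_directio_no: make sure the given dsmserv.opt contains exactly
# one "DIRECTIO NO" line, rewriting any existing DIRECTIO setting.
ensure_directio_no() {
    opt=$1    # path to the instance's dsmserv.opt
    if grep -qi '^DIRECTIO' "$opt"; then
        # Rewrite the existing line rather than appending a duplicate
        # (the trailing I flag for case-insensitive match is a GNU sed extension).
        sed -i 's/^DIRECTIO.*/DIRECTIO NO/I' "$opt"
    else
        echo "DIRECTIO NO" >> "$opt"
    fi
}

# Example (hypothetical path): ensure_directio_no /tsminst1/dsmserv.opt
```

As noted above, the instance must still be restarted for the option to take effect.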

Verify the DIRECTIO option is set by logging into the IBM Spectrum Protect administrative command-line interface:

$ dsmadmc
IBM Spectrum Protect Command Line Administrative Interface - Version X, Release X, Level X.X
(c) Copyright by IBM Corporation and other(s) 1990, 2017. All Rights Reserved.

Enter your user id: admin
Enter your password: [Password]

Session established with server IBMSPSRV: Linux/x86_64
Server Version X, Release X, Level X.XXX
Server date/time: MM/DD/YY HH:MM:SS Last access: MM/DD/YY HH:MM:SS

Protect: IBMSPSRV>q option directio

Server Option       Option Setting
-----------------   --------------------
DIRECTIO            No

Mounting Options

OS      Mount Options
Linux   noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock
AIX     noatime,vers=3,proto=tcp,rsize=524288,wsize=524288,hard,intr,nolock

Creating Mount Points

Create an equal number of mount-point directories as Cohesity nodes/VIPs; in this example we have a 4-node Cohesity cluster with 4 VIPs. These steps/commands are run on the IBM Spectrum Protect server.

Create the mount points:

$ sudo mkdir /tsminst1/cohesity/filepool1_1 /tsminst1/cohesity/filepool1_2 /tsminst1/cohesity/filepool1_3 /tsminst1/cohesity/filepool1_4
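The mkdir commands generalize to any cluster size. A minimal sketch that creates one mount-point directory per VIP, using the hypothetical base path from the example above:

```shell
# make_mount_points: create one mount-point directory per Cohesity VIP,
# named <base>_1 .. <base>_N to match the fstab entries that follow.
make_mount_points() {
    base=$1    # e.g. /tsminst1/cohesity/filepool1
    vips=$2    # number of VIPs (one per Cohesity node)
    i=1
    while [ "$i" -le "$vips" ]; do
        mkdir -p "${base}_${i}"
        i=$((i + 1))
    done
}

# Example (hypothetical): make_mount_points /tsminst1/cohesity/filepool1 4
```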

Add the new NFS mounts to fstab.

fstab example:

vip1.fqd:/ibmsp1-idd-filepool1 /tsminst1/cohesity/filepool1_1 nfs noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0
vip2.fqd:/ibmsp1-idd-filepool1 /tsminst1/cohesity/filepool1_2 nfs noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0
vip3.fqd:/ibmsp1-idd-filepool1 /tsminst1/cohesity/filepool1_3 nfs noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0
vip4.fqd:/ibmsp1-idd-filepool1 /tsminst1/cohesity/filepool1_4 nfs noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0

Mount the file systems:

$ sudo mount -a

Creating the Device Class and Storage Pool

Below shows creating a new device class that points to the mounted NFS file systems from the Cohesity cluster, creating the storage pool, and then querying the device class to verify its configuration. The directory list names all four mount points so that I/O is spread across all Cohesity nodes.

$ dsmadmc
IBM Spectrum Protect Command Line Administrative Interface - Version X, Release X, Level X.X
(c) Copyright by IBM Corporation and other(s) 1990, 2017. All Rights Reserved.

Enter your user id: admin
Enter your password: [Password]

Session established with server IBMSPSRV: Linux/x86_64
Server Version X, Release X, Level X.XXX
Server date/time: MM/DD/YY HH:MM:SS Last access: MM/DD/YY HH:MM:SS

Protect: IBMSPSRV>def devclass fileclass1 devtype=file mountlimit=xxx maxcapacity=xxG directory=/tsminst1/cohesity/filepool1_1,/tsminst1/cohesity/filepool1_2,/tsminst1/cohesity/filepool1_3,/tsminst1/cohesity/filepool1_4

Protect: IBMSPSRV>def stgpool filepool1 fileclass1 maxscratch=xxxxxxx

Protect: IBMSPSRV>q devclass fileclass1
...
Device Access Strategy: Sequential
...
Device Type: FILE
...
Directory: /tsminst1/cohesity/filepool1_1,/tsminst1/cohesity/filepool1_2,/tsminst1/cohesity/filepool1_3,/tsminst1/cohesity/filepool1_4
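The fstab entries above differ only in the VIP host name and the mount-point suffix, so they can be generated rather than hand-edited. A sketch using the hypothetical vipN.fqd names and paths from the example:

```shell
# gen_fstab: print one NFS fstab line per VIP. The vipN.fqd host names,
# export path, and mount options mirror the example above; adjust them
# for your environment.
gen_fstab() {
    export_path=$1   # e.g. /ibmsp1-idd-filepool1
    mount_base=$2    # e.g. /tsminst1/cohesity/filepool1
    vips=$3          # number of VIPs
    opts="noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock"
    i=1
    while [ "$i" -le "$vips" ]; do
        echo "vip${i}.fqd:${export_path} ${mount_base}_${i} nfs ${opts} 0 0"
        i=$((i + 1))
    done
}

gen_fstab /ibmsp1-idd-filepool1 /tsminst1/cohesity/filepool1 4
```

After reviewing the output, append it to /etc/fstab and run `sudo mount -a` as shown above.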

Directory-container Storage Pool

The section below walks through an example of creating a new directory-container storage pool.

Mounting Options

OS      Mount Options
Linux   sync,noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock
AIX     dio,noatime,vers=3,proto=tcp,rsize=524288,wsize=1048576,hard,intr,nolock

Creating Mount Points

Create an equal number of mount-point directories as Cohesity nodes/VIPs; in this example we have a 4-node Cohesity cluster with 4 VIPs. These steps/commands are run on the IBM Spectrum Protect server.

Create the mount points:

$ sudo mkdir /tsminst1/cohesity/container1_1 /tsminst1/cohesity/container1_2 /tsminst1/cohesity/container1_3 /tsminst1/cohesity/container1_4

fstab example:

vip1.fqd:/ibmsp1-idd-containerpool1 /tsminst1/cohesity/container1_1 nfs sync,noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0
vip2.fqd:/ibmsp1-idd-containerpool1 /tsminst1/cohesity/container1_2 nfs sync,noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0
vip3.fqd:/ibmsp1-idd-containerpool1 /tsminst1/cohesity/container1_3 nfs sync,noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0
vip4.fqd:/ibmsp1-idd-containerpool1 /tsminst1/cohesity/container1_4 nfs sync,noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0

Mount the file systems:

$ sudo mount -a
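Before defining the storage pool, it is worth confirming that every mount point is actually NFS-mounted; if a mount is missing, writes would silently land on the server's local disk instead of Cohesity. A Linux-only sketch that checks /proc/mounts (the paths are the hypothetical examples above):

```shell
# check_nfs_mounted: report each given path that is not currently an
# NFS mount point; returns non-zero if any are missing.
check_nfs_mounted() {
    missing=0
    for mp in "$@"; do
        # /proc/mounts lines look like: <device> <mountpoint> <fstype> ...
        if ! grep -q " $mp nfs" /proc/mounts; then
            echo "not mounted: $mp"
            missing=$((missing + 1))
        fi
    done
    return "$missing"
}

# Example (hypothetical paths):
# check_nfs_mounted /tsminst1/cohesity/container1_1 /tsminst1/cohesity/container1_2 \
#                   /tsminst1/cohesity/container1_3 /tsminst1/cohesity/container1_4
```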

Creating the Directory Container Storage Pool

Create the directory-container storage pool with compression:

$ dsmadmc
IBM Spectrum Protect Command Line Administrative Interface - Version X, Release X, Level X.X
(c) Copyright by IBM Corporation and other(s) 1990, 2017. All Rights Reserved.

Enter your user id: admin
Enter your password: [Password]

Session established with server IBMSPSRV: Linux/x86_64
Server Version X, Release X, Level X.XXX
Server date/time: MM/DD/YY HH:MM:SS Last access: MM/DD/YY HH:MM:SS

Protect: IBMSPSRV>def stgpool contpool1 stgtype=directory compression=yes

Protect: IBMSPSRV>def stgpooldirectory contpool1 /tsminst1/cohesity/container1_1,/tsminst1/cohesity/container1_2,/tsminst1/cohesity/container1_3,/tsminst1/cohesity/container1_4

Protect: IBMSPSRV>q stgpooldir stgpool=contpool1

Storage Pool Name   Directory                         Access
-----------------   -------------------------------   ------------
CONTPOOL1           /tsminst1/cohesity/container1_1   Read/Write
CONTPOOL1           /tsminst1/cohesity/container1_2   Read/Write
CONTPOOL1           /tsminst1/cohesity/container1_3   Read/Write
CONTPOOL1           /tsminst1/cohesity/container1_4   Read/Write
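The directories handed to DEFINE STGPOOLDIRECTORY must exist and be writable by the instance user. A quick pre-flight check along these lines (run it as the instance user; the paths are the hypothetical examples above) can catch permission problems early:

```shell
# check_writable: print "ok" or "not writable" for each directory,
# as seen by the user running the check.
check_writable() {
    for d in "$@"; do
        if [ -d "$d" ] && [ -w "$d" ]; then
            echo "ok: $d"
        else
            echo "not writable: $d"
        fi
    done
}

# Example (hypothetical): check_writable /tsminst1/cohesity/container1_1
```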

Backups and Restores

Once the storage pool is associated with IBM Spectrum Protect nodes/clients, backups can be performed.

Backup Example

$ sudo dsmc inc file
IBM Spectrum Protect Command Line Backup-Archive Client Interface
Client Version X, Release X, Level X.X
Client date/time: MM/DD/YYYY HH:MM:SS
(c) Copyright by IBM Corporation and other(s) 1990, 2017. All Rights Reserved.

Node Name: XXXXXXX
Session established with server IBMSPSRV: Linux/x86_64
Server Version X, Release X, Level X.XXX
Server date/time: MM/DD/YYYY HH:MM:SS Last access: MM/DD/YYYY HH:MM:SS

Incremental backup of volume file
Normal File--> 2,147,483,648 file [Sent]
Successful incremental backup of file

Total number of objects inspected: 1
Total number of objects backed up: 1
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects expired: 0
Total number of objects failed: 0
Total number of objects encrypted: 0
Total number of objects grew: 0
Total number of retries: 0
Total number of bytes inspected: 2.00 GB
Total number of bytes transferred: 2.00 GB
...
Elapsed processing time: HH:MM:SS
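Once backups are flowing, one way to confirm that IBM Spectrum Protect is spreading its volumes across all Cohesity nodes (as described under Provisioning Storage above) is to count the files under each mount point; with healthy load balancing the counts should be roughly equal. A sketch, assuming the hypothetical mount-point layout used in this guide:

```shell
# count_volumes: print "<dir>: <n> files" for each directory matching
# the given prefix, e.g. /tsminst1/cohesity/filepool1_1 .. _4.
count_volumes() {
    prefix=$1
    for dir in "$prefix"_*; do
        [ -d "$dir" ] || continue
        n=$(find "$dir" -maxdepth 1 -type f | wc -l)
        echo "$dir: $n files"
    done
}

# Example (hypothetical): count_volumes /tsminst1/cohesity/filepool1
```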

Restore Example

$ sudo rm file
$ sudo dsmc rest file
IBM Spectrum Protect Command Line Backup-Archive Client Interface
Client Version X, Release X, Level X.X
Client date/time: MM/DD/YYYY HH:MM:SS
(c) Copyright by IBM Corporation and other(s) 1990, 2017. All Rights Reserved.

Node Name: XXXXXXX
Session established with server IBMSPSRV: Linux/x86_64
Server Version X, Release X, Level X.XXX
Server date/time: MM/DD/YYYY HH:MM:SS Last access: MM/DD/YYYY HH:MM:SS

Restore function invoked.
Restoring 2,147,483,648 file [Done]
Restore processing finished.

Total number of objects restored: 1
Total number of objects failed: 0
Total number of bytes transferred: 2.00 GB
...
Elapsed processing time: HH:MM:SS

About the Author

Justin Willoughby is a 20-year IT veteran, currently working for Cohesity as a Solutions Engineer. In this role, Justin architects, builds, tests, and validates business-critical applications, databases, and virtualization solutions with Cohesity's DataProtect platform.

Version History

Version       Date            Document Version History
Version 1.0   December 2017   Original Document

References

[1] IBM Spectrum Protect concepts > IBM Spectrum Protect overview, IBM Knowledge Center
[2] Servers > Configuring storage > Storage pool types, IBM Knowledge Center
[3] Tivoli Storage Manager Deduplication FAQ, IBM developerWorks
[4] Considerations for using the NFS V3 protocol for an IBM Spectrum Protect storage pool, IBM Support

Other Notes

[5] IBM Spectrum Protect de-dup/compression plus Cohesity de-dup/compression
[6] When Cohesity Views are mounted without the sync option and DIRECTIO is set to NO within the IBM Spectrum Protect server
[7] SMB and S3 have not yet been tested/validated

Trademarks

IBM Spectrum Protect is a registered trademark of IBM Corporation in the United States, other countries, or both.

Cohesity, Inc.
Address: 300 Park Ave., Suite 300, San Jose, CA 95110
Email: contact@cohesity.com
www.cohesity.com  @cohesity

2018 Cohesity. All Rights Reserved.