
WHITE PAPER

INTEGRATING PURE AND COHESITY
NEXT-GENERATION PERFORMANCE AND RESILIENCE FOR YOUR MISSION-CRITICAL DATA

TABLE OF CONTENTS

INTRODUCTION
TEST ENVIRONMENT
TEST PROCESS AND SETUP
COHESITY AND FLASHARRAY//M PROTECTION SETUP AND RESULTS
50 VM RECOVERY FROM COHESITY TO FLASHARRAY//M
FLASHBLADE ARCHIVAL BACKUP TIER TESTING
FLASHBLADE ARCHIVAL TIER RESTORE TESTING
CONCLUSION

INTRODUCTION

The purpose of this white paper is to showcase the ease of integration, best practices, and expected performance results for data backup, restore, and long-term data archiving with the Pure Storage FlashArray//M, Cohesity C2500, and Pure Storage FlashBlade data platforms, using realistic, real-world examples. The combination of these innovative solutions provides the best performance, highest level of resiliency, simplest implementation, and, perhaps most importantly, lowest cost per GB.

The following three core pieces of next-generation storage technology will be highlighted within this document, with explanations of how they seamlessly work together:

Pure Storage FlashArray//M: High-performance primary block storage for the most demanding data center requirements. FlashArray provides built-in snapshots and replication to other Pure Storage arrays.

Cohesity: Next-generation secondary storage for hosting backup data, providing instant VM/file recovery, as well as orchestrating archival data movement from FlashArray//M to FlashBlade.

Pure Storage FlashBlade: A fast, dense unstructured data platform from Pure Storage that provides rapid backup and recovery of archival data, along with other primary unstructured data use cases.

TEST ENVIRONMENT

The test cases shown here are intended to emphasize the high levels of throughput, workload consolidation, minimized RTO and RPO, and ease of administration that these connected solutions provide. Though we will focus on virtual machines in our examples, the use cases shown here are easily extensible to other workloads, such as SQL Server databases, VSI, VDI, and Oracle, to name a few, all within the same infrastructure and management topology. Even entire Pure Storage datastore snapshots can be offloaded to Cohesity and FlashBlade.

Our test environment comprised the following elements:

Five ESXi 6.5 hosts running 250 Windows 10 desktops with 50 GB drives (27 GB used). The desktops included MS Office 2016, Adobe Reader, and numerous ISO, PDF, MP4, and other pre-compressed, commonly used files. Each ESXi host features two redundant 10 Gb network connections and two redundant 16 Gb Fibre Channel HBA connections.

One Cohesity C2500 four-node cluster for secondary storage backup and recovery orchestration between FlashArray//M and FlashBlade.

One Pure Storage FlashArray//M20 with 10 TB raw storage for primary storage (~25 TB usable assuming a 5:1 data reduction ratio, a typical result based on the Pure Storage install base).

One Pure Storage FlashBlade with 7 blades (a half-populated chassis) and 8 TB per blade for the archival tier.

Two paired Brocade VDX6740T switches for 1/10/40 Gb redundant networking.

Two paired Cisco MDS 9148S 16 Gb Fibre Channel switches.
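The capacity figures above can be cross-checked with simple arithmetic. A short sketch follows; the ~50% raw-to-usable overhead factor is an illustrative assumption chosen to match the ~25 TB usable figure, while the 5:1 reduction ratio, drive utilization, and blade sizes come from the environment description:

```python
# Back-of-the-envelope capacity math for the test environment.
# The 0.5 raw-to-usable factor (RAID/metadata overhead) is an
# illustrative assumption; the other figures are from the paper.

VMS = 250
USED_GB_PER_VM = 27            # 50 GB provisioned, 27 GB actually used

def logical_dataset_tb(vms: int = VMS, used_gb: int = USED_GB_PER_VM) -> float:
    """Total logical data across all desktops, in decimal TB."""
    return vms * used_gb / 1000

def effective_tb(raw_tb: float, usable_factor: float, reduction: float) -> float:
    """Effective capacity after overhead and data reduction."""
    return raw_tb * usable_factor * reduction

dataset = logical_dataset_tb()         # ~6.75 TB, matching the ~7 TB ingested later
flasharray = effective_tb(10, 0.5, 5)  # ~25 TB effective on the //M20, as stated above
flashblade_raw = 7 * 8                 # 7 blades x 8 TB = 56 TB raw archival tier
```

Note that the ~6.75 TB logical dataset lines up with the ~7 TB moved during the initial protection run described later in this paper.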

The simplified connectivity diagram below illustrates how the various solutions are integrated with one another, and how there is no single point of failure within the design: the ESXi servers reach FlashArray//M20 primary storage over redundant 16 Gb Fibre Channel paths (Cisco MDS 9148S A/B), while backup data flows to the Cohesity C2500 four-node cluster and archival data to FlashBlade over redundant 10/40 GbE networking (Brocade VDX6740T A/B).

FIGURE 1. High-level architectural diagram of the solution

TEST PROCESS AND SETUP

The 250 Windows 10 desktops were hosted on five SuperMicro ESXi servers running under a common vSphere 6.5 instance. Pure Storage FlashArray//M features a fully integrated vCenter plug-in that enables single-pane-of-glass administration of the primary storage array. Cohesity leverages VMware VADP technology to integrate with vCenter, as well as with Pure Storage, and provides both CBT- and non-CBT-based backup capabilities for all virtual machines running within the vCenter instance. For this testing, we elected to run with CBT enabled.

Our test process covers the following steps, with instructions and explanations for each phase:

FIGURE 2. Test process to be executed

COHESITY AND FLASHARRAY//M PROTECTION SETUP AND RESULTS

Connecting vCenter to the Cohesity appliance is easily accomplished by registering it as a source, as shown in the next few screenshots. Under the Protection menu, select Sources.

FIGURE 3. Selecting Sources in the Cohesity GUI

Next, click on Register Source, and then select the VMware button.

FIGURE 4. Register Source button in the Cohesity GUI

FIGURE 5. VMware registration option in the Cohesity GUI

Finally, fill out the credential information for your vCenter instance. The account used to register the vCenter source to Cohesity must have administrative privileges.

FIGURE 6. vCenter registration wizard

Once vCenter is connected to the Cohesity appliance, we generate a Protection Policy that allows for daily recovery of any single virtual machine or group of virtual machines from our test group, or even any single file or group of files within a selected desktop. The first step is to create a protection policy that dictates the frequency and retention of VM snapshots, as well as other options.

FIGURE 7. Selecting Policy Manager in the Cohesity GUI

Click on the New Protection Policy button, and select the Virtual or Physical Server option.

FIGURE 8. New Protection Policy button in the Cohesity GUI
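For larger deployments, the vCenter source registration shown above can also be scripted against Cohesity's REST API rather than done through the GUI. The sketch below only builds the request body; the endpoint path and field names follow the v1 public API and should be verified against your cluster's API documentation, and the vCenter address and account shown are placeholders:

```python
# Build the JSON body for registering a vCenter as a Cohesity protection
# source. Endpoint and field names follow Cohesity's v1 public REST API
# (an assumption to verify against your cluster's docs).
import json

def vcenter_registration_payload(endpoint: str, username: str, password: str) -> dict:
    """Request body for /public/protectionSources/register."""
    return {
        "environment": "kVMware",  # source type: VMware
        "vmwareType": "kVCenter",  # register the vCenter itself, not a standalone host
        "endpoint": endpoint,      # vCenter hostname or IP
        "username": username,      # must have administrative privileges (see above)
        "password": password,
    }

# Hypothetical values for illustration only.
payload = vcenter_registration_payload(
    "vcenter.example.local", "administrator@vsphere.local", "********")
body = json.dumps(payload)
# A live registration would POST this body to
#   https://<cohesity-cluster>/irisservices/api/v1/public/protectionSources/register
# with a bearer token obtained from the /public/accessTokens endpoint.
```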

FIGURE 9. Virtual or Physical Server button in the Cohesity GUI

Set the frequency of snapshots, and the retention times for those snapshots, in the wizard below.

FIGURE 10. Setting the policy name, schedule, and other important settings

Because Cohesity offers inline deduplication and compression of snapshots, customers can set as aggressive or as relaxed a backup schedule as needed, dictated by data criticality and churn in each individual workload.

Next, we create a Protection Job for our 250 VMs based upon the policy just created, as demonstrated in the next few screenshots. First, enter the Protection Job creation wizard by selecting the Add a Protection Job button in the Protection Job section of the GUI.

FIGURE 11. Selecting Protection Jobs in the Cohesity GUI
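Since the schedule and retention settings jointly determine how many recovery points a policy keeps, it can help to make that relationship explicit. A minimal sketch with illustrative numbers (not the values from our test policy, and not Cohesity's internal accounting):

```python
# Recovery points kept at steady state are simply captures per day
# times retention days. Deduplication means the storage cost grows far
# more slowly than this count, which is why aggressive schedules are
# affordable. Illustrative arithmetic only.

def retained_snapshots(captures_per_day: int, retention_days: int) -> int:
    """Recovery points kept once the retention window is full."""
    return captures_per_day * retention_days

daily = retained_snapshots(1, 30)       # one capture per day, 30-day retention
six_hourly = retained_snapshots(4, 30)  # every 6 hours, 30-day retention
```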

FIGURE 12. Add a Protection Job button in the Cohesity GUI

To protect our virtual desktops, we select the Virtual Server option, as shown below:

FIGURE 13. Virtual Server selection for Protection Job

We choose the 250VM protection policy created in the previous step and apply a few specific job settings.

FIGURE 14. Policy selection for the Protection Job

FIGURE 15. Protection Job settings

FIGURE 16. Additional available Protection Job settings

Finally, we select the 250 VMs from our vCenter instance. The desktops are segregated into five folders containing approximately 50 VMs each.

FIGURE 17. Selecting the 250 desktops to be protected from our vCenter source

FIGURE 18. Success screen from Protection Job creation

During this initial protection run for our 250 VMs, we observed extremely high throughput between FlashArray//M and the Cohesity cluster. Ingest rates hovered around 1 GB/s over the initial job, in which approximately 7 TB was moved from FlashArray//M to the Cohesity appliance. The metrics from the performance monitoring sections of the FlashArray//M and Cohesity GUIs confirm fast VM data backup to Cohesity.

FIGURE 19. FlashArray//M GUI showing 250 VMs ingested to Cohesity from //M

FIGURE 20. Cohesity GUI showing 250 VMs ingested to Cohesity from FlashArray//M

As these VMs were very similar to one another, Pure Storage was also able to achieve approximately 25:1 overall data reduction on the single volume where they were hosted for production usage. From the Pure Storage GUI, we can see that the virtual machines were ingested to Cohesity with high network throughput. The overall job completed in approximately two and a half hours.

Worth noting from this experiment is that the initial VM ingest will always be the most expensive in terms of bandwidth, IOPS, and time. Subsequent Protection Job runs will only transfer the changes made to the target VMs, rather than running a full VM backup as was done in this first example.
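The ingest numbers above are easy to sanity-check: ~7 TB in about 2.5 hours averages just under 0.8 GB/s, consistent with the ~1 GB/s peaks seen in the GUIs. The 2% daily change rate used for the incremental estimate below is an illustrative assumption, not a measured value:

```python
# Average throughput implied by the initial 250-VM protection run.

def avg_throughput_gbs(data_tb: float, hours: float) -> float:
    """Average transfer rate in GB/s (decimal units)."""
    return data_tb * 1000 / (hours * 3600)

initial = avg_throughput_gbs(7, 2.5)  # ~0.78 GB/s sustained over the full job

# Subsequent CBT-based runs move only changed blocks; at an assumed 2%
# daily change rate the transfer shrinks from 7 TB to roughly 140 GB.
incremental_gb = 7 * 1000 * 0.02
```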

50 VM RECOVERY FROM COHESITY TO FLASHARRAY//M

The next step of the test process was to delete 50 VMs from our 250 VM pool and restore those 50 desktops from the Cohesity snapshot taken in the previous step as quickly as possible. To accomplish this, we first powered off the 50 VMs in question and then deleted them from our vCenter instance.

FIGURE 21. 50 desktops powered off and set for deletion from vCenter

The next step was to set up the Recovery job from within Cohesity to restore the original VMs to their original location and datastore. During the restore operation, Cohesity will mount an NFS share to your vCenter instance, register a VM running on the Cohesity appliance to the appropriate ESXi host, optionally power it on (so that the VM being recovered can begin serving IO as quickly as possible, if desired), and then transparently Storage vMotion the virtual machine back to the same location on the Pure Storage FlashArray//M primary storage device. Additional recovery options, such as renaming the VM or even recovering it to a completely different vCenter instance, are supported.

The Recovery operation starts by selecting Recovery under the Protection tab of the Cohesity GUI.

FIGURE 22. Selecting Recovery in the Cohesity GUI

Next, click the Recover button.

FIGURE 23. Recovery button in the Cohesity GUI
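The restore mechanics described above can be captured as an ordered sketch: Cohesity exposes the snapshot over NFS, registers the VM on an ESXi host, optionally powers it on (IO service begins here), and finally Storage vMotions the VM back to FlashArray//M. This models the documented sequence; it is not Cohesity code:

```python
# A descriptive model of Cohesity's instant-recovery sequence for VMware.

def recovery_steps(power_on: bool = True) -> list:
    steps = [
        "mount Cohesity NFS share as a datastore in vCenter",
        "register the recovered VM on the appropriate ESXi host",
    ]
    if power_on:
        # The VM begins serving IO from Cohesity at this point,
        # long before its data is back on primary storage.
        steps.append("power on the VM (serves IO from Cohesity)")
    steps.append("Storage vMotion the VM back to its FlashArray//M datastore")
    return steps
```

The key design point is that time-to-service is decoupled from data movement: the Storage vMotion step runs while the VM is already online.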

This will spawn a list of available components to recover. We are recovering VMs in this example, so we selected that button.

FIGURE 24. VMs recovery option in the Cohesity GUI

VMs can be searched by both protection job and machine name; wildcards are also supported. Click the check boxes below to select a single VM or group of VMs to recover. Once all desktops are selected, click the Add to Cart button, followed by Continue.

FIGURE 25. VM search and selection screen in the Cohesity GUI

Under the Recovery Options section of the wizard you can customize the naming convention, networking, power options, and even the recovery location for the selected VMs. The pencil icon on the right allows you to select a different local snapshot or a snapshot located on the archival tier. Clicking Finish will kick off the recovery operation.

FIGURE 26. Selected VM Recover Options in the Cohesity GUI

Once again, we see similarly high throughput going from Cohesity to the FlashArray//M during the recovery operation, allowing all VMs to be successfully recovered to their original location on FlashArray//M in about 90 minutes. Notably, all VMs were powered on and able to serve IO within about 15 minutes of kicking off the recovery operation.

FIGURE 27. Pure Storage GUI during the 50 VM recovery operation, in which VMs are Storage vMotioned from Cohesity to FlashArray//M

FIGURE 28. Cohesity GUI during the 50 VM recovery operation
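The timings above are worth restating as a simple RTO calculation, using only the figures from this run:

```python
# RTO arithmetic for the 50-VM restore: the full copy back to
# FlashArray//M took ~90 minutes, but the VMs were already serving IO
# from Cohesity after ~15 minutes.

FULL_RESTORE_MIN = 90  # Storage vMotion of all 50 VMs complete
TIME_TO_IO_MIN = 15    # VMs powered on and serving IO from Cohesity
VM_COUNT = 50

time_to_service_gain = FULL_RESTORE_MIN / TIME_TO_IO_MIN  # 6x faster time-to-service
copy_min_per_vm = FULL_RESTORE_MIN / VM_COUNT             # ~1.8 minutes of copy per VM
```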

FIGURE 29. Successful recovery operation summary in the Cohesity GUI

FLASHBLADE ARCHIVAL BACKUP TIER TESTING

In the next test, Cohesity served as the connection and orchestration point between the Pure Storage FlashArray//M (primary storage) and the Pure Storage FlashBlade (archival storage) layers via the Cohesity NAS adapter. FlashBlade enables extremely fast and secure backup and recovery of information, which is hugely important in the healthcare and high-performance computing industries, but this value extends to other businesses as well. FlashBlade offers inline compression and massive density (1.5 PB effective in 4U), allowing your datacenter footprint to shrink while also minimizing power and cooling costs. The blades within the FlashBlade feature N+2 availability and will automatically self-heal if a blade is lost, rebuilding parity immediately and effectively to keep your archival data secure and always available.

Setting up the FlashBlade archival tier is especially uncomplicated. First, we name, size, and set the export rules and permissions for our repository in the FlashBlade GUI:

FIGURE 30. FlashBlade Storage section of the GUI

FIGURE 31. File system creation in the FlashBlade GUI

Next, we register the newly created FlashBlade file system to Cohesity as an external target.

FIGURE 32. Registering an external NAS target in the Cohesity GUI

FIGURE 33. Register External Target wizard

Note that compression, encryption, and some other items are available as tunable options on Cohesity. We recommend enabling source-side deduplication in most circumstances, as it will minimize the storage footprint required on FlashBlade without compromising data resiliency or recovery performance.

Once FlashBlade has been registered as an external target, we only need to add it to our existing protection policy and set the frequency and retention times for archiving to FlashBlade.
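The external-target options shown in the wizard can be captured as a simple checklist. The dict below is descriptive only: the keys are not Cohesity API field names, and the target name and export path are hypothetical. It reflects the source-side deduplication recommendation above:

```python
# Descriptive settings for a FlashBlade NFS external target, mirroring
# the tunables shown in the Cohesity wizard. Names/paths are hypothetical.

flashblade_target = {
    "name": "flashblade-archive",               # hypothetical target name
    "type": "NAS",
    "mount_path": "nfs://flashblade/archive",   # hypothetical NFS export
    "compression": True,        # tunable on Cohesity
    "encryption": True,         # tunable on Cohesity
    "source_side_dedup": True,  # recommended: minimizes footprint on FlashBlade
}

def validate_target(t: dict) -> bool:
    """Minimal sanity check before registering the external target."""
    return (t["type"] == "NAS"
            and t["mount_path"].startswith("nfs://")
            and t["source_side_dedup"])
```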

FIGURE 34. Editing the existing policy to include an external FlashBlade tier

With the archival tier set up, we can see from the Protection Job summary below that the throughput between Cohesity and the FlashBlade archival tier also delivered excellent performance, allowing the data to be moved quickly to the archival tier. As in our earlier backup experiment, subsequent runs will only involve changes to this original dataset, meaning that the network bandwidth and time required will be a fraction of the initial run shown below.

FIGURE 35. Summary of a successful Protection Job archival tier run

FLASHBLADE ARCHIVAL TIER RESTORE TESTING

Next, we once again deleted 50 Windows 10 VMs from our production vCenter cluster. Unlike the last restore operation, this time we set our recovery operation to leverage snapshots on the FlashBlade archival tier. The Recovery operation follows steps identical to the example provided earlier in this paper; the only difference is that the source snapshot for the desktops needs to be changed to the archival tier. All other options are identical to the restore test completed directly from Cohesity. For a given VM, select the pencil icon to spawn the available Recover Points, then select the cloud icon to use the archival-tier recover point.

FIGURE 36. Changing a VM to use archival FlashBlade recovery

FIGURE 37. Recovery wizard with VMs being recovered from the FlashBlade archival tier

Once the Recovery job is executed, the archival dataset is copied from FlashBlade to Cohesity, and the 50 VMs are powered on in Cohesity (a tunable option: they can remain powered off if desired) so that they can begin serving IO as quickly as possible. Lastly, the VMs are Storage vMotioned back to their original location on FlashArray//M.

During our testing of 50 desktops we again witnessed quick data movement from FlashBlade to Cohesity, as the screenshots below depict, with all archival data moved from the archival tier and available on the Cohesity appliance in about an hour. The overall recovery operation was accomplished in approximately 3.5 hours, with the desktops booted and serving IO after only 60 minutes, once they were copied from FlashBlade to Cohesity and powered on.

This screenshot from the FlashBlade GUI shows the 50 VMs moving from FlashBlade to Cohesity.

FIGURE 38. FlashBlade GUI during the 50 VM copy to Cohesity

Cohesity showed a corresponding number of writes from the archival tier, and later we can see many reads as the 50 VMs are Storage vMotioned from Cohesity to the FlashArray//M primary storage array.

FIGURE 39. Cohesity GUI showing the 50 VM import from FlashBlade and the later 50 VM vMotion to Pure Storage FlashArray//M
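The archival-restore timeline breaks down into two phases; a quick check of the arithmetic using the figures from this run:

```python
# Phase timings for the 50-VM archival restore: ~60 minutes to copy
# from FlashBlade to Cohesity (at which point the desktops are powered
# on and serving IO), followed by ~150 minutes of Storage vMotion back
# to FlashArray//M. Figures are from the run described above.

phases = [
    ("copy FlashBlade -> Cohesity, then power on", 60),   # minutes
    ("Storage vMotion Cohesity -> FlashArray//M", 150),   # minutes
]

total_min = sum(minutes for _, minutes in phases)  # 210 minutes = 3.5 hours
time_to_io_min = phases[0][1]                      # VMs usable after ~1 hour
```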

Lastly, we can see the 50 VM migration from Cohesity to the Pure Storage array from the Pure Storage GUI.

FIGURE 40. Pure Storage FlashArray//M GUI during the 50 VM vMotion from Cohesity

CONCLUSION

Through this simple demonstration, we have shown that the combination of Pure Storage FlashArray//M, Cohesity, and FlashBlade delivers a complete suite of data performance, protection, and agility. Additional workloads from across the entire business can easily be mixed and managed from the interfaces shown in this guide. As data capacity requirements grow in both primary and secondary use cases, both FlashBlade and Cohesity can scale non-disruptively and be upgraded completely transparently to your users. Finally, systems administrators can move away from tedious and repetitive tasks that are focused on just keeping their infrastructure online, and instead move up the stack to concentrate on tasks that will improve the company as a whole.

© 2018 Pure Storage, Inc. All rights reserved. Pure Storage, the P Logo, and FlashBlade are trademarks or registered trademarks of Pure Storage, Inc. in the U.S. and other countries. Other company, product, or service names may be trademarks or service marks of others.

The Pure Storage product described in this documentation is distributed under a license agreement and may be used only in accordance with the terms of the agreement. The license agreement restricts its use, copying, distribution, decompilation, and reverse engineering. No part of this documentation may be reproduced in any form by any means without prior written authorization from Pure Storage, Inc. and its licensors, if any.

THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. PURE STORAGE SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.
ps_wp20p_pure-and-cohesity-integration_ltr_02 SALES@PURESTORAGE.COM 800-379-PURE @PURESTORAGE