
NetApp OnCommand Workflow Automation Useful Workflows Cookbook
Silverton Consulting, Inc. StorInt Briefing
Revision 2.0, April 2016

Introduction

Storage administration has changed significantly over the years. First there were storage system operator panels, where administrators had to sit or stand at the machine, typing commands and pushing buttons on the storage array. Then command line interfaces (CLIs) came along, letting storage administrators stay at their desks and define aggregates, volumes and file systems/logical unit numbers (LUNs) by typing requests at command prompts. Next, web-based GUIs allowed administrators to define and update storage objects by selecting from screen or dropdown menu options. In recent years, Representational State Transfer (REST) interfaces and APIs have emerged that can perform all of these tasks in an automated fashion. More recently, NetApp's OnCommand software suite added an entirely new capability called Workflow Automation (WFA), which provides a more intuitive, almost visual, web-based approach to automating storage administration tasks while also supporting RESTful APIs.

NetApp OnCommand Workflow Automation overview

Workflow Automation (WFA) is an OnCommand product for creating consistent, reliable storage services. It supplies NetApp storage administrative and operational commands together with a development environment that can be used to define new combinations of storage commands and other functionality. It also provides an environment for executing a sequence of these storage activities while prompting WFA users to supply mandatory or optional parameters for these services.

Essentially, OnCommand Workflow Automation consists of three separate environments: the WFA Designer Portal, used by administrators to design, develop and debug new workflows; the WFA Execution Portal, used by authorized users to execute workflows and supply mandatory and optional parameters; and the WFA Administration Portal, used to monitor workflow execution and establish connections between WFA workflows and external databases, automation frameworks and other data sources, as well as to establish user authorization to access and use WFA workflows. In addition, a web services interface can be used to invoke workflows from external portals and other data center orchestration solutions.

[Figure 1: A sample WFA Execution Portal]
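
Because the web services interface exposes workflows over REST, an external portal or script can trigger a workflow programmatically. The sketch below shows one way this might look from PowerShell; the server name, credentials, workflow name, input keys, REST resource paths and response shape are all assumptions and should be checked against the WFA REST documentation for your release.

```powershell
# Minimal sketch: invoke a WFA workflow over its REST interface.
# Paths, query parameters and payload shape are assumptions; verify against
# the WFA REST documentation before use.
$wfaServer = "https://wfa.example.com"     # hypothetical WFA host
$cred      = Get-Credential                # WFA operator credentials

# Look up the workflow by name to obtain its identifier
# (response shape assumed; some WFA releases return XML collections).
$wf = Invoke-RestMethod -Method Get -Credential $cred `
        -Uri "$wfaServer/rest/workflows?name=Create+NFS+Volume"

# Submit a job for the workflow, passing the user inputs it expects
# (input names here are purely illustrative).
$body = @{
    userInputValues = @(
        @{ key = "VolumeName"; value = "app01_data" },
        @{ key = "SizeGB";     value = "500" }
    )
} | ConvertTo-Json -Depth 4

Invoke-RestMethod -Method Post -Credential $cred -ContentType "application/json" `
    -Uri "$wfaServer/rest/workflows/$($wf.uuid)/jobs" -Body $body
```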

In essence, users and other administrators can be given access to workflows to define their own file systems/LUNs, volumes and aggregates. The options used for these storage entities are specified by the workflow in combination with selected user input. The WFA workflow does all the hard work to define the storage and link it up with the user's application environment. To execute a storage service request, users only need to click on a workflow and provide a few parameters.

[Figure 2: NetApp Automation Store for WFA workflows]

Previously developed workflows are generally supplied as packages or packs, which can contain documentation, data sources, schemes, templates and one or more workflows or external scripts. OnCommand Workflow Automation makes these encapsulated services or scripts available to any authorized user on NetApp's Automation Store [1]. When an authorized user logs into NetApp's Storage Automation Store, that user is presented with a list of NetApp Supported or Community Supported workflows that can be downloaded. Any of these workflows can be installed into your WFA Designer Portal to be modified and used at will.

[1] At http://automationstore.netapp.com/pack-list.shtml as of 07Mar2016

OnCommand Workflow Automation is a powerful tool that enables users to define, alter and manage NetApp storage without ever having to interact with storage or server administrators. Many storage vendors hope to make this level of automation directly available to authorized users by providing RESTful interfaces. However, not all data centers have the expertise needed to code to storage system REST APIs. NetApp OnCommand Workflow Automation addresses this challenge by hiding much of the complexity of storage administration. All users or administrators need to do is fill in the storage names, networking addresses, mandatory storage attributes, etc., that apply to their data center environment in order to execute automated delivery of storage services.

Workflow Automation administrators specify information about the NetApp storage configuration, the virtualization environment and linkages to applications and external orchestration functionality. NetApp supplies two products that can provide information for Workflow Automation to use:

1. OnCommand Unified Manager (OCUM) can be used as a data source for single sites to automatically supply information on NetApp storage configurations.
2. OnCommand Insight (OCI) can be used as a data source for multiple sites to automatically supply information on NetApp storage, non-NetApp storage, and VMware configurations.

We will discuss data sources more fully below, but for the moment consider them a way to provide object information, such as storage volume names, storage attributes, server information, etc., that Workflow Automation's scripts use during workflow execution to automate the environment. Both OCI and OCUM supply similar information about the data center's NetApp storage environment, but OCI also supplies information about multiple sites, non-NetApp storage configurations, and the VMware VMs, ESX hosts, vCenter configurations, etc., needed to automate VMware data center storage administration.

Even with OCI or OCUM data sources, other information, such as storage, server and application configuration information not available from OCI or OCUM, can be imported into OnCommand Workflow Automation using standardized WFA facilities, and linkages to external software functionality can also be established. WFA workflows can then take advantage of all of these objects, entities and external capabilities to encapsulate storage administrative activities into a standard set of scripted actions that almost anyone can use.

Automation of storage service delivery

These days organizations are moving applications and other functionality to the cloud at a rapidly increasing pace. Software as a Service (SaaS), Storage as a Service (STaaS), Infrastructure as a Service (IaaS), etc., are becoming ever easier and cheaper to use. But there are some important considerations when moving IT services to the cloud, especially for larger enterprises, including diminished data governance, increased security exposure and potential increases in IT expenses. Some, but not all, of these concerns can be mitigated with appropriate due diligence and additional contractual constraints.

In contrast, NetApp OnCommand Workflow Automation provides many of the benefits of cloud automation without the challenges associated with using Anything-as-a-Service (XaaS) solutions. WFA can supply cloud-like ease of use inside the data center. With WFA, users can define their own storage by simply clicking on a workflow, and the rest takes place behind the scenes, just as with cloud services. Thus, WFA offers the ease of use of cloud services while taking advantage of the investments in data governance, security controls and expense management already present in your data center. WFA can even be used to deploy and automate the use of NetApp cloud storage services within a public cloud environment.

Furthermore, WFA can turn storage into a programmable, self-service environment by making storage services directly available to authorized users, without having to rely on storage administrators.

Self-service customers can immediately use workflows to define new storage, retire old storage or even migrate storage from one storage system to another.

WFA also makes it much easier to standardize storage processes. Naming conventions and other storage standards can be built into workflows, making them easier to enforce or change as needed. For example, if you use non-deduplicated storage and later decide that deduplication is needed, a simple change to a workflow template or two can make all new storage provisioning use deduplication.

Finally, encapsulating storage definition, retirement, migration, etc., into workflows enables these activities to automatically interface with configuration management database (CMDB) functionality and automation frameworks from Microsoft, VMware and others. WFA provides some packages with a built-in capability to integrate workflows with these external services. In the past, administrators had to log in to other CMDB services or automation frameworks, or manually modify Excel spreadsheets, so the potential for error was significant. With WFA, changes to NetApp storage can automatically update external services as well, keeping functionality and frameworks current and connected.

OnCommand Workflow Automation workflows

Having described some of what can be accomplished with workflows, we next discuss what a workflow consists of, where workflow information can be found, what workflow variables can be used and some of the other entities that workflows can manipulate.

Workflows are scripted series of execution steps that perform storage configuration, CMDB updates or other external functions using commands, or primitives, that are supplied within WFA, supplied by NetApp WFA engineering or the WFA web community via the Automation Store, or developed by a storage administrator or architect within the data center. These scripts can combine multiple workflow primitives with various control options; for example, control options can select which primitive executes next and indicate how many times each primitive should be executed. Workflow primitives can supply default options for storage configuration activities or ask users to specify options before they can proceed.

Once defined, workflows are provided to different users or administrators in the data center through the WFA Execution Portal. For example, application owners could be authorized to use workflows that define file systems, LUNs or other storage entities. Application or server administrators could be given access to these and other workflows that allow them to link applications to storage, create aggregates or decommission storage. Storage administrators could be given access to all of these workflows and others that allow them to define new storage clusters. Workflow administrators, in turn, can create new workflows, link workflows to orchestration packages and authorize customers and administrators to use workflows.

As discussed earlier, WFA workflows are available within WFA and from NetApp's Storage Automation Store, where they can be downloaded and installed into the data center's WFA Designer Portal. Within the Designer Portal, administrators can modify workflows or create their own.

WFA workflows use commands, or execution primitives, to perform work. Execution primitives can retrieve data from external information resources, validate information externally, execute external scripts or procedures, execute administrative actions, or be combined with other primitives into a workflow. Workflows mostly depend on primitives or commands that execute storage commands. These can include NetApp CLI operations such as creating Network File System (NFS) exports, adding quality of service (QoS) policies to volumes and unprovisioning aggregates to free storage. OnCommand WFA uses Microsoft Windows PowerShell or Perl for workflow execution (a minimal sketch of such a command body appears below).

Workflow commands operate on objects. Commands can create, update and remove objects, update the associations between objects, and deal with optional parent and child object relationships. Objects generally refer to NetApp storage entities but can include entities outside the storage system, such as host objects. Workflow commands can be repeated where necessary, either a fixed number of times or a variable number of times based on search results. Moreover, workflow commands can be conditionally executed based on a runtime set of search results; that is, commands execute or not depending on the results of a search or other conditions checked when the command actually runs. Workflows can also use approval points to pause execution and wait for user go-ahead.

Mandatory and optional user input parameters for workflow commands can be defined and identified under the workflow's User Inputs tab. Input parameter attributes such as type, defaults and validation parameters can also be specified. Workflow constants, available throughout workflow command execution, are displayed in the Constants tab. Workflow return parameters, which can be useful for debugging workflows, are displayed in the Return Parameters tab.

In addition, workflows can retrieve information from, or output information to, data sources or databases. As discussed previously, OCUM and OCI can provide important data sources for most workflows. OCUM and OCI auto-discover NetApp storage and, in OCI's case, non-NetApp storage, switch and VMware objects, all of which can be imported directly into WFA data sources. Once available in a data source, workflows can readily reference or access this information. Moreover, the entities or other objects manipulated by or used in workflow commands can be script variables created by the script itself or search objects that WFA locates in its data sources. These objects or entities can be passed from one command to another and consequently referenced throughout the execution of a workflow.
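
To make the command concept concrete, here is a minimal sketch of what the PowerShell body of a WFA command that creates an NFS-exportable volume might look like. The WFA helper functions (Get-WfaLogger, Connect-WfaCluster) and the Data ONTAP PowerShell Toolkit cmdlet (New-NcVol) are shown as they commonly appear in WFA-supplied clustered Data ONTAP commands, but the exact names, parameters and availability are assumptions to be verified against the commands shipped with your WFA release.

```powershell
# Minimal sketch of a WFA command body (PowerShell). Helper and cmdlet names
# are assumptions modeled on WFA-supplied cDOT commands; verify before use.
param(
    [parameter(Mandatory = $true, HelpMessage = "Cluster management IP or name")]
    [string]$Cluster,

    [parameter(Mandatory = $true, HelpMessage = "SVM that will own the volume")]
    [string]$VserverName,

    [parameter(Mandatory = $true, HelpMessage = "Name of the new volume")]
    [string]$VolumeName,

    [parameter(Mandatory = $true, HelpMessage = "Hosting aggregate")]
    [string]$AggregateName,

    [parameter(Mandatory = $false, HelpMessage = "Volume size, e.g. 500g")]
    [string]$Size = "100g"
)

# Log progress so it shows up in the workflow's execution details.
Get-WfaLogger -Info -Message "Creating volume $VolumeName on $AggregateName"

# Connect to the cluster using the credentials WFA stores for this array.
Connect-WfaCluster $Cluster

# Create the volume with the Data ONTAP PowerShell Toolkit and junction it
# so it can later be exported over NFS.
New-NcVol -Name $VolumeName -Aggregate $AggregateName -Size $Size `
          -JunctionPath "/$VolumeName" -VserverContext $VserverName
```

In a real WFA command, the parameters above would map onto the command's dictionary objects and user inputs rather than being free-form strings.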

In contrast to data sources, which supply internal information, WFA schemes are used to define external environment attributes and can use SQL queries or scripts to acquire data. For instance, a vc scheme would be used to describe a virtual machine environment, including virtual machines, hosts and datastores. WFA comes with storage (storage and cm_storage) and performance (performance and cm_performance) schemes for both 7-Mode and clustered Data ONTAP storage. OCI can be used directly to populate the vc scheme; without OCI, vCenter plugins can be used to gather this information.

WFA templates act as blueprints for object definition. For instance, a template can specify the storage efficiency characteristics your shop uses for aggregate creation. In addition, WFA categories are used to assign user groups to authorized workflows on the WFA Execution Portal. Thus, on the Execution Portal, users can only access and use the workflows they are authorized for, based on the category of the workflow and the user's authorization.

Each WFA entity, such as a command or a workflow, is versioned using a major.minor.revision format, and WFA entities can have parent-child relationships with other WFA entities. Version numbers are used to keep track of changes to WFA entities and are automatically incremented in a cascading fashion when an update occurs; that is, when the major version of a child entity is updated, the minor version of its parent entity is also updated.

Furthermore, WFA commands can take advantage of powerful tools such as regular expressions, filters and finders to extract or manipulate information from data sources, schemes or templates. Regular expressions can be used to help define and validate naming conventions or any other workflow text fields. Filters are SQL-based queries against WFA databases or data sources that return lists of objects or entities satisfying particular selection criteria for further workflow command processing (a hypothetical filter is sketched at the end of this subsection). Finders are combinations of one or more filters that select the single entity or object to be used in follow-on workflow execution.

WFA can also take advantage of specialized functions written in MVFLEX Expression Language (MVEL) to reuse logic and functionality that has already been developed. Workflows can also pause while executing external PowerShell or Perl scripts, which can be especially useful when performing functions needed to connect storage to other applications.

At the start of a workflow process, WFA plans the execution and validates that the workflow can be executed with the input provided and the commands used in the workflow. This execution plan is used as a guide for workflow execution, which then reserves all required resources and starts executing each step of the workflow in sequence.
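
As an illustration of how a filter works, the following is a hypothetical filter query against the cm_storage scheme that returns aggregates on a given cluster with at least a given amount of free space. The table and column names and the ${...} input placeholders are assumptions to be checked against the dictionary entries and filter examples in your WFA instance; the SQL is shown inside a PowerShell here-string only to keep this cookbook's examples in a single language, whereas in WFA the SQL is entered directly into the filter definition.

```powershell
# Hypothetical WFA filter: SQL against the cm_storage scheme that lists
# aggregates on one cluster with enough free space. Table/column names and
# the ${...} placeholders are assumptions.
$filterSql = @'
SELECT
    aggregate.name,
    aggregate.available_size_mb
FROM
    cm_storage.aggregate
    JOIN cm_storage.node    ON aggregate.node_id  = node.id
    JOIN cm_storage.cluster ON node.cluster_id    = cluster.id
WHERE
    cluster.name = '${ClusterName}'
    AND aggregate.available_size_mb >= ${MinAvailableMB}
ORDER BY
    aggregate.available_size_mb DESC
'@
```

A finder built on top of such a filter could then apply additional filters (for example, on RAID type or SVM delegation) and return the single best-match aggregate for the workflow's next command.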

A single workflow or a set of workflows can be scheduled. Workflow schedules can be a one-time event in the future or a recurring event; recurring schedules execute workflows on a periodic or repeating basis.

To modify existing workflows or create new ones, it helps to have a general familiarity with the NetApp storage CLI and with WFA schemes, OCI or OCUM data sources and execution control capabilities. The OnCommand Workflow Automation 3.1 RC1 Workflow Developer's Guide [2] provides more information about how to develop workflows.

[2] Documentation available at Support.NetApp.com

OnCommand Workflow Automation use cases

To give the reader a better understanding of what can be accomplished with Workflow Automation, we next discuss some typical use cases of OnCommand WFA. There are more than 50 out-of-the-box WFA workflows available from NetApp, including the following:

- Create clustered Data ONTAP (cDOT) SnapMirror relationships, NFS volume(s), qtree(s), Common Internet File System (CIFS) share(s), volumes with QoS policy groups, VMware datastores, and basic volumes;
- Move or remove a cDOT volume; and
- Create and configure a NAS Storage Virtual Machine, a storage area network (SAN) Storage Virtual Machine or a Cloud ONTAP instance.

Similar sets of workflows exist for 7-Mode storage, along with specific workflows for SnapVault, SnapMirror and Infinite Volume support. All of these workflows come standard with OnCommand WFA or are downloadable from the aforementioned Automation Store.

We have selected three specific WFA workflow packages, developed by subject matter experts or NetApp personnel, to describe in detail because of their broad applicability and because they are good examples of what can be done with workflows. The three packages are:

- Workflows for Storage Services Catalog with Service Level Objectives and Adaptive Quality of Service (QoS)
- Workflows for Cloud Manager
- Workflows for NetApp Software-Defined Storage in the VMware Software-Defined Data Center

Workflows for Storage Services Catalog with Service Level Objectives and Adaptive QoS

This package of workflows uses NetApp's adaptive quality of service (QoS) capabilities to limit, or rather constrain, an application's IO activity within defined service level objectives (SLOs). A NetApp QoS limit can be defined in MB per second (MB/sec) or IO operations per second (IO/sec) and applied to storage aggregate entities. NetApp QoS limits can be fixed or adaptive. Fixed QoS refers to NetApp's ability to supply a fixed amount of IO performance (IO/sec or MB/sec) to a volume of storage regardless of its size. Adaptive QoS refers to NetApp's ability to supply storage performance levels based on IO operations per second per TB (IOPS/TB). However, the logical vs. physical TB size of a volume can vary depending on its NetApp space efficiency characteristics. For example, a thick volume is assigned all of its capacity at definition time, a thin volume is assigned capacity as data is written to it, and a deduplicated volume can share its capacity with other volumes that hold the same data. Adaptive QoS can therefore be configured as thick, thin or effective (deduplication support).

To use the Storage Services Catalog with Adaptive QoS workflows, one must use a Service Level Class (SLC), which is made up of two components: a Storage Service Level and a Protection Service Level (a sketch of how such a class might be modeled appears after the workflow list below).

A Storage Service Level (SSL) describes a performance level, space efficiency level and availability level for storage. For example, an EXTREME-PERFORMANCE Storage Service Level could be defined as a minimum of 600 IOPS/TB and a maximum of 3072 IOPS/TB of IO performance, no space efficiency, and highly available storage.

A Protection Service Level (PSL) describes the secondary and tertiary data protection levels, recovery point objective (RPO) and protection technology attributes for storage. For example, a HIGHLY-PROTECTED Protection Service Level could be defined as using secondary destination storage with no tertiary storage, a 15-minute RPO, and MetroCluster protection for clustered Data ONTAP data.

The SLO workflow package, with its "kitchen police" (adaptive) QoS services, comes with templates for 7 different SSL performance classes ranging from 6144 IOPS/TB down to 128 IOPS/TB. These should be changed by the customer to reflect the performance requirements and availability of their own data center environment. NetApp has a Service Design Workshop that customers can attend to help choose appropriate SSLs and PSLs for their application and storage environment.

The SLO workflow package consists of 9 workflows:

1. Service catalog initialization: initializes the data center service catalog, prepares the environment for creating SLOs and builds the data model to save SLO details.
2. Manage SSL class: creates, modifies or deletes a Storage Service Level and maps or unmaps aggregates to a service level.
3. Manage PSL class: creates, modifies or removes a protection topology used in Protection Service Levels and can be used to create, modify or remove a protection topology's associated edge (protection) storage.
4. Manage SLC: adds, modifies or removes a Service Level Class.
5. Manage adaptive QoS policy configuration: adds clusters to, or deletes clusters from, adaptive QoS services and can be used to fine-tune adaptive QoS policy configurations.
6. Adaptive QoS manager: starts or stops adaptive QoS policy activity.
7. Service based volume provisioning: provisions volumes based on service level requirements.
8. Service based LUN provisioning: provisions LUNs based on service level requirements in existing volumes, or creates a new volume for the LUN.
9. Move volumes to new storage service level: moves existing volumes from one service class to another.

The above workflows can be used to define SSLs and PSLs, which can then be used to create SLCs. Once this is done, other workflows can be invoked by authorized users to provision volumes or LUNs in a specified SLC automatically, just by selecting the class of service. For instance, one could create GOLD, SILVER and BRONZE SLCs with differing performance and protection characteristics; once they are defined, users can employ these workflows to provision storage volumes just by specifying the SLC level.

Administrators can use the above workflows to fine-tune the IO performance and protection characteristics of their storage clusters by modifying SSL and PSL class definitions as needed. Adaptive QoS can also be defined, activated or terminated on a per-cluster basis with the above workflows.
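
To make the Service Level Class concept more concrete, the sketch below models a hypothetical GOLD class as a PowerShell hashtable, pairing a Storage Service Level with a Protection Service Level. The attribute names and values are purely illustrative (they echo the EXTREME-PERFORMANCE and HIGHLY-PROTECTED examples above) and do not represent the actual data model that the service catalog initialization workflow builds.

```powershell
# Hypothetical GOLD Service Level Class: a Storage Service Level (performance,
# efficiency, availability) paired with a Protection Service Level (protection
# topology and RPO). Names and values are illustrative only.
$goldSlc = @{
    Name                   = "GOLD"
    StorageServiceLevel    = @{
        MinIopsPerTB       = 600       # adaptive QoS floor
        MaxIopsPerTB       = 3072      # adaptive QoS ceiling
        SpaceEfficiency    = "thin"    # thick | thin | effective (dedupe)
        HighlyAvailable    = $true
    }
    ProtectionServiceLevel = @{
        SecondaryCopy      = $true     # replicate to secondary destination storage
        TertiaryCopy       = $false
        RpoMinutes         = 15
        MetroCluster       = $true
    }
}

# A user provisioning storage through the "Service based volume provisioning"
# workflow would only select the class name; the workflows resolve the rest.
$goldSlc.Name
```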

Workflows for Cloud Manager

This package is for customers using Cloud ONTAP storage services on Amazon Web Services (AWS). With these workflows, authorized users or administrators can create, start, stop and delete Cloud ONTAP instances, as well as populate data schemas for them, in a programmable fashion using NetApp's OnCommand Cloud Manager (OCCM) and OnCommand WFA. These Cloud Manager workflows are community supported and only touch on some of the many Cloud Manager services that can be automated through WFA. To use these workflows, OCCM must be installed, and Cloud Manager Admin, Tenant Admin or Working Environment credentials, as well as OCCM server credentials, must be provided. All Cloud Manager workflows use the Cloud Manager data schema, which is populated during the execution of the workflows.

The Cloud Manager WFA package consists of five workflows:

1. Create Cloud ONTAP instance: creates a Cloud ONTAP SVM in your AWS environment and waits for the instantiation to complete. Once complete, the Cloud ONTAP SVM can be started and used by, or connected to, any compute instances in the same AWS region to provide storage services. This workflow automatically acquires the AWS premium storage needed to back the Cloud ONTAP SVM.
2. Start Cloud ONTAP instance: starts existing Cloud ONTAP instances in your AWS environments. Note that Cloud ONTAP storage instances must be started before they can be connected to and used by AWS compute instances.
3. Stop Cloud ONTAP instance: stops an operating Cloud ONTAP instance when it is no longer needed by your AWS applications. Stopping the Cloud ONTAP service does not free the AWS premium storage associated with the instance, but it does halt any subsequent IO activity to that storage.
4. Delete Cloud ONTAP instance: stops a running Cloud ONTAP instance and deletes it, freeing the AWS resources it used. Once complete, the deleted Cloud ONTAP storage is no longer available to your AWS environment.
5. Acquire and wait for Cloud ONTAP instance: a child workflow, used by all of the above workflows, that populates the Cloud Manager data schema with information about running Cloud ONTAP instances in your AWS environment.

Cloud ONTAP runs as an AWS compute instance; when starting and stopping Cloud ONTAP instances, one is starting and stopping the AWS compute instance where the Cloud ONTAP code executes. Moreover, the AWS premium storage used to back Cloud ONTAP storage is connected to the Cloud ONTAP compute instance. Just CREATING or STARTING a Cloud ONTAP instance does not provision Cloud ONTAP volumes/LUNs or connect Cloud ONTAP storage to AWS compute instances. To provision Cloud ONTAP storage, use the OCCM console; to connect Cloud ONTAP storage to EC2 applications, use AWS EC2 instance services or the AWS EC2 management console.

Note that before you STOP a Cloud ONTAP instance, all AWS compute instances using that storage should be disconnected from the Cloud ONTAP storage or be in a stopped state. Also, when the DELETE workflow is issued, all EC2 instances are forcibly disconnected from the Cloud ONTAP storage, regardless of whether they are stopped or running.
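
Because these are ordinary WFA workflows, they can also be driven over WFA's REST interface, which makes it possible to script routine actions such as stopping a development Cloud ONTAP instance overnight to save AWS compute cost. The fragment below is a hypothetical sketch: the workflow name, input key and REST paths are assumptions (as in the earlier REST example), and disconnecting the EC2 clients is left to AWS tooling, since WFA does not perform that step for you.

```powershell
# Hypothetical sketch: stop a Cloud ONTAP instance through WFA's REST API.
# Workflow name, input key and REST paths are assumptions; verify before use.
$wfaServer    = "https://wfa.example.com"      # hypothetical WFA host
$cred         = Get-Credential                 # WFA operator credentials
$instanceName = "cot-dev-01"                   # illustrative Cloud ONTAP instance

# 1. First disconnect or stop the EC2 instances using this storage
#    (done with AWS tooling, e.g. the AWS CLI or console, not with WFA).

# 2. Look up the "Stop Cloud ONTAP instance" workflow and submit a job for it.
$wf   = Invoke-RestMethod -Method Get -Credential $cred `
          -Uri "$wfaServer/rest/workflows?name=Stop+Cloud+ONTAP+instance"
$body = @{ userInputValues = @(@{ key = "InstanceName"; value = $instanceName }) } |
          ConvertTo-Json -Depth 4

Invoke-RestMethod -Method Post -Credential $cred -ContentType "application/json" `
    -Uri "$wfaServer/rest/workflows/$($wf.uuid)/jobs" -Body $body
```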

Workflows for NetApp Software-Defined Storage in the VMware Software-Defined Data Center (SDDC)

The OnCommand WFA package for VMware vRealize Orchestrator can be used to help operate NetApp software-defined storage in VMware's SDDC. This package takes advantage of VMware vCloud or vRealize Suite components such as vRealize Automation, vRealize Operations Management and vRealize Orchestrator (vRO), together with NetApp WFA and NetApp OnCommand Unified Manager or OnCommand Insight. It integrates vRealize services with NetApp WFA so that vCloud administrators can use NetApp workflows directly to provision, migrate and decommission storage; set up new virtualization environments; and set up storage for new applications.

For OCUM, the workflows depend on three data sources: a vRO data source, a vCloud data source and an OnCommand Unified Manager data source. For OCI, only the OCI data source is needed. All data sources periodically query their services for information and therefore require login credentials, host IP addresses and information about how often they should be updated. The data sources are used to populate the vc, vcloud, cm_storage and cm_performance schemes, which are filtered to supply objects and other entities to the package's workflows.

After establishing the above data sources, users must provide credentials to give WFA access to vRealize services and then configure vRO. vRO is where vSphere infrastructure and other VMware automation processes execute and integrate directly with NetApp OnCommand WFA. vRO has its own workflows and calls OnCommand WFA to automate NetApp storage activities. vRO must have the vRO NetApp Integration Package for OnCommand WFA installed, which talks directly with WFA.

vRealize Automation is a centralized operational portal that enables users and administrators to request services in vCloud. vRealize Automation calls vRO and uses vRO workflows to perform orchestration tasks inside the vCloud environment. Similarly, vRealize Automation allows cloud administrators to invoke vRO services. The WFA workflows described below assume that vRealize Automation and vRealize Operations Management are properly connected to vRO, the vRealize server and NetApp WFA.

The NetApp WFA vRealize Orchestrator package consists of 16 vRO workflows, 14 vRO actions and 2 configuration workflows. Essentially, these workflows link vRO to NetApp WFA so that WFA workflows can be executed and monitored to completion. The package includes the following:

- Connect and disconnect NetApp WFA and the NetApp Storage Service Catalog to vRealize Automation, vRealize Operations Management, vRO and vCenter;
- Verify WFA workflow inputs, run a WFA workflow, get WFA job execution details, wait for a WFA job to complete and return WFA workflow outputs; and
- Generate a WFA e-mail message and find a WFA workflow ID.

The package supplies the data sources with linkage information; populates the WFA data schemes; supplies information to set up and connect vRealize Automation, vRealize Operations Management and vRO; and establishes the vRO REST connection to OnCommand WFA. Most workflow input parameters come from the data sources and data schemes.

The vRO installation, deployment and workflows are described in more detail in a NetApp technical report on Software-Defined Storage with NetApp and VMware [3], in WFA videos [4] and in the VMware vRealize Orchestrator Package for OnCommand Workflow Automation, which includes the actual vRO workflows, vRealize connectors, data sources and other information and can be downloaded from the NetApp Community Support pages [5].

[3] Available at http://www.netapp.com/us/media/tr-4308.pdf
[4] Available at http://www.youtube.com/watch?v=kz8oz16k48c
[5] Available at http://mysupport.netapp.com/tools/download/ecmlp2412683dt.html?productid=62116

Summary

OnCommand WFA enables storage architects, administrators and solution designers to encapsulate and script standard, everyday NetApp storage activities in a way that non-storage administrators can perform with the click of a button. Not all data centers may wish to design, develop and debug their own WFA workflows, but just about anyone can download the packages identified here from the NetApp Automation Store and use them to support NetApp storage activities.

With WFA, NetApp has taken the next step in storage service automation by providing an almost visual scripting solution to automate these storage activities. Using WFA workflows, NetApp storage administration can be accomplished with minimal effort and storage expertise. WFA makes using NetApp storage as simple as possible for application, server and storage administrators.

Silverton Consulting, Inc., is a U.S.-based Storage, Strategy & Systems consulting firm offering products and services to the data storage community.