NetApp SolidFire Element OS User Guide


NetApp SolidFire Element OS User Guide
For Element OS Version 10.0
September _A0

Copyright Information

Copyright NetApp, Inc. All Rights Reserved. No part of this document covered by copyright may be reproduced in any form or by any means graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system without prior written permission of the copyright owner.

Software derived from copyrighted NetApp material is subject to the following license and disclaimer: THIS SOFTWARE IS PROVIDED BY NETAPP AS IS AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp. The product described in this document may be protected by one or more U.S. patents, foreign patents, or pending applications.

RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS (October 1988) and FAR (June 1987).
Trademark Information

NETAPP, the NETAPP logo, and the marks listed at are trademarks of NetApp, Inc. Other company and product names may be trademarks of their respective owners.

NetApp SolidFire Element OS 10.0 User Guide

Table of Contents

Introduction
SolidFire Element OS Features
SolidFire System Architecture
Network Ports
Clusters
Nodes
Storage Nodes
Fibre Channel Nodes
Getting Started
Setting Up the SolidFire Storage System
SF-series Node Configuration
Node Versioning and Compatibility
Configuring Nodes Using the TUI
Configuring Nodes Using the Node UI
Configuring iDRAC on Each New Node
Creating a New Cluster
Adding Drives at Cluster Creation
Using the Element OS Web UI
Using the API Log
Using Filters
Sorting Lists
The Web UI and Cluster Load
Icon Reference
Providing Feedback
Reporting
Reporting Overview
Event Log
Event Types
Alerts
Alert Error Codes
iSCSI Sessions
Fibre Channel Sessions
Running Tasks
Volume Performance
Volume Performance Details
Viewing Individual Volume Performance Details
Management
Volumes
Volume Details
Creating a New Volume
Editing Active Volumes
Cloning a Volume
Adding Multiple Volumes to an Access Group
Applying a QoS Policy to Multiple Volumes
Disassociating a QoS Policy from a Volume
Volume Pairing
Pairing Volumes
Pairing Volumes Using a Volume ID
Pairing Volumes Using a Pairing Key
Editing Volume Pairs
Deleting Volume Pairs
Switching from a Source to a Target Volume
Validating Paired Volumes
Validating Data Transmission
Deleting a Volume
Resizing a Volume
Restoring a Deleted Volume
Purging a Volume
Volume Backup and Restore Operations
Volume Backup Operations
Volume Restore Operations
Viewing Individual Volume Details
Accounts
Account Details
Creating an Account
Editing an Account
Deleting an Account
Viewing Individual Account Details
Volume Access Groups
Volume Access Group Details
Creating an Access Group
Adding Volumes to an Access Group
Adding Initiators to an Access Group
Adding a Single Initiator to an Access Group
Adding Multiple Initiators to a Volume Access Group
Removing Initiators from an Access Group
Deleting an Access Group
Removing Volumes from an Access Group
Creating a Volume Access Group for Fibre Channel Clients
Assigning LUNs to Fibre Channel Volumes
Viewing Individual Access Group Details
Initiators
Initiator Details
Creating an Initiator
Deleting an Initiator
Deleting Multiple Initiators
Editing an Initiator
QoS Policies
Understanding Quality of Service
QoS Performance Curve
QoS Policies Details
Creating a QoS Policy
Editing a QoS Policy
Deleting a QoS Policy
Data Protection
Volume Snapshots
Volume Snapshot Details
Creating a Volume Snapshot
Editing Snapshot Retention
Cloning a Volume from a Snapshot
Rolling Back a Volume to a Snapshot
Volume Snapshot Backup Operations
Backing Up a Volume Snapshot to an Amazon S3 Object Store
Backing Up a Volume Snapshot to an OpenStack Swift Object Store
Backing Up a Volume Snapshot to a SolidFire Cluster
Deleting a Snapshot
Group Snapshots
Group Snapshot Details
Creating a Group Snapshot
Editing Group Snapshots
Cloning Multiple Volumes
Cloning Multiple Volumes from a Group Snapshot
Rolling Back Volumes to a Group Snapshot
Deleting a Group Snapshot
Snapshot Schedules
Snapshot Schedule Details
Creating a Snapshot Schedule
Editing a Snapshot Schedule
Deleting a Snapshot Schedule
Copying a Snapshot Schedule
Cluster Pairing
Real-Time Replication
Multiple Cluster Pairing
Replication Configuration Information
Node Port Requirements
Pairing Clusters Using MVIP
Pairing Clusters with a Pairing Key
Validating Paired Clusters
Deleting a Cluster Pair
Volume Pairs
Volume Pairing Messages
Volume Pairing Warnings
Users
User Types
Creating a Cluster Admin Account
Editing Cluster Admin Permissions
Deleting a Cluster Admin Account
Changing the Cluster Admin Password
Terms of Use
Enabling Terms of Use
Editing Terms of Use
Disabling Terms of Use
Cluster
Cluster Settings
Cluster Settings Details
Setting Cluster Full Threshold
Enabling and Disabling Encryption for a Cluster
Setting Network Time Protocol
Enabling a Broadcast Client
SNMP
SNMP Details
Configuring an SNMP Requestor
Configuring an SNMP USM User
Configuring SNMP Traps
Viewing Management Information Base Files
LDAP
LDAP Details
Configuring LDAP
Disabling LDAP
Drives
Drives Details
Adding Available Drives to a Cluster
Wear Remaining
Removing a Drive
Removing Failed Drives
Secure-Erasing Data
Multi-Drive Slice Service
Recovering Multi-Drive Slice Service Drives
Removing MDSS Drives
Adding MDSS Drives
Nodes
Storage Nodes
Viewing Individual Node Details
Viewing Node Software Version
Adding a Node to a Cluster
Mixed Node Capacity
Accessing Node Settings
Viewing Node Activity Graph
Removing Nodes from a Cluster
Fibre Channel Nodes
Adding Fibre Channel Nodes to a Cluster
Creating a Cluster with Fibre Channel Nodes
Setting Up Fibre Channel Nodes
Finding Fibre Channel WWPN Addresses
Removing a Fibre Channel Node
Fibre Channel Port Details
Virtual Networks
Viewing Virtual Networks
Creating a Virtual Network
Enabling Virtual Routing and Forwarding (VRF)
Editing a Virtual Network
Editing VRF VLANs
Deleting a Virtual Network
Virtual Volumes
Virtual Volumes Overview
Virtual Volume Object Types
Configuring vSphere for VVols
Enabling Virtual Volumes
Viewing Virtual Volume Details
Virtual Volume Details
Individual Virtual Volume Details
Deleting a Virtual Volume
Storage Containers
Creating a Storage Container
Viewing Storage Container Details
Storage Container Details
Individual Storage Container Details
Editing a Storage Container
Deleting a Storage Container
Protocol Endpoints
Bindings
Hosts
Hardware Maintenance
Automatic Recovery Scenarios
Single Node Recovery
Multiple Node Recovery
Drive Recovery
Replacing an SSD
Adding a Storage Node
Removing a Storage Node
Powering Down a Node
Powering Up a Node
Powering Down a Cluster
Appendix A Management Node Overview
Management Node Images and Platforms
Installing a Management Node
Configuring Remote Support Firewall Ports
Enabling Remote Support Connections
Configuring the Management Node with a Proxy Server
Running sfsetproxy Commands
Setting the Management Node Host and Port Arguments
Setting Up the connection.json File for Active IQ
Accessing Management Node Settings
Modifying Management Node Settings
Management Node Settings for eth0 Networks
Network Settings for eth0
Management Node Cluster Settings
Management Node Cluster Interface Settings
Management Node System Tests
Running System Utilities on Management Node
Creating a Cluster Support Bundle
Appendix B Upgrade SolidFire Software
Upgrading Management Node Software
Upgrading Nodes with Current SolidFire Element Software
Appendix C Calculating Max Provisioned Space
Appendix D Cluster Fullness Overview
Block Cluster Full Severity Levels
Metadata Cluster Full Severity Levels
Cluster Full Threshold Details
Cluster Fullness in a Mixed Node Environment
Appendix E Related Information
Contacting NetApp Support for SolidFire

Introduction

This guide provides information about how to use the NetApp SolidFire Element OS Web user interface (UI) to configure and manage a SolidFire storage system. Use this guide when installing, managing, or troubleshooting your storage solution.

The SolidFire Element OS UI is an easy-to-understand representation of the configuration of your SolidFire storage system. Through the Web UI, you can set up and monitor SolidFire cluster storage capacity and performance and manage storage activity across a multi-tenant infrastructure. The Element OS Web UI is built on the SolidFire API, which enables you to see system adjustments almost immediately.

This guide is intended for IT professionals, software developers, and others who install, administer, or troubleshoot SolidFire storage solutions. It makes the following assumptions:

- You have a background as a Linux system administrator.
- You are familiar with server networking and networked storage, including IP addresses, netmasks, and gateways.

SolidFire Element OS Features

The SolidFire Element Operating System (OS) comes preinstalled on each node. SolidFire Element OS includes the following features:

- SolidFire Helix self-healing data protection
- Always-on, inline, real-time deduplication
- Always-on, inline, real-time compression
- Always-on, inline, reservation-less thin provisioning
- Fibre Channel node integration
- Management Node
- LDAP capability for secure login functionality
- Guaranteed volume-level Quality of Service (QoS): minimum IOPS, maximum IOPS, and IOPS burst control
- Instant, reservation-less deduplicated cloning
- Volume snapshots: snapshots of individual volumes, scheduled snapshots of a volume or group of volumes, consistent snapshots of a group of volumes, and cloning of multiple volumes individually or from a group snapshot
- Integrated Backup and Restore for volumes
- Real-Time Replication for clusters and volumes
- Native multi-tenant (VLAN) management and reporting: Virtual Routing and Forwarding (VRF) and tagged networks
- Proactive Remote Monitoring through Active IQ (AIQ)
- Complete REST-based API management
- Granular management access/role-based access control
- Virtual Volumes (VVols) support for VMware vSphere
- VASA support
- Volume- and system-level performance and data usage reporting
- VMware vSphere (VAAI) support
- Terms of Use banner
- Upgrade Readiness Analyzer

SolidFire System Architecture

SolidFire storage is an interconnection of hardware and software designed for complete automation and management of an entire SolidFire storage system. The following diagram shows the basic layout of the SolidFire storage system and how it connects to a network:

Network Ports

You might need to allow the following network ports through your datacenter's edge firewall so that you can manage the system remotely and allow clients outside of your datacenter to connect to resources. Some ports might not be required, depending on how you use the system.

NOTE: All ports are TCP unless stated otherwise, and should be open bidirectionally.

The following abbreviations are used in the table:

- MIP: Management IP address
- SIP: Storage IP address
- MVIP: Management virtual IP address
- SVIP: Storage virtual IP address

Source | Destination | Port | Description
iSCSI clients | Storage cluster MVIP | 443 | UI and API access (optional)
iSCSI clients | Storage cluster SVIP | 3260 | Client iSCSI communications
iSCSI clients | Storage node SIP | 3260 | Client iSCSI communications
Management node | sfsupport.solidfire.com | 22 | Reverse SSH tunnel for support access
Management node | solidfire.brickftp.com | 22 | SFTP for log bundle uploads
Management node | Storage node MIP | 22 | SSH access for support
Management node | pubrepo.solidfire.com | 80 | Access to NetApp repository for Element OS and management node updates
Management node | Storage cluster MVIP | 161 | SNMP polling
Management node | Storage node MIP | 161 | SNMP polling
Management node | Storage node MIP | 442 | UI and API access to storage node
Management node | monitoring.solidfire.com | 443 | Storage cluster reporting to Active IQ
Management node | Storage cluster MVIP | 443 | UI and API access to storage cluster
SNMP server | Storage cluster MVIP | 161 | SNMP polling
SNMP server | Storage node MIP | 161 | SNMP polling
Storage node MIP | Management node | 80 | SolidFire Element OS updates
Storage node MIP | S3/Swift endpoint | 80 | HTTP communication to S3/Swift endpoint for backup and recovery
Storage node MIP | Management node | 123 | NTP
Storage node MIP | NTP server | 123 | NTP
Storage node MIP | Management node | 162 | SNMP traps

Source | Destination | Port | Description
Storage node MIP | SNMP server | 162 | SNMP traps
Storage node MIP | Remote storage cluster MVIP | 443 | Remote replication cluster pairing communication
Storage node MIP | Remote storage node MIP | 443 | Remote replication cluster pairing communication
Storage node MIP | S3/Swift endpoint | 443 | HTTPS communication to S3/Swift endpoint for backup and recovery
Storage node MIP | Remote storage node MIP | 2181 | Remote replication intercluster communication
Storage node MIP | Management node | 10514/514 | Syslog forwarding. Cluster defaults to port 514 if no port is specified.
Storage node MIP | Syslog server | 10514/514 | Syslog forwarding. Cluster defaults to port 514 if no port is specified.
Storage node SIP | S3/Swift endpoint | 80 | HTTP communication to S3/Swift endpoint for backup and recovery (optional)
Storage node SIP | S3/Swift endpoint | 443 | HTTPS communication to S3/Swift endpoint for backup and recovery (optional)
Storage node SIP | Remote storage node SIP | 2181 | Remote replication intercluster communication
Storage node SIP | Storage node SIP | 3260 | Internode iSCSI
Storage node SIP | Remote storage node SIP | | Remote replication node-to-node data transfer
System administrator PC | Management node | 442 | HTTPS UI and API access to management node
System administrator PC | Storage node MIP | 442 | UI and API access to storage node
System administrator PC | Management node | 443 | UI and API access to management node
System administrator PC | Storage cluster MVIP | 443 | UI and API access to storage cluster
System administrator PC | Storage node MIP | 443 | Storage cluster creation, post-deployment UI access to storage cluster
vCenter Server | Storage cluster MVIP | 443 | vCenter Plug-in API access
vCenter Server | Management node | 8080/8443 | vCenter Plug-in QoSSIOC service
vCenter Server | Storage cluster MVIP | 8444 | vCenter VASA provider access (VVols only)
vCenter Server | Management node | 9443 | vCenter Plug-in registration. The port can be closed after registration is complete.
Best Practices: Enable ICMP between the management node, SolidFire nodes, and cluster MVIP.

NOTE: For vSphere network port requirements, refer to the VMware documentation.
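When auditing an edge firewall against the table above, it can help to encode the rules in a script. The following is a minimal, hypothetical helper (not part of Element OS); it carries only a representative subset of the rows, mirroring the table's Source/Destination/Port/Description columns:

```python
# Hypothetical helper for auditing the firewall rules in the table above.
# Only a representative subset of rows is encoded here; extend the list
# with the remaining rows from the table as needed.
PORT_RULES = [
    ("iSCSI clients", "Storage cluster MVIP", 443, "UI and API access (optional)"),
    ("iSCSI clients", "Storage cluster SVIP", 3260, "Client iSCSI communications"),
    ("Management node", "Storage cluster MVIP", 443, "UI and API access to storage cluster"),
    ("Storage node MIP", "NTP server", 123, "NTP"),
    ("System administrator PC", "Storage node MIP", 442, "UI and API access to storage node"),
]

def ports_for_source(source):
    """Return the sorted set of TCP ports a given source must be able to reach."""
    return sorted({port for src, _dst, port, _desc in PORT_RULES if src == source})
```

For example, `ports_for_source("iSCSI clients")` returns `[443, 3260]`, the ports that must be open from client networks.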

Clusters

A cluster is the hub of a SolidFire storage system and is made up of a collection of nodes. You must have at least four nodes in a cluster (five or more nodes are recommended) in order for SolidFire storage efficiencies to be realized. A cluster appears on the network as a single logical group and can then be accessed as block storage.

Creating a new cluster initializes a node as the communications owner for the cluster and establishes network communications for each node in the cluster. This process is performed only once for each new cluster. You can create a cluster by using the Element OS Web UI or the API. You can scale out a cluster by adding additional nodes. When you add a new node, there is no interruption of service, and the cluster automatically uses the performance and capacity of the new node. The following graphic illustrates the basic IP address layout for a cluster.

Ethernet Switches

When setting up your SolidFire storage system, consider the following information about Ethernet switches. 10GE switches are required for iSCSI storage services and node intra-cluster services communication.

Best Practices: Configure and use Jumbo Frames on the storage network. The entire network traffic path between the iSCSI clients (bare-metal servers and virtualized servers with VMware or KVM/Xen/Hyper-V) and the nodes, as well as between the nodes themselves, should use Jumbo Frames (MTU of 9000). This means that the NICs in bare-metal servers, as well as the virtual switches and VMNICs in hypervisors, need to be configured for Jumbo Frames and an MTU of 9000. Keep in mind that on the 10GE switches the Jumbo Frames MTU setting must be larger to account for packet overhead and other Ethernet framing, so the MTU on the switches is typically set somewhat higher. It is important to work with the customer/prospect IT staff to determine the appropriate MTU setting for their switches (vendor and model specific).
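As a rough illustration of why the switch MTU must exceed the host MTU, the frame-size arithmetic can be sketched as follows. The overhead byte counts below are standard Ethernet framing values, not figures from this guide, and switch vendors count MTU differently, so real switches are normally configured with additional headroom:

```python
# Minimum frame size a switch must carry for a jumbo IP payload, assuming
# standard Ethernet framing. Vendors measure MTU differently, so configure
# switches with headroom above this minimum per their documentation.
HOST_MTU = 9000          # IP MTU configured on NICs, vSwitches, and nodes
ETH_HEADER = 14          # destination MAC + source MAC + EtherType
VLAN_TAG = 4             # 802.1Q tag, present on tagged storage VLANs
FCS = 4                  # frame check sequence

def min_switch_frame_size(host_mtu=HOST_MTU, tagged=True):
    """Return the smallest Ethernet frame size that fits a full jumbo payload."""
    overhead = ETH_HEADER + FCS + (VLAN_TAG if tagged else 0)
    return host_mtu + overhead
```

With a 9000-byte host MTU and VLAN tagging, `min_switch_frame_size()` yields 9022 bytes, which is why switch-side jumbo settings must be larger than the host-side 9000.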
1GE switches are the preferred option (and best practice) to use for the following:

- Management of the cluster and the nodes
- Intra-cluster management traffic between the nodes
- Traffic between the cluster nodes and the virtual management appliance/node (management node/FDVA)

Best Practices: Deploy a pair of 1GE switches to provide high availability, resilience, and load sharing for the pair of 1GE ports on each node. However, a single 1GE switch can be used with either one or both 1GE ports on each of the nodes. These options provide flexibility to customers in order to fit the SolidFire solution into their existing infrastructure, architecture, and cost; a single switch is a supported option, but not the preferred solution. The management traffic can also be configured to run over the same pair of 10GE ports as the storage services and intra-cluster traffic (and not use the 1GE ports at all), but the management traffic will then have to run across the same VLAN as the storage services and intra-cluster traffic.

Administrators and hosts can access the cluster using virtual IP addresses. Any node in the cluster can host the virtual IP addresses. The Management Virtual IP (MVIP) enables cluster management through a 1GbE connection, while the Storage Virtual IP (SVIP) enables host access to storage through a 10GbE connection:

Type | Label | Network
Management Virtual IP | MVIP | 1GbE
Storage Virtual IP | SVIP | 10GbE

These virtual IP addresses enable consistent connections regardless of the size or makeup of a SolidFire cluster. If a node hosting a virtual IP address fails, another node in the cluster begins hosting the virtual IP address.

Nodes

SF-series nodes are the hardware that is grouped into a cluster to be accessed as block storage. There are two fundamental types of SF-series nodes: storage and Fibre Channel.

Storage Nodes

A SolidFire storage node is a collection of drives that communicate with each other through the CIPI Bond10G network interface. Drives in the node contain block and metadata space for data storage and data management. You can create a cluster with new storage nodes, or add storage nodes to an existing cluster to increase storage capacity and performance. Storage nodes have the following characteristics:

- Each node has a unique name. If a node name is not specified by an administrator, it defaults to SF-XXXX, where XXXX is four random characters generated by the system.
- Each node has its own high-performance non-volatile random access memory (NVRAM) write cache to improve overall system performance and reduce write latency.
- Each node is connected to two networks with two independent links for redundancy and performance. Each node requires an IP address on each network.
- You can add or remove nodes from the cluster at any time without interrupting service.

Fibre Channel Nodes

SolidFire Fibre Channel nodes provide connectivity to a Fibre Channel switch, which you can connect to Fibre Channel clients. Fibre Channel nodes act as a protocol converter between the Fibre Channel and iSCSI protocols; this enables you to add Fibre Channel connectivity to any new or existing SolidFire cluster. Fibre Channel nodes have the following characteristics:

- Fibre Channel switches manage the state of the fabric, providing optimized interconnections.
- The traffic between two ports flows through the switches only; it is not transmitted to any other port.
- Failure of a port is isolated and does not affect the operation of other ports.
- Multiple pairs of ports can communicate simultaneously in a fabric.
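The default SF-XXXX naming behavior described above for storage nodes can be sketched as follows. The guide says only "four random characters," so the alphanumeric character set used here is an assumption:

```python
import random
import string

def default_node_name(rng=random):
    """Build a default node name of the form SF-XXXX.

    XXXX is four random characters; uppercase alphanumerics are assumed
    here, since the guide does not specify the exact character set the
    system draws from."""
    alphabet = string.ascii_uppercase + string.digits
    suffix = "".join(rng.choice(alphabet) for _ in range(4))
    return "SF-" + suffix
```

Calling `default_node_name()` produces names such as SF-7Q2K, matching the pattern an unnamed node receives.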

Getting Started

SolidFire storage nodes are delivered as appliances with SolidFire Element OS installed and ready to be configured. After node configuration, you can add each node to a SolidFire cluster.

Setting Up the SolidFire Storage System

These instructions assume that the SolidFire hardware you purchased has been racked, cabled, and powered on. Instructions for setting up the SolidFire system hardware are included in the hardware shipment. The SolidFire cluster hardware should be installed and cabled appropriately so that network and configuration management communication can be established. If you are adding SolidFire Fibre Channel nodes to a cluster, see Fibre Channel Nodes for configuration instructions.

When setting up the SolidFire hardware, you must follow a specific order of operations to ensure that your nodes and clusters are configured correctly:

1. Installing a Management Node. A management node is a virtual machine-based node used to upgrade the system software, connect to Active IQ for system monitoring, and allow NetApp SolidFire Active Support to access your nodes should you need help troubleshooting a problem. A management node runs Element OS and expands on the capabilities previously provided by the FDVA. See Management Node Overview for information about installing and setting up your management node.

2. SF-series Node Configuration. You need to configure nodes before you can add them to a cluster. This ensures proper network connectivity and node identification when you create the cluster.

3. Creating a New Cluster. You can create a new SolidFire cluster once individual nodes are configured. One of the configured nodes needs to be identified as the node with which the cluster will be configured. This node is the primary node used to establish communications with the other nodes in the cluster.

4. Adding Available Drives to a Cluster. You can add all available drives when the cluster is created or at a later time.

5. Accounts. Accounts are required for access to volumes on a node. You can create accounts before or during volume creation. There are two types of accounts on a SolidFire system: Cluster Admin accounts are used to monitor and configure settings on the cluster (for details, see Creating a Cluster Admin Account), and user accounts (billable customer accounts) are used to identify customers on the system (for details, see Creating an Account).

6. Creating a New Volume. Volumes are primary storage partitions on a node. They are accessed by user accounts on the system.

SF-series Node Configuration

You need to configure individual SolidFire nodes before you can add them to a cluster. When you install and cable a node in a rack unit and power it on, the terminal user interface (TUI) displays the fields necessary to configure the node. Ensure that you have the necessary configuration information for the node before proceeding.

Alternatively, you can configure these settings by accessing the node via the Element OS Web UI using the Dynamic Host Configuration Protocol (DHCP) 1G management IP address displayed in the TUI. The DHCP address is located in the menu bar at the top of the TUI. To access the node using the DHCP address, enter https://node_dhcp_ip:442 in a browser, where node_dhcp_ip is the DHCP address provided in the TUI.

After initial configuration, you can access the node using the node's management IP address. You can then change the node settings, add it to a cluster, or use the node to create a cluster. For more information, see Accessing Node Settings, Adding a Node to a Cluster, or Creating a New Cluster. A SolidFire Fibre Channel node requires the same configuration as a SolidFire storage node. See Fibre Channel Nodes for more information.

NOTE: You cannot add a node with DHCP-assigned IP addresses to a cluster. You can use the DHCP IP address to initially configure the node in the Web UI, TUI, or API. During this initial configuration, you can add the static IP address information so that you can add the node to a cluster.

You can also configure a new node using SolidFire API methods. See the NetApp SolidFire Element OS API Reference Guide for methods used to configure nodes. Steps to use the TUI and Element OS Web UI are outlined in this document.

A node can be in one of the following states depending on the level of configuration:

- Available: The node has no associated cluster name and is not yet part of a cluster.
- Pending: The node is configured and can be added to a designated cluster. Authentication is not required to access the node.
- PendingActive: The system is in the process of installing compatible Element OS software on the node. When complete, the node will move to the Active state.
- Active: The node is participating in a cluster. Authentication is required to modify the node.

At each of these states, some fields are read-only. To see when fields are available for modification, see Accessing Node Settings.

Node Versioning and Compatibility

Node compatibility is based on the SolidFire Element OS software version installed on a node.
SF-series clusters automatically image a node to the Element OS version on the cluster if the node and cluster are not at compatible versions. The following list identifies the SolidFire software release significance levels that make up the software version number:

- Major: The first number designates a software release. A node with one major version number cannot be added to a cluster containing nodes of a different major version number, nor can a cluster be created with nodes of mixed major versions.
- Minor: The second number designates smaller software features or enhancements to existing software features that have been added to a major release. This component is incremented within a major version component to indicate that this incremental release is not compatible with any other Element OS incremental releases with a different minor component. For example, 7.0 is not compatible with 7.1, and 7.1 is not compatible with 7.2.
- Micro: The third number designates a compatible patch (incremental release) to the Element OS version represented by the major.minor components. For example, micro releases of 7.0 (such as 7.0.2) are compatible with one another.

Major and minor version numbers must match for compatibility. Micro numbers do not have to match for compatibility.
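The rules above reduce to a single comparison: two Element OS versions can coexist in a cluster when their major and minor components match. A minimal sketch (not an official tool) of that check:

```python
def element_versions_compatible(a, b):
    """Return True if two Element OS version strings (e.g. "7.0.1") are
    compatible: major and minor components must match; the micro component
    may differ."""
    def major_minor(version):
        return version.split(".")[:2]
    return major_minor(a) == major_minor(b)
```

For example, `element_versions_compatible("7.0.1", "7.0.2")` is True (micro may differ), while `element_versions_compatible("7.0", "7.1")` is False (minor must match).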

Configuring Nodes Using the TUI

You can use the TUI to perform initial configuration for new nodes. The following is an example of the TUI window.

1. Attach a keyboard and monitor to the node, and then power on the node. The TUI appears on the tty1 terminal with the Network Settings tab. If a DHCP server is running on the network with available IP addresses, the 1GbE address appears in the Address field.

NOTE: If the node cannot reach your configuration server, the TUI displays an error message. Check your configuration server connection or the networking connection to resolve the error.

All configurable TUI fields described in this section also apply when using the Element OS Web UI. To navigate to the Element OS UI, add https:// to the beginning and :442 to the end of the node IP address. Example: https://<node IP address>:442

2. Use the on-screen navigation to configure the 1G and 10G network settings for the node.

NOTE: To enter text, press the Enter key to open edit mode. When you have finished entering text, press the Enter key again to close edit mode. To navigate between fields, use the arrow keys (not the Tab key).

Caution: NetApp strongly recommends that you configure the Bond1G and Bond10G interfaces for separate subnets. Bond1G and Bond10G interfaces configured for the same subnet cause routing problems when storage traffic is sent via the Bond1G interface. If you must use the same subnet for management and storage traffic, manually configure management traffic to use the Bond10G interface. You can do this for each node using the Cluster Settings page of the Element OS Web UI.

3. Press the s key to save the settings, and then press y to accept the changes.

4. Press the c key to navigate to the Cluster tab.

5. Use the on-screen navigation to configure the cluster settings for the node.

NOTE: All nodes of a cluster must have identical cluster names.

6. Press the s key to save the settings, and then press y to accept the changes.

The node is put in the Pending state and can be added to an existing cluster or a new cluster. For more information, see Adding a Node to a Cluster or Creating a New Cluster.
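The same network settings the TUI collects can also be supplied through the per-node API on port 442 (mentioned in the port table earlier). The sketch below only assembles a SetNetworkConfig-style JSON-RPC request body and does not send it; treat the method and parameter names as assumptions to verify against the NetApp SolidFire Element OS API Reference Guide:

```python
import json

def build_set_network_request(mip, netmask, gateway, request_id=1):
    """Build a JSON-RPC body for a per-node SetNetworkConfig-style call
    that assigns static Bond1G settings. Field names are illustrative --
    confirm them against the Element OS API Reference Guide before use."""
    return {
        "method": "SetNetworkConfig",
        "params": {
            "network": {
                "Bond1G": {
                    "address": mip,        # static management IP for the node
                    "netmask": netmask,
                    "gateway": gateway,
                }
            }
        },
        "id": request_id,
    }

# The serialized body would be POSTed to https://<node_dhcp_ip>:442/json-rpc/10.0
# during initial configuration (no request is sent in this sketch).
body = json.dumps(build_set_network_request("192.0.2.10", "255.255.255.0", "192.0.2.1"))
```

The IP addresses shown are documentation placeholders (RFC 5737 range), not values from this guide.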

Configuring Nodes Using the Node UI

You can perform initial node configuration in the SolidFire Node Configuration user interface. You need the DHCP address displayed in the TUI to access a node. For the location of the DHCP address, see Configuring Nodes Using the TUI. The DHCP address is only used to access and configure a node. You cannot use DHCP addresses to add a node to a cluster. The following is an example of the Network Settings - Bond1G page.

Caution: NetApp strongly recommends that you configure the Bond1G and Bond10G interfaces for separate subnets. Bond1G and Bond10G interfaces configured for the same subnet cause routing problems when storage traffic is sent via the Bond1G interface. If you must use the same subnet for management and storage traffic, manually configure management traffic to use the Bond10G interface. You can do this for each node using the Cluster Settings page of the Element OS Web UI.

1. In a browser window, enter the DHCP IP address of a node.

NOTE: You must add the extension :442 to access the node. For example: https://<DHCP IP address>:442

The Network Settings tab is displayed automatically and opened to the Network Settings - Bond1G page.

2. Enter the 1G network settings.
3. Click Save Changes.
4. Click Bond10G to display the 10G network settings.
5. Enter the 10G network settings.
6. Click Save Changes.
7. Click the Cluster Settings tab.
8. Enter the hostname for the 10G network.
9. Click Save Changes.

Configuring iDRAC on Each New Node

NetApp SolidFire installs Dell iDRAC Enterprise on each node. To configure iDRAC, see the HOW TO: Remotely Monitor Node Hardware - iDRAC Configuration solution record on the NetApp SolidFire Support site, or contact ng-sf-support@netapp.com.
NOTE: Access to the NetApp SolidFire Support site requires a user name and password.

Creating a New Cluster

You can create a new SolidFire cluster once you have configured the individual nodes. During node configuration, 1G or 10G Management IP (MIP) addresses are assigned to each node. Use one of the node IP addresses created during configuration to bring up the Create a New Cluster page. The IP address you use depends on which network you have chosen for cluster management.

When you create a cluster, a cluster administrator user account is automatically created for you. The cluster administrator has permission to manage all cluster attributes and can create other cluster administrator accounts.

1. In a browser window, enter a node MIP address.
2. In Create a New Cluster, enter the following:
NOTE: User names can contain uppercase and lowercase letters, numbers, and special characters.
Management VIP: Routable virtual IP on the 1GbE or 10GbE network for network management tasks.
iSCSI (Storage) VIP: Virtual IP on the 10GbE network for storage and iSCSI discovery.
NOTE: The SVIP cannot be changed after the cluster is created.
Data Protection: Two-way data protection, always on. (This is not user configurable.)
Create Username: The primary Cluster Admin user name for authenticated access to the cluster. Store it in a secure location for future reference.
Create Password: Password for authenticated access to the cluster. Store it in a secure location for future reference.
Repeat Password: Standard password confirmation.
EULA: Read and approve the End User License Agreement.
3. (Optional) In the Nodes list, clear the check boxes for any nodes that should not be included in the cluster (all nodes are selected by default).
4. Click Create Cluster.
The system might take several minutes to create the cluster, depending on the number of nodes in the cluster. On a properly configured network, a small cluster of five nodes should take less than one minute. After the cluster has been created, the Create a New Cluster window is redirected to the MVIP URL address for the cluster and displays the SolidFire Element OS Web UI.

Adding Drives at Cluster Creation

You will be directed to the Available Drives list when drives have been detected during cluster creation, and you will be prompted to add all the available drives at that time.
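For scripted deployments, the same cluster-creation step can be driven through the Element OS JSON-RPC API instead of the Create a New Cluster page. The sketch below only builds the CreateCluster request body; the endpoint path (https://<node MIP>/json-rpc/10.0), the parameter names, and the example IP addresses are assumptions to verify against the NetApp SolidFire API Reference Guide for your release.

```python
import json

def create_cluster_payload(mvip, svip, username, password, nodes):
    """Build a JSON-RPC request body for the CreateCluster method.

    Parameter names (mvip, svip, username, password, nodes, acceptEula)
    are assumptions based on the Element API reference; POST the body
    to one of the pending nodes at https://<node MIP>/json-rpc/10.0.
    """
    return {
        "method": "CreateCluster",
        "params": {
            "mvip": mvip,          # routable management virtual IP (1G or 10G)
            "svip": svip,          # storage virtual IP; cannot be changed later
            "username": username,  # primary Cluster Admin user name
            "password": password,
            "nodes": nodes,        # MIP addresses of the nodes to include
            "acceptEula": True,    # corresponds to approving the EULA in the UI
        },
        "id": 1,
    }

# Example using documentation-reserved IP addresses:
body = create_cluster_payload(
    "192.0.2.10", "198.51.100.10", "admin", "s3cret!",
    ["192.0.2.11", "192.0.2.12", "192.0.2.13", "192.0.2.14"])
print(json.dumps(body, indent=2))
```

As in the UI workflow, all configured pending nodes are included by listing them in nodes; omit a node's MIP to leave it out of the cluster.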

Using the Element OS Web UI

The SolidFire Element OS Web UI provides extensive monitoring capability and intuitive access to common tasks on the system. You can access the Web UI using the management virtual IP address of the primary cluster node.
NOTE: Disable popup blockers and NoScript settings in your browser before running the Web UI.
To access the Element OS Web UI, enter the cluster management address as a URL in your browser window. For example: https://<MVIP address>
Click through any authentication certificate messages.

Using the API Log

The SolidFire system uses the SolidFire API as the foundation for its features and functionality. The Element OS Web UI enables you to view various types of real-time API activity on the system as you use the interface. With the API log, you can view user-initiated and background system API activity, as well as API calls made on the page you are currently viewing. You can use the API log to identify which API methods are used for certain tasks, and to see how to use the API methods and objects to build custom applications. The complete NetApp SolidFire API Reference Guide is available from NetApp SolidFire Support.
1. From the SolidFire Element OS Web UI navigation bar, click API Log.
2. To modify the type of API activity displayed in the API Log window:
a. Select the Requests check box to display API request traffic.
b. Select the Responses check box to display API response traffic.
c. From the Filter list, choose one of the following types of API traffic:
User Initiated: API traffic generated by your activities during this Web UI session.
Background Polling: API traffic generated by background system activity. The following API calls are used in background polling: ListClusterFaults, GetClusterStats, ListBulkVolumeJobs, ListSyncJobs.
Current Page: API traffic generated by tasks on the page you are currently viewing.
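Methods observed in the API log can be replayed against the cluster directly when building custom applications. The following sketch constructs an HTTPS request for the JSON-RPC endpoint; the /json-rpc/10.0 path and HTTP Basic authentication are assumptions to confirm in the NetApp SolidFire API Reference Guide, and the IP and credentials are placeholders.

```python
import base64
import json
import urllib.request

def element_api_request(mvip, method, params=None,
                        username="admin", password="password", version="10.0"):
    """Build an HTTPS JSON-RPC request for an Element OS API method.

    The endpoint path and Basic-auth header are assumptions; send the
    request with urllib.request.urlopen() (clusters typically use
    self-signed certificates, so an SSL context may be needed).
    """
    body = json.dumps({"method": method, "params": params or {}, "id": 1})
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        url=f"https://{mvip}/json-rpc/{version}",
        data=body.encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
    )

# The same call the Web UI issues during background polling:
req = element_api_request("203.0.113.5", "GetClusterStats")
print(req.get_full_url())   # https://203.0.113.5/json-rpc/10.0
```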
Using Filters

Some pages within the Element OS Web UI enable you to sort and filter list information. When viewing lists (such as volumes, snapshots, and so on), refer to these steps to use the filter functionality to focus the information.

1. When viewing list information, click Filter.
2. Expand the Filter By field.
3. Choose a column to filter by from the leftmost element in the field.
4. Choose a constraint for the column.
5. Enter text to filter by.
6. Click Add. The system runs the new filter on the information in the list, and temporarily stores the new filter in the Filter By field.
7. (Optional) To add another filter:
a. Click Add.
b. Follow steps 3 through 6 to add another filter.
8. (Optional) Click Clear All to remove the list of filters and display the unfiltered list information.

Sorting Lists

Some pages within the Element OS Web UI enable you to sort list information by one or more columns. When viewing list information, refer to these steps to use the sorting functionality to arrange the list items.
1. To sort on a single column, click the column heading until the information is sorted in the desired order.
2. To sort using multiple columns, click the column heading for each column you wish to sort by until each column's information is sorted in the desired order. The Sort button appears when you sort using multiple columns.
3. To re-order the sort criteria:
a. Click Sort. The system populates the Sort By field with your column selections.
b. Arrange the columns in the Sort By field in the order you wish the list to be sorted. The system sorts the list information.
4. To remove a single sort criterion, click the Remove graphic (x) next to its name.
5. (Optional) To clear the sort criteria, click Clear All.

The Web UI and Cluster Load

Depending on API response times, the cluster may automatically adjust the data refresh interval for certain portions of the page you are viewing. The refresh interval is reset to the default when you reload the page in your browser. You can see the current refresh interval by clicking the cluster name in the upper right of the page.
Note that the interval controls how often API requests are made, not how quickly the data comes back from the server. When a cluster is under heavy load, it may queue API requests from the Web UI. In rare circumstances when system response is significantly delayed, such as a slow network connection combined with a busy cluster, you may be logged out of the Web UI if the system does not respond to queued API requests quickly enough. If you are redirected to the logout screen, you can log in again by dismissing any initial browser authentication prompt and clicking the log in link on the error page. Upon returning to the overview page, you may be prompted for cluster credentials if they are not saved by your browser.

Icon Reference

The following list provides a quick reference to the Element OS user interface icons: Refresh, Filter, Actions, Edit, Clone or copy, Delete or purge, Pair, Restore, Snapshot, Backup to, Restore from, and Rollback.

Providing Feedback

To help us improve our user interface and address any UI issues, use the simple feedback form that is accessible throughout the UI.
1. From any page in the Element OS UI, select the Feedback button.
2. Enter relevant information in the Summary and Description fields.
3. Attach any helpful screen captures.
4. Enter a name and email address.
5. Select the check box to include data about your current environment.
6. (Optional) Click the What is included in the data about my current environment? link to view the data that will be included.
7. Click Submit.

Reporting

The Reporting tab gives you information about the cluster's components and provides an overview of how the cluster is performing. Reporting opens to an overview of the cluster components and resources, from which you can get an immediate visual presentation of the health of the cluster.
See the following topics to learn about or perform reporting tasks:
Reporting Overview
Event Log
Alerts
iSCSI Sessions
Fibre Channel Sessions
Running Tasks
Volume Performance

Reporting Overview

The Reporting Overview is the primary view for the cluster managed by the Element OS Web UI. Capacity, efficiency, and performance information for the cluster is available from this page. Cluster information is presented in each pane and updated at frequent intervals. You can access the Reporting Overview page by entering the MVIP address in your web browser URL bar after a new cluster has been created.

Navigating the Reporting Overview

Viewing features in the Reporting Overview enable you to see details for certain data. You can hover over graph lines and reporting data with your pointer to display additional details.
Cluster Capacity: This bar graph summarizes the block capacity remaining for Block Storage, Metadata, and Provisioned space. These measurements are obtained with the GetClusterCapacity API method. Move the pointer over the progress bar to see threshold information.

Cluster Information: This pane shows identifying information specific to the cluster, such as the cluster name, the SVIP address, the number of nodes in the cluster, the number of 4k IOPS, the number of volumes on the cluster, the number of iSCSI sessions, and the version of Element OS running on the cluster.
Cluster Efficiency: This graph shows the amount of overall system capacity that is being utilized, taking into account thin provisioning, deduplication, and compression. The calculated benefit achieved on the cluster is compared to what the capacity utilization would be without thin provisioning, deduplication, and compression on a traditional storage device.
Cluster Input/Output: These graph lines show I/O currently running on the cluster. The values are calculated from the previous I/O measurement against the current I/O measurement. The measurements are obtained with the GetClusterStats API method. The graph lines update at 5-second intervals under normal workloads, but may take longer under heavier workloads. There are four measurements shown in the graph, each of which you can enable or disable in the graph:
Total: The combined read and write IOPS occurring in the system.
Read: The number of read IOPS occurring.
Write: The number of write IOPS.
Average I/O Size: The average size of any IOPS occurring in the system.
Throughput: This graph shows the bandwidth activity for read, write, and total bandwidth on the cluster. The throughput calculated for the bandwidth measurements is obtained with the GetClusterStats API method.
Read: Shows the read activity in megabytes for the cluster.
Write: Shows the write activity in megabytes for the cluster.
Total: Shows the total megabytes used for both read and write activity in the cluster.
Performance Utilization: This graph shows the percentage of cluster IOPS being consumed. For example, a 250K IOPS cluster running at 100K IOPS would show 40% consumption.
Cluster Health: This pane shows the general health of the cluster. Color codes indicate the following:
Red: Error
Orange: Critical
Green: Healthy

Provisioned IOPS: This is a summary of how volume IOPS may be overprovisioned on the cluster. The calculations sum the Min IOPS, Max IOPS, and Burst IOPS for all volumes on the cluster, and then divide each total by the Max IOPS rated for the cluster.
Example: If there are four volumes in the cluster, each with a Min IOPS of 500, a Max IOPS of 15,000, and a Burst IOPS of 15,000, the total Min IOPS would be 2,000, the total Max IOPS would be 60,000, and the total Burst IOPS would be 60,000. If the cluster is rated at a Max IOPS of 50,000, the calculations would be the following:
Minimum IOPS: 2,000 / 50,000 = 0.04x
Maximum IOPS: 60,000 / 50,000 = 1.20x
Burst IOPS: 60,000 / 50,000 = 1.20x
1.00x is the baseline at which provisioned IOPS equals the rated IOPS for the cluster.

Event Log

On the Reporting > Event Log page, you can view information about events detected in the system. The system refreshes the event messages every 30 seconds. The event log displays key events for the cluster. For every event, the following information is returned:
ID: Unique ID associated with each event.
Event Type: The type of event being logged, for example, API events or clone events. See Event Types for more information.
Message: Message associated with the event.
Details: Information that helps identify why the event occurred.
Service ID: The service that reported the event (if applicable).
Node: The node that reported the event (if applicable).
Drive ID: The drive that reported the event (if applicable).
Event Time: The time the event occurred.

Event Types

The system reports multiple types of events; each event is an operation that the system has completed. Events can be routine, normal events or events that require administrator attention. The Event Types column on the Event Log page indicates in which part of the system the event occurred.
NOTE: The system does not log read-only API commands in the event log.
The following table describes the types of events that may appear in the event log.

apiEvent: Events initiated by a user through an API or Web UI that modify settings.
binAssignmentsEvent: Events related to the assignment of data bins. Bins are essentially containers that hold data and are mapped across the cluster.
binSyncEvent: System events related to a reassignment of data among block services.
bsCheckEvent: System events related to block service checks.
bulkOpEvent: Events related to operations performed on an entire volume, such as a backup, restore, snapshot, or clone.
cloneEvent: Events related to volume cloning.
clusterMasterEvent: Events appearing upon cluster initialization or upon configuration changes to the cluster, such as adding or removing nodes.
dataEvent: Events related to reading and writing data.
dbEvent: Events related to the global database maintained by ensemble nodes in the cluster.
driveEvent: Events related to drive operations.
encryptionAtRestEvent: Events related to the process of encryption on a cluster.
ensembleEvent: Events related to increasing or decreasing the number of nodes in an ensemble.
fibreChannelEvent: Events related to the configuration of and connections to the Fibre Channel nodes.
gcEvent: Events related to processes run every 60 minutes to reclaim storage on block drives. This process is also known as garbage collection.
ieEvent: Internal system error.
installEvent: Automatic software installation events. Software is being automatically installed on a pending node.
iSCSIEvent: Events related to iSCSI issues in the system.
limitEvent: Events related to the number of volumes or virtual volumes in an account or in the cluster nearing the maximum allowed.
networkEvent: Events related to the status of virtual networking.
platformHardwareEvent: Events related to issues detected on hardware devices.
remoteClusterEvent: Events related to remote cluster pairing.
serviceEvent: Events related to system service status.
statEvent: Events related to system statistics.
sliceEvent: Events related to the Slice Server, such as removing a metadata drive or volume.
snmpTrapEvent: Events related to SNMP traps.
schedulerEvent: Events related to scheduled snapshots.
tsEvent: Events related to the system transport service.

unexpectedException: Events related to unexpected system exceptions.
vasaProviderEvent: Events related to a VASA (vSphere APIs for Storage Awareness) Provider.

Alerts

Alerts are cluster faults or errors and are reported as they occur on the system. Alerts can be informational messages, warnings, or errors, and are a good indicator of how well the cluster is running. Most errors resolve themselves automatically; however, some may require manual intervention.
The Alerts "bell" icon at the top right of the Web UI indicates the current number of active system alerts. By clicking the icon, you can see the type of each alert and the date it was triggered. If you click View All Alerts, the system displays the Reporting > Alerts page.
On the Reporting > Alerts page, you can view information about individual system alerts. On this page, you can click Show Details for an individual alert to view information about that alert, and you can view the details of all alerts on the page by expanding the Details column. The system refreshes the alerts on the page every 30 seconds. After the system resolves an alert, all information about the alert, including the date it was resolved, is moved to the Resolved area.
You can use the ListClusterFaults API method to automate alert monitoring. This enables you to be notified about all alerts that occur.
The following describes the columns on the page:
ID: Unique ID for a cluster alert.
Severity: One of the following:
warning: A minor issue that may soon require attention. System upgrades are still allowed at this severity level.
error: A failure that may cause performance degradation or loss of high availability (HA). Errors generally should not affect service otherwise.
critical: A serious failure that affects service. The system is unable to serve API or client I/O requests. Operating in this state could lead to potential loss of data.
bestPractice: A recommended system configuration best practice is not being used.
Type: One of the following:
node: Fault affecting an entire node.
drive: Fault affecting an individual drive.
cluster: Fault affecting the entire cluster.
service: Fault affecting a service on the cluster.
volume: Fault affecting a volume on the cluster.
Node: Node ID for the node that this fault refers to. Included for node and drive faults; otherwise set to - (dash).
Drive ID: Drive ID for the drive that this fault refers to. Included for drive faults; otherwise set to - (dash).
Error Code: A descriptive code that indicates what caused the fault.
Details: Description of the fault with additional details. You can view more information about the alert by clicking the Show Details button.
Date: The date and time the fault was logged.

Alert Error Codes

The system reports error codes with each alert on the Alerts page. Error codes help you determine which component of the system experienced the alert, and you can learn more about why the alert was generated using the information in the Details column. The following table outlines the different types of system alerts.

BlockServiceTooFull: A block service is using too much space and running low on capacity.
BlockServiceUnhealthy: The SolidFire Application cannot communicate with a Block Service. If this condition persists, the system relocates the data to another drive. Once the system relocates the data, you should reboot the unhealthy node to restore communication.
ClusterCannotSync: There is an out-of-space condition, and data on the offline block storage drives cannot be synced to drives that are still active.
ClusterFull: Stage 3 Cluster Full: Add additional capacity or free up capacity as soon as possible. Stage 4 Cluster Full: Due to high capacity consumption, Helix data protection will not recover if a node fails. Creating new volumes or snapshots is not permitted until additional capacity is available. Add additional capacity or free up capacity immediately.
ClusterIOPSAreOverProvisioned: The sum of all minimum QoS IOPS is greater than the expected IOPS of the cluster. The system cannot maintain minimum QoS in this condition. IOPS may need to be adjusted.
DisconnectedClusterPair: Paired clusters have become disconnected. Reestablish communication between the clusters.
DriveWearFault: A drive may need attention due to wear.
EnsembleDegraded: Power or network connectivity has been lost to one or more of the ensemble nodes. Restore network connectivity or power to the affected node.
Exception: A non-routine fault has been detected. This fault will not be cleared. Call NetApp SolidFire Support to resolve the exception fault.
FailedSpaceTooFull: A Slice Service is using space reserved for failed writes. Contact NetApp SolidFire Support at ng-sf-support@netapp.com.
FibreChannelAccessDegraded: A Fibre Channel node has stopped responding to the storage nodes in the cluster.
FibreChannelAccessUnavailable: All Fibre Channel nodes have become disconnected.
InconsistentMtus: Bond1G mismatch: Inconsistent MTUs detected on Bond1G interfaces. MTU to node ID mapping: <mapping of MTUs to nodes>. Bond10G mismatch: Inconsistent MTUs detected on Bond10G interfaces. MTU to node ID mapping: <mapping of MTUs to nodes>.
InvalidConfiguredFibreChannelNodeCount: There is only one Fibre Channel node configured in a cluster. For proper Fibre Channel operation, at least two Fibre Channel nodes must be configured in a cluster.
NotUsingLACPBondMode: LACP bonding mode is not configured. NetApp strongly recommends using LACP bonding when deploying SF-Series and newer nodes; clients may experience timeouts if LACP is not configured.
SliceServiceUnhealthy: The SolidFire Application cannot communicate with a metadata service.
SliceServiceTooFull: A Slice Service is using too much space and running low on capacity.
ProvisionedSpaceTooFull: The overall provisioned capacity of the cluster is too full.
VolumeDegraded: Secondary volumes have not finished replicating and syncing.

NodeHardwareFault: The system has detected a hardware misconfiguration or a component that is not functioning as expected.
Upgrade: The software on one or more nodes is being upgraded.
UnbalancedMixedNodes: The storage on the mix of nodes in a cluster has become unbalanced in a way that may degrade performance.
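As noted earlier in this section, the ListClusterFaults API method can be used to automate monitoring of these alerts. The sketch below filters a parsed ListClusterFaults result by severity; the faults, severity, and resolved field names are assumptions based on the Alerts columns described above and should be checked against the API reference.

```python
# Severity levels from the Alerts page, ordered from least to most severe.
SEVERITY_ORDER = {"bestPractice": 0, "warning": 1, "error": 2, "critical": 3}

def active_faults(result, min_severity="error"):
    """Return unresolved faults at or above a severity threshold.

    `result` is the parsed JSON result of a ListClusterFaults call;
    the field names used here are assumptions.
    """
    floor = SEVERITY_ORDER[min_severity]
    return [
        fault for fault in result.get("faults", [])
        if not fault.get("resolved")
        and SEVERITY_ORDER.get(fault.get("severity"), 0) >= floor
    ]

# Example with a hand-built response:
sample = {"faults": [
    {"code": "ClusterFull", "severity": "critical", "resolved": False},
    {"code": "NotUsingLACPBondMode", "severity": "warning", "resolved": False},
    {"code": "DriveWearFault", "severity": "error", "resolved": True},
]}
print([f["code"] for f in active_faults(sample)])   # ['ClusterFull']
```

A monitoring script could poll this on an interval and page an operator only for error and critical faults, while logging warnings and best-practice notices.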

iSCSI Sessions

You can view the iSCSI sessions that are connected to the cluster. You can filter the information in the iSCSI Sessions window to include only the desired sessions. Click the Filter button to see the filter criteria fields. The following describes the iSCSI session details:
Node: The node hosting the primary metadata partition for the volume.
Account: The name of the account that owns the volume. If the value is blank, a dash (-) is displayed.
Volume: The volume name identified on the node.
Volume ID: ID of the volume associated with the Target IQN.
Initiator ID: A system-generated ID for the initiator.
Initiator Alias: An optional name for the initiator that makes finding the initiator easier when in a long list.
Initiator IP: The IP address of the endpoint that initiates the session.
Initiator IQN: The IQN of the endpoint that initiates the session.
Target IP: The IP address of the node hosting the volume.
Target IQN: The IQN of the volume.
Created On: Date the session was established.

Fibre Channel Sessions

On the Reporting > FC Sessions page, you can view the active Fibre Channel sessions that are connected to the cluster. You can filter information on this page to include only those connections you want displayed in the window. Click the Filter button to see the filter criteria fields. The following describes the information on the page:
Node ID: The Fibre Channel node hosting the session for the connection.
Node Name: System-generated node name.
Initiator ID: A system-generated ID for the initiator.
Initiator WWPN: The initiating worldwide port name.
Initiator Alias: An optional name for the initiator that makes finding the initiator easier when in a long list.
Target WWPN: The target worldwide port name.
Volume Access Group: Name of the volume access group that the session belongs to.
Volume Access Group ID: System-generated ID for the access group.

Running Tasks

You can view the progress and completion status of running tasks in the Web UI that are reported by the ListSyncJobs and ListBulkVolumeJobs API methods. You can access the Running Tasks page by clicking Reporting > Running Tasks or by clicking the running task icon at the top of the Web UI.
The progress of long-running tasks such as volume replication, cloning operations, backup and restore operations, and block syncing tasks appears if any are currently running. If there are a large number of tasks, the system may queue them and run them in batches. The Running Tasks page displays the services currently being synced. When a task is complete, it is replaced by the next queued syncing task. Syncing tasks may continue to appear on the Running Tasks page until there are no more tasks to complete.
NOTE: You can see replication sync data for volumes undergoing replication on the Running Tasks page of the cluster containing the target volume.

Volume Performance

On the Reporting > Volume Performance page, you can view detailed performance information for all volumes in the cluster. You can sort the information by volume ID or by any of the performance columns. You can also use the Filter button to filter the information by certain criteria.
You can change how often the system refreshes performance information on the page by clicking the Refresh every list and choosing a different value. The default refresh interval is 10 seconds if the cluster has fewer than 1,000 volumes; otherwise, the default is 60 seconds. If you choose a value of Never, automatic page refreshing is disabled.
After 10 minutes of inactivity on this page, a dialog appears and automatic refreshing is paused. This helps prevent heavily loaded systems from becoming overwhelmed with polling requests. You can re-enable automatic refreshing by clicking Turn on autorefresh, or clear the dialog while keeping automatic refreshing disabled by clicking Cancel.
Volume Performance Details

On the Reporting > Volume Performance page, you can view the following information:
ID: The system-generated ID for the volume.
Name: The name given to the volume when it was created.
Account: The name of the account assigned to the volume.
Access Groups: The name of the volume access group or groups to which the volume belongs.
Volume Utilization: A percentage value that describes how much the client is using the volume. Values: 0 = client is not using the volume; 100 = client is using their max; >100 = client is using their burst.
Total IOPS: The total number of IOPS (read and write) currently being executed against the volume.
Read IOPS: The total number of read IOPS currently being executed against the volume.
Write IOPS: The total number of write IOPS currently being executed against the volume.
Total Throughput: The total amount of throughput (read and write) currently being executed against the volume.
Read Throughput: The total amount of read throughput currently being executed against the volume.

Write Throughput: The total amount of write throughput currently being executed against the volume.
Total Latency: The average time, in microseconds, to complete read and write operations to a volume.
Read Latency: The average time, in microseconds, to complete read operations to the volume in the last 500 milliseconds.
Write Latency: The average time, in microseconds, to complete write operations to a volume in the last 500 milliseconds.
Queue Depth: The number of outstanding read and write operations to the volume.
Average IO Size: Average size, in bytes, of recent I/O to the volume in the last 500 milliseconds.

Viewing Individual Volume Performance Details

On the Reporting > Volume Performance page, you can view performance statistics for individual volumes. This format provides general information as well as statistics for IOPS, throughput, queue depth, and latency for each volume.
1. Go to Reporting > Volume Performance.
2. In the volume list, click the Actions button for a volume.
3. Click View Details. A tray appears at the bottom of the page containing general information about the volume.
4. To see more detailed information about the volume, click See More Details. The system displays detailed information as well as performance graphs for the volume.
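The Volume Utilization column described above reports client activity relative to the volume's QoS settings: 0 means idle, 100 means the client is at its Max IOPS, and values above 100 mean the client is drawing on Burst IOPS. A minimal sketch of that interpretation, assuming utilization is simply current IOPS divided by the Max IOPS setting (the exact formula used by the Web UI is not documented here):

```python
def volume_utilization_pct(total_iops, max_iops):
    """Interpret the Volume Utilization column: 0 = idle, 100 = at the
    Max IOPS QoS setting, >100 = drawing on Burst IOPS. The exact
    formula used by the Web UI is an assumption."""
    return round(100.0 * total_iops / max_iops, 1)

print(volume_utilization_pct(0, 15000))       # 0.0   -> client not using the volume
print(volume_utilization_pct(15000, 15000))   # 100.0 -> client at its max
print(volume_utilization_pct(18000, 15000))   # 120.0 -> client in burst
```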

Management

The Management tab enables you to work with volumes and accounts. See the following topics to learn about or perform management tasks:
Volumes
Accounts
Volume Access Groups
Initiators
QoS Policies

Volumes

The SolidFire system provisions storage using volumes. Volumes are block devices accessed over the network by iSCSI or Fibre Channel clients. On the Management > Volumes page, you can create, modify, clone, and delete volumes on a node. You can also view statistics about volume bandwidth and I/O usage.
The default view for the volumes list is the Active volumes view. This view shows information that was used to create each volume.
See the following topics to learn about or perform volume-related tasks:
Volume Details
Creating a New Volume
Editing Active Volumes
Cloning a Volume
Adding Multiple Volumes to an Access Group
Applying a QoS Policy to Multiple Volumes
Disassociating a QoS Policy from a Volume
Volume Pairing
Pairing Volumes
Pairing Volumes Using a Volume ID
Pairing Volumes Using a Pairing Key
Editing Volume Pairs
Deleting Volume Pairs
Switching from a Source to a Target Volume
Validating Paired Volumes
Validating Data Transmission
Deleting a Volume
Resizing a Volume
Restoring a Deleted Volume
Purging a Volume
Volume Backup and Restore Operations
Viewing Individual Volume Details

Volume Details

On the Management > Volumes page, you can view the following information in the list of active volumes:
ID: The system-generated ID for the volume.
Name: The name given to the volume when it was created.
Account: The name of, and a link to, the account assigned to the volume.
Access Groups: The name of the volume access group or groups to which the volume belongs.
Access: The type of access assigned to the volume when it was created. Possible values:
Read/Write: All reads and writes are accepted.
Read Only: All read activity allowed; no writes allowed.
Locked: Only Administrator access allowed.
ReplicationTarget: Designated as a target volume in a replicated volume pair.
Used: The percentage of used space in the volume.
Size: The total size (in GB) of the volume.
Snapshots: The number of snapshots created for the volume.
QoS Policy: The name of, and a link to, the user-defined QoS policy.
Min IOPS: The minimum number of IOPS guaranteed for the volume.
Max IOPS: The maximum number of IOPS allowed for the volume.
Burst IOPS: The maximum number of IOPS allowed over a short period of time for the volume. Default = 15,000.
Attributes: Attributes that have been assigned to the volume as key/value pairs through an API method.
512e: Identifies whether 512e is enabled on a volume. Can be either Yes or No.
Created On: The date and time that the volume was created.

Creating a New Volume

You can create a new volume and associate the volume with a given account (every volume must be associated with an account). This association gives the account access to the volume through the iSCSI initiators using the CHAP credentials. You can specify QoS settings for a volume during creation. See QoS Policies for more information.
1. Go to Management > Volumes.
2. Click Create Volume.
3. In the Create a New Volume dialog, enter the Volume Name (must be from 1 through 64 characters).

4. Enter the total size of the volume.
NOTE: The default volume size selection is in GB. You can create volumes using sizes measured in GB or GiB: 1 GB = 1,000,000,000 bytes; 1 GiB = 1,073,741,824 bytes.
5. Select a Block Size for the volume.
6. Click the Account drop-down list and select the account that should have access to the volume. If an account does not exist, click the Create Account link, enter a new account name, and click Create. The account is created and associated with the new volume.
NOTE: If there are more than 50 accounts, the list does not appear. Begin typing, and the auto-complete function displays possible values for you to choose.
7. To set the Quality of Service, select from the following:
a. Under Policy, you can select an existing QoS policy, if available.
b. Under Custom Settings, you can enter values or accept the default IOPS values.
Caution: Volumes that have a Max or Burst IOPS value greater than 20,000 IOPS may require high queue depth or multiple sessions to achieve this level of IOPS on a single volume.
8. Click Create Volume.

Editing Active Volumes

You can use the Edit Volume dialog to modify volume attributes such as QoS values, volume size, and the unit of measurement in which byte values are calculated. You can also modify account access for replication usage or to restrict access to the volume.
NOTE: You can extend the size of a volume that is configured for replication to prevent replication errors. First increase the size of the volume assigned as the replication target, and then resize the source volume. NetApp recommends that the target and source volumes be the same size.
1. Go to Management > Volumes.
2. In the Active window, click the Actions button for the volume you wish to edit.
3. Click Edit.
4. In the Edit Volume dialog, enter the new attributes for the volume.
NOTE: You can increase, but not decrease, the size of the volume. For more information, see Resizing a Volume.
When you change IOPS values, NetApp recommends using increments of tens or hundreds. Input values must be valid whole numbers.
Best Practice: Configure volumes with an extremely high burst value. This allows the system to process occasional large-block sequential workloads more quickly, while still constraining the sustained IOPS for the volume.
5. Click Save Changes.
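For scripted environments, the same create and edit operations are available through the Element JSON-RPC API (an HTTPS POST to the cluster's json-rpc endpoint). The sketch below only builds request bodies and sends nothing; the method and parameter names (CreateVolume, ModifyVolume, totalSize, qos) follow the Element API, but verify them against the API reference for your Element OS version before use.

```python
# Sketch: Element API JSON-RPC request bodies for volume creation and QoS
# edits. Nothing is sent over the network here; in a real script, POST these
# bodies to https://<mvip>/json-rpc/10.0 with cluster admin credentials.

GB = 1_000_000_000       # 1 GB  = 10**9 bytes (decimal gigabyte)
GIB = 1_073_741_824      # 1 GiB = 2**30 bytes (binary gibibyte)

def create_volume_request(name, account_id, size_bytes,
                          min_iops=50, max_iops=15000, burst_iops=15000):
    """Build a CreateVolume request body; sizes are passed in bytes."""
    if not 1 <= len(name) <= 64:
        raise ValueError("volume name must be 1 through 64 characters")
    return {
        "method": "CreateVolume",
        "params": {
            "name": name,
            "accountID": account_id,
            "totalSize": size_bytes,
            "enable512e": True,
            "qos": {"minIOPS": min_iops,
                    "maxIOPS": max_iops,
                    "burstIOPS": burst_iops},
        },
        "id": 1,
    }

def modify_volume_qos_request(volume_id, min_iops, max_iops, burst_iops):
    """Build a ModifyVolume request that changes only the QoS values."""
    return {
        "method": "ModifyVolume",
        "params": {"volumeID": volume_id,
                   "qos": {"minIOPS": min_iops,
                           "maxIOPS": max_iops,
                           "burstIOPS": burst_iops}},
        "id": 1,
    }

# A 100 GiB volume for a hypothetical account with ID 7:
req = create_volume_request("db-vol-01", account_id=7, size_bytes=100 * GIB)
```

Because sizes are expressed in bytes, the GB/GiB distinction from the note above matters: 100 GB and 100 GiB differ by roughly 7.4 GB.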

Cloning a Volume

You can create a clone of a single volume or of multiple volumes to make a point-in-time copy of the data. When you clone a volume, the system creates a snapshot of the volume and then creates a copy of the data referenced by the snapshot. This is an asynchronous process, and the time it takes varies with the size of the volume you are cloning and the current cluster load. The cluster supports up to two running clone requests per volume at a time and up to eight active volume clone operations at a time. Requests beyond these limits are queued for later processing.
Caution: Before you truncate a cloned volume by cloning to a smaller size, ensure that you prepare the partitions so that they fit into the smaller volume.
1. Go to Management > Volumes.
2. To clone a single volume:
a. In the list of volumes on the Active page, click the Actions button for the volume you wish to clone.
b. In the resulting menu, click Clone.
c. In the Clone Volume window, enter a Volume Name for the newly cloned volume.
d. Select a size and measurement for the volume using the Volume Size spin box and list.
NOTE: The default volume size selection is in GB. You can create volumes using sizes measured in GB or GiB: 1 GB = 1,000,000,000 bytes; 1 GiB = 1,073,741,824 bytes.
e. Select the type of Access for the newly cloned volume.
f. Select an account to associate with the newly cloned volume from the Account list.
NOTE: You can create an account during this step by clicking the Create Account link, entering an account name, and clicking Create. The system automatically adds the account to the Account list after you create it.
g. Click Start Cloning.
3. To clone multiple volumes:
a. In the list of volumes on the Active page, check the box next to any volumes you wish to clone.
b. Click Bulk Actions.
c. In the resulting menu, select Clone.
d. In the Clone Multiple Volumes dialog, enter a prefix for the cloned volumes in the New Volume Name Prefix field.
e.
Select an account to associate with the cloned volumes from the Account list.
f. Select the type of Access for the cloned volumes.
g. Click Start Cloning.
NOTE: Increasing the volume size of a clone results in a new volume with additional free space at the end of the volume. Depending on how you use the volume, you may need to extend partitions or create new partitions in the free space to make use of it.
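The single and bulk clone procedures above map to the Element API's CloneVolume method. The sketch below builds one request per selected volume; the method and parameter names follow the Element API, but the "<prefix>-<id>" naming convention for bulk clones is this sketch's own assumption, not the UI's documented naming rule.

```python
# Sketch: CloneVolume request bodies for single and bulk clones.
# Nothing is sent; these are request dicts for a JSON-RPC POST.

def clone_volume_request(volume_id, new_name, new_size=None,
                         access="readWrite", account_id=None):
    """Build a CloneVolume request for one source volume."""
    params = {"volumeID": volume_id, "name": new_name, "access": access}
    if new_size is not None:
        params["newSize"] = new_size   # bytes; larger than source is allowed
    if account_id is not None:
        params["newAccountID"] = account_id
    return {"method": "CloneVolume", "params": params, "id": 1}

def bulk_clone_requests(volume_ids, prefix, access="readWrite"):
    """One CloneVolume request per selected volume, named <prefix>-<id>
    (an illustrative naming scheme, not the UI's exact behavior)."""
    return [clone_volume_request(vid, f"{prefix}-{vid}", access=access)
            for vid in volume_ids]

reqs = bulk_clone_requests([11, 12, 13], "nightly")
```

Since the cluster runs at most two clone requests per volume and eight in total, a real script would submit these requests and let the cluster queue the overflow.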

Adding Multiple Volumes to an Access Group

You can add multiple volumes to a volume access group. Use this process when you want to bulk add one or more volumes to a volume access group.
NOTE: You can also use this procedure to add volumes to a Fibre Channel volume access group.
1. Go to Management > Volumes.
2. In the list of volumes, check the box next to any volumes you wish to add.
3. Click Bulk Actions.
4. In the resulting menu, click Add to Volume Access Group.
5. Select the access group to which you want to add the volumes.
6. Click Add.

Applying a QoS Policy to Multiple Volumes

You can apply an existing QoS policy to multiple volumes. Use this process when you want to bulk apply a policy to one or more volumes.
Prerequisite: The QoS policy you want to bulk apply exists. See Creating a QoS Policy for more information.
1. Go to Management > Volumes.
2. In the list of volumes, check the box next to any volumes you wish to apply the QoS policy to.
3. Click Bulk Actions.
4. In the resulting menu, click Apply QoS Policy.
5. Select the QoS policy from the drop-down list.
6. Click Apply.

Disassociating a QoS Policy from a Volume

You can disassociate a QoS policy from a volume by using the Edit Volume dialog.
Prerequisite: The volume you want to modify is associated with a QoS policy.
1. Go to Management > Volumes.
2. Click the Actions button for the volume you wish to remove a QoS policy from.
3. Click Edit.
4. In the resulting menu, under Quality of Service, click Custom Settings.
5. Modify the Min IOPS, Max IOPS, and Burst IOPS, or keep the default settings.
6. Click Save Changes.
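Scripted equivalents of the bulk QoS operations above can be built as per-volume ModifyVolume requests. The qosPolicyID parameter name is an assumption based on the Element API and should be checked against the reference for your version; disassociating a policy is modeled here the same way the UI does it, by supplying explicit custom QoS values instead of a policy.

```python
# Sketch: bulk-applying a QoS policy, and detaching one, via ModifyVolume.
# Parameter names (qosPolicyID, minIOPS, ...) are assumptions to verify.

def apply_qos_policy_requests(volume_ids, qos_policy_id):
    """Build one ModifyVolume request per volume, attaching the policy."""
    return [{"method": "ModifyVolume",
             "params": {"volumeID": vid, "qosPolicyID": qos_policy_id},
             "id": i + 1}
            for i, vid in enumerate(volume_ids)]

def detach_qos_policy_request(volume_id, min_iops, max_iops, burst_iops):
    """Disassociate a policy by supplying custom QoS values instead."""
    return {"method": "ModifyVolume",
            "params": {"volumeID": volume_id,
                       "qos": {"minIOPS": min_iops, "maxIOPS": max_iops,
                               "burstIOPS": burst_iops}},
            "id": 1}

reqs = apply_qos_policy_requests([3, 4, 5], qos_policy_id=2)
```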

Volume Pairing

You can pair volumes residing on different clusters to replicate data for data protection. After the volumes are paired, data can be replicated between two volumes that reside on two different clusters. When a connection between two volumes has been established, you can validate the pairing in the Volume Pairs window in the Web UI. See Volume Pairs for more information. Both clusters where the volumes reside must be paired before you can pair the volumes. See Cluster Pairing for cluster pairing instructions.

You can pair volumes in one of three different modes for synchronization:
Real-time (Asynchronous): Writes are acknowledged to the client once they are committed on the source cluster.
Real-time (Synchronous): Writes are acknowledged to the client once they are committed on both the source and target clusters.
Snapshots Only: Only snapshots created on the source cluster are replicated. Active writes from the source volume are not replicated.

Starting the Volume Pairing

There are two methods to start the volume pairing, described in this section. After pairing is completed, you can validate the pairing in the Web UI. The two methods are:
Pairing Volumes Using a Volume ID: Use this method if you have Cluster Admin access to both clusters to be paired. This method uses the VolumeID of the volume on the remote cluster.
Pairing Volumes Using a Pairing Key: Use this method if you have Cluster Admin access to only the source cluster. This method generates a pairing key that can be used on the remote cluster to complete the volume pair.
NOTE: The volume pairing key contains an encrypted version of the volume information and may contain sensitive cluster information. Share this key only in a secure manner.
Once the volumes are paired, one of the volumes must be identified as the replication target.
Pairing Volumes: If a pairing key was used to start the pairing, that key is used on the remote cluster to complete the volume pairing.
Switching from a Source to a Target Volume: One of the volumes must be identified as the Replication Target before the volumes start replicating data. Use Modify Volume to edit the volume and change the access status to Replication Target.
Validating Paired Volumes: You can validate the volume pairing in the Web UI.
The following diagram shows how volumes can be paired on two clusters. As a rule, volume pairing is one-to-one: once a volume is paired with a volume on another cluster, you cannot pair it again with any other volume. Volume pairing can be asynchronous, synchronous, or snapshot-only.

Pairing Volumes Using a Volume ID

Follow this procedure to pair two volumes if you have cluster admin credentials for the remote cluster.

Prerequisites
Ensure that the clusters containing the volumes are paired.
Create a new target volume on the remote cluster. It is highly recommended that the target volume contain no data.
Know the target Volume ID.
Edit the access mode on the target volume and set it to Replication Target.
Edit the access mode on the source volume and set it to Read / Write.
The target volume must have the same characteristics as the source volume, such as size, 512e setting, and QoS configuration. The target volume can be larger than the source volume, but it cannot be smaller.

1. Go to Management > Volumes.
2. Click the Actions button for the volume you want to pair.
3. Click Pair.
4. In the Pair Volume dialog, select Start Pairing.
5. Select I Do to indicate that you have access to the remote cluster.
6. Select a Replication Mode from the list:
Real-time (Asynchronous): Writes are acknowledged to the client once they are committed on the source cluster.
Real-time (Synchronous): Writes are acknowledged to the client once they are committed on both the source and target clusters.
Snapshots Only: Only snapshots created on the source cluster are replicated. Active writes from the source volume are not replicated.
7. Select a Remote Cluster from the list.
8. Choose a Remote Volume ID.
9. Click Start Pairing. The system opens a web browser tab that connects to the web UI of the remote cluster.

You may be required to log on to the remote cluster with cluster admin credentials.
10. In the web UI of the remote cluster, select Complete Pairing.
11. Confirm the details in Confirm Volume Pairing.
12. Click Complete Pairing. The two clusters begin the process of connecting the volumes for pairing. During the pairing process, you can see progress messages in the Volume Status column of the Volume Pairs window. See Volume Pairing Messages or Volume Pairing Warnings for more information.

Pairing Volumes Using a Pairing Key

Follow this procedure to pair two volumes if you cannot log in to the remote cluster.

Prerequisites
Ensure that the clusters containing the volumes are paired.

1. Go to Management > Volumes.
2. Click the Actions button for the volume you want to pair.
3. Click Pair.
4. In the Pair Volume dialog, select Start Pairing.
5. Select I Do Not to indicate that you do not have access to the remote cluster.
6. Select a Replication Mode from the list:
Real-time (Asynchronous): Writes are acknowledged to the client once they are committed on the source cluster.
Real-time (Synchronous): Writes are acknowledged to the client once they are committed on both the source and target clusters.
Snapshots Only: Only snapshots created on the source cluster are replicated. Active writes from the source volume are not replicated.
7. Click Generate Key. The system generates a pairing key in the Pairing Key box.
8. Copy the pairing key to your computer's clipboard.
9. Gain access to the remote cluster, or have an administrator with access to the remote cluster complete the following steps for you.
10. In the remote cluster web UI, click Management > Volumes.
11. Click the Actions button for the volume you want to pair.
12. Click Pair.
13. In the Pair Volume dialog, select Complete Pairing.
14. Paste the pairing key from the original cluster into the Pairing Key box.
15. Click Complete Pairing.
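The pairing-key flow above corresponds to two Element API calls: StartVolumePairing on the source returns a volumePairingKey, and CompleteVolumePairing consumes it on the remote cluster. The sketch below builds both request bodies; the method and parameter names follow the Element API, and the key value shown is a placeholder, not a real key format.

```python
# Sketch: StartVolumePairing / CompleteVolumePairing request bodies.
# Method and parameter names follow the Element API; verify per version.

REPLICATION_MODES = ("Async", "Sync", "SnapshotsOnly")

def start_volume_pairing_request(volume_id, mode="Async"):
    """Source-cluster call; the response carries the volumePairingKey."""
    if mode not in REPLICATION_MODES:
        raise ValueError(f"mode must be one of {REPLICATION_MODES}")
    return {"method": "StartVolumePairing",
            "params": {"volumeID": volume_id, "mode": mode},
            "id": 1}

def complete_volume_pairing_request(volume_id, pairing_key):
    """Remote-cluster call that consumes the key from the source."""
    return {"method": "CompleteVolumePairing",
            "params": {"volumeID": volume_id,
                       "volumePairingKey": pairing_key},
            "id": 1}

src = start_volume_pairing_request(42, mode="SnapshotsOnly")
# Suppose the source cluster's response contained this (placeholder) key:
tgt = complete_volume_pairing_request(88, "placeholder-pairing-key")
```

As the note in this section says, the key may contain sensitive cluster information, so a real script should transfer it over a secure channel only.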
Editing Volume Pairs

Follow these steps to make changes to volume pairs.
1. Go to Data Protection > Volume Pairs.

2. Click the Actions button that corresponds to the volume pair you want to edit.
3. Click Edit.
4. In the Edit Volume Pair pane, make any required changes.
5. Click Save Changes.

Deleting Volume Pairs

Follow these steps to remove the volume pair association.
1. Go to Data Protection > Volume Pairs.
2. Click the Actions button that corresponds to the volume pair you want to delete.
3. Click Delete.
4. Confirm the message.

Switching from a Source to a Target Volume

When volumes are paired, you can redirect data sent to a source volume to a remote target volume should the source volume become unavailable. You must perform the following procedure on each of the clusters that contain the source and target volumes.

Prerequisites
You must have access to the clusters containing the source and target volumes.

1. Log in to the cluster containing the paired target volume.
2. Go to Management > Volumes.
3. Click the Actions button for the volume you want to modify.
4. Click Edit.
5. In the Access drop-down, select Read / Write.
6. Click Save Changes.
7. Log in to the cluster containing the paired source volume.
8. Go to Management > Volumes.
9. Click the Actions button for the volume you want to modify.
10. Click Edit.
11. In the Access drop-down, select Replication Target.
12. Click Save Changes.

Validating Paired Volumes

You can validate in the Web UI that two volumes have been successfully paired. See Volume Pairs for information about volumes that are paired.
1. Go to Data Protection > Volume Pairs.
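The access-mode swap behind the switching procedure above can be expressed as two ModifyVolume requests. The API-side access values "readWrite" and "replicationTarget" follow the Element API naming, which should be confirmed against the reference for your version.

```python
# Sketch: promoting a paired target to read/write and demoting the old
# source, as two ModifyVolume request bodies (one per cluster).

ACCESS_MODES = {"readWrite", "readOnly", "locked", "replicationTarget"}

def set_volume_access_request(volume_id, access):
    """Build a ModifyVolume request that changes only the access mode."""
    if access not in ACCESS_MODES:
        raise ValueError(f"access must be one of {sorted(ACCESS_MODES)}")
    return {"method": "ModifyVolume",
            "params": {"volumeID": volume_id, "access": access},
            "id": 1}

def failover_requests(old_source_id, old_target_id):
    """Promote the target to read/write; demote the source to target.
    The first request goes to the target's cluster, the second to the
    source's cluster, mirroring the two-cluster procedure above."""
    return [set_volume_access_request(old_target_id, "readWrite"),
            set_volume_access_request(old_source_id, "replicationTarget")]
```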

2. View the status of the volume pairing.
NOTE: Volumes that are paired using a pairing key appear in the Replicating Volumes column after the pairing process has been completed at the remote location.

Validating Data Transmission

You can validate that data is being sent from the source to the target by making a clone of the target volume. After the volume is cloned, you can mount the clone to check that data is being properly sent between the source and target volumes.

Deleting a Volume

You can delete one or more volumes from a SolidFire cluster. The system does not immediately purge a deleted volume; the volume remains available for approximately eight hours. If you restore a volume before the system purges it, the volume comes back online and iSCSI connections are restored.
Caution: When you delete a volume, any snapshots of that volume are also deleted.
1. Go to Management > Volumes.
2. To delete a single volume:
a. Click the Actions button for the volume you want to delete.
b. In the resulting menu, click Delete.
c. Confirm the action. The system moves the volume to the Deleted area of the Volumes page.
3. To delete multiple volumes:
a. In the list of volumes, check the box next to any volumes you wish to delete.
b. Click Bulk Actions.
c. In the resulting menu, click Delete.
d. Confirm the action. The system moves the volumes to the Deleted area of the Volumes page.

Resizing a Volume

You can increase (but not decrease) the size of a volume. You can resize only one volume in a single resizing operation. Garbage collection operations and software upgrades do not interrupt the resizing operation. You can resize a volume under the following conditions:
Normal operating conditions.
Volume errors or failures are being reported.
The volume is being cloned.
The volume is being resynced.
NOTE: The default volume size selection is in GB.
You can create volumes using sizes measured in GB or GiB: 1 GB = 1,000,000,000 bytes; 1 GiB = 1,073,741,824 bytes.

NOTE: You cannot resize volumes on a cluster that is full.
1. Go to Management > Volumes.
2. Click the Actions button for the volume you want to resize.
3. In the resulting menu, click Edit.
4. In the Edit Volume dialog, enter the new volume size.
5. Click Save Changes.

Restoring a Deleted Volume

You can restore a volume in the SolidFire system if it has been deleted but not yet purged. The system automatically purges a volume approximately eight hours after it has been deleted. If the system has purged the volume, you cannot restore it.
1. Go to Management > Volumes.
2. Click the Deleted tab to view the list of deleted volumes.
3. Click the Actions button for the volume you wish to restore.
4. In the resulting menu, click Restore.
5. Confirm the action. The volume is placed in the Active volumes list, and iSCSI connections to the volume are restored.

Purging a Volume

You can manually purge a volume after you have deleted it. The system automatically purges deleted volumes eight hours after deletion. However, if you want to purge a volume before the scheduled purge time, you can perform a manual purge using the following steps.
Caution: When a volume is purged, it is permanently removed from the system. All data in the volume is lost.
1. Go to Management > Volumes.
2. Click the Deleted button.
3. To purge a single volume:
a. Click the Actions button for the volume you wish to purge.
b. Click Purge.
c. In the confirmation dialog, confirm the action.
4. To purge multiple volumes:
a. Check the boxes next to the volumes you wish to purge.
b. Click Bulk Actions.
c. In the resulting menu, select Purge.
d. In the confirmation dialog, confirm the action.
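The delete, restore, and purge operations above form a small lifecycle: a deleted volume can be restored only until the automatic purge runs, roughly eight hours after deletion. The sketch below models that window and builds the matching request bodies; DeleteVolume, RestoreDeletedVolume, and PurgeDeletedVolume are Element API method names, to be verified against your version's reference.

```python
# Sketch: the delete -> restore-or-purge lifecycle. can_restore() models
# the approximately-eight-hour window before the automatic purge; the
# request builder produces the JSON-RPC bodies for each lifecycle step.

PURGE_WINDOW_HOURS = 8

def can_restore(hours_since_delete):
    """A deleted volume can be restored until the automatic purge runs."""
    return hours_since_delete < PURGE_WINDOW_HOURS

def volume_lifecycle_request(method, volume_id):
    """Build a DeleteVolume / RestoreDeletedVolume / PurgeDeletedVolume
    request body for the given volume."""
    if method not in ("DeleteVolume", "RestoreDeletedVolume",
                      "PurgeDeletedVolume"):
        raise ValueError("unknown lifecycle method")
    return {"method": method, "params": {"volumeID": volume_id}, "id": 1}

# Three hours after deletion the volume is still restorable:
restorable = can_restore(3)
req = volume_lifecycle_request("RestoreDeletedVolume", 12)
```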

Volume Backup and Restore Operations

You can configure the system to back up and restore the contents of a volume to and from an object store container that is external to SolidFire storage. You can also back up and restore data to and from remote SolidFire storage systems. You can view the progress of each backup and restore operation on the Reporting > Running Tasks page. The system logs the results of each operation in the Event Log.
NOTE: You can run a maximum of two backup or restore processes at a time on a volume.

Volume Backup Operations

You can back up SolidFire volumes to SolidFire storage, as well as to secondary object stores that are compatible with Amazon S3 or OpenStack Swift.

Backing Up a Volume to an Amazon S3 Object Store

You can back up SolidFire volumes to external object stores that are compatible with Amazon S3.
1. Go to Management > Volumes.
2. Click the Actions button for the volume you wish to back up.
3. In the resulting menu, click Backup to.
4. In the Integrated Backup dialog under Backup to, select S3.
5. Select an option under Data Format:
Native: A compressed format readable only by SolidFire storage systems.
Uncompressed: An uncompressed format compatible with other systems.
6. Enter a hostname to use to access the object store in the Hostname field.
7. Enter an access key ID for the account in the Access Key ID field.
8. Enter the secret access key for the account in the Secret Access Key field.
9. Enter the S3 bucket in which to store the backup in the S3 Bucket field.
10. (Optional) Enter a nametag to append to the prefix in the Nametag field.
11. Click Start Read.

Backing Up a Volume to an OpenStack Swift Object Store

You can back up SolidFire volumes to external object stores that are compatible with OpenStack Swift.
1. Go to Management > Volumes.
2. Click the Actions button for the volume to back up.
3. In the resulting menu, click Backup to.
4.
In the Integrated Backup dialog under Backup to, select Swift.
5. Select a data format under Data Format:
Native: A compressed format readable only by SolidFire storage systems.
Uncompressed: An uncompressed format compatible with other systems.
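An integrated backup is started through the Element API with StartBulkVolumeRead. The method name and the "native"/"uncompressed" format values follow the API, but the exact scriptParameters layout for S3 and Swift targets varies by version, so the script name and every field inside "write" below are illustrative assumptions only; consult the API reference before relying on them.

```python
# Sketch: a StartBulkVolumeRead request body for an integrated backup.
# The "script" name and the keys inside write_params are ASSUMPTIONS,
# not verified API fields; only method name and format values follow
# the Element API.

def start_backup_request(volume_id, fmt, write_params):
    """Build a backup request; fmt is 'native' or 'uncompressed'."""
    if fmt not in ("native", "uncompressed"):
        raise ValueError("format must be 'native' or 'uncompressed'")
    return {"method": "StartBulkVolumeRead",
            "params": {"volumeID": volume_id,
                       "format": fmt,
                       "script": "bv_internal.py",      # assumed name
                       "scriptParameters": {"write": write_params}},
            "id": 1}

s3_write = {  # illustrative keys; check the API reference for real names
    "endpoint": "s3",
    "hostname": "s3.example.com",
    "bucket": "sf-backups",
    "accessKeyID": "EXAMPLE-KEY-ID",
    "secretAccessKey": "EXAMPLE-SECRET",
}
req = start_backup_request(17, "native", s3_write)
```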

6. Enter a URL to use to access the object store in the URL field.
7. Enter a username for the account in the Username field.
8. Enter the authentication key for the account in the Authentication Key field.
9. Enter the container in which to store the backup in the Container field.
10. (Optional) Enter a nametag to append to the prefix in the Nametag field.
11. Click Start Read.

Backing Up a Volume to a SolidFire Cluster

You can back up volumes residing on a SolidFire cluster to a remote SolidFire cluster. When backing up or restoring from one cluster to another, the system generates a key to be used as authentication between the clusters. This bulk volume write key allows the source cluster to authenticate with the destination cluster, providing a level of security when writing to the destination volume. As part of the backup or restore process, you need to generate a bulk volume write key from the destination volume before starting the operation.

Prerequisites
Ensure that the source and target clusters are paired. See Cluster Pairing for more information.

1. On the destination cluster, go to Management > Volumes.
2. Click the Actions button for the destination volume.
3. In the resulting menu, click Restore from.
4. In the Integrated Restore dialog, under Restore from, select SolidFire.
5. Select an option under Data Format:
Native: A compressed format readable only by SolidFire storage systems.
Uncompressed: An uncompressed format compatible with other systems.
6. Click Generate Key.
7. Copy the key from the Bulk Volume Write Key box to your clipboard.
8. On the source cluster, go to Management > Volumes.
9. Click the Actions button for the volume to back up.
10. In the resulting menu, click Backup to.
11. In the Integrated Backup dialog under Backup to, select SolidFire.
12. Select the same option you selected earlier in the Data Format field.
13.
Enter the management virtual IP address of the destination volume's cluster in the Remote Cluster MVIP field.
14. Enter the remote cluster username in the Remote Cluster Username field.
15. Enter the remote cluster password in the Remote Cluster Password field.
16. In the Bulk Volume Write Key field, paste the key you generated on the destination cluster earlier.
17. Click Start Read.

Volume Restore Operations

When you restore a volume from a backup on an object store such as OpenStack Swift or Amazon S3, you need manifest information from the original backup process. If you are restoring a SolidFire volume that was backed up on a SolidFire storage system, the manifest information is not required. You can find the required manifest information for restoring from Swift and S3 in Reporting > Event Log.

Restoring a Volume from Backup on an Amazon S3 Object Store

Follow these instructions to restore a volume from a backup on an Amazon S3 object store.
1. Go to Reporting > Event Log.
2. Locate the backup event that created the backup you need to restore.
3. In the Details column for the event, click Show Details.
4. Copy the manifest information to your clipboard.
5. Click Management > Volumes.
6. Click the Actions button for the volume you wish to restore.
7. In the resulting menu, click Restore from.
8. In the Integrated Restore dialog under Restore from, select S3.
9. Select the option that matches the backup under Data Format:
Native: A compressed format readable only by SolidFire storage systems.
Uncompressed: An uncompressed format compatible with other systems.
10. Enter a hostname to use to access the object store in the Hostname field.
11. Enter an access key ID for the account in the Access Key ID field.
12. Enter the secret access key for the account in the Secret Access Key field.
13. Enter the S3 bucket in which the backup is stored in the S3 Bucket field.
14. Paste the manifest information into the Manifest field.
15. Click Start Write.

Restoring a Volume from Backup on an OpenStack Swift Object Store

Follow these instructions to restore a volume from a backup on an OpenStack Swift object store.
1. Go to Reporting > Event Log.
2. Locate the backup event that created the backup you need to restore.
3. In the Details column for the event, click Show Details.
4. Copy the manifest information to your clipboard.
5. Click Management > Volumes.
6. Click the Actions button for the volume you wish to restore.
7. In the resulting menu, click Restore from.
8. In the Integrated Restore dialog under Restore from, select Swift.
9. Select the option that matches the backup under Data Format:
Native: A compressed format readable only by SolidFire storage systems.
Uncompressed: An uncompressed format compatible with other systems.
10.
Enter a URL to use to access the object store in the URL field.
11. Enter a username for the account in the Username field.
12. Enter the authentication key for the account in the Authentication Key field.

13. Enter the name of the container in which the backup is stored in the Container field.
14. Paste the manifest information into the Manifest field.
15. Click Start Write.

Restoring a Volume from Backup on a SolidFire Cluster

Follow these instructions to restore a volume from a backup on a SolidFire cluster. When backing up or restoring from one cluster to another, the system generates a key to be used as authentication between the clusters. This bulk volume write key allows the source cluster to authenticate with the destination cluster, providing a level of security when writing to the destination volume. As part of the backup or restore process, you need to generate a bulk volume write key from the destination volume before starting the operation.
1. On the destination cluster, go to Management > Volumes.
2. Click the Actions button for the volume you wish to restore.
3. In the resulting menu, click Restore from.
4. In the Integrated Restore dialog, under Restore from, select SolidFire.
5. Select the option that matches the backup under Data Format:
Native: A compressed format readable only by SolidFire storage systems.
Uncompressed: An uncompressed format compatible with other systems.
6. Click Generate Key.
7. Copy the Bulk Volume Write Key information to the clipboard.
8. On the source cluster, click Management > Volumes.
9. Click the Actions button for the volume you wish to use for the restore.
10. In the resulting menu, click Backup to.
11. In the Integrated Backup dialog, select SolidFire under Backup to.
12. Select the option that matches the backup under Data Format.
13. Enter the management virtual IP address of the destination volume's cluster in the Remote Cluster MVIP field.
14. Enter the remote cluster username in the Remote Cluster Username field.
15. Enter the remote cluster password in the Remote Cluster Password field.
16. Paste the key from your clipboard into the Bulk Volume Write Key field.
17. Click Start Read.
Viewing Individual Volume Details

You can view performance activity for individual volumes in a graphical format. This format provides general information as well as statistics for IOPS, throughput, queue depth, and latency for each volume.
1. Go to Management > Volumes.
2. In the Active volumes window, click the Actions button for a volume.
3. Click View Details.
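The same per-volume counters behind the details graphs can be polled through the API. GetVolumeStats is an Element API method; its result carries cumulative counters, so rates such as throughput must be derived from two samples, as the helper below sketches. Treat any specific counter names as assumptions to verify against the reference.

```python
# Sketch: polling per-volume performance counters. The request builder
# targets GetVolumeStats; the rate helper turns two samples of a
# cumulative byte counter into bytes per second.

def get_volume_stats_request(volume_id):
    """Build a GetVolumeStats request body for one volume."""
    return {"method": "GetVolumeStats",
            "params": {"volumeID": volume_id},
            "id": 1}

def throughput_bytes_per_sec(prev_bytes, curr_bytes, interval_sec):
    """Convert two cumulative read/write byte samples into a rate."""
    if interval_sec <= 0:
        raise ValueError("interval must be positive")
    return (curr_bytes - prev_bytes) / interval_sec

# Two samples taken 10 seconds apart:
rate = throughput_bytes_per_sec(1_000_000, 6_000_000, 10)  # 500000.0 B/s
```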

Accounts

In SolidFire storage systems, accounts enable clients to connect to volumes on a node. When you create a volume, it is assigned to an account. The account contains the CHAP authentication required to access the volumes assigned to it. An account can have up to 2000 volumes assigned to it, but a volume can belong to only one account.

See the following topics to perform tasks with accounts:

Creating an Account
Editing an Account
Deleting an Account
Viewing Individual Account Details

Account Details

On the Management > Accounts page, you can view the following information in the list of accounts.

ID: System-generated ID for the account.
Username: The name given to the account when it was created.
Status: The status of the account.
Active Volumes: The number of active volumes assigned to the account.
Compression: The compression efficiency score for the volumes assigned to the account.
Deduplication: The deduplication efficiency score for the volumes assigned to the account.
Thin Provisioning: The thin provisioning efficiency score for the volumes assigned to the account.
Overall Efficiency: The overall efficiency score for the volumes assigned to the account.

Creating an Account

You can create an account to allow access to volumes. After you create an account, you can assign up to 2000 volumes to it. Each account name in the system must be unique.
1. Go to Management > Accounts.
2. Click Create Account.
3. Enter a Username.
4. In the CHAP Settings section:
a. Enter the Initiator Secret for CHAP node session authentication.
b. Enter the Target Secret for CHAP node session authentication.
NOTE: Leave the credential fields blank to auto-generate either password.
5. Click Create Account.

Editing an Account

You can edit an account to change the status, change the CHAP secrets, or modify the account name.

NOTE: Modifying CHAP settings in an account, or removing initiators or volumes from an access group, can cause initiators to lose access to volumes unexpectedly. To verify that volume access will not be lost unexpectedly, always log out iSCSI sessions that will be affected by an account or access group change, and verify that initiators can reconnect to volumes after any changes to initiator settings and cluster settings have been completed.
NOTE: Changing CHAP settings can cause hosts to lose access to volumes to which they are currently connected.
1. Go to Management > Accounts.
2. Click the Actions button for an account.
3. In the resulting menu, select Edit.
4. (Optional) Edit the Username.
5. (Optional) Click the Status drop-down list and select a different status.
Caution: Changing the Status to Locked terminates all iSCSI connections to the account, and the account is no longer accessible. Volumes associated with the account are maintained; however, the volumes are not iSCSI-discoverable.
6. (Optional) Under CHAP Settings, edit the Initiator Secret and Target Secret credentials used for node session authentication.
NOTE: If you do not change the CHAP Settings credentials, they remain the same. If you clear the credential fields, the system generates new passwords.
7. Click Save Changes.

Deleting an Account

You can delete accounts when they are no longer needed.
Prerequisite: Delete and purge any volumes associated with the account before you delete the account. Refer to Purging a Volume for more information.
1. Go to Management > Accounts.
2. Click the Actions button for the account you wish to delete.
3. In the resulting menu, select Delete.
4. Confirm the action.

Viewing Individual Account Details

You can view performance activity for individual accounts in a graphical format. Account activity is displayed from the point at which the graph is opened, and account activity data accumulates as long as the graph window is open.
The graph information provides I/O and throughput information for the account. The Average and Peak activity levels are shown in increments of 10-second reporting periods. These statistics cover activity for all volumes assigned to the account.

1. Go to Management > Accounts.
2. Click the Actions button for an account.
3. Click View Details.
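The account edits described above can also be scripted against the cluster's JSON-RPC API. The sketch below only builds the request body; the method name ModifyAccount and its parameter names are modeled on the Element API, but verify the exact field spellings against the API reference for your Element version before use.

```python
import json

def build_modify_account_request(account_id, username=None, status=None,
                                 initiator_secret=None, target_secret=None):
    """Build a JSON-RPC request body for the Element ModifyAccount method.

    Only parameters that are explicitly supplied are sent, mirroring the UI
    behavior: CHAP credentials that are left untouched remain unchanged.
    """
    params = {"accountID": account_id}
    if username is not None:
        params["username"] = username
    if status is not None:
        if status not in ("active", "locked"):
            raise ValueError("status must be 'active' or 'locked'")
        params["status"] = status
    if initiator_secret is not None:
        params["initiatorSecret"] = initiator_secret
    if target_secret is not None:
        params["targetSecret"] = target_secret
    return {"method": "ModifyAccount", "params": params, "id": 1}

# Example: lock an account (terminates its iSCSI connections, per the caution above).
req = build_modify_account_request(47, status="locked")
print(json.dumps(req))
```

The body would then be POSTed to the cluster's `/json-rpc` endpoint over HTTPS with cluster admin credentials.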

Volume Access Groups

A volume access group is a collection of volumes that users can access using either iSCSI initiators or Fibre Channel initiators. You create access groups by mapping iSCSI initiator IQNs or Fibre Channel WWPNs to a collection of volumes. Each IQN that you add to an access group can access each volume in the group without requiring CHAP authentication. Each WWPN that you add to an access group enables Fibre Channel network access to the volumes in the access group.

NOTE: Volume access groups have the following system limits:
- A maximum of 64 IQNs or WWPNs are allowed in an access group.
- An access group can contain a maximum of 2000 volumes.
- An IQN or WWPN can belong to only one access group.
- A single volume can belong to a maximum of four access groups.

You can view volume access groups on the Management > Access Groups page. See the following topics to learn about or perform tasks with volume access groups:
Creating an Access Group
Adding Volumes to an Access Group
Adding Initiators to an Access Group
Removing Initiators from an Access Group
Deleting an Access Group
Removing Volumes from an Access Group
Creating a Volume Access Group for Fibre Channel Clients
Assigning LUNs to Fibre Channel Volumes
Viewing Individual Access Group Details

Volume Access Group Details

On the Management > Access Groups page, you can view the following information in the list of volume access groups.

ID: System-generated ID for the access group.
Name: The name given to the access group when it was created.
Active Volumes: The number of active volumes in the access group.
Compression: The compression efficiency score for the access group.
Deduplication: The deduplication efficiency score for the access group.
Thin Provisioning: The thin provisioning efficiency score for the access group.
Overall Efficiency: The overall efficiency score for the access group.
Initiators: The number of initiators connected to the access group.

Creating an Access Group

The Access Groups page enables you to create volume access groups by mapping initiators to a collection of volumes for secured access. You can then grant access to the volumes in the group with an account CHAP initiator secret and target secret. See Adding Volumes to an Access Group for information about adding volumes to the access group.

1. Go to Management > Access Groups.
2. Click Create Access Group.
3. Enter a name for the volume access group in the Name field.
4. To add a Fibre Channel initiator to the volume access group:
   a. Under Add Initiators, select an existing Fibre Channel initiator from the Unbound Fibre Channel Initiators list.
   b. Click Add FC Initiator.
   NOTE: You can create an initiator during this step by clicking the Create Initiator link, entering an initiator name, and clicking Create. The system automatically adds the initiator to the Initiators list after you create it.
   Example format: 5f:47:ac:c0:5c:74:d4:02
5. To add an iSCSI initiator to the volume access group, under Add Initiators, select an existing initiator from the Initiators list.
   NOTE: You can create an initiator during this step by clicking the Create Initiator link, entering an initiator name, and clicking Create. The system automatically adds the initiator to the Initiators list after you create it.
   The accepted format of an initiator IQN is iqn.yyyy-mm, where y and m are digits, followed by text that must contain only digits, lower-case alphabetic characters, a period (.), colon (:), or dash (-).
   Example format: iqn com.solidfire:c2r9.fc e1e09bb8b
   TIP: You can find the initiator IQN for each volume by selecting View Details in the Actions menu for the volume on the Management > Volumes > Active list.
6. (Optional) Add more initiators as needed.
7. Under Attach Volumes, select a volume from the Volumes list. The volume appears in the Attached Volumes list.
8. (Optional) Add more volumes as needed.
9. Click Create Access Group.
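The accepted IQN format described in step 5 can be checked before submitting it to the cluster. The regular expression below is a sketch derived only from the rule stated above (an iqn.yyyy-mm prefix followed by digits, lower-case letters, '.', ':' or '-'); the sample IQN is hypothetical, and the cluster's own validation may be stricter.

```python
import re

# Pattern from the stated rule: "iqn.", a yyyy-mm date, then at least one
# character drawn from digits, lower-case letters, '.', ':' and '-'.
IQN_PATTERN = re.compile(r"^iqn\.\d{4}-\d{2}[0-9a-z.:-]+$")

def is_valid_iqn(iqn: str) -> bool:
    """Return True if the string matches the IQN format described in this guide."""
    return IQN_PATTERN.fullmatch(iqn) is not None

# Hypothetical IQN used only for illustration:
print(is_valid_iqn("iqn.2010-01.com.solidfire:c2r9"))   # True
print(is_valid_iqn("iqn.2010-01.Com.SolidFire:c2r9"))   # False: upper case rejected
```

Note that under a strict reading of the rule, a bare "iqn.yyyy-mm" prefix with no trailing text is rejected.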
Adding Volumes to an Access Group

You can add volumes to a volume access group. Each volume can belong to more than one volume access group; you can see the groups that each volume belongs to on the Active Volumes page.

NOTE: You can also use this procedure to add volumes to a Fibre Channel volume access group.

1. Go to Management > Access Groups.
2. Choose an access group and click the Actions button.
3. In the resulting menu, click the Edit button.
4. Under Add Volumes, select a volume from the Volumes list.
5. Click Attach Volume.
6. Repeat steps 4 and 5 to add more volumes as needed.
7. Click Save Changes.

Adding Initiators to an Access Group

You can add an initiator to an access group to allow access to volumes in the volume access group without requiring CHAP authentication. When you add an initiator to a volume access group, the initiator has access to all volumes in that volume access group.

Adding a Single Initiator to an Access Group

You can add an initiator to an existing access group.

TIP: You can find the initiator for each volume by selecting View Details in the Actions menu for the volume in the Management > Volumes > Active list.

1. Go to Management > Access Groups.
2. Click the Actions button for the access group you want to edit.
3. Click the Edit button.
4. To add a Fibre Channel initiator to the volume access group:
   a. Under Add Initiators, select an existing Fibre Channel initiator from the Unbound Fibre Channel Initiators list.
   b. Click Add FC Initiator.
   NOTE: You can create an initiator during this step by clicking the Create Initiator link, entering an initiator name, and clicking Create. The system automatically adds the initiator to the Initiators list after you create it.
   Example format: 5f:47:ac:c0:5c:74:d4:02
5. To add an iSCSI initiator to the volume access group, under Add Initiators, select an existing initiator from the Initiators list.
   NOTE: You can create an initiator during this step by clicking the Create Initiator link, entering an initiator name, and clicking Create. The system automatically adds the initiator to the Initiators list after you create it.
   The accepted format of an initiator IQN is iqn.yyyy-mm, where y and m are digits, followed by text that must contain only digits, lower-case alphabetic characters, a period (.), colon (:), or dash (-).

   Example format: iqn com.solidfire:c2r9.fc e1e09bb8b
   TIP: You can find the initiator IQN for each volume by selecting View Details in the Actions menu for the volume on the Management > Volumes > Active list.
6. Click Save Changes.

Adding Multiple Initiators to a Volume Access Group

You can add multiple initiators to an existing volume access group.

TIP: You can find the initiator for each volume by selecting View Details in the Actions menu for the volume in the Management > Volumes > Active list.

1. Go to Management > Initiators.
2. Select the check boxes next to the initiators you wish to add to an access group.
3. Click the Bulk Actions button.
4. Choose Add to Volume Access Group from the resulting list.
5. In the Add to Volume Access Group dialog, choose an access group from the Volume Access Group list.
6. Click Add.

Removing Initiators from an Access Group

You can remove an initiator from an access group. When you remove the initiator, it can no longer access the volumes in that volume access group. Normal account access to the volume is not disrupted.

NOTE: Modifying CHAP settings in an account, or removing initiators or volumes from an access group, can cause initiators to lose access to volumes unexpectedly. To verify that volume access will not be lost, always log out iSCSI sessions that will be affected by an account or access group change, and verify that initiators can reconnect to volumes after the changes to initiator settings and cluster settings are complete.

1. Go to Management > Access Groups.
2. Click the Actions button for the access group you wish to edit.
3. In the resulting menu, select Edit.
4. Under Add Initiators in the Edit Volume Access Group dialog, click the arrow on the Initiators list.
5. Choose an initiator from the list and click its associated Delete button.
6. (Optional) Repeat step 5 to remove more initiators as needed.
7. Click Save Changes.
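The bulk-add workflow above maps to a single API call. The sketch below builds the request body for the Element AddInitiatorsToVolumeAccessGroup method; the method name is taken from the Element API, but treat the exact parameter spellings as assumptions to verify against the API reference.

```python
def build_add_initiators_request(access_group_id, initiator_ids):
    """Build a JSON-RPC body for AddInitiatorsToVolumeAccessGroup.

    initiator_ids: IDs of initiators already created on the cluster.
    Mirrors the system limit stated above: at most 64 initiators per group.
    """
    initiator_ids = list(initiator_ids)
    if not initiator_ids:
        raise ValueError("at least one initiator is required")
    if len(initiator_ids) > 64:
        raise ValueError("an access group allows a maximum of 64 IQNs or WWPNs")
    return {
        "method": "AddInitiatorsToVolumeAccessGroup",
        "params": {
            "volumeAccessGroupID": access_group_id,
            "initiators": initiator_ids,
        },
        "id": 1,
    }

# Add three existing initiators (hypothetical IDs) to access group 5:
req = build_add_initiators_request(5, [116, 117, 118])
```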
Deleting an Access Group

You can delete an access group when it is no longer needed. You do not need to delete initiator IDs and volume IDs from the volume access group before deleting the group. After you delete the access group, group access to the volumes is discontinued.

1. Go to Management > Access Groups.
2. Click the Actions button for the access group you wish to delete.
3. In the resulting menu, click Delete.
4. To also delete the initiators associated with this access group, select the Delete initiators in this access group check box.
5. Confirm the action.

Removing Volumes from an Access Group

You can remove volumes from an access group. When you remove a volume from an access group, the group no longer has access to that volume.

NOTE: Modifying CHAP settings in an account, or removing initiators or volumes from an access group, can cause initiators to lose access to volumes unexpectedly. To verify that volume access will not be lost, always log out iSCSI sessions that will be affected by an account or access group change, and verify that initiators can reconnect to volumes after the changes to initiator settings and cluster settings are complete.

1. Go to Management > Access Groups.
2. Click the Actions button for the access group you wish to edit.
3. In the resulting menu, select Edit.
4. Under Add Volumes in the Edit Volume Access Group dialog, click the arrow on the Attached Volumes list.
5. Choose a volume from the list and click its associated Delete button.
6. (Optional) Repeat step 5 to remove more volumes as needed.
7. Click Save Changes.

Creating a Volume Access Group for Fibre Channel Clients

Volume access groups enable communication between Fibre Channel clients and volumes on a SolidFire storage system. Mapping Fibre Channel client initiators (WWPNs) to the volumes in a volume access group enables secure data I/O between a Fibre Channel network and a SolidFire volume. You can also add iSCSI initiators to a volume access group; this gives the initiators access to the same volumes in the volume access group.

1. Go to Management > Access Groups.
2. Click Create Access Group.
3.
Enter a name for the volume access group in the Name field.
4. Select and add the Fibre Channel initiators from the Unbound Fibre Channel Initiators list.
   NOTE: You can add or delete initiators at a later time.
5. (Optional) Select and add an iSCSI initiator from the Initiator list.

6. To attach volumes to the access group:
   a. Select a volume from the Volumes list.
   b. Click Attach Volume.
7. Click Create Volume Access Group.

Assigning LUNs to Fibre Channel Volumes

You can change the LUN assignment for a Fibre Channel volume in a volume access group. You can also make Fibre Channel volume LUN assignments when you create a volume access group.

NOTE: Assigning new Fibre Channel LUNs is an advanced function and could have unknown consequences on the connecting host. For example, the new LUN ID might not be automatically discovered on the host, and the host might require a rescan to discover the new LUN ID.

1. Go to Management > Access Groups.
2. Click the Actions button for the access group you wish to edit.
3. In the resulting menu, select Edit.
4. Under Assign LUN IDs in the Edit Volume Access Group dialog, click the arrow on the LUN Assignments list.
5. Choose a volume in the list, and select a new value in the corresponding LUN spin box.
6. (Optional) Repeat step 5 to edit more volumes as needed.
7. Click Save Changes.

Viewing Individual Access Group Details

You can view details for an individual access group, such as attached volumes and initiators, in a graphical format.

1. Click Management > Access Groups.
2. Click the Actions button for an access group.
3. Click View Details.
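The LUN assignment edit above corresponds to the Element ModifyVolumeAccessGroupLunAssignments method. The sketch below only assembles the request body; the method name comes from the Element API, while the 0-16383 range check is an assumption based on typical SCSI LUN limits and should be verified against your cluster's documentation.

```python
def build_lun_assignment_request(access_group_id, assignments):
    """Build a JSON-RPC body for ModifyVolumeAccessGroupLunAssignments.

    assignments: mapping of volumeID -> desired LUN for that volume.
    """
    lun_assignments = []
    for volume_id, lun in sorted(assignments.items()):
        # Assumed valid LUN range; verify against the Element API reference.
        if not 0 <= lun <= 16383:
            raise ValueError(f"LUN {lun} out of range for volume {volume_id}")
        lun_assignments.append({"volumeID": volume_id, "lun": lun})
    return {
        "method": "ModifyVolumeAccessGroupLunAssignments",
        "params": {
            "volumeAccessGroupID": access_group_id,
            "lunAssignments": lun_assignments,
        },
        "id": 1,
    }

# Reassign LUNs for two volumes (hypothetical IDs) in access group 3:
req = build_lun_assignment_request(3, {101: 0, 102: 7})
```

As the note above warns, the host may need a rescan before the new LUN IDs are visible.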

Initiators

Initiators enable external clients to access volumes in a cluster, serving as the entry point for communication between clients and volumes. You can create and delete initiators, and give them friendly aliases to simplify administration and volume access. When you add an initiator to a volume access group, that initiator enables access to all volumes in the group.

You can view initiators on the Management > Initiators page. See the following topics to learn about or perform tasks with initiators:
Creating an Initiator
Deleting an Initiator
Deleting Multiple Initiators
Editing an Initiator

Initiator Details

On the Management > Initiators page, you can view the following information in the list of initiators.

ID: The system-generated ID for the initiator.
Name: The name given to the initiator when it was created.
Alias: The friendly name given to the initiator, if any.
Attributes: The optional attributes assigned to the initiator.
Access Group: The volume access group to which the initiator is assigned.

Creating an Initiator

You can create iSCSI or Fibre Channel initiators and optionally assign them aliases.

1. Go to Management > Initiators.
2. Click Create Initiator.
3. To create a single initiator:
   a. Select Create a Single Initiator.
   b. Enter the IQN or WWPN for the initiator in the IQN/WWPN field.
   c. Enter a friendly name for the initiator in the Alias field.
   d. Click Create Initiator.
4. To create multiple initiators:
   a. Select Bulk Create Initiators.
   b. Enter a list of IQNs or WWPNs in the text box.
   c. Click Add Initiators.
   d. (Optional) Choose an initiator from the resulting list and click the corresponding Add button in the Alias column to add an alias for the initiator.
   e. (Optional) Click the check mark to confirm the new alias.

   f. (Optional) To remove an initiator from the list, click the Remove button next to the initiator you wish to remove.
   g. Click Create Initiators.

Deleting an Initiator

You can delete an initiator once it is no longer needed. When you delete an initiator, the system removes it from any associated volume access group. Any connections using the initiator remain valid until the connection is reset.

1. Go to Management > Initiators.
2. Click the Actions button for the initiator you wish to delete.
3. In the resulting menu, select Delete.
4. Confirm the action.

Deleting Multiple Initiators

You can delete multiple initiators at the same time once they are no longer needed. When you delete initiators, the system removes them from any associated volume access groups. Any connections using the initiators remain valid until the connection is reset.

1. Go to Management > Initiators.
2. Select the check boxes next to the initiators you wish to delete.
3. Click the Bulk Actions button.
4. In the resulting menu, select Delete.
5. Confirm the action.

Editing an Initiator

You can change the alias of an existing initiator or add an alias if one does not already exist.

1. Go to Management > Initiators.
2. Click the Actions button for the initiator you wish to edit.
3. In the resulting menu, select Edit.
4. Enter a new alias for the initiator in the Alias field.
5. Click Save Changes.
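The bulk-create workflow above maps to the Element CreateInitiators method, which accepts a list of initiator definitions in one call. The sketch below only builds the request body; the method name is from the Element API, but treat the per-initiator field names as assumptions to verify against the API reference.

```python
def build_create_initiators_request(initiators):
    """Build a JSON-RPC body for CreateInitiators (bulk create).

    initiators: list of (iqn_or_wwpn, alias_or_None) pairs, mirroring the
    optional Alias column in the Bulk Create Initiators dialog.
    """
    entries = []
    for name, alias in initiators:
        entry = {"name": name}
        if alias:  # aliases are optional, as in the UI
            entry["alias"] = alias
        entries.append(entry)
    return {"method": "CreateInitiators", "params": {"initiators": entries}, "id": 1}

# One aliased iSCSI initiator (hypothetical IQN) and one unaliased FC initiator:
req = build_create_initiators_request([
    ("iqn.2010-01.com.example:host1", "host1"),
    ("5f:47:ac:c0:5c:74:d4:02", None),
])
```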

QoS Policies

A QoS policy enables you to create and save a custom quality of service setting for volumes. You can create, edit, and delete QoS policies. You can view QoS policies on the Management > QoS Policies page. See the following topics to learn about or perform tasks related to QoS policies:
Understanding Quality of Service
Creating a QoS Policy
Editing a QoS Policy
Deleting a QoS Policy
Disassociating a QoS Policy from a Volume

Understanding Quality of Service

A SolidFire cluster can provide QoS parameters on a per-volume basis. Cluster performance is measured in inputs and outputs per second (IOPS). Three configurable parameters define QoS: Min IOPS, Max IOPS, and Burst IOPS. For minimum, maximum, and default QoS values, see the table of quality of service values later in this section.

The IOPS parameters are defined as follows:

Min IOPS: The minimum number of sustained inputs and outputs per second (IOPS) that the SolidFire cluster provides to a volume. The Min IOPS configured for a volume is the guaranteed level of performance for the volume. Performance does not drop below this level.

Max IOPS: The maximum number of sustained IOPS that the SolidFire cluster provides to a volume. When cluster IOPS levels are critically high, this level of IOPS performance is not exceeded.

Burst IOPS: The maximum number of IOPS allowed in a short burst scenario. If a volume has been running below its Max IOPS, burst credits are accumulated. When performance levels become very high and are pushed to maximum levels, short bursts of IOPS are allowed on the volume. SolidFire uses Burst IOPS when a cluster is running in a state of low cluster IOPS utilization. A single volume can accrue Burst IOPS and use the credits to burst above its Max IOPS, up to its Burst IOPS level, for a set burst period. A volume can burst for up to 60 seconds if the cluster has the capacity to accommodate the burst.
A volume accrues one second of burst credit (up to a maximum of 60 seconds) for every second that the volume runs below its Max IOPS limit. Burst IOPS are limited in two ways:
- A volume can burst above its Max IOPS for a number of seconds equal to the number of burst credits that the volume has accrued.
- When a volume bursts above its Max IOPS setting, it is limited by its Burst IOPS setting; the burst IOPS never exceed the Burst IOPS setting for the volume.

Effective Max Bandwidth: The maximum bandwidth is calculated by multiplying the number of IOPS (based on the QoS curve) by the I/O size.

Example: QoS parameter settings of 100 Min IOPS, 1000 Max IOPS, and 1500 Burst IOPS have the following effects on quality of performance:
- Workloads are able to reach and sustain a maximum of 1000 IOPS until the condition of workload contention for IOPS becomes apparent on the cluster. IOPS are then reduced incrementally until IOPS on

all volumes are within the designated QoS ranges and contention for performance is relieved.
- Performance on all volumes is pushed toward the Min IOPS of 100. Levels do not drop below the Min IOPS setting, but could remain higher than 100 IOPS when workload contention is relieved. Performance is never greater than 1000 IOPS, or less than 100 IOPS, for a sustained period.
- Performance of 1500 IOPS (Burst IOPS) is allowed, but only for volumes that have accrued burst credits by running below Max IOPS, and only for short periods of time. Burst levels are never sustained.

The following table describes possible minimum and maximum values for quality of service.

Parameters | Min Allowed | Default | Max (4Kb) | Max (8Kb) | Max (16Kb) | Max (262Kb)
Min IOPS   | 50          | 50      | 15,000    | 9,375*    | 5,556*     | 385*
Max IOPS   | 100         | 15,000  | 200,000** | 125,000   | 74,074     |
Burst IOPS | 100         | 15,000  | 200,000** | 125,000   | 74,074     |

*These estimations are approximate.
**Max IOPS and Burst IOPS can be set as high as 200,000; however, this setting is allowed only to effectively "uncap" the performance of a volume. Real-world maximum performance of a volume is limited by cluster usage and per-node performance.

QoS Performance Curve

Block size and bandwidth have a direct impact on the number of IOPS that an application can obtain. SolidFire software takes the block sizes it receives into account by normalizing block sizes to 4k. Based on workload, the system may increase block sizes. As block sizes increase, the system increases bandwidth to the level necessary to process the larger block sizes. As bandwidth increases, the number of IOPS the system can attain decreases. The QoS performance curve shows the relationship between block size and the percentage of IOPS.
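The relationship between bandwidth, block size, and IOPS described above reduces to a simple division, IOPS = bandwidth / block size, sketched here with the values from the worked example in this section:

```python
def iops_for_bandwidth(bandwidth_kb_per_s: float, block_size_kb: float) -> float:
    """Achievable IOPS at a given bandwidth and block size.

    IOPS = bandwidth / block size: as the block size grows at a fixed
    bandwidth budget, the achievable IOPS count falls.
    """
    return bandwidth_kb_per_s / block_size_kb

# Values from the worked example in this section:
print(iops_for_bandwidth(4000, 4))  # 1000.0 IOPS at 4k blocks
print(iops_for_bandwidth(5000, 8))  # 625.0 IOPS at 8k blocks
```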

For example, if block sizes are 4k and bandwidth is 4000 KB/s, the IOPS are 1000. If block sizes increase to 8k, bandwidth increases to 5000 KB/s, and IOPS decrease to 625. By taking block size into account, the system ensures that lower-priority workloads that use higher block sizes, such as backups and hypervisor activities, do not take too much of the performance needed by higher-priority traffic using smaller block sizes.

QoS Policies Details

On the Management > QoS Policies page, you can view the following information.

ID: The system-generated ID for the QoS policy.
Name: The user-defined name for the QoS policy.
Min IOPS: The minimum number of IOPS guaranteed for the volume.
Max IOPS: The maximum number of IOPS allowed for the volume.
Burst IOPS: The maximum number of IOPS allowed over a short period of time for the volume. Default = 15,000.
Volumes: The number of volumes using the policy. This number links to a table of volumes that have the policy applied.

Creating a QoS Policy

You can create QoS policies and apply them when creating volumes.

1. Go to Management > QoS Policies.
2. Click Create QoS Policy.
3. Enter the Policy Name (must be from 1 through 64 characters).
4. Enter the Min IOPS, Max IOPS, and Burst IOPS values.
5. Click Create QoS Policy.

Editing a QoS Policy

You can change the name of an existing QoS policy or edit the values associated with the policy.

NOTE: Changing a QoS policy affects all volumes associated with the policy.

1. Go to Management > QoS Policies.
2. Click the Actions button for the QoS policy you wish to edit.
3. In the resulting menu, select Edit.
4. In the Edit QoS Policy dialog, modify the following properties as required:
   Policy Name
   Min IOPS
   Max IOPS

   Burst IOPS
5. Click Save Changes.

Deleting a QoS Policy

You can delete a QoS policy once it is no longer needed. When you delete a QoS policy, all volumes associated with the policy maintain their QoS settings but become unassociated with a policy.

1. Go to Management > QoS Policies.
2. Click the Actions button for the QoS policy you wish to delete.
3. In the resulting menu, select Delete.
4. Confirm the action.
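The burst-credit mechanics described under Understanding Quality of Service can be modeled with a small simulation: a volume earns one second of credit (capped at 60) for each second it runs at or below Max IOPS, and spends a credit to run up to Burst IOPS. This is an illustrative sketch of the rules as stated in this chapter, not the scheduler's actual implementation, and it does not model the Min IOPS guarantee under cluster contention.

```python
def simulate_burst(min_iops, max_iops, burst_iops, demand_per_second):
    """Simulate per-second granted IOPS for one volume under the stated rules.

    demand_per_second: iterable of requested IOPS values, one per second.
    Returns the list of IOPS actually granted each second.
    """
    credits = 0  # seconds of accrued burst credit, capped at 60
    granted = []
    for demand in demand_per_second:
        if demand <= max_iops:
            granted.append(demand)
            credits = min(credits + 1, 60)       # earn credit running below Max IOPS
        elif credits > 0:
            granted.append(min(demand, burst_iops))  # spend one credit to burst
            credits -= 1
        else:
            granted.append(max_iops)             # credits exhausted: capped at Max IOPS
    return granted

# The 100/1000/1500 QoS example from this chapter: three quiet seconds earn
# three credits, so a sustained rush bursts to 1500 IOPS for three seconds
# before falling back to the 1000 IOPS cap.
print(simulate_burst(100, 1000, 1500, [500, 500, 500, 2000, 2000, 2000, 2000, 2000]))
# [500, 500, 500, 1500, 1500, 1500, 1000, 1000]
```

This illustrates why "burst levels are never sustained": each second above Max IOPS consumes a credit that can only be re-earned by running below Max IOPS.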

Data Protection

From the Data Protection tab, you can perform tasks that ensure that copies of your data are created and stored where you need them. See the following topics to learn about or perform data protection tasks:
Volume Snapshots
Group Snapshots
Snapshot Schedules
Cluster Pairing
Volume Pairing

Volume Snapshots

A volume snapshot is a point-in-time copy of a volume. Creating a volume snapshot takes only a small amount of system resources and space, which makes snapshot creation faster than cloning. You can use snapshots to roll a volume back to its state at the time the snapshot was created. However, because snapshots are simply replicas of volume metadata, you cannot mount or write to them.

You can replicate snapshots to a remote SolidFire cluster and use them as a backup copy for the volume. This enables you to roll back a volume to a specific point in time by using the replicated snapshot; you can also create a clone of a volume from a replicated snapshot.

You can view snapshots from the Data Protection > Snapshots page. See the following topics to learn about or perform tasks with snapshots:
Creating a Volume Snapshot
Editing Snapshot Retention
Cloning a Volume from a Snapshot
Rolling Back a Volume to a Snapshot
Volume Snapshot Backup Operations
Backing Up a Volume Snapshot to an Amazon S3 Object Store
Backing Up a Volume Snapshot to an OpenStack Swift Object Store
Backing Up a Volume Snapshot to a SolidFire Cluster
Deleting a Snapshot

Volume Snapshot Details

On the Data Protection > Snapshots page, you can view the following information in the list of volume snapshots.

ID: System-generated ID for the snapshot.
UUID: The unique ID of the snapshot.
Name: User-defined name for the snapshot.
Size: User-defined size of the snapshot.
Volume ID: ID of the volume from which the snapshot was created.
Volume Name: User-defined name of the volume.
Account: Account the volume is associated with.
Volume Size: Size of the volume from which the snapshot was created.
Create Time: The time at which the snapshot was created.
Retain Until: The day and time the snapshot will be deleted.
Group Snapshot ID: The group ID the snapshot belongs to if it is grouped with other volume snapshots.

Remote Replication: Identifies whether the snapshot is enabled for replication to a remote SolidFire cluster. Possible values:
   Enabled: The snapshot is enabled for remote replication.
   Disabled: The snapshot is not enabled for remote replication.
Replicated: Displays the status of the snapshot on the remote SolidFire cluster. Possible values:
   Present: The snapshot exists on a remote cluster.
   Not Present: The snapshot does not exist on a remote cluster.
   Syncing: The target cluster is currently replicating the snapshot.
   Deleted: The target replicated the snapshot and then deleted it.

Creating a Volume Snapshot

You can create a snapshot of the active volume to preserve the volume image at any point in time. You can create up to 32 snapshots for a single volume. See Snapshot Schedules for information about automating snapshot creation.

1. Go to Management > Volumes.
2. Click the Actions button for the volume you wish to use for the snapshot.
3. In the resulting menu, select Snapshot.
4. Enter the New Snapshot Name in the Create Snapshot of Volume dialog.
5. (Optional) Select the Include Snapshot in Replication When Paired check box to ensure that the snapshot is captured in replication when the parent volume is paired.
6. To choose a retention option for the snapshot, do one of the following:
   Choose Keep Forever to retain the snapshot on the system indefinitely.
   Choose Set Retention Period and use the date spin boxes to choose a length of time for the system to retain the snapshot.
7. To take a single, immediate snapshot:
   a. Choose Take Snapshot Now.
   b. Click Create Snapshot.
8. To schedule the snapshot to run at a future time:
   a. Choose Create Snapshot Schedule.
   b. Enter a New Schedule Name.
   c. Choose a Schedule Type from the list.
   d. (Optional) Select the Recurring Schedule check box to repeat the scheduled snapshot periodically.
   e. Click Create Schedule.
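The immediate-snapshot path above maps to the Element CreateSnapshot method. The sketch below only builds the request body; the method name is from the Element API, but the retention string format ("HH:mm:ss") and exact field names are assumptions to verify against the API reference for your Element version.

```python
def build_create_snapshot_request(volume_id, name, replicate=False,
                                  retention_hms=None):
    """Build a JSON-RPC body for CreateSnapshot.

    replicate mirrors the "Include Snapshot in Replication When Paired" box;
    retention_hms is an optional "HH:mm:ss" string (omit it to keep forever).
    """
    params = {
        "volumeID": volume_id,
        "name": name,
        "enableRemoteReplication": replicate,
    }
    if retention_hms is not None:
        params["retention"] = retention_hms  # assumed format; verify against the API
    return {"method": "CreateSnapshot", "params": params, "id": 1}

# Snapshot volume 212 (hypothetical ID), replicated, retained for 72 hours:
req = build_create_snapshot_request(212, "pre-upgrade", replicate=True,
                                    retention_hms="72:00:00")
```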
Editing Snapshot Retention

You can change the retention period for a snapshot to control when or whether the system deletes snapshots. The retention period you specify begins when you enter the new interval. When you set a retention period, you can select a period that begins at the current time (retention is not calculated from the snapshot creation time). You can specify intervals in minutes, hours, and days. You can also choose to retain snapshots indefinitely or delete them manually on the Snapshots page.

When you delete a snapshot from the source cluster, the target cluster snapshot is not affected (the reverse is also true).

1. Go to Data Protection > Snapshots.
2. Click the Actions button for the snapshot you wish to edit.
3. In the resulting menu, click Edit.
4. (Optional) Select the Include Snapshot in Replication When Paired check box to ensure that the snapshot is captured in replication when the parent volume is paired.
5. (Optional) Choose a Snapshot Retention option for the snapshot:
   Choose Keep Forever to retain the snapshot on the system indefinitely.
   Choose Set Retention Period and use the date spin boxes to choose a length of time for the system to retain the snapshot.
6. Click Save Changes.

Cloning a Volume from a Snapshot

You can create a new volume from a snapshot of a volume. When you do this, the system uses the snapshot information to clone a new volume using the data contained on the volume at the time the snapshot was created. This process also stores information about other snapshots of the volume in the newly created volume.

1. Go to Data Protection > Snapshots.
2. Click the Actions button for the snapshot you wish to use for the volume clone.
3. In the resulting menu, click Clone Volume From Snapshot.
4. Enter a Volume Name in the Clone Volume From Snapshot dialog.
5. Choose a Total Size and size units for the new volume.
6. Select an Access type for the volume.
7. Choose an Account from the list to associate with the new volume.
8. Click Start Cloning.

Rolling Back a Volume to a Snapshot

You can roll back a volume to a previous snapshot at any time. This reverts any changes made to the volume since the snapshot was created.

1. Go to Data Protection > Snapshots.
2. Click the Actions button for the snapshot you wish to use for the volume rollback.
3. In the resulting menu, select Rollback Volume To Snapshot.
4. (Optional) To save the current state of the volume before rolling back to the snapshot:
   a.
In the Rollback To Snapshot dialog, select Save volume's current state as a snapshot.
   b. Enter a name for the new snapshot.
5. Click Rollback Snapshot.

Volume Snapshot Backup Operations

You can use the integrated backup feature to back up a volume snapshot. You can back up snapshots from a SolidFire cluster to an external object store or to another SolidFire cluster. When you back up a snapshot to an external object store, you must have a connection to the object store that allows read/write operations.

Backing Up a Volume Snapshot to an Amazon S3 Object Store

You can back up SolidFire snapshots to external object stores that are compatible with Amazon S3.

1. Go to Data Protection > Snapshots.
2. Click the Actions button for the snapshot you wish to back up.
3. In the resulting menu, click Backup to.
4. In the Integrated Backup dialog under Backup to, select S3.
5. Select an option under Data Format:
   Native: A compressed format readable only by SolidFire storage systems.
   Uncompressed: An uncompressed format compatible with other systems.
6. Enter a hostname to use to access the object store in the Hostname field.
7. Enter an access key ID for the account in the Access Key ID field.
8. Enter the secret access key for the account in the Secret Access Key field.
9. Enter the S3 bucket in which to store the backup in the S3 Bucket field.
10. (Optional) Enter a nametag to append to the prefix in the Nametag field.
11. Click Start Read.

Backing Up a Volume Snapshot to an OpenStack Swift Object Store

You can back up SolidFire snapshots to secondary object stores that are compatible with OpenStack Swift.

1. Go to Data Protection > Snapshots.
2. Click the Actions button for the snapshot you wish to back up.
3. In the resulting menu, click Backup to.
4. In the Integrated Backup dialog, under Backup to, select Swift.
5. Select an option under Data Format:
   Native: A compressed format readable only by SolidFire storage systems.
   Uncompressed: An uncompressed format compatible with other systems.
6. Enter a URL to use to access the object store.
7. Enter a Username for the account.
8.
Enter the Authentication Key for the account.
9. Enter the Container in which to store the backup.
10. (Optional) Enter a Nametag.
11. Click Start Read.

Backing Up a Volume Snapshot to a SolidFire Cluster

You can back up volume snapshots residing on a SolidFire cluster to a remote SolidFire cluster. When backing up or restoring from one cluster to another, the system generates a key to be used for authentication between the clusters. This bulk volume write key allows the source cluster to authenticate with the destination cluster, providing a level of security when writing to the destination volume. As part of the backup or restore process, you must generate a bulk volume write key from the destination volume before starting the operation.

Prerequisites
Ensure that the source and target clusters are paired.

1. On the destination cluster, go to Management > Volumes.
2. Click the Actions button ( ) for the destination volume.
3. In the resulting menu, click Restore from.
4. In the Integrated Restore dialog under Restore from, select SolidFire.
5. Select a data format under Data Format:
   Native: A compressed format readable only by SolidFire storage systems.
   Uncompressed: An uncompressed format compatible with other systems.
6. Click Generate Key.
7. Copy the key from the Bulk Volume Write Key box to your clipboard.
8. On the source cluster, go to Data Protection > Snapshots.
9. Click the Actions button ( ) for the snapshot you want to use for the backup.
10. In the resulting menu, click Backup to.
11. In the Integrated Backup dialog under Backup to, select SolidFire.
12. In the Data Format field, select the same data format you selected earlier.
13. Enter the management virtual IP address of the destination volume's cluster in the Remote Cluster MVIP field.
14. Enter the remote cluster username in the Remote Cluster Username field.
15. Enter the remote cluster password in the Remote Cluster Password field.
16. In the Bulk Volume Write Key field, paste the key you generated on the destination cluster.
17. Click Start Read.
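The steps above map onto a pair of bulk-volume API calls: a `StartBulkVolumeWrite` on the destination cluster (whose result carries the bulk volume write key) and a `StartBulkVolumeRead` on the source cluster that embeds that key. The payload builders below are a sketch under that assumption; method and field names should be confirmed against the API reference for your Element OS version.

```python
def build_destination_write(volume_id, data_format="native"):
    """Run against the destination cluster. The call's result is expected
    to include the bulk volume write "key" used by the source-side request."""
    return {
        "method": "StartBulkVolumeWrite",
        "params": {
            "volumeID": volume_id,
            "format": data_format,
            "script": "bv_internal.py",
        },
        "id": 1,
    }


def build_source_read(volume_id, snapshot_id, write_key, remote_mvip,
                      remote_user, remote_password, data_format="native"):
    """Run against the source cluster; the data format must match the one
    chosen for the destination-side write."""
    return {
        "method": "StartBulkVolumeRead",
        "params": {
            "volumeID": volume_id,
            "snapshotID": snapshot_id,
            "format": data_format,
            "script": "bv_internal.py",
            "scriptParameters": {
                "write": {
                    "endpoint": "solidfire",
                    "mvip": remote_mvip,
                    "username": remote_user,
                    "password": remote_password,
                    "key": write_key,  # pasted verbatim from the write call
                },
            },
        },
        "id": 2,
    }
```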
Deleting a Snapshot

You can delete a volume snapshot from a SolidFire cluster. When you delete a snapshot, the system removes it immediately.

You can delete snapshots that are being replicated from the source cluster. If a snapshot is syncing to the target cluster when you delete it, the sync replication completes and the snapshot is deleted from the source cluster; the snapshot is not deleted from the target cluster. You can also delete snapshots that have been replicated to the target from the target cluster. The deleted snapshot is kept in a list of deleted snapshots on the target until the system detects that you have deleted the snapshot on the source cluster. Once the target detects that you have deleted the source snapshot, the target stops replication of the snapshot.

1. Go to Data Protection > Snapshots.
2. Click the Actions button ( ) for the snapshot you want to delete.

3. In the resulting menu, select Delete.
4. Confirm the action.
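Deleting a snapshot can likewise be scripted against the API. The sketch below assumes the `ListSnapshots` and `DeleteSnapshot` methods with a numeric `snapshotID`; confirm both in the API reference before relying on them.

```python
def build_list_snapshots(volume_id=None):
    """List snapshots cluster-wide, or for one volume if volume_id is given."""
    params = {} if volume_id is None else {"volumeID": volume_id}
    return {"method": "ListSnapshots", "params": params, "id": 1}


def build_delete_snapshot(snapshot_id):
    """Delete a single snapshot; the system removes it immediately."""
    return {
        "method": "DeleteSnapshot",
        "params": {"snapshotID": snapshot_id},
        "id": 2,
    }
```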

Group Snapshots

You can create a group snapshot of a related set of volumes to preserve a point-in-time copy of the metadata for each volume. You can later use the group snapshot as a backup or rollback to restore the state of the group of volumes to a desired point in time.

See the following topics to learn about or perform tasks with group snapshots:
Creating a Group Snapshot
Editing Group Snapshots
Cloning Multiple Volumes
Cloning Multiple Volumes from a Group Snapshot
Rolling Back Volumes to a Group Snapshot
Deleting a Group Snapshot

Group Snapshot Details

On the Data Protection > Group Snapshots page, you can view the following information in the list of group snapshots.

ID: System-generated ID for the group snapshot.
UUID: The unique ID of the group snapshot.
Name: User-defined name for the group snapshot.
Create Time: The time at which the group snapshot was created.
Status: The current status of the snapshot. Possible values:
   Preparing: The snapshot is being prepared for use and is not yet writable.
   Done: The snapshot has finished preparation and is now usable.
   Active: The snapshot is the active branch.
# Volumes: The number of volumes in the group.
Retain Until: The time at which this snapshot will expire and be purged from the cluster. If blank, no expiration was set.
Remote Replication: Indicates whether remote replication is enabled for the group. If enabled, the volumes in the group are replicated to a remote cluster.

Creating a Group Snapshot

You can create a snapshot of a group of volumes, and you can also create a group snapshot schedule to automate group snapshots. A single group snapshot can consistently snapshot up to 32 volumes at one time.

1. Go to Management > Volumes.
2. Use the check boxes to select multiple volumes for a group of volumes.
3. Click Bulk Actions.
4. In the resulting menu, select Group Snapshot.

5. Enter a New Group Snapshot Name in the Create Group Snapshot of Volumes dialog.
6. (Optional) Select the Include Each Group Snapshot Member in Replication When Paired check box to ensure that each snapshot is captured in replication when the parent volume is paired.
7. To choose a Retention option for the group snapshot, do one of the following:
   Choose Keep Forever to retain the snapshot on the system indefinitely.
   Choose Set Retention Period and use the date spin boxes to choose a length of time for the system to retain the snapshot.
8. To take a single, immediate snapshot:
   a. Choose Take Group Snapshot Now.
   b. Click Create Group Snapshot.
9. To schedule the snapshot to run at a future time:
   a. Choose Create Group Snapshot Schedule.
   b. Enter a New Schedule Name.
   c. Choose a Schedule Type from the list.
   d. (Optional) Select the Recurring Schedule check box to repeat the scheduled snapshot periodically.
   e. Click Create Schedule.

Editing Group Snapshots

You can edit the replication and retention settings for existing group snapshots.

1. Go to Data Protection > Group Snapshots.
2. Click the Actions button ( ) for the group snapshot you want to edit.
3. In the resulting menu, select Edit.
4. (Optional) To change the replication setting for the group snapshot:
   a. Click Edit next to Current Replication.
   b. Select the Include Each Group Snapshot Member in Replication When Paired check box to ensure that each snapshot is captured in replication when the parent volume is paired.
5. (Optional) To change the retention setting for the group snapshot:
   a. Click Edit next to Current Retention.
   b. Do one of the following:
      Choose Keep Forever to retain the snapshot on the system indefinitely.
      Choose Set Retention Period and use the date spin boxes to choose a length of time for the system to retain the snapshot.
6. Click Save Changes.
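Creating a group snapshot programmatically is a single API call. The sketch below assumes the `CreateGroupSnapshot` method with `volumes`, `name`, `enableRemoteReplication`, and an "HH:mm:ss" `retention` parameter; verify these names in the API reference. The 32-volume ceiling comes from the limit stated above.

```python
def build_create_group_snapshot(volume_ids, name, replicate=False,
                                retention=None):
    """Build a CreateGroupSnapshot request. retention uses the "HH:mm:ss"
    form; omitting it keeps the snapshot forever. A group snapshot covers
    at most 32 volumes."""
    if not 1 <= len(volume_ids) <= 32:
        raise ValueError("a group snapshot covers 1 to 32 volumes")
    params = {
        "volumes": volume_ids,
        "name": name,
        "enableRemoteReplication": replicate,
    }
    if retention is not None:
        params["retention"] = retention
    return {"method": "CreateGroupSnapshot", "params": params, "id": 1}
```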
Cloning Multiple Volumes

You can create multiple volume clones in a single operation to create a point-in-time copy of the data on a group of volumes. When you clone a volume, the system creates a snapshot of the volume and then creates a new volume from the data in the snapshot. You can mount and write to the new volume clone. Cloning multiple volumes is an asynchronous process and takes a variable amount of time depending on the size and number of the volumes being cloned.

1. Go to Management > Volumes.

2. Click the Active tab.
3. Use the check boxes to select multiple volumes, creating a group of volumes.
4. Click Bulk Actions.
5. Click Clone in the resulting menu.
6. Enter a New Volume Name Prefix in the Clone Multiple Volumes dialog. The prefix is applied to all volumes in the group.
7. (Optional) Select a different account to which the clone will belong. If you do not select an account, the system assigns the new volumes to the current volume account.
8. (Optional) Select a different access method for the volumes in the clone. If you do not select an access method, the system uses the current volume access.
9. Click Start Cloning.

NOTE: Volume size and current cluster load affect the time needed to complete a cloning operation.
NOTE: Increasing the volume size of a clone results in a new volume with additional free space at the end of the volume. Depending on how you are using the volume, you may need to extend partitions or create new partitions in the free space.
Caution: Truncating a cloned volume by cloning to a smaller size requires preparation on the operating system or application so that the partitions fit into the smaller volume size.

Cloning Multiple Volumes from a Group Snapshot

You can clone a group of volumes from a point-in-time group snapshot. This operation requires that a group snapshot of the volumes already exist, because the group snapshot is used as the basis to create the volumes. Once you create the volumes, you can use them like any other volumes in the system.

1. Go to Data Protection > Group Snapshots.
2. Click the Actions button ( ) for the group snapshot you want to use for the volume clones.
3. In the resulting menu, select Clone Volumes From Group Snapshot.
4. Enter a New Volume Name Prefix in the Clone Volumes From Group Snapshot dialog. The prefix is applied to all volumes created from the group snapshot.
5. (Optional) Select a different account to which the clone will belong. If you do not select an account, the system assigns the new volumes to the current volume account.
6. (Optional) Select a different access method for the volumes in the clone. If you do not select an access method, the system uses the current volume access.
7. Click Start Cloning.

NOTE: Volume size and current cluster load affect the time needed to complete a cloning operation.

Rolling Back Volumes to a Group Snapshot

You can roll back a group of volumes at any time to a group snapshot. This restores all the volumes in the group to the state they were in at the time the group snapshot was created, and also restores volume sizes to the size recorded in the original snapshot. If the system has purged a volume, all snapshots of that volume were also deleted at the time of the purge; the system does not restore any deleted volume snapshots.

1. Go to Data Protection > Group Snapshots.
2. Click the Actions button ( ) for the group snapshot you want to use for the volume rollback.
3. In the resulting menu, select Rollback Volumes To Group Snapshot.
4. (Optional) To save the current state of the volumes before rolling back to the snapshot:
   a. In the Rollback To Snapshot dialog, select Save volumes' current state as a group snapshot.
   b. Enter a name for the new snapshot.
5. Click Rollback Group Snapshot.

Deleting a Group Snapshot

You can delete a group snapshot from the system. When you delete the group snapshot, you can choose whether all snapshots associated with the group are deleted or retained as individual snapshots.

NOTE: If you delete a volume or snapshot that is a member of a group snapshot, you can no longer roll back to the group snapshot. However, you can roll back each volume individually.

1. Go to Data Protection > Snapshots.
2. Click the Actions button ( ) for the snapshot you want to delete.
3. In the resulting menu, click Delete.
4. Do one of the following in the confirmation dialog:
   Choose Delete group snapshot AND all group snapshot members to delete the group snapshot and all member snapshots.
   Choose Retain group snapshot members as individual snapshots to delete the group snapshot but keep all member snapshots.
5. Confirm the action.
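Both the rollback and the delete choices above have direct API counterparts. The sketch assumes the `RollbackToGroupSnapshot` method (with `saveCurrentState` and an optional `name` for the saved state) and `DeleteGroupSnapshot` (with `saveMembers` controlling whether member snapshots survive as individual snapshots); check both signatures in the API reference.

```python
def build_rollback_to_group_snapshot(group_snapshot_id, save_current_state,
                                     saved_snapshot_name=None):
    """Roll the member volumes back to the group snapshot, optionally saving
    the volumes' current state as a new group snapshot first."""
    params = {
        "groupSnapshotID": group_snapshot_id,
        "saveCurrentState": save_current_state,
    }
    if save_current_state and saved_snapshot_name:
        params["name"] = saved_snapshot_name
    return {"method": "RollbackToGroupSnapshot", "params": params, "id": 1}


def build_delete_group_snapshot(group_snapshot_id, keep_members):
    """keep_members=True deletes only the grouping and retains the member
    snapshots as individual snapshots."""
    return {
        "method": "DeleteGroupSnapshot",
        "params": {"groupSnapshotID": group_snapshot_id,
                   "saveMembers": keep_members},
        "id": 2,
    }
```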

Snapshot Schedules

You can schedule a snapshot of a volume to occur automatically at specified intervals. When you configure a snapshot schedule, you can choose from time intervals based on days of the week or days of the month. You can also specify the days, hours, and minutes before the next snapshot occurs. You can view information about active snapshot schedules on the Data Protection > Schedules page.

NOTE: Schedules are created using UTC+0 time. You may need to adjust the actual time a snapshot will run based on your time zone.

You can schedule either single volume snapshots or group snapshots to run automatically. When you create snapshot schedules, you can store the resulting snapshots on a remote SolidFire storage system if the volume is being replicated.

See the following topics to learn about or perform tasks with snapshot schedules:
Creating a Snapshot Schedule
Editing a Snapshot Schedule
Deleting a Snapshot Schedule
Copying a Snapshot Schedule

Snapshot Schedule Details

On the Data Protection > Schedules page, you can view the following information in the list of snapshot schedules.

ID: System-generated ID for the schedule.
Type: Indicates the type of schedule. Snapshot is currently the only type supported.
Name: The name given to the schedule when it was created. Snapshot schedule names can be up to 223 characters in length and contain a-z, 0-9, and dash (-) characters.
Frequency: The frequency at which the schedule is run. The frequency can be set in hours and minutes, weeks, or months.
Recurring: Indicates whether the schedule is to run only once or at regular intervals.
Manually Paused: Indicates whether the schedule has been manually paused.
Volume IDs: Displays the IDs of the volumes the schedule will use when the schedule is run.
Last Run: Displays the last time the schedule was run.
Last Run Status: Displays the outcome of the last schedule execution. Possible values: Success or Failure.
Creating a Snapshot Schedule

You can schedule a snapshot of a volume or volumes to occur automatically at specified intervals. When you configure a snapshot schedule, you can choose from time intervals based on days of the week or days of the month. You can also create a recurring schedule and specify the days, hours, and minutes before the next snapshot occurs.

If you schedule a snapshot to run at a time that is not divisible by 5 minutes, the snapshot runs at the next time that is divisible by 5 minutes. For example, if you schedule a snapshot to run at 12:42:00 UTC, it runs at 12:45:00 UTC. You cannot schedule a snapshot to run at intervals of less than 5 minutes.
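The 5-minute rule above amounts to rounding the requested start time up to the next 300-second boundary, as this small illustration shows:

```python
FIVE_MINUTES = 300  # seconds

def next_run_seconds(requested_seconds):
    """Round a requested start time (seconds since midnight UTC) up to the
    next 5-minute boundary, matching the scheduler behavior described above."""
    return -(-requested_seconds // FIVE_MINUTES) * FIVE_MINUTES  # ceiling

# A snapshot requested for 12:42:00 UTC runs at 12:45:00 UTC.
assert next_run_seconds(12 * 3600 + 42 * 60) == 12 * 3600 + 45 * 60
```

Times already on a boundary are left unchanged, so a 12:45:00 request runs at 12:45:00.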

1. Go to Data Protection > Schedules.
2. Click Create Schedule.
3. In the Volume IDs CSV field, enter a single volume ID or a comma-separated list of volume IDs to include in the snapshot operation.
4. Enter a New Schedule Name.
5. To schedule the snapshot to run on certain days of the week:
   a. Click Schedule Type and select Days of Week.
   b. Click which days of the week to perform the snapshot.
   c. Set a Time of Day for the schedule to run.
6. To schedule the snapshot to run on certain days of the month:
   a. Click Schedule Type and select Days of Month.
   b. Use the calendar to choose which days of the month to perform the snapshot.
   c. Set a Time of Day for the schedule to run.
7. To schedule the snapshot to run at a specific time interval:
   a. Click Schedule Type and select Time Interval.
   b. Enter the number of days, hours, and minutes for the interval between snapshots.
8. (Optional) Select Recurring Schedule to repeat the snapshot schedule indefinitely.
9. (Optional) Enter a name for the new snapshot in the New Snapshot Name field. If you leave the field blank, the system uses the time and date of the snapshot's creation as the name.
10. (Optional) Select the Include Snapshots in Replication When Paired check box to ensure that the snapshots are captured in replication when the parent volume is paired.
11. To choose a Snapshot Retention option for the snapshot, do one of the following:
    Select Keep Forever to retain the snapshot on the system indefinitely.
    Select Set Retention Period and use the date spin boxes to choose a length of time for the system to retain the snapshot.
12. Click Create Schedule.

Editing a Snapshot Schedule

You can modify existing snapshot schedules. After modification, the next time the schedule runs it uses the updated attributes. Any snapshots created by the original schedule remain on the storage system.

1. Go to Data Protection > Schedules.
2. Click the Actions ( ) button for the schedule you want to change.
3. In the resulting menu, click Edit ( ).
4. In the Volume IDs CSV field, modify the single volume ID or comma-separated list of volume IDs currently included in the snapshot operation.
5. To pause or resume the schedule:
   To pause an active schedule, select Yes from the Manually Pause Schedule list.
   To resume a paused schedule, select No from the Manually Pause Schedule list.
6. If desired, enter a different name for the schedule in the New Schedule Name field.

7. To change the schedule to run on different days of the week:
   a. Choose Days of Week from the Schedule Type list.
   b. Choose which days of the week to perform the snapshot.
   c. Choose a Time of Day for the schedule to run.
8. To change the schedule to run on different days of the month:
   a. Choose Days of Month from the Schedule Type list.
   b. Use the calendar to choose which days of the month to perform the snapshot.
   c. Choose a Time of Day for the schedule to run.
9. (Optional) Select Recurring Schedule to repeat the snapshot schedule indefinitely.
10. (Optional) Enter or modify the name for the new snapshot in the New Snapshot Name field. If you leave the field blank, the system uses the time and date of the snapshot's creation as the name.
11. (Optional) Select the Include Snapshots in Replication When Paired check box to ensure that the snapshots are captured in replication when the parent volume is paired.
12. To choose a different Snapshot Retention option for the snapshot, do one of the following:
    Select Keep Forever to retain the snapshot on the system indefinitely.
    Select Set Retention Period and use the date spin boxes to choose a length of time for the system to retain the snapshot.
13. Click Save Changes.

Deleting a Snapshot Schedule

You can delete a snapshot schedule. Once you delete the schedule, it does not run any future scheduled snapshots. Any snapshots that were created by the schedule remain on the storage system.

1. Go to Data Protection > Schedules.
2. Click the Actions ( ) button for the schedule you want to delete.
3. In the resulting menu, click Delete ( ).
4. Confirm the action.

Copying a Snapshot Schedule

You can copy a schedule and maintain its current attributes.

1. Go to Data Protection > Schedules.
2. Click the Actions ( ) button for the schedule you want to copy.
3. In the resulting menu, click Make a Copy ( ). The Create Schedule dialog appears, populated with the current attributes of the schedule.
4. (Optional) Enter a name and updated attributes for the new schedule.
5. Click Create Schedule.
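Schedules can also be created through the API. The builder below is a sketch of a Time Interval schedule request; the `CreateSchedule` method name and the `scheduleType`/`attributes`/`scheduleInfo` layout are assumptions modeled on the Element scheduling API and should be verified against the API reference for your release.

```python
def build_time_interval_schedule(schedule_name, volume_ids, days=0, hours=0,
                                 minutes=5, recurring=True,
                                 snapshot_name=None, retention=None):
    """Build a CreateSchedule request for a recurring time-interval snapshot
    schedule. The 5-minute minimum mirrors the UI restriction above."""
    if days == 0 and hours == 0 and minutes < 5:
        raise ValueError("intervals of less than 5 minutes are not allowed")
    schedule_info = {"volumeIDs": volume_ids}
    if snapshot_name:
        schedule_info["name"] = snapshot_name
    if retention:
        schedule_info["retention"] = retention  # "HH:mm:ss"
    return {
        "method": "CreateSchedule",
        "params": {
            "scheduleName": schedule_name,
            "scheduleType": "Snapshot",
            "attributes": {"frequency": "Time Interval"},
            "hours": days * 24 + hours,   # interval expressed in hours+minutes
            "minutes": minutes,
            "recurring": recurring,
            "scheduleInfo": schedule_info,
        },
        "id": 1,
    }
```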

Cluster Pairing

You can use real-time replication (remote replication) functionality to connect (pair) two clusters and enable continuous data protection (CDP). When you pair two clusters, active volumes on one cluster can be continuously replicated to a second cluster to provide data recovery capability. Once the connection between two clusters has been established, you can identify volumes as the source or target of the replication.

The prerequisites for cluster pairing are:
Cluster admin privileges to one or both clusters being paired.
All node IPs on paired clusters are routed to each other.
Less than 2000 ms of round-trip latency between clusters.
The Element OS software versions on each cluster must be within one major version of each other. To determine Element OS compatibility, see Node Versioning and Compatibility for information about how Element OS software is versioned.

NOTE: Cluster pairing requires full connectivity between nodes on the management network. Replication requires connectivity between the individual nodes on the storage cluster network.

Choose one of the following methods to start cluster pairing. After pairing is completed, the process can be validated in the Web UI.

Pairing Clusters Using MVIP: Use this method if you have Cluster Admin access to both clusters. It uses the MVIP of the remote cluster to pair the two clusters.
Pairing Clusters with a Pairing Key: Use this method if you have Cluster Admin access to only one of the clusters. It generates a pairing key that can be used on the target cluster to complete the cluster pairing.

NOTE: The cluster pairing key contains a version of the MVIP, user name, password, and database information to permit volume connections for remote replication. Treat this key securely, and do not store it in a way that would allow accidental or unsecured access to the user name or password.
Real-Time Replication

The following diagram shows Real-Time Replication (remote replication) for cluster pairing.

(Diagram: Real-Time Replication for cluster pairing)

Multiple Cluster Pairing

You can pair one cluster with up to four other clusters for replicating volumes. You can also pair clusters within the cluster group with each other.

The following diagram shows a simple pairing configuration of five SolidFire clusters. Clusters B, C, D, and E are all paired with SF Cluster A. However, the other clusters within the configuration can also be paired; for example, Cluster B could also pair with Clusters C, D, and E. The cluster pairing limit is reached when each cluster is paired with four other clusters.

Replication Configuration Information

Use the information in this section to evaluate your system network setup and communications for using real-time replication.

Node Port Requirements

All node IP addresses on both the management and storage networks of paired clusters must route to each other. The following ports are used in a paired cluster configuration:

ICMP (Type 8 Echo, Type 0 Echo reply): Cluster-to-cluster latency.
TCP 2181 (HTTP): Remote replication cluster management communications.
TCP (RPC): Node-to-node data communications.
TCP 442 (HTTPS): Node access on each cluster.
TCP 443 (HTTPS): Remote replication cluster communications. All node IPs, MVIPs, and SVIPs.

MTU Defaults and Recommendations

The MTU of all paired nodes must be the same and must be supported end-to-end between clusters.
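A quick way to spot-check the TCP ports listed above from a host on one cluster's network is a plain socket connect. The addresses you pass in are your own node and virtual IPs; note that ICMP reachability and the RPC data ports are not covered by this TCP-only sketch.

```python
import socket

# TCP ports from the paired-cluster table above.
PAIRING_TCP_PORTS = (2181, 442, 443)


def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0


def check_remote_cluster(node_ips):
    """Report which pairing ports answer on each remote node or virtual IP."""
    return {ip: {port: tcp_port_open(ip, port) for port in PAIRING_TCP_PORTS}
            for ip in node_ips}
```

Running `check_remote_cluster(["192.168.1.10", "192.168.1.11"])` (placeholder addresses) returns a per-IP, per-port reachability map.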

Cluster Setup

Software Versions: Two clusters can have different versions of SolidFire Element OS installed and still be able to replicate data. However, the software versions must be within one major version of each other. For example, if SF Cluster A has Element OS version 6.0 installed and SF Cluster B has Element OS version 7.1 installed, the clusters can be paired and can perform data replication. If SF Cluster A has Element OS version 6.0 installed and SF Cluster B has Element OS version 8.2 installed, this presents an incompatibility, and SF Cluster A would need to upgrade to at least Element OS 7.0 to resolve it.

NOTE: WAN accelerator appliances have not been qualified by NetApp for use when replicating data. These appliances can interfere with compression and deduplication if deployed between two clusters that are replicating data. Be sure to fully qualify the effects of any WAN accelerator appliance before you deploy it in a production environment.

Pairing Clusters Using MVIP

You can pair two clusters by using the MVIP of one cluster to establish a connection with the other cluster. Cluster Admin access on both clusters is required to use this method. The Cluster Admin user name and password are used to authenticate cluster access before the clusters can be paired. If the MVIP is not known, or access to one of the clusters is not available, you can pair the clusters by generating a pairing key and using the key to pair the two clusters. For more details, see Pairing Clusters with a Pairing Key.

1. On the local cluster, go to Data Protection > Cluster Pairs.
2. Click Pair Cluster.
3. Click Start Pairing and click Yes to indicate that you have access to the remote cluster.
4. Enter the remote cluster MVIP address.
5. Click Complete pairing on remote cluster. In the Authentication Required window, enter the cluster admin user name and password of the remote cluster.
6. On the remote cluster, go to Data Protection > Cluster Pairs.
7. Click Pair Cluster.
8. Click Complete Pairing.
9. Click the Complete Pairing button.

In the Cluster Pairs window, you can view the status of the pairing on the remote cluster. Navigate back to the local cluster and the Cluster Pairs window to validate the status of the pairing on the local cluster.

(Diagram: Pairing Clusters Using MVIP)

Pairing Clusters with a Pairing Key

You can pair two clusters by using a pairing key to establish a connection with another cluster. A pairing key is generated on a local cluster and then sent to a cluster admin at a remote site to complete the cluster pairing.

1. On the local cluster, go to Data Protection > Cluster Pairs.
2. Click the Pair Cluster button.
3. Click Start Pairing and click No to indicate that you do not have access to the remote cluster.
4. Click Generate Key.

5. Copy the cluster pairing key to your clipboard.
6. Make the pairing key accessible to the Cluster Admin at the remote cluster site.

NOTE: Take security measures when giving access to the encrypted pairing key or sending the key via email. The key contains cluster MVIP information, user name, password, and database information.
Caution: Do not modify any of the characters in the pairing key. The key becomes invalid if it is modified.

7. On the remote cluster, go to Data Protection > Cluster Pairs.
8. Click the Pair Cluster button.
9. Click Complete Pairing and enter the pairing key in the Pairing Key field (pasting is the recommended method).
10. Click the Complete Pairing button.

In the Cluster Pairs window, you can view the status of the pairing on the remote cluster. Navigate back to the local cluster and the Cluster Pairs window to validate the status of the pairing on the local cluster.

(Diagram: Pairing Clusters Using a Cluster Pairing Key)

Validating Paired Clusters

You can validate that two clusters have been successfully paired by checking the cluster pair connection status on each of the two clusters.

1. Go to Data Protection > Cluster Pairs.
2. Verify that the remote cluster is connected.

Deleting a Cluster Pair

You can delete a cluster pair from the Web interface of either of the clusters in the pair.

1. Go to Data Protection > Cluster Pairs.
2. Click the Actions button for the cluster pair.
3. In the resulting menu, click Delete.
4. Confirm the action.
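The pairing-key workflow maps onto two API calls: `StartClusterPairing` on the local cluster (returning the key) and `CompleteClusterPairing` on the remote cluster (consuming it). The builders below are a sketch under that assumption; confirm both method names in the API reference.

```python
def build_start_cluster_pairing():
    """Run on the local cluster; the result is expected to contain the
    clusterPairingKey to hand to the remote cluster admin."""
    return {"method": "StartClusterPairing", "params": {}, "id": 1}


def build_complete_cluster_pairing(pairing_key):
    """Run on the remote cluster. The key must be passed through unmodified;
    changing any character invalidates it."""
    if not pairing_key:
        raise ValueError("pairing key must not be empty")
    return {
        "method": "CompleteClusterPairing",
        "params": {"clusterPairingKey": pairing_key},
        "id": 2,
    }
```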

Volume Pairs

On the Data Protection > Volume Pairs page, you can view the following information for volumes that have been paired or are in the process of being paired. The system displays pairing and progress messages in the Volume Status column of the Volume Pairs window. See Volume Pairing Messages and Volume Pairing Warnings for more descriptions.

ID: System-generated ID for the volume.
Name: The name given to the volume when it was created. Volume names can be up to 223 characters and contain a-z, 0-9, and dash (-).
Account: Name of the account assigned to the volume.
Volume Status: Replication status of the volume.
Snapshot Status: Status of the snapshot volume.
Mode: Indicates the client write replication method. Possible values: Async, Snapshot-Only, Sync.
Direction: Indicates the direction of the volume data:
   Source volume icon ( ): data is being written out to a target.
   Target volume icon ( ): data is being written in from a source.
Async Delay: Length of time since the volume was last synced with the remote cluster. If the volume is not paired, the value is null.
Remote Cluster: Name of the remote cluster on which the volume resides.
Remote Volume ID: Volume ID of the volume on the remote cluster.
Remote Volume Name: Name given to the remote volume when it was created.

Volume Pairing Messages

On the Data Protection > Volume Pairs page, you can view the following messages during the initial pairing process. These messages are displayed in the Replicating Volumes list view. Each entry indicates whether the message can appear on the source volume, the target volume, or both.

Paused Disconnected (Source: Yes, Target: Yes): Source replication or sync RPCs timed out. Connection to the remote cluster has been lost. Check network connections to the cluster.
Resuming Connected* (Source: Yes, Target: Yes): The remote replication sync is now active. Beginning the sync process and waiting for data.
Resuming RR Sync* (Source: Yes, Target: Yes): Making a single helix copy of the volume metadata to the paired cluster.

Resuming Local Sync* (Source: Yes, Target: Yes): Making a double helix copy of the volume metadata to the paired cluster.
Resuming Data Transfer* (Source: Yes, Target: Yes): Data transfer has been resumed.
Active (Source: Yes, Target: Yes): Volumes are paired, data is being sent from the source to the target volume, and the data is in sync.

*This process is driven by the target volume and might not display on the source volume.

Volume Pairing Warnings

On the Data Protection > Volume Pairs page, you can view the following messages after you pair volumes. These messages are displayed in the Replicating Volumes list view.

Paused Misconfigured (Source: Yes, Target: Yes): Waiting for an active source and target. Manual intervention is required to resume replication.
Paused Slow Link (Source: Yes, Target: No): Slow link detected and replication stopped. Replication resumes automatically.
Paused QoS (Source: Yes, Target: No): Target QoS could not sustain incoming IO. Replication resumes automatically.
Paused Volume Size Mismatch (Source: Yes, Target: Yes): Target volume is smaller than the source volume.
Paused Cluster Full (Source: Yes, Target: No): Source replication and bulk data transfer cannot proceed because the target cluster is full.
Paused Manual Remote (Source: Yes, Target: Yes): Remote volume is in manual paused mode. Manual intervention is required to unpause the remote volume before replication resumes.
Paused Manual (Source: Yes, Target: Yes): Local volume has been manually paused. It must be unpaused before replication resumes.
Stopped Misconfigured (Source: Yes, Target: Yes): A permanent configuration error has been detected. The remote volume has been purged or unpaired. No corrective action is possible; a new pairing must be established.

Users

From the Users tab, you can manage cluster administrators for a SolidFire storage system. When you create a new cluster, a primary cluster administrator is created in the process. This administrator has permissions to perform all functions within the storage system. You can create additional cluster admin accounts and use them to manage specific operations within the system. Administrators can also configure Terms of Use, described later in this section.

See the following topics to learn about or perform cluster admin account tasks:
User Types
Creating a Cluster Admin Account
Editing Cluster Admin Permissions
Deleting a Cluster Admin Account
Changing the Cluster Admin Password
Terms of Use

94 Users User Types The following types of administrators can exist in a SolidFire cluster: Primary cluster administrator account: This administrator account is created when the cluster is created. This account is the primary administrative account with the highest level of access to the cluster. This account is analogous to a root user in a Linux system. Two (or more) cluster administrators with administrator access permissions must exist before you can delete the primary cluster administrator account. You can change the password for this administrator account, but NetApp recommends doing so only if necessary. Cluster admin accounts: You can give the cluster admin accounts a limited range of administrative access to perform specific tasks within a cluster. The credentials assigned to each Cluster Admin account are used to authenticate API and Web UI requests within the SolidFire system. NOTE: The primary Cluster Admin account is the only account that can access active nodes in a cluster. Account credentials are not required to access a node that is not yet part of a cluster. Creating a Cluster Admin Account You can create new cluster administrator accounts to manage the storage cluster. You can configure the account permissions to allow or restrict access to specific areas of the storage system. The system grants read-only permissions for any permissions you do not assign to the cluster administrator. Prerequisite LDAP must be configured on the cluster before you can create a cluster administrator account in the LDAP directory. 1. Go to Users > Cluster Admins. 2. Click the Create Cluster Admin button. 3. To create a cluster-wide (non-ldap) cluster administrator account: a. Select the Cluster option in the Select User Type area. b. Enter a username in the Username field. c. Enter a password for the account in the Password field. d. Confirm the password in the Confirm Password field. e. Choose permissions to apply to the account in the Select User Permissions area. 
   f. Select the check box to agree to the SolidFire End User License Agreement.
   g. Click Create Cluster Admin.
4. To create a cluster administrator account in the LDAP directory:
   a. Select the LDAP option in the Select User Type area.
   b. Enter the full distinguished name for the user in the Distinguished Name text box, following the example.
   c. Choose permissions to apply to the account in the Select User Permissions area.
   d. Select the check box to agree to the SolidFire End User License Agreement.
   e. Click Create Cluster Admin.

Editing Cluster Admin Permissions
You can change cluster administrator user privileges for reporting, nodes, drives, volumes, accounts, and cluster-level access. The type of access you give to an administrator enables write access for that level. The system grants the administrator read-only access for the levels that you do not select.

NetApp SolidFire Element OS 10.0 User Guide 94
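Account creation can also be scripted against the Element API's JSON-RPC endpoint (the guide's API Reference covers the AddClusterAdmin method). A minimal sketch of the request body follows; the username, password, and access levels are placeholders, and the request would be POSTed to https://<mvip>/json-rpc/10.0 with existing cluster admin credentials:

```python
import json

# Sketch of an Element API JSON-RPC request body for AddClusterAdmin.
# Values below are illustrative placeholders, not real credentials.
def add_cluster_admin_request(username, password, access, req_id=1):
    return {
        "method": "AddClusterAdmin",
        "params": {
            "username": username,
            "password": password,
            "access": access,        # write-access levels, e.g. ["volumes", "reporting"]
            "acceptEula": True,      # mirrors the EULA check box in the UI
        },
        "id": req_id,
    }

payload = add_cluster_admin_request("ops-admin", "example-password",
                                    ["volumes", "reporting"])
print(json.dumps(payload, indent=2))
```

Levels not listed in `access` are granted read-only, matching the UI behavior described above.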

1. Go to Users > Cluster Admins.
2. Click the Actions button ( ) for the cluster admin you want to edit.
3. In the resulting menu, select Edit ( ).
4. In the User Permissions section of the Edit Cluster Admin dialog, select the desired permission levels.
5. Click Save Changes.

Deleting a Cluster Admin Account
You can remove any cluster administrator user account created by a system administrator. You cannot remove the primary cluster admin account that was created when the cluster was created.

1. Go to Users > Cluster Admins.
2. Click the Actions button ( ) for the cluster admin you wish to delete.
3. In the resulting menu, select Delete.
4. Confirm the action.

Changing the Cluster Admin Password
You can update the cluster administrator credentials through the Element OS Web UI.

NOTE: If you have deployed SolidFire Active IQ with your storage system and change the credentials for the cluster administrator account that Active IQ uses for cluster monitoring, ensure that you also update the collector credentials on the management node to match.

1. Go to Users > Cluster Admins.
2. Click the Actions button ( ) for the cluster admin you wish to edit.
3. In the resulting menu, select Edit ( ).
4. In the Change Password field, enter a new password.
5. Confirm the password.
6. Click Save Changes.

Terms of Use
You can inform users about the Terms of Use for your SolidFire cluster. Cluster administrators can configure these settings.

Enabling Terms of Use
You can enable a Terms of Use banner that appears when a user logs in to the Element OS UI. When the user clicks the banner, a text dialog box appears containing the message you have configured for the cluster. The banner can be dismissed at any time.

Prerequisites
You must have cluster admin privileges to enable Terms of Use functionality.

1. Go to Users > Terms of Use.
2. In the Terms of Use form, enter the text to be displayed in the Terms of Use dialog box.
   NOTE: Do not exceed 4096 characters.
3. Click Enable.

Editing Terms of Use
You can edit the text that a user sees when they select the Terms of Use login banner.

Prerequisites
You must have cluster admin privileges to configure Terms of Use.
Terms of Use is enabled.

1. Go to Users > Terms of Use.
2. In the Terms of Use dialog box, edit the text that you want to appear.
   NOTE: Do not exceed 4096 characters.
3. Click Save Changes.

Disabling Terms of Use
You can disable the Terms of Use banner. The user is then no longer asked to accept the terms of use when using the Element OS UI.

Prerequisites
You must have cluster admin privileges to configure Terms of Use.
Terms of Use is enabled.

1. Go to Users > Terms of Use.
2. Click Disable.
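The banner can also be managed programmatically. A sketch follows, assuming the Element OS 10.0 API exposes the Terms of Use feature through a SetLoginBanner method with `banner` and `enabled` parameters (check the API Reference Guide for your version); the banner text is a placeholder, and the 4096-character limit from the UI is enforced client-side:

```python
# Sketch: building a SetLoginBanner request body and enforcing the UI's
# 4096-character limit before sending. Method and parameter names are an
# assumption based on the Element OS 10.0 API; the text is a placeholder.
MAX_BANNER_CHARS = 4096

def set_login_banner_request(text, enabled=True, req_id=1):
    if len(text) > MAX_BANNER_CHARS:
        raise ValueError("Terms of Use text must not exceed 4096 characters")
    return {
        "method": "SetLoginBanner",
        "params": {"banner": text, "enabled": enabled},
        "id": req_id,
    }

req = set_login_banner_request("Authorized use only.", enabled=True)
```

Setting `enabled` to False corresponds to the Disable button in the UI.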

Cluster

From the Cluster tab you can view and change cluster-wide settings and perform cluster-specific tasks. See the following topics to learn about or perform cluster-related tasks:

Cluster Settings
SNMP
LDAP
Drives
Nodes
Fibre Channel Port Details
Virtual Networks

Cluster Settings
On the Cluster Settings page, you can configure cluster fullness alert settings, encryption at rest, virtual volumes, and NTP. When cluster settings are changed, they affect the entire cluster. See the following topics to learn about or perform cluster-specific tasks:

Cluster Settings Details
Setting Cluster Full Threshold
Enabling and Disabling Encryption for a Cluster
Setting Network Time Protocol
Enabling a Broadcast Client

Cluster Settings Details
On the Cluster > Settings page, you can view the following information.

Cluster Full Settings: Set the level at which a cluster fault warning is generated.
Encryption at Rest: Enable or disable encryption at rest on the cluster.
Virtual Volumes (VVols): See Enabling Virtual Volumes for a complete explanation of virtual volumes.
Network Time Protocol: Broadcast Client: Enable each node to listen for broadcast NTP packets. Server: List of servers used to synchronize clocks over a network.

Setting Cluster Full Threshold
Cluster administrators can change the level at which the system generates a cluster fullness warning. The percentage of used cluster block storage can be calculated using the formula described in Cluster Fullness Overview.

Prerequisites
You must have cluster admin privileges to change settings.

1. Go to Cluster > Settings.
2. Under Cluster Full Settings, enter a percentage in the "Raise a warning alert when _ % capacity remains before Helix could not recover from a node failure" field.
3. Click Save Changes.

Enabling and Disabling Encryption for a Cluster
You can use the Element OS Web UI to enable cluster-wide encryption at rest. This feature is not enabled by default. All drives in SF-series nodes use AES 256-bit encryption at the drive level. Each drive has its own encryption key, which is created when the drive is first initialized.
When you enable the encryption feature, a cluster-wide password is created, and chunks of the password are distributed to all nodes in the cluster. No single node stores the entire password. The password is used to protect all access to the drives and must be supplied for every read and write operation to the drive. Enabling the encryption at rest feature does not affect performance or efficiency on the cluster. Additionally, if an encryption-enabled drive or node is removed from the cluster with the API or Web UI, encryption at rest is disabled on the drives. Once

the drive is removed, the drive can be secure erased by using the SecureEraseDrives API method. If a drive or node is forcibly removed from the cluster, the data remains protected by the cluster-wide password and the drive's individual encryption keys. You should only enable or disable encryption when the cluster is running and in a healthy state.

Prerequisites
You must have cluster admin privileges to change settings.

1. Go to Cluster > Settings.
2. Click Enable Encryption at Rest.
3. (Optional) To disable encryption at rest, click Disable Encryption at Rest.

Setting Network Time Protocol
NTP is used to synchronize clocks over a network. Connection to an internal or external NTP server should be part of the initial cluster setup.

Best Practices: Configure NTP on the cluster to point to a local NTP server. NetApp recommends using the IP address rather than the DNS host name. The default NTP server at cluster creation time is set to us.pool.ntp.org; however, a connection to this site cannot always be made, depending on the physical location of the SolidFire cluster. You can use the Element OS Web UI to enter up to five different NTP servers.

Enabling a Broadcast Client
The broadcast client setting enables each node in the cluster to listen for NTP broadcast packets. An NTP server must be set up to broadcast in order to effectively use this option.

Prerequisites
You must have cluster admin privileges to change settings.

1. Go to Cluster > Settings.
2. Under Network Time Protocol Settings, select Yes for Broadcast Client.
3. In the Server field, enter the desired NTP address.
4. Click Save Changes.
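The settings on this page can also be driven through the API. A sketch of the request bodies follows; the method names (SetClusterFullThreshold, EnableEncryptionAtRest, SetNtpInfo) come from the Element OS API Reference, while the threshold percentage and server addresses are illustrative placeholders:

```python
# Sketch of JSON-RPC request bodies for the cluster-wide settings above.
# Parameter values are placeholders; parameter names follow the Element
# OS API Reference but should be verified for your release.
def rpc(method, params=None, req_id=1):
    return {"method": method, "params": params or {}, "id": req_id}

# Warn when 3% capacity remains before Helix could not recover from a
# node failure (the block-storage "stage 3" threshold).
full_threshold = rpc("SetClusterFullThreshold",
                     {"stage3BlockThresholdPercent": 3})

# Enabling/disabling encryption at rest takes no parameters.
enable_ear = rpc("EnableEncryptionAtRest")

# Up to five NTP servers; IP addresses are preferred over DNS host names.
ntp = rpc("SetNtpInfo", {"servers": ["10.0.0.5", "10.0.0.6"],
                         "broadcastclient": False})
```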

SNMP
On the Cluster > SNMP page, you can configure the Simple Network Management Protocol (SNMP) and view MIB files. See the following topics to learn about or perform SNMP-related tasks:

SNMP Details
Configuring an SNMP Requestor
Configuring an SNMP USM User
Configuring SNMP Traps
Viewing Management Information Base Files

SNMP Details
On the Cluster > SNMP page, you can view the following information.

SNMP MIBs: Displays the MIB files available for you to view or download.
General SNMP Settings: Enables or disables SNMP. Once you enable SNMP, you can choose which version to use. If using version 2, you can add requestors; if using version 3, you can set up USM users.
SNMP Trap Settings: Enables you to identify which traps you want to capture. You can set the host, port, and community string for each trap recipient.

Configuring an SNMP Requestor
When SNMP version 2 is enabled, you can use the options in the Cluster > SNMP > General SNMP Settings section to enable or disable a requestor, and configure requestors to receive authorized SNMP requests.

1. Go to Cluster > SNMP.
2. Under General SNMP Settings, click Yes to enable SNMP.
3. From the Version list, select Version 2.
4. In the Requestors section, enter the Community String and Network information.
5. (Optional) To add another requestor, follow these steps:
   a. Click Add a Requestor.
   b. Enter the Community String and Network information.
   NOTE: By default, the community string is public, and the network is localhost. You can change these default settings.
6. Click Save Changes.

Configuring an SNMP USM User
In the General SNMP Settings section, you can enable or disable SNMP. If you enable SNMP version 3, you need to configure a USM user to receive authorized SNMP requests.

1. Go to Cluster > SNMP.
2. Under General SNMP Settings, click Yes to enable SNMP.
3. From the Version list, select Version 3.
4. In the USM Users section, enter the Name, Password, and Passphrase information.
5. (Optional) To add another USM user, follow these steps:
   a. Click Add a USM User.
   b. Enter a Name, Password, and Passphrase.
6. Click Save Changes.

Configuring SNMP Traps
System administrators can use SNMP traps, also referred to as notifications, to monitor the health of the SolidFire cluster. When traps are enabled, the SolidFire cluster generates traps associated with entries made in the Event Log and Alerts views. To receive SNMP notifications, you need to choose the traps that should be generated and identify the recipients of the trap information. By default, no traps are generated.

1. Go to Cluster > SNMP.
2. In the SNMP Trap Settings section, select one or more types of traps that the system should generate:
   Cluster Fault Traps
   Cluster Resolved Fault Traps
   Cluster Event Traps
3. In the Trap Recipients section, enter the Host, Port, and Community String information for a recipient.
4. (Optional) To add another trap recipient, follow these steps:
   a. Click Add a Trap Recipient.
   b. Enter the Host, Port, and Community String information for the recipient.
5. Click Save Changes.

Viewing Management Information Base Files
You can view and download the management information base (MIB) files used to define each of the managed objects. The SNMP feature supports read-only access to the objects defined in the SolidFire-StorageCluster-MIB. The statistical data provided in the MIB shows system activity for the following:

Cluster statistics
Volume statistics
Volumes by account statistics
Node statistics
Other data such as reports, errors, and system events

The system also supports access to the MIB file containing the upper-level access points (OIDs) to SF-series products.

1. Go to Cluster > SNMP.
2. Under SNMP MIBs, click the MIB file you want to download.
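The same SNMP configuration can be scripted. A sketch follows using the SetSnmpInfo and SetSnmpTrapInfo API methods from the Element OS API Reference; the hosts, community strings, and network values mirror the UI defaults described above and are placeholders:

```python
# Sketch of SNMP configuration request bodies, mirroring the UI steps.
# Field names follow the Element OS API Reference; values are placeholders.
def rpc(method, params, req_id=1):
    return {"method": method, "params": params, "id": req_id}

# Enable SNMP version 2 with one read-only requestor (the UI defaults:
# community "public", network "localhost").
snmp_v2 = rpc("SetSnmpInfo", {
    "enabled": True,
    "snmpV3Enabled": False,
    "networks": [{"community": "public", "access": "ro",
                  "network": "localhost"}],
})

# Select trap types and one trap recipient (host/port/community are
# placeholders; no traps are generated by default).
traps = rpc("SetSnmpTrapInfo", {
    "clusterFaultTrapsEnabled": True,
    "clusterFaultResolvedTrapsEnabled": True,
    "clusterEventTrapsEnabled": False,
    "trapRecipients": [{"host": "192.0.2.10", "port": 162,
                        "community": "public"}],
})
```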

LDAP
On the Cluster > LDAP page, you can set up the Lightweight Directory Access Protocol (LDAP) to enable secure directory-based login functionality to SolidFire storage. Element OS allows you to configure LDAP at the cluster level and to authorize LDAP users and groups. See the following topics to learn about or perform LDAP-related tasks:

LDAP Details
Configuring LDAP
Disabling LDAP

LDAP Details
On the Cluster > LDAP page, you can view and change the following settings.
NOTE: You must enable LDAP to view these settings.

LDAP Authentication Enabled: Once enabled, LDAP can be configured.
LDAP Servers: Address of an LDAP or LDAPS directory server.
Auth Type: Identifies which user authentication method is used. Valid values: DirectBind, SearchAndBind.
Search Bind DN: A fully qualified DN to log in with to perform an LDAP search for the user (needs read access to the LDAP directory).
Search Bind Password: Password used to authenticate access to the LDAP server.
User Search Base DN: The base DN of the tree used to start the user search. The system searches the subtree from the specified location.
User Search Filter: The LDAP filter that the system uses to search the directory for users.
Group Search Type: Controls the default group search filter used. Possible values:
   ActiveDirectory: Nested membership of all of a user's LDAP groups.
   NoGroups: No group support.
   MemberDN: MemberDN-style groups (single level).
Group Search Base DN: The base DN of the tree used to start the group search. The system searches the subtree from the specified location.
Test User Authentication: Once LDAP has been configured, use this to test username and password authentication for the LDAP server.

Configuring LDAP
The Web UI enables an LDAP administrator to configure integration with an existing LDAP server. This provides centralized user management for storage system access.

1. Go to Cluster > LDAP.
2. Click Yes to enable LDAP authentication.
3. Click Add a Server.
4. Enter the Host Name/IP Address.
5. (Optional) Select Use LDAPS Protocol.
6. Enter the required information in General Settings.
7. Click Enable LDAP.
8. Click Test User Authentication if you want to test the server access for a user.
9. (Optional) Click Save Changes to save any new settings.

Disabling LDAP
You can disable LDAP integration using the Element OS Web UI.

1. Go to Cluster > LDAP.
2. Click No.
   Caution: Disabling LDAP erases all configuration settings. Make a note of all settings prior to disabling LDAP.
3. Click Disable LDAP.
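The LDAP Details fields above map directly onto the EnableLdapAuthentication API method, and Test User Authentication onto TestLdapAuthentication. A sketch of the request bodies follows; the server URI, DNs, filter, and credentials are hypothetical examples, not values the guide prescribes:

```python
# Sketch of LDAP configuration via the API. Parameter names follow the
# Element OS API Reference; all values below are illustrative placeholders.
def rpc(method, params, req_id=1):
    return {"method": method, "params": params, "id": req_id}

enable_ldap = rpc("EnableLdapAuthentication", {
    "serverURIs": ["ldaps://ldap.example.com"],          # LDAPS server
    "authType": "SearchAndBind",                          # or "DirectBind"
    "searchBindDN": "cn=svc-ldap,ou=service,dc=example,dc=com",
    "searchBindPassword": "example-password",
    "userSearchBaseDN": "ou=people,dc=example,dc=com",
    "userSearchFilter": "(&(objectClass=person)(uid=%USERNAME%))",
    "groupSearchType": "ActiveDirectory",
})

# Equivalent of the Test User Authentication button.
test_auth = rpc("TestLdapAuthentication",
                {"username": "jdoe", "password": "example-password"})
```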

Drives
Each node contains one or more physical drives that are used to store a portion of the data for the cluster. The cluster uses the capacity and performance of a drive after the drive has been successfully added to a cluster. A SolidFire storage node contains two types of drives:

Volume metadata drives store compressed information that defines each volume, clone, or snapshot within a cluster. The total metadata drive capacity in the system determines the maximum amount of storage that can be provisioned as volumes. The maximum amount of storage that can be provisioned is independent of how much data is actually stored on the cluster's block drives. Volume metadata drives store data redundantly across a cluster using SolidFire Double Helix data protection.
NOTE: Some system event log and error messages refer to volume metadata drives as slice drives.

Block drives store the compressed, de-duplicated data blocks for server application volumes. See Volumes for more information. Block drives make up the majority of the storage capacity of the system. The majority of read requests for data already stored on the SolidFire cluster, as well as requests to write data, occur on the block drives. The total block drive capacity in the system determines the maximum amount of data that can be stored, taking into account the effects of compression, thin provisioning, and de-duplication.

See the following topics to learn about or perform tasks with drives:

Drives Details
Adding Available Drives to a Cluster
Wear Remaining
Removing a Drive
Removing Failed Drives
Secure-Erasing Data
Multi-Drive Slice Service

Drives Details
On the Cluster > Drives page, you can view a list of the active drives in the cluster. You can filter the page by selecting from the Active, Available, Removing, Erasing, and Failed tabs. When you first initialize a cluster, the active drives list is empty.
You can add drives that are unassigned to a cluster and listed in the Available tab after a new SolidFire cluster is created. The following table describes the elements shown in the list of active drives.

Drive ID: Sequential number assigned to the drive.
Node ID: Node number assigned when the node is added to the cluster.
Node Name: Name of the node where the drive resides.
Slot: Slot number where the drive is physically located.
Capacity: GB size of the drive.
Serial: Serial number of the SSD.
Wear Remaining: Wear level indicator.
Type: Drive type, which can be block or metadata.

Adding Available Drives to a Cluster
When you add a node to the cluster or install new drives in an existing node, the drives automatically register as available. You must add the drives to the cluster using either the Web UI or the SolidFire API before they can participate in the cluster. Drives are not displayed in the Available Drives list when the following conditions exist:

Drives are in an Active, Removing, Erasing, or Failed state.
The node that the drive is part of is in a Pending state.

NOTE: Drive sizes must be compatible within a node. For example, if a 2405 node drive needs to be replaced, it must be replaced with a drive compatible with a 2405 node system. A drive from a 4805 or 9605 node cannot be used to replace a drive in a 2405 node. This is true for all node models in the SolidFire family of nodes. The SolidFire system does not recognize an incompatible drive, and it is never made available to the system.

1. Go to Cluster > Drives.
2. Click Available to view the list of available drives.
3. Do one of the following:
   a. To add individual drives, click the Actions button ( ) for the drive you wish to add.
   b. To add multiple drives, select the check box of the drives to add and then click Bulk Actions.
4. Click Add.

Wear Remaining
The Wear Remaining attribute indicates the approximate amount of wear available on the SSD for writing and erasing data. A drive that has consumed 5% of its designed write and erase cycles reports 95% wear remaining.
NOTE: Drive wear status does not refresh automatically. You need to either click the Refresh button ( ) or close and reopen the Active Drives page to refresh the drive wear status.

Removing a Drive
You can remove a drive from the list of active drives or from the failed drives list. This takes the drive offline. Before the system fully removes a drive, it writes the data on the drive to other available drives in the system.
The data migration to other active drives in the system can take a few minutes to an hour, depending on how much capacity is utilized on the cluster and how much active I/O there is on the cluster.

1. Go to Cluster > Drives.
2. Do one of the following:
   To remove individual drives, click the Actions button ( ) for the drive you wish to remove.
   To remove multiple drives, select the check box of each drive you wish to remove and click Bulk Actions.
3. Click Remove.
4. Confirm the action.

NOTE: If there is not enough capacity to remove active drives prior to removing a node, an error message appears when you confirm the drive removal.
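The add and remove workflows above can also be scripted: ListDrives returns drive records with a status field, AddDrives brings available drives into the cluster, and RemoveDrives migrates data off and takes drives offline. A sketch follows; the drive records stand in for a real ListDrives response:

```python
# Sketch of the drive workflow via the Element API. Method names follow
# the API Reference; the drive records are illustrative stand-ins for a
# real ListDrives response.
def rpc(method, params=None, req_id=1):
    return {"method": method, "params": params or {}, "id": req_id}

list_req = rpc("ListDrives")

# Filter a (mocked) ListDrives result for drives in the available state
# before adding them, matching the Available tab in the UI.
drives = [{"driveID": 12, "status": "available", "type": "block"},
          {"driveID": 13, "status": "active", "type": "block"}]
to_add = [{"driveID": d["driveID"]} for d in drives
          if d["status"] == "available"]

add_req = rpc("AddDrives", {"drives": to_add})
remove_req = rpc("RemoveDrives", {"drives": [12]})  # list of drive IDs
```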

Removing Failed Drives
You can remove a failed drive from the failed drive list. The system puts a drive in a failed state if the drive's self-diagnostics tell the node it has failed, or if communication with the drive stops for 5.5 minutes or longer. The Failed drives page displays a list of the failed drives. If a drive has failed, see Replacing an SSD. Drives in the Alerts list show as blockserviceunhealthy when a node is offline. When rebooting the node, if the node and its drives come back online within 5.5 minutes, the drives automatically update and continue as active drives in the cluster.

1. Go to Cluster > Drives.
2. Click Failed to view the list of failed drives.
3. Do one of the following:
   To remove individual drives, click the Actions button ( ) for the drive you wish to remove.
   To remove multiple drives, select the check box of each drive you wish to remove and click Bulk Actions.
4. Click Remove.

Secure-Erasing Data
You can securely remove residual data from drives that are listed in the Available drives list. This process uses a Security Erase Unit command to write a predetermined pattern to the drive and resets the encryption key on the drive. The drive shows a status of Erasing while it is being secure-erased.
NOTE: This process is available only through the API. For more details, see the SecureEraseDrives method in the Element OS API Reference Guide.
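Since secure erase is API-only, a minimal sketch of the SecureEraseDrives request body may be useful. The method operates on drives in the available state; the drive IDs below are placeholders:

```python
# Sketch of a SecureEraseDrives request body (API-only, per the NOTE
# above). The listed drives must be in the "available" state, not active;
# the IDs here are placeholders.
secure_erase = {
    "method": "SecureEraseDrives",
    "params": {"drives": [21, 22]},  # driveIDs of available drives
    "id": 1,
}
```

While the operation runs, the affected drives report the Erasing status described above.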

Multi-Drive Slice Service
You can add a second metadata drive to a SolidFire node by converting the block drive in slot 2 to a slice drive. This is accomplished by enabling the multi-drive slice service (MDSS) feature. Only slot 2 can be converted to a metadata drive, which works in combination with the system slice drive to increase the metadata capacity of the node.
NOTE: Contact SolidFire Support at ng-sf-support@netapp.com to enable the MDSS feature. This feature can only be enabled by support using the Element OS API.

Recovering Multi-Drive Slice Service Drives
You can recover metadata drives by adding them back to the cluster in the event that the system slice drive, the slot 2 slice drive, or both drives fail. The recovery operation can be performed in the Element OS Web UI if MDSS is already enabled on the node. If either or both of the metadata drives in a node experience a failure, the slice service shuts down and data from both drives is synced off. The following scenarios identify the failure types.

System slice drive fails:
Slot 2 is verified and returned to an available state. The system slice drive must be repopulated before the slice service can be brought back online.
Recommended: Replace the system slice drive; when it becomes available, add the system slice drive and the slot 2 drive at the same time.
NOTE: The drive in slot 2 cannot be added by itself as a metadata drive. Both drives must be added back to the node at the same time.

Slot 2 fails:
The system slice drive is verified and returned to an available state.
Recommended: Replace slot 2 with a spare; when slot 2 becomes available, add the system slice drive and the slot 2 drive at the same time.

System slice drive and slot 2 fail:
Recommended: Replace both the system slice drive and slot 2 with spare drives.
When both drives become available, add the system slice drive and the slot 2 drive at the same time.

Order of Operations
1. Replace the failed hardware drive with a spare drive (replace both drives if both have failed).
2. Add the drives back to the cluster when they have been repopulated and are in an available state. See Adding MDSS Drives.

Verify Operations
Slot 0 (or internal) and slot 2 are identified as metadata drives in the Active Drives list.
All slice balancing has completed (no more "moving slices" messages in the event log for 30 minutes).

Removing MDSS Drives
You can remove the multi-drive slice service (MDSS) drives. This procedure applies only if the node has multiple slice drives.

NOTE: If the system slice drive and the slot 2 drive fail, the system shuts down slice services and removes the drives. If there is no failure and you remove the drives, both drives must be removed at the same time.

1. Go to Cluster > Drives.
2. From the Available drives tab, select the check box for the slice drives being removed.
3. Click Bulk Actions.
4. Click Remove.
5. Confirm the action.

Adding MDSS Drives
You can add multi-drive slice service drives back to a node. Getting a slice drive into an available state may require swapping out the failed drive with a new or spare drive.
NOTE: You must add the system slice drive at the same time you add the drive for slot 2. If you try to add the slot 2 slice drive alone, or before you add the system slice drive, the system generates an error.

1. Go to Cluster > Drives.
2. Click Available to view the list of available drives.
3. Select the check box for the slice drives being added.
4. Click Bulk Actions.
5. Click Add.
6. Confirm from the Active Drives tab that the drives have been added.

Nodes
A SolidFire cluster can be made up of storage nodes and Fibre Channel nodes. On the Cluster > Nodes page, you can view individual node details, add pending nodes, and delete nodes from a cluster. See the following topics to learn about and perform node-related tasks:

Storage Nodes
Viewing Node Software Version
Adding a Node to a Cluster
Mixed Node Capacity
Accessing Node Settings
Node Network Settings for 10G and 1G
Node Cluster Settings
System Test Settings
System Utilities
Viewing Node Activity Graph
Removing Nodes from a Cluster
Fibre Channel Nodes
Adding Fibre Channel Nodes to a Cluster
Creating a Cluster with Fibre Channel Nodes
Setting Up Fibre Channel Nodes
Finding Fibre Channel WWPN Addresses
Zoning the Fibre Channel Connections
Removing a Fibre Channel Node

Storage Nodes
On the Cluster > Nodes page, you can view the following information.

Node ID: System-generated ID for the node.
Node Name: The system-generated node name.
Available 4k IOPS: Displays the IOPS configured for the node.
Node Role: Identifies what role the node has in the cluster. This can be Cluster Master, Ensemble Node, or Fibre Channel node.
Node Type: Displays the model type of the node.
Active Drives: Number of active drives in the node.
Management IP: Management IP (MIP) address assigned to the node for 1GbE or 10GbE network admin tasks.
Cluster IP: Cluster IP (CIP) address assigned to the node, used for communication between nodes in the same cluster.
Storage IP: Storage IP (SIP) address assigned to the node, used for iSCSI network discovery and all data network traffic.
Management VLAN ID: The virtual ID for the management local area network.
Storage VLAN ID: The virtual ID for the storage local area network.
Version: Version of SolidFire Element OS software running on each node.
Replication Port: The port used on SolidFire nodes for remote replication.
Service Tag: Unique service tag number assigned to the node.

Viewing Individual Node Details
On the Cluster > Nodes page, you can view details for individual nodes such as service tag and drive details, as well as graphics for utilization and drive statistics.

1. Go to Cluster > Nodes.
2. Click the Actions button ( ) for a node.
3. Click View Details.

Viewing Node Software Version
The version of software running on each node can be viewed in the active Nodes window. Go to Cluster > Nodes. The version of each node is listed in the Version column.
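Version information can also be collected programmatically: the ListActiveNodes API method returns node records whose softwareVersion field corresponds to the Version column above. A sketch follows; the node records stand in for a real response:

```python
# Sketch: reading node software versions through the API instead of the
# UI. The node records below are illustrative stand-ins for a real
# ListActiveNodes response.
list_req = {"method": "ListActiveNodes", "params": {}, "id": 1}

nodes = [{"nodeID": 1, "name": "node-1", "softwareVersion": "10.0.0.281"},
         {"nodeID": 2, "name": "node-2", "softwareVersion": "10.0.0.281"}]

# Build a name -> version map, e.g. to confirm all nodes run the same
# Element OS version before adding new nodes.
versions = {n["name"]: n["softwareVersion"] for n in nodes}
uniform = len(set(versions.values())) == 1
```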

Adding a Node to a Cluster
You can add nodes when a cluster is created, or when more storage is needed. Nodes require initial configuration when they are first powered on. After a node is configured, it appears in the Pending Nodes list and you can add it to a cluster. The software version on each node in a cluster must be compatible. See Node Versioning and Compatibility. SolidFire Fibre Channel nodes are added using the same procedure as SolidFire storage nodes. They can be added when a cluster is created, or added later.

1. Go to Cluster > Nodes.
2. Click Pending to view the list of pending nodes.
3. Do one of the following:
   a. To add individual nodes, click the Actions button ( ) for the node you wish to add.
   b. To add multiple nodes, select the check box of the nodes to add and then click Bulk Actions.
   NOTE: If the node you are adding has a different version of Element OS than the version running on the cluster, the cluster asynchronously updates the node to the version of Element OS running on the cluster master. Once the node is updated, it automatically adds itself to the cluster. During this asynchronous process, the node is in a PendingActive state. See the NetApp SolidFire Element OS API Reference Guide.
4. Click Add.
The node appears in the list of active nodes.

Mixed Node Capacity
Nodes of smaller or larger capacities can be added to an existing cluster. Larger node capacities can be added to a cluster to allow for capacity growth. Larger nodes added to a cluster with smaller nodes must be added in pairs. This allows sufficient space for Double Helix to move the data should one of the larger nodes fail. Smaller node capacities can be added to a larger node cluster to improve performance, and data from smaller node clusters can be migrated to larger nodes. Adding a node with a capacity that differs from the nodes already in a cluster is the same process as adding a new node. For instructions, see Adding a Node to a Cluster.
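The pending-node workflow maps to two API calls: ListPendingNodes returns records with a pendingNodeID, and AddNodes takes those IDs. A sketch follows; the pending-node record is an illustrative stand-in for a real response:

```python
# Sketch of adding pending nodes through the Element API. Method names
# follow the API Reference; the pending-node record is a placeholder.
def rpc(method, params=None, req_id=1):
    return {"method": method, "params": params or {}, "id": req_id}

list_pending = rpc("ListPendingNodes")

# Mocked ListPendingNodes result; each record carries a pendingNodeID.
pending = [{"pendingNodeID": 4, "name": "node-sf4", "mip": "192.0.2.24"}]

add_nodes = rpc("AddNodes",
                {"pendingNodes": [n["pendingNodeID"] for n in pending]})
```

As with the UI, a node running a different Element OS version enters the PendingActive state while the cluster updates it asynchronously.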
Accessing Node Settings
You can configure individual nodes to participate in a cluster, or modify them once they are running in a cluster. You must authenticate as a cluster admin user to configure a node in an active state that is part of a cluster. Do one of the following:

You can enter the node MIP followed by :442 in a browser window. The Node UI displays the authentication dialog so you can enter an admin user name and password, if required, and then displays the node settings in the UI. Example:

Alternatively, you can launch the per-node UI by selecting a link for the node from the Element OS Web UI:
1. Go to Cluster > Nodes.
2. Click the management IP address link for the node you want to configure or modify.

A new browser window opens where settings for this node can be edited.
NOTE: You should configure or modify one node at a time. Before you make modifications to another node, it is a best practice to first ensure that the network settings you specified are having the expected effect, are stable, and are performing well. That way, the worst effect you can expect from entering an incorrect setting is taking a single node offline, and even then you can simply add the drives back to the cluster once the correct information is entered.

Node Network Settings for 10G and 1G
Node network settings can be changed to give the node a new set of network attributes for the 10G and 1G interfaces. The following table describes what can be modified when a node is in the available, pending, and active states. The network settings for a node appear in the Network Settings window once you are logged in to the node; see Network Setting Details for 10G and 1G for a description of each field. When changes have been completed, click Save Changes to apply them.

Network Setting Details for 10G and 1G
The following table identifies the node network interface setting fields.

Method: The method used to configure the interface. This depends on other settings, such as the use of a static IP address, which changes the method to static. Valid methods are:
   loopback: Used to define the IPv4 loopback interface.
   manual: Used to define interfaces for which no configuration is done by default.
   dhcp: May be used to obtain an IP address via DHCP.
   static: Used to define Ethernet interfaces with statically allocated IPv4 addresses.
IP Address: IP address for the 10G or 1G network.
Subnet Mask: Address subdivisions of the IP network.
Gateway Address: Router network address used to send packets out of the local network.
MTU: Largest packet size that a network protocol can transmit. Must be greater than or equal to 1500 bytes.
DNS Servers: Network interface used for cluster communication.
Search Domains: Search for additional MAC addresses available to the system.
Bond Mode: Can be one of the following: ActivePassive (default), ALB, LACP. If LACP is selected as the Bond Mode, the following selections are available:
   LACP Slow: Packets are transmitted at 30-second intervals.
   LACP Fast: Packets are transmitted at 1-second intervals.
LACP Status: Can be one of the following: UpAndRunning, Down, Up.
Virtual Network Tag: This is the primary network tag. All nodes in a cluster have the same VLAN tag.
Routes: Static routes to specific hosts or networks via the associated interface the routes are configured to use.

Node Cluster Settings
The cluster settings for a node display in the Cluster Settings tab once you are logged in to the node. See the description of this information in Node Cluster Settings Details.

Node Cluster Settings Details

The following table identifies the cluster settings fields.

Role: Role the node has in the cluster. Can be one of the following:
  Storage: Storage or Fibre Channel node.
  Management: Node is a management node.
Hostname: Name of the node.
Cluster: Name of the cluster.
Cluster Membership: State of the node. Can be one of the following:
  Available: The node has no associated cluster name and is not yet part of a cluster.
  Pending: The node is configured and can be added to a designated cluster. Authentication is not required to access the node.
  PendingActive: The system is in the process of installing compatible software on the node. When complete, the node moves to the Active state.
  Active: The node is participating in a cluster. Authentication is required to modify the node.
Version: Version of Element OS running on the node.
Ensemble: Nodes that are part of the database ensemble.
Node ID: ID assigned when a node is added to the cluster.

Cluster Interface: Network interface used for cluster communication.
Management Interface: Management network interface. This defaults to Bond1G but can also use Bond10G.
Storage Interface: Storage network interface using Bond10G.

System Test Settings

After you make changes to the network settings and commit them to the network configuration, you can test them. The tests ensure that the node is stable and can be brought online without issues. In the System Test window, click the button for the test you want to run.
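The per-node settings and tests above can also be driven through the per-node API. The sketch below only builds JSON-RPC request bodies for the documented GetConfig and TestPing methods; the endpoint URL, port 442, the API version path, and the host addresses are assumptions drawn from the Element OS API Reference Guide, and nothing is sent to a node here.

```python
import json

# Per-node API endpoint (assumption: node management IP on port 442, API version 10.0).
NODE_ENDPOINT = "https://{node_mip}:442/json-rpc/10.0"

def rpc_body(method, params=None):
    """Build a JSON-RPC request body for the Element per-node API."""
    return {"method": method, "params": params or {}}

# Read the node's network and cluster configuration in one call.
get_config = rpc_body("GetConfig")

# Run the Test Ping system test against two hypothetical hosts
# (comma-separated list, per the API Reference Guide).
test_ping = rpc_body("TestPing", {"hosts": "192.168.0.1,192.168.0.2"})

print(json.dumps(test_ping))
```

A real invocation would POST these bodies to NODE_ENDPOINT with cluster admin credentials; the buttons in the System Test window issue equivalent requests.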

The following table describes the system tests.

Run All Tests: Starts and runs all test operations. NOTE: This can be time consuming and should be done only at the direction of NetApp SolidFire Support.
Test Connected Ensemble: Tests and verifies connectivity to a database ensemble. The test uses the ensemble for the cluster the node is associated with.
Show/hide options: Displays available options for a test. Not all tests have options that can be set.
Test status: Displays the status of each test that was run.
Show/hide details: Displays or hides the details of each test that was run.
Details: Displays the JSON details of the test that was run.
Test Connect Mvip: Pings the specified MVIP and then executes a simple API call to the MVIP to verify connectivity.
Test Connect Svip: Pings the specified SVIP using Internet Control Message Protocol (ICMP) packets that match the Maximum Transmission Unit (MTU) size set on the network adapter. It then connects to the SVIP as an iSCSI initiator.
Test Hardware Config: Tests that all hardware configurations are correct, validates that firmware versions are correct, and confirms that all drives are installed and running properly. This is the same as factory testing. NOTE: This test is extremely resource intensive and should be run only if requested by support.
Test Local Connectivity: Tests connectivity to all of the other nodes in the cluster by pinging the cluster IP (CIP) on each node. This test is displayed on a node only if the node is part of an active cluster.
Test Locate Cluster: Locates the cluster on the network by its name.
Test Network Config: Verifies that the configured network settings match the network settings being used on the system. NOTE: This test is not intended to detect hardware failures when a node is actively participating in a cluster. Hardware failures are automatically reported by the system if they occur.
Test Ping: Pings a specified list of hosts or, if none is specified, dynamically builds a list of all registered nodes in the cluster and pings each for simple connectivity.
Test Remote Connectivity: Tests connectivity to the nodes of all remotely paired clusters by pinging the cluster IP (CIP) on each node. This test is displayed on a node only if the node is part of an active cluster.

System Utilities

You can use the System Utilities window to reset configuration settings for drives, restart network or cluster services, and create or delete a support bundle. Each utility is described in the following table.

Create Support Bundle: Creates a support bundle under the node's /tmp/bundles directory. Possible values:
  Bundle Name: Name for the bundle.
  Extra Args: Should be used only at the request of NetApp SolidFire Support.
  Timeout Sec: Time allowed, in seconds, for the process to run. The default timeout can be shortened or extended.
Delete All Support Bundles: Deletes any current support bundles residing on the node.
Reset Drives: Initializes drives and removes all data currently residing on them. The drives can then be reused in an existing node or used in an upgraded node.
Restart Networking: Restarts all networking services on a node. Use this operation with caution, as it causes a temporary loss of network connectivity.
Restart Services: Restarts Element OS services on a node. NOTE: This action causes a temporary node service interruption. Restarting the node services should be done at the direction of NetApp SolidFire Support.

Viewing Node Activity Graph

You can view performance activity for each node in a graphical format. This information provides real-time statistics for CPU and for read/write IOPS for each drive in the node. The Utilization graph is updated every five seconds; the Drive Statistics graph updates every ten seconds.

1. Go to Cluster > Nodes.
2. Click the Actions button ( ) for the node you wish to view.
3. Click View Details.

NOTE: You can see specific points in time on the line and bar graphs by hovering your cursor over the line or bar.

Removing Nodes from a Cluster

You can remove nodes from a cluster without service interruption when their storage is no longer needed or they require maintenance. You must remove all drives in the node before removing the node from the cluster. You can do this using the Web UI or the API. For more information about removing a drive using the Web UI, see Removing a Drive.
At least two Fibre Channel nodes are required for Fibre Channel connectivity in a SolidFire cluster. If only one Fibre Channel node is connected, the system triggers alerts in the Event Log until you add a second Fibre Channel node, even though all Fibre Channel network traffic continues to operate with only one node. See Removing a Fibre Channel Node for more information about removing Fibre Channel nodes.

Prerequisites

Remove the drives in the node from the cluster before proceeding. For more information about removing a drive using the Web UI, see Removing a Drive.

1. Go to Cluster > Nodes.

2. Do one of the following:
  a. To remove individual nodes, click the Actions button ( ) for the node you wish to remove.
  b. To remove multiple nodes, select the check box for each node you wish to remove and click Bulk Actions.
3. Click Remove.

Any nodes removed from a cluster appear in the list of Pending nodes. NOTE: If there are drives still registered to the node, the system displays an error message when you confirm removal.
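For scripted maintenance, the same drive-first removal order can be followed with the cluster API (RemoveDrives, then RemoveNodes). This is a sketch only: the method and parameter names follow the Element OS API Reference Guide, the drive and node IDs are hypothetical, and the request bodies are constructed but not sent.

```python
import json

# Cluster-wide API endpoint (assumption: cluster MVIP, API version 10.0).
CLUSTER_ENDPOINT = "https://{mvip}/json-rpc/10.0"

def rpc_body(method, params):
    """Build a JSON-RPC request body for the Element cluster API."""
    return {"method": method, "params": params}

# Step 1: remove the node's drives from the cluster (hypothetical drive IDs).
remove_drives = rpc_body("RemoveDrives", {"drives": [7, 8, 9, 10]})

# Step 2: after the drives are removed, remove the node itself
# (hypothetical node ID); it then appears in the Pending list.
remove_nodes = rpc_body("RemoveNodes", {"nodes": [4]})

for body in (remove_drives, remove_nodes):
    print(json.dumps(body))
```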

Fibre Channel Nodes

Fibre Channel nodes provide Fibre Channel connectivity to a cluster. Fibre Channel nodes are added in pairs and operate in active-active mode (all Fibre Channel nodes actively process traffic for the cluster). Clusters running Element OS version 9.0 and later support up to four Fibre Channel nodes; clusters running previous versions support a maximum of two Fibre Channel nodes.

Adding Fibre Channel Nodes to a Cluster

You can add Fibre Channel nodes to a cluster using the same process as adding SolidFire storage nodes. The system discovers Fibre Channel nodes when you create a new SolidFire cluster. The Fibre Channel nodes are listed in the Cluster > Nodes > Pending area of the Web UI. See Adding a Node to a Cluster for more details.

Creating a Cluster with Fibre Channel Nodes

You can create a cluster with Fibre Channel nodes using the same process as creating a SolidFire storage cluster. See Creating a New Cluster for the steps required to create a new cluster.

Setting Up Fibre Channel Nodes

SolidFire Fibre Channel nodes enable you to connect the SolidFire cluster to a Fibre Channel network fabric. The setup process is similar to setting up a SolidFire storage node. Use the following general procedure when setting up Fibre Channel nodes:

Prerequisites

Ensure that at least two SolidFire Fibre Channel nodes are cabled to Fibre Channel switches.

1. Configure the Fibre Channel nodes. See SF-series Node Configuration. Fibre Channel nodes use the same configuration method as storage nodes.
2. Create a cluster. See Creating a New Cluster. You add Fibre Channel nodes to a SolidFire storage cluster in the same manner as SolidFire storage nodes.
3. Zone the Fibre Channel ports. See Finding Fibre Channel WWPN Addresses. Fibre Channel nodes are zoned on the Fibre Channel switch.
This requires that the WWPN port addresses are assigned properly.

4. Create volume access groups. SolidFire volumes in a volume access group are used to communicate between SolidFire storage and the Fibre Channel fabric. The process of creating a Fibre Channel volume access group is very similar to the process of creating an iSCSI volume access group.

When you have finished setting up the Fibre Channel nodes, you can find them on the Nodes page of the Element OS Web UI, identified as Node Type FC0025.

Finding Fibre Channel WWPN Addresses

Each Fibre Channel node has four Fibre Channel ports. Each port has a World Wide Port Name (WWPN) and is assigned to a World Wide Node Name (WWNN). WWPNs are registered in the SolidFire system when you create a new cluster with Fibre Channel nodes. The WWPNs are then used to zone the ports on the Fibre Channel switches. To find the Fibre Channel WWPN addresses:

1. Go to Cluster > FC Ports.
2. View the WWPN addresses in the WWPN column.

Zoning the Fibre Channel Connections

When you create a new SolidFire cluster with Fibre Channel nodes and SolidFire storage nodes, the WWPN addresses for the Fibre Channel nodes are available in the Web UI. You can use the WWPN addresses to zone the Fibre Channel switch. To learn where to find the WWPN addresses in the SolidFire Web UI, see Finding Fibre Channel WWPN Addresses. Refer to the Configuring SolidFire Fibre Channel guide for further information.

Removing a Fibre Channel Node

You can remove Fibre Channel nodes from a cluster when they require maintenance. You can also add new Fibre Channel nodes to a SolidFire cluster. See the following guidelines for each of these scenarios.

Replacing a Fibre Channel node with a new Fibre Channel node:
  Remove the existing Fibre Channel node from the cluster. See the following procedure.
  Configure the new Fibre Channel node. Once configured, it is visible in the Pending list and you can add it to the cluster.
New WWPNs are generated, and you must zone the new ports on the Fibre Channel switch.

Repairing and adding back an existing Fibre Channel node:
  First, remove the Fibre Channel node from the cluster. See the following procedure.
  When the node is ready to be added again, it is visible in the Pending list.
  Add the node back to the cluster. No new WWPNs are generated, so no re-zoning is needed.

NOTE: At least two Fibre Channel nodes are required in a cluster. In the event of a failure, the system operates normally with one Fibre Channel node, but it displays an alert in the Event Log until you replace the failed Fibre Channel node.

1. Go to Cluster > Nodes.
2. To remove a single Fibre Channel node:
  a. Click the Actions button ( ) for the Fibre Channel node you wish to remove.
  b. In the resulting list, click Remove.
  c. Confirm the action.

3. To remove multiple Fibre Channel nodes:
  a. Select the check boxes for the Fibre Channel nodes you wish to remove.
  b. Click Bulk Actions.
  c. In the resulting list, click Remove.
  d. Confirm the action.

After you have removed a node from the cluster, the system adds it to the Pending list. You can add any node in the Pending list back to the cluster it was removed from.

Fibre Channel Port Details

On the Cluster > FC Ports page, you can view details of each Fibre Channel port, such as its status, name, and port address. For information about Fibre Channel nodes, see Fibre Channel Nodes.
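Returning a node from the Pending list to a cluster can likewise be scripted: ListPendingNodes reports the nodes waiting to be added, and AddNodes adds them. A minimal sketch, assuming the Element API method names from the API Reference Guide and a hypothetical pending node ID; the bodies are only constructed, not sent.

```python
import json

def rpc_body(method, params=None):
    """Build a JSON-RPC request body for the Element cluster API."""
    return {"method": method, "params": params or {}}

# Discover nodes currently waiting in the Pending list.
list_pending = rpc_body("ListPendingNodes")

# Add a pending node (hypothetical pendingNodeID 12) back to the cluster.
add_nodes = rpc_body("AddNodes", {"pendingNodes": [12]})

print(json.dumps(add_nodes))
```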

Virtual Networks

Virtual networking in SolidFire storage enables traffic between multiple clients on separate logical networks to be connected to one cluster. Connections to the cluster are segregated in the networking stack through the use of VLAN tagging. See the NetApp SolidFire Element OS API Reference Guide for the API methods used to configure virtual networks.

Caution: Using VLANs with Fibre Channel nodes as cluster members is not supported.

See the following topics to learn about or perform VLAN-related tasks: Viewing Virtual Networks, Creating a Virtual Network, Enabling Virtual Routing and Forwarding (VRF), Editing a Virtual Network, Editing VRF VLANs, Deleting a Virtual Network.

Viewing Virtual Networks

On the Cluster > Network page, you can view the following information about virtual networks:

ID: Unique ID of the VLAN network, which is assigned by the SolidFire system.
Name: Unique user-assigned name for the VLAN network.
VLAN Tag: VLAN tag assigned when the virtual network was created.
SVIP: Storage virtual IP address assigned to the virtual network.
Netmask: Netmask for this virtual network.
Gateway: Unique IP address of a virtual network gateway. VRF must be enabled.
VRF Enabled: Shows whether virtual routing and forwarding is enabled.
IPs Used: The range of virtual network IP addresses used for the virtual network.

Creating a Virtual Network

You can add a new virtual network to a cluster configuration to enable a multi-tenant environment to connect to a SolidFire cluster. When a virtual network is added, an interface is created for each node, and each interface requires a virtual network IP address. The number of IP addresses specified when creating a new virtual network must be equal to or greater than the number of nodes in the cluster. Virtual network addresses are bulk provisioned and assigned to individual nodes automatically; they do not need to be assigned to nodes manually.
Prerequisites

Identify the block of IP addresses that will be assigned to the virtual networks on the SolidFire nodes.
Identify a storage network IP (SVIP) address that will be used as an endpoint for all SolidFire storage traffic.

Caution: Consider the following criteria for this configuration:
  VLANs that are not VRF-enabled require initiators to be in the same subnet as the SVIP.
  VLANs that are VRF-enabled do not require initiators to be in the same subnet as the SVIP, and routing is supported.
  The default SVIP does not require initiators to be in the same subnet as the SVIP, and routing is supported.

1. Go to Cluster > Network.
2. Click Create VLAN.
3. In the Create a New VLAN dialog, enter the following:
  a. VLAN Name
  b. VLAN Tag
  c. SVIP
  d. Netmask
  e. (Optional) Description
4. Enter the starting IP address for the range of IP addresses in IP Address Blocks.
5. Enter the size of the IP range as the number of IP addresses to include in the block.
6. Click Add a Block to add a non-contiguous block of IP addresses for this VLAN.
7. Click Create VLAN.

Enabling Virtual Routing and Forwarding (VRF)

You can enable virtual routing and forwarding (VRF), which allows multiple instances of a routing table to exist in a router and work simultaneously. This functionality is available for storage networks only. VRF can be enabled only at the time a VLAN is created.

NOTE: If you want to switch back to non-VRF, you must delete and recreate the VLAN.

1. Go to Cluster > Network.
2. To enable VRF on a new VLAN, select Create VLAN.
  a. Enter the relevant information for the new VRF/VLAN. See Creating a Virtual Network.
  b. Select the Enable VRF check box.
  c. (Optional) Enter a gateway.
3. Click Create VLAN.

Editing a Virtual Network

You can change VLAN attributes such as the VLAN name, netmask, and size of the IP address blocks. The VLAN Tag and SVIP cannot be modified for a VLAN. The gateway attribute is not a valid parameter for non-VRF VLANs.

NOTE: If any iSCSI, remote replication, or other network sessions exist, the modification may fail.

1. Go to Cluster > Network.
2. Click the Actions button ( ) for the VLAN you want to edit.
3. Click Edit ( ).
4. Enter the new attributes for the VLAN in the Edit VLAN dialog.
5. Click Add a Block to add a non-contiguous block of IP addresses for the virtual network.
6. Click Save Changes.

Editing VRF VLANs

You can change VRF VLAN attributes such as the VLAN name, netmask, gateway, and IP address blocks. To convert a VRF VLAN to a non-VRF VLAN, you must delete the virtual network and recreate it as a non-VRF VLAN.

1. Go to Cluster > Network.
2. Click the Actions button ( ) for the VLAN you wish to edit.
3. Click Edit ( ).
4. Enter the new attributes for the VRF VLAN in the Edit VLAN dialog.
5. Click Save Changes.

Deleting a Virtual Network

You can remove a virtual network object. See Editing a Virtual Network if you want to add its address blocks to another virtual network.

1. Go to Cluster > Network.
2. Click the Actions button ( ) for the VLAN you want to delete.
3. Click Delete.
4. Confirm the message.
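The VLAN tasks in this chapter map to the API methods mentioned at its start. The sketch below builds an AddVirtualNetwork request mirroring the Create VLAN dialog fields and a RemoveVirtualNetwork request for deletion; the names, addresses, and tag values are hypothetical, the parameter names follow the Element OS API Reference Guide, and the requests are only constructed, not sent.

```python
import json

def rpc_body(method, params):
    """Build a JSON-RPC request body for the Element cluster API."""
    return {"method": method, "params": params}

# Create a VLAN: name, tag, SVIP, netmask, and one starting-IP/size address
# block. The total address count must be >= the number of nodes in the cluster.
create_vlan = rpc_body("AddVirtualNetwork", {
    "name": "tenant-a",
    "virtualNetworkTag": 2010,
    "svip": "10.1.2.200",
    "netmask": "255.255.255.0",
    "addressBlocks": [{"start": "10.1.2.100", "size": 5}],
})

# Delete the VLAN by its tag.
delete_vlan = rpc_body("RemoveVirtualNetwork", {"virtualNetworkTag": 2010})

print(json.dumps(create_vlan))
```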

Virtual Volumes

From the VVols tab, you can view information and perform tasks for virtual volumes and their associated storage containers, protocol endpoints, bindings, and hosts. The tab is visible in the Element OS user interface only after VVols functionality has been enabled.

See the following topics to learn about or perform VVols-related tasks: Virtual Volumes Overview, Virtual Volume Object Types, Configuring vSphere for VVols, Enabling Virtual Volumes, Viewing Virtual Volume Details, Virtual Volume Details, Individual Virtual Volume Details, Deleting a Virtual Volume, Storage Containers, Creating a Storage Container, Viewing Storage Container Details, Storage Container Details, Individual Storage Container Details, Editing a Storage Container, Deleting a Storage Container, Protocol Endpoints, Bindings, Hosts.

Virtual Volumes Overview

Beginning with Element OS version 9.0, SolidFire supports VMware vSphere Virtual Volumes (VVols) functionality, which helps you manage abstract virtual volume objects that are created using VMware vSphere and stored on ESXi-host clusters. VVols functionality allows every virtual machine disk (VMDK) to be hosted on its own SolidFire volume. Instead of hosting multiple VMDKs in a single large datastore, VVols offer more refined functionality, including quality of service (QoS) and backup, at the VMDK level rather than the datastore level.

VVols functionality can be enabled through the Element OS user interface. Once enabled, a VVols tab appears in the user interface that offers VVols-related monitoring and limited management options. Additionally, a storage-side software component known as the VASA Provider acts as a storage awareness service for vSphere. Most VVols commands, such as VVol creation, cloning, and editing, are initiated by a vCenter Server or ESXi host and translated by the VASA Provider into SolidFire API calls for the SolidFire storage system. Commands to create, delete, and manage storage containers and to delete virtual volumes can be initiated through the Element OS user interface. Virtual Volumes functionality has a cluster limit of 8000 virtual volumes.

Virtual Volume Object Types

With VVols, virtual machines (VMs) are no longer encapsulated in folders on a virtual machine file system (VMFS). Instead, each virtual machine is built across at least two distinct virtual volume types (config and data). The following virtual volume object types each map to a unique and specific virtual machine file.

Config: A config virtual volume contains all of the configuration information for a VM, including log files and the VMX file for the VM. There is always one 4GB config virtual volume per VM, formatted with VMFS.
Data: A data virtual volume contains all data for a VMDK and varies in size based on VMDK capacity. There is a 1:1 mapping between VM disks and data virtual volumes.
Swap: A swap virtual volume contains the swap file space for a VM and is created and bound only at runtime for the VM. It is destroyed when the VM is powered off. There is always one swap virtual volume per powered-on VM.
Memory: This virtual volume is created whenever a snapshot containing the VM memory is created. There is always one memory virtual volume per snapshot, and its size depends on memory size.
Other: A vSphere solution-specific object.

Configuring vSphere for VVols

The majority of the configuration necessary for using Virtual Volumes functionality with SolidFire storage systems is done in vSphere. See the VMware vSphere Virtual Volumes for SolidFire Storage Configuration Guide, available from the NetApp SolidFire Support Portal, for information about registering the VASA Provider, creating and managing virtual volume datastores, and managing storage based on policies.

Enabling Virtual Volumes

You must manually enable vSphere Virtual Volumes (VVols) functionality through the SolidFire Element OS Web UI. The SolidFire system comes with VVols functionality disabled by default; it is not automatically enabled as part of a new installation or upgrade. Enabling the VVols feature is a one-time configuration task.

Prerequisites

The SolidFire cluster must be running Element OS version 9.0 or later.

The SolidFire cluster must be connected to an ESXi 6.0 or later environment that is compatible with VVols.

1. Go to Clusters > Settings.
2. Find the cluster-specific settings for Virtual Volumes.
3. Click Enable Virtual Volumes.

CAUTION: Once enabled, VVols functionality cannot be disabled. Enabling vSphere Virtual Volumes functionality permanently changes the Element OS configuration. You should enable VVols functionality only if your cluster is connected to a VMware ESXi VVols-compatible environment. You can disable the VVols feature and restore the default settings only by returning the cluster to the factory image.

4. Click Yes to confirm the Virtual Volumes configuration change. The VVols tab appears in the Element OS UI.

NOTE: When VVols functionality is enabled, the SolidFire cluster starts the VASA Provider, opens port 8444 for VASA traffic, and creates protocol endpoints that can be discovered by vCenter and all ESXi hosts.

5. Copy the VASA Provider URL from the Virtual Volumes settings in Clusters > Settings. You will use this URL to register the VASA Provider in vCenter.
6. Create a storage container. See Creating a Storage Container. NOTE: You must create at least one storage container so that VMs can be provisioned to a VVol datastore.
7. Go to VVols > Protocol Endpoints.
8. Verify that a protocol endpoint has been created for each node in the cluster.

NOTE: Additional configuration tasks are required in vSphere. See the VMware vSphere Virtual Volumes for SolidFire Storage Configuration Guide to register the VASA Provider in vCenter, create and manage VVol datastores, and manage storage based on policies.

Viewing Virtual Volume Details

You can review virtual volume information for all active virtual volumes on the cluster in the Element OS user interface. You can also view performance activity for each virtual volume, including input, output, throughput, latency, queue depth, and volume information.
Prerequisites

You have enabled VVols functionality in Element OS for the cluster.
You have created an associated storage container.
You have configured your vSphere cluster to use SolidFire VVols functionality.
You have created at least one VM in vSphere.

1. Go to VVols > Virtual Volumes. The information for all active virtual volumes displays.
2. Click the Actions button ( ) for the virtual volume you wish to review.
3. In the resulting menu, select View Details.
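Both the one-time enablement described above and the virtual volume listing have API counterparts. The sketch below builds the corresponding request bodies; the method names (EnableFeature with feature "vvols", ListVirtualVolumes) are taken from the Element OS API Reference Guide, and the requests are constructed but not sent.

```python
import json

def rpc_body(method, params=None):
    """Build a JSON-RPC request body for the Element cluster API."""
    return {"method": method, "params": params or {}}

# One-time, irreversible enablement of VVols functionality
# (the API counterpart of the Enable Virtual Volumes button).
enable_vvols = rpc_body("EnableFeature", {"feature": "vvols"})

# Retrieve details for active virtual volumes, the API counterpart
# of the VVols > Virtual Volumes page.
list_vvols = rpc_body("ListVirtualVolumes")

print(json.dumps(enable_vvols))
```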

Virtual Volume Details

On the VVols > Virtual Volumes page, you can view the following virtual volume information for all active virtual volumes on the cluster.

Volume ID: The ID of the underlying volume.
Snapshot ID: The ID of the underlying volume snapshot. The value is 0 if the virtual volume does not represent a SolidFire snapshot.
Parent Virtual Volume ID: The virtual volume ID of the parent virtual volume. If the ID is all zeros, the virtual volume is independent with no link to a parent.
Virtual Volume ID: The UUID of the virtual volume.
Name: The name assigned to the virtual volume.
Storage Container: The storage container that owns the virtual volume.
Guest OS Type: Operating system associated with the virtual volume.
Virtual Volume Type: The virtual volume type: Config, Data, Memory, Swap, or Other.
Access: The read/write permissions assigned to the virtual volume.
Size: The size of the virtual volume in GB or GiB.
Snapshots: The number of associated snapshots. Click the number to link to snapshot details.
Min IOPS: The minimum IOPS QoS setting of the virtual volume.
Max IOPS: The maximum IOPS QoS setting of the virtual volume.
Burst IOPS: The maximum burst QoS setting of the virtual volume.
VMW_VmID: Information in fields prefaced with "VMW_" is defined by VMware. See the VMware documentation for descriptions.
Create Time: The time the virtual volume creation task was completed.

Individual Virtual Volume Details

On the VVols > Virtual Volumes page, you can view the following virtual volume information when you select an individual virtual volume and view its details.

VMW_XXX: Information in fields prefaced with "VMW_" is defined by VMware. See the VMware documentation for descriptions.
Parent Virtual Volume ID: The virtual volume ID of the parent virtual volume. If the ID is all zeros, the virtual volume is independent with no link to a parent.
Virtual Volume ID: The UUID of the virtual volume.
Virtual Volume Type: The virtual volume type: Config, Data, Memory, Swap, or Other.
Volume ID: The ID of the underlying volume.

Access: The read/write permissions assigned to the virtual volume.
Account Name: Name of the account containing the volume.
Access Groups: Associated volume access groups.
Total Volume Size: Total provisioned capacity in bytes.
Non-Zero Blocks: Total number of 4KiB blocks with data after the last garbage collection operation has completed.
Zero Blocks: Total number of 4KiB blocks without data after the last garbage collection operation has completed.
Snapshots: The number of associated snapshots. Click the number to link to snapshot details.
Min IOPS: The minimum IOPS QoS setting of the virtual volume.
Max IOPS: The maximum IOPS QoS setting of the virtual volume.
Burst IOPS: The maximum burst QoS setting of the virtual volume.
Enable 512: Because virtual volumes always use 512-byte block size emulation, the value is always yes.
Volumes Paired: Indicates whether a volume is paired.
Create Time: The time the virtual volume creation task was completed.
Block Size: Size of the blocks on the volume.
Unaligned Writes: For 512e volumes, the number of write operations that were not on a 4k sector boundary. High numbers of unaligned writes may indicate improper partition alignment.
Unaligned Reads: For 512e volumes, the number of read operations that were not on a 4k sector boundary. High numbers of unaligned reads may indicate improper partition alignment.
scsiEUIDeviceID: Globally unique SCSI device identifier for the volume in EUI-64 based 16-byte format.
scsiNAADeviceID: Globally unique SCSI device identifier for the volume in NAA IEEE Registered Extended format.
Attributes: List of name/value pairs in JSON object format.

Deleting a Virtual Volume

Although virtual volumes should always be deleted from the VMware management layer, the ability to delete virtual volumes is available in the Element OS user interface.

1. Go to VVols > Virtual Volumes.
2. Click the Actions button ( ) for the virtual volume you wish to delete.
3. In the resulting menu, select Delete.

Caution: Virtual volumes should always be deleted from the VMware management layer to ensure that the virtual volume is properly unbound before deletion. You should delete a virtual volume from the Element OS user interface only when absolutely necessary, such as when vSphere fails to clean up virtual volumes on SolidFire storage. If you delete a virtual volume from the Element OS user interface, the volume is purged immediately.

4. Confirm the action.
5. Refresh the list of virtual volumes to confirm that the virtual volume has been removed.
6. (Optional) Go to Reporting > Event Log to confirm that the purge was successful.

Storage Containers

Storage containers are logical constructs that map to SolidFire accounts and are used for reporting and resource allocation. They pool raw storage capacity or aggregate the storage capabilities that the storage system can provide to virtual volumes. A VVol datastore that is created in vSphere is mapped to an individual storage container. By default, a single storage container has all available resources from the SolidFire cluster. If more granular governance for multi-tenancy is required, multiple storage containers can be created.

Neither storage containers nor traditional account types can contain both virtual volumes and traditional volumes. A maximum of four storage containers per cluster is supported, and a minimum of one storage container is required to enable VVols functionality. On the VVols > Storage Containers page in the Element OS Web UI, you can create, delete, and manage storage containers. You can discover storage containers in vCenter during VVols creation.

Creating a Storage Container

You can create storage containers in the SolidFire Element OS user interface and discover them in vCenter. You must create at least one storage container to begin provisioning VVol-backed virtual machines.

Prerequisites

You have enabled VVols functionality in Element OS for the cluster.

1. Go to VVols > Storage Containers.
2. Click the Create Storage Containers button.
3. Enter storage container information in the Create a New Storage Container dialog:
  a. Enter a name for the storage container.
  b. Configure initiator and target secrets for CHAP. Best practice: Leave the CHAP Settings fields blank to automatically generate secrets.
  c. Click the Create Storage Container button.
4. Verify that the new storage container appears in the list in the Storage Containers sub-tab.

NOTE: Because a SolidFire account ID is created automatically and assigned to the storage container, it is not necessary to create an account manually.

Viewing Storage Container Details

You can review information for all active storage containers on the cluster in the Element OS user interface. You can also view performance activity for each storage container, including input, output, throughput, and available storage container and volume information.

Prerequisites

You have enabled VVols functionality in Element OS for the cluster.
At least one storage container is available to select.

132 Virtual Volumes

1. Go to VVols > Storage Containers. The information for all active storage containers displays.
2. Click the Actions button ( ) for the storage container you wish to review.
3. In the resulting menu, select View Details.

Storage Container Details

On the VVols > Storage Containers page, you can view the following information for all active storage containers on the cluster.

Account ID: The ID of the SolidFire account associated with the storage container.
Name: The name of the storage container.
Status: The status of the storage container. Possible values: Active (the storage container is in use) and Locked (the storage container is locked).
PE Type: Indicates the protocol endpoint type (SCSI is the only available protocol for Element OS version 9.0).
Storage Container ID: The UUID of the virtual volume storage container.
Active Volumes: The number of active virtual volumes associated with the storage container.

Individual Storage Container Details

On the VVols > Storage Containers page, you can view the following storage container information when you select an individual storage container and view its details.

Account ID: The ID of the cluster account associated with the storage container.
Name: The name of the storage container.
Status: The status of the storage container. Possible values: Active (the storage container is in use) and Locked (the storage container is locked).
CHAP Initiator Secret: The unique CHAP secret for the initiator.
CHAP Target Secret: The unique CHAP secret for the target.
Storage Container ID: The UUID of the virtual volume storage container.
Protocol Endpoint Type: Indicates the protocol endpoint type (SCSI is the only available protocol for Element OS version 9.0).

Editing a Storage Container

You can modify storage container CHAP authentication in the Element OS Web UI.

NetApp SolidFire Element OS 10.0 User Guide 132

133 Virtual Volumes

Prerequisites
You have enabled VVols functionality in the Element OS for the cluster.
An existing storage container is available to modify.

1. Go to VVols > Storage Containers.
2. Click the Actions button ( ) for the storage container you wish to edit.
3. In the resulting menu, select Edit.
4. Under CHAP Settings, edit the Initiator Secret and Target Secret credentials used for authentication. NOTE: If you do not change the CHAP Settings credentials, they remain the same. If you clear the credentials fields, the system automatically generates new secrets.
5. Click Save Changes.

Deleting a Storage Container

You can delete storage containers from the Element OS user interface.

Prerequisites
You have enabled VVols functionality in the Element OS for the cluster.
An existing storage container is available to delete.
All virtual machines have been removed from the VVol datastore.

1. Go to VVols > Storage Containers.
2. Click the Actions button ( ) for the storage container you wish to delete.
3. In the resulting menu, select Delete.
4. Confirm the action.
5. Refresh the list of storage containers in the Storage Containers sub-tab to confirm that the storage container has been removed.

Protocol Endpoints

VMware ESXi hosts use logical I/O proxies known as protocol endpoints to communicate with virtual volumes. ESXi hosts bind virtual volumes to protocol endpoints to perform I/O operations. When a virtual machine on the host performs an I/O operation, the associated protocol endpoint directs I/O to the virtual volume with which it is paired. Protocol endpoints in a SolidFire cluster function as SCSI administrative logical units. Each protocol endpoint is created automatically by the SolidFire cluster; for every node in a SolidFire cluster, a corresponding protocol endpoint is created. For example, a four-node cluster will have four protocol endpoints. For the Element OS version 9.0 release, iSCSI is the only supported protocol.
Fibre Channel protocol is not supported. Protocol endpoints cannot be deleted or modified by a user, are not associated with an account, and cannot be added to a volume access group. On the VVols > Protocol Endpoints page in the Web UI, you can review protocol endpoint information. See the following table for a description of each column on the page. NetApp SolidFire Element OS 10.0 User Guide 133

134 Virtual Volumes

Primary Provider ID: The ID of the primary protocol endpoint provider.
Secondary Provider ID: The ID of the secondary protocol endpoint provider.
Protocol Endpoint ID: The UUID of the protocol endpoint.
Protocol Endpoint State: The status of the protocol endpoint. Possible values: Active (the protocol endpoint is in use), Start (the protocol endpoint is starting), Failover (the protocol endpoint has failed over), and Reserved (the protocol endpoint is reserved).
Provider Type: The type of the protocol endpoint's provider. Possible values: Primary and Secondary.
SCSI NAA Device ID: The globally unique SCSI device identifier for the protocol endpoint in NAA IEEE Registered Extended Format.

Bindings

To perform I/O operations with a virtual volume, an ESXi host must first bind the virtual volume. The SolidFire cluster chooses an optimal protocol endpoint, creates a binding that associates the ESXi host and virtual volume with the protocol endpoint, and returns the binding to the ESXi host. Once bound, the ESXi host can perform I/O operations with the bound virtual volume. On the VVols > Bindings page, you can verify and filter binding information for each virtual volume. See the following table for a description of each column on the page.

Host ID: The UUID for the ESXi host that hosts virtual volumes and is known to the cluster.
Protocol Endpoint ID: Protocol endpoint IDs that correspond to each node in the SolidFire cluster.
Protocol Endpoint in Band ID: The SCSI NAA device ID of the protocol endpoint.
Protocol Endpoint Type: Indicates the protocol endpoint type (SCSI is the only available protocol for Element OS version 9.0).
VVol Binding ID: The binding UUID of the virtual volume.
VVol ID: The universally unique identifier (UUID) of the virtual volume.
VVol Secondary ID: The secondary ID of the virtual volume that is a SCSI second level LUN ID.
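The binding information shown on this page can also be retrieved programmatically. The following is a minimal sketch, assuming the Element API's ListVirtualVolumeBindings method (the Element API is JSON-RPC over HTTPS, POSTed to the cluster MVIP with cluster admin credentials); only the request body is built here, and sending it is left to whatever HTTP client you use.

```python
import json

def element_rpc_body(method, params=None, request_id=1):
    """Build a SolidFire Element API JSON-RPC request body.

    The body would be POSTed to https://<MVIP>/json-rpc/<api-version>;
    the MVIP and credentials are placeholders, not part of this sketch.
    """
    return json.dumps({"method": method,
                       "params": params or {},
                       "id": request_id})

# Request every VVol binding known to the cluster; the result's
# binding entries mirror the columns on the VVols > Bindings page
# (host ID, protocol endpoint ID, VVol binding ID, and so on).
body = element_rpc_body("ListVirtualVolumeBindings")
print(body)
```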
Hosts

On the VVols > Hosts page in the Web UI, you can view information about VMware ESXi hosts that host virtual volumes. See the following table for a description of the columns on the page.

NetApp SolidFire Element OS 10.0 User Guide 134

135 Virtual Volumes

Host ID: The UUID for the ESXi host that hosts virtual volumes and is known to the cluster.
Host Address: The IP address or DNS name for the ESXi host.
Bindings: Binding IDs for all virtual volumes bound by the ESXi host.
ESX Cluster ID: The vSphere host cluster ID or vCenter GUID.
Initiator IQNs: Initiator IQNs for the virtual volume host.
SolidFire Protocol Endpoint IDs: The protocol endpoints that are currently visible to the ESXi host.

NetApp SolidFire Element OS 10.0 User Guide 135

136 Hardware Maintenance

Hardware Maintenance

Hardware maintenance should only be performed when instructed to do so by NetApp SolidFire Active Support or if you have received training on proper hardware troubleshooting and replacement procedures. See the following topics to learn about or perform hardware maintenance-related tasks:

Automatic Recovery Scenarios
Single Node Recovery
Multiple Node Recovery
Drive Recovery
Replacing an SSD
Adding a Storage Node
Removing a Storage Node
Powering Down a Node
Powering Up a Node
Powering Down a Cluster

Automatic Recovery Scenarios

A SolidFire storage system keeps two copies of data. The Double Helix function automatically repairs data from a known good copy after a drive failure and ensures that two copies of unique data are always kept on a cluster.

Single Node Recovery

If a single node in a cluster fails, data availability should not be affected. Any virtual IP addresses (MVIP or SVIP) and iSCSI sessions recover on other nodes in the cluster. The Double Helix data protection feature automatically re-replicates data across all other nodes and drives in the cluster should a node go offline for more than 5.5 minutes.

Multiple Node Recovery

Double Helix data protection continues to re-replicate data on failed nodes and drives provided there is enough free capacity in the cluster to do so. If two drives or nodes that contain the same data fail simultaneously, Double Helix cannot re-replicate that data, and it reports a cluster error and an event log entry indicating any volumes that were affected.

Drive Recovery

If a drive fails, Double Helix redistributes the data on the drive across the nodes remaining on the cluster. Multiple drive failures on the same node are not an issue, because Element OS protects against two copies of data residing on the same node. Drive failures are reported in the Event Log and Alerts. A failed drive results in the following events:

Data is migrated off of the drive.
Overall cluster capacity is reduced by the capacity of the drive.
Double Helix data protection ensures that there are two valid copies of the data.

NetApp SolidFire Element OS 10.0 User Guide 136
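The capacity effect described above can be estimated from drive inventory data. The sketch below is illustrative only: it assumes a ListDrives-style result (drive status and capacity in bytes, as the Element API reports them) supplied as a plain Python structure, and simply sums the capacity of failed drives to show how much usable capacity the cluster loses.

```python
def failed_drive_capacity(drives):
    """Sum the capacity (bytes) of drives reported as failed.

    `drives` is a list of dicts shaped like Element API ListDrives
    entries, each with at least "status" and "capacity". The total
    mirrors the statement that overall cluster capacity is reduced
    by the capacity of each failed drive.
    """
    return sum(d["capacity"] for d in drives if d["status"] == "failed")

# Hypothetical three-drive inventory with one failed 960 GB drive.
inventory = [
    {"driveID": 1, "status": "active", "capacity": 960_000_000_000},
    {"driveID": 2, "status": "failed", "capacity": 960_000_000_000},
    {"driveID": 3, "status": "active", "capacity": 960_000_000_000},
]
print(failed_drive_capacity(inventory))  # 960000000000
```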

137 Hardware Maintenance

Replacing an SSD

You can replace a failed SSD drive with a replacement drive. When an SSD drive fails, a notification is sent to the Event Log and Alerts. This information is accessed through the Reporting tab in the Element OS Web UI. SolidFire SSDs are hot-swappable. If you suspect an SSD has failed, we encourage you to contact NetApp SolidFire Active Support so that NetApp can verify the failure and walk you through the proper resolution procedure. Support also works with you to get a replacement drive and replenish the spare in accordance with the Service Level Agreement.

Best Practices: NetApp recommends that you maintain on-site spares suggested by Support to allow for immediate replacement.

NOTE: For testing purposes, if you are simulating a drive failure by pulling a drive from a node, you must wait 30 seconds before inserting the drive back into the drive slot.

The following image illustrates the numbering of the drives.

1. Press on the latch mechanism and carefully pull the drive carrier from the slot.
2. Place the spare drive in the empty slot.
3. Press firmly to seat it into the backplane.
4. Close the latch mechanism. After the drive is seated, the drive should show up in the Element OS Web UI list of available drives on the Cluster > Drives page.
5. Use the API or Web UI to add it back to the cluster for it to become active. No other actions are required. Helix will automatically replicate the appropriate data to the drive.

Adding a Storage Node

If you are installing a new storage node, you can use the SolidFire Storage Node Getting Started Guide provided with your new node. When the storage node has been set up and configured, it registers itself on the cluster identified when the node was configured and appears in the Web UI in the list of pending nodes on the Cluster > Nodes page. See Adding a Node to a Cluster for more information about adding a pending node to a cluster.
After the node is added to the cluster, the drives become available and are identified on the cluster. When you add the drives, you see the cluster capacity increase in the Reporting Overview.

NOTE: To allow Helix time to replicate data, add drives to an existing cluster one node at a time.

Removing a Storage Node

You can remove nodes from a cluster for maintenance or replacement. NetApp recommends using the Element OS Web UI or API to remove nodes before taking them offline.

Caution: SF-series systems do not support removal of a drive if it results in an insufficient amount of storage to migrate data.

NetApp SolidFire Element OS 10.0 User Guide 137

138 Hardware Maintenance

1. Ensure there is sufficient capacity in the cluster to create a copy of the data on the node.
2. Remove drives from the node. See Removing a Drive for more details. This results in the system migrating data from the node's drives to other drives in the cluster. The time this process takes depends on how much data must be migrated.
3. Remove the node from the cluster. See Removing Nodes from a Cluster for more details.

Powering Down a Node

NOTE: Powering down a node should be done under the direction of NetApp SolidFire Active Support. Powering down nodes and clusters involves risks if not performed properly.

To safely reboot or power down a node, you can use the Shutdown API command. Use this command to reboot a node or do a full power-off for a node. If a node has been down longer than 5.5 minutes under any type of shutdown condition, the Element OS software determines that the node is not coming back to join the cluster. Double Helix data protection begins the task of writing single replicated blocks to another location to re-replicate the data. In this case, contact NetApp SolidFire Active Support so the downed node can be analyzed.

Powering Up a Node

If a node is in a down or off state, contact NetApp SolidFire Active Support prior to bringing it back online. Support will investigate why the node is offline and may need to reset the node before it is brought back online. After a node is brought back online, its drives must be added back to the cluster, depending on how long it has been out of service. See Adding a Node to a Cluster for details about adding a node back to a cluster.

Powering Down a Cluster

You can safely power down an entire cluster after you have contacted Support and completed preliminary steps.

NOTE: Contact NetApp SolidFire Active Support prior to powering down a cluster.

Prerequisites
Prepare the cluster for shutdown by doing the following:
Stop all I/O.
Disconnect all iSCSI sessions.
Shut down the nodes at the same time.

The next steps use the Shutdown API method to power down all nodes in the cluster.

1. Navigate to the MVIP on the cluster to open the Web UI.
2. Note the nodes listed in the Nodes list.
3. Run the Shutdown API method with option=halt specified on each Node ID in the cluster.

NetApp SolidFire Element OS 10.0 User Guide 138
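Step 3 above can be scripted. The following is a minimal sketch that only builds the JSON-RPC request body for the Shutdown method with option=halt; the node IDs are placeholders, and actually POSTing the body to https://<MVIP>/json-rpc/<api-version> with cluster admin credentials is left to your HTTP client.

```python
import json

def shutdown_body(node_ids, option="halt"):
    """Build an Element API Shutdown request body.

    "halt" powers the listed nodes off; "restart" would reboot them
    instead. The node IDs are the ones noted from the cluster's
    Nodes list in step 2.
    """
    return json.dumps({"method": "Shutdown",
                       "params": {"nodes": node_ids, "option": option},
                       "id": 1})

# Placeholder node IDs for a four-node cluster.
print(shutdown_body([1, 2, 3, 4]))
```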

139 Appendix A Management Node Overview

Appendix A Management Node Overview

The management node enables remote access to your SolidFire cluster. Remote access is required for the following functions:

SolidFire Element OS software upgrades.
Remote alerting and historical data collection through Active IQ.
Remote support tunneling for NetApp SolidFire Active Support hands-on access.
Ancillary functions for NetApp SolidFire Plug-in for VMware vCenter functionality and syslog aggregation.

You can install the management node using the same base image as a storage node. The management node runs the SolidFire version of the OS/kernel, uses sfconfig for network configuration, has a TUI/Node UI for configuration, and has an sfadmin user with OTP enabled.

See the following topics to learn about or perform management node-related tasks:

Management Node Images and Platforms
Installing a Management Node
Configuring Remote Support Firewall Ports
Enabling Remote Support Connections
Configuring the Management Node with a Proxy Server
Running sfsetproxy Commands
Setting the Management Node Host and Port Arguments
Setting Up the connection.json File for Active IQ
Accessing Management Node Settings
Modifying Management Node Settings
Management Node Settings for eth0 Networks
Network Settings for eth0
Management Node Cluster Settings
Management Node Cluster Interface Settings
Management Node System Tests
Running System Utilities on Management Node
Creating a Cluster Support Bundle

NetApp SolidFire Element OS 10.0 User Guide 139

140 Appendix A Management Node Overview

Management Node Images and Platforms

The following table identifies the supported platforms for management nodes, as well as the installation image type for each platform:

Microsoft Hyper-V: .iso
KVM: .iso
VMware vSphere: .iso
VMware vSphere: .ova
Citrix XenServer: .iso
OpenStack: .qcow2

Installing a Management Node

You can install the management node using the appropriate image for your configuration.

NOTE: It is recommended that you install your management node before setting up your new cluster.

Prerequisites
Use a minimum screen resolution of 1024x768 at 16-bit color.

1. Create a new 64-bit virtual machine with the following configuration:
   One virtual CPU
   4GB RAM
   100GB virtual disk
   One virtual network interface with internet access
   (Optional) One virtual network interface with management network access to the SolidFire Element OS cluster
2. Attach the solidfire-fdva-xxxx-xxxx.iso to the virtual machine, and boot to the .iso install image. For example: solidfire-fdva-oxygen-x.x.x.xxxx.iso
   NOTE: Contact NetApp SolidFire Active Support for the latest version of the management node .iso image.
   NOTE: Installing a management node removes all data from the VM.
3. Power on the management node after the installation completes.
4. Create a management node admin user using the TUI.
5. Configure the management node network using the TUI.

Configuring Remote Support Firewall Ports

The TCP ports must permit bi-directional communications between the SolidFire Support Server, management node, and SolidFire nodes. For a complete list of TCP ports and functions, see SolidFire System Architecture.

NetApp SolidFire Element OS 10.0 User Guide 140

141 Appendix A Management Node Overview

Enabling Remote Support Connections

In the event of a situation requiring technical support for your SolidFire storage system, NetApp SolidFire Active Support can connect remotely with your system. To gain remote access, Active Support can open a reverse Secure Shell (SSH) connection to your environment.

Opening TCP Ports for SSH Reverse Tunnel

You can open a TCP port for an SSH reverse tunnel connection with NetApp SolidFire Active Support. This connection allows Support to log in to your management node. If your management node is behind a proxy server, the following TCP ports are required in the sshd.config file:

TCP port 443: API calls/HTTPS for reverse port forwarding via open support tunnel to the Web UI. Direction: management node to SolidFire nodes.
TCP port 22: SSH login access. Direction: management node to/from SolidFire nodes.

1. Log in to your management node and open a terminal session.
2. At a prompt, type the following: rst -r sfsupport.solidfire.com -u element -p <port number>
   NetApp SolidFire Support can provide the port number to access your management node with an SSH connection.
3. (Optional) To close a remote support tunnel, type the following: rst --killall

Configuring the Management Node with a Proxy Server

If your SolidFire cluster is behind a proxy server, you must configure the proxy settings so you can reach a public network. The sfsetproxy command is used to configure the proxy settings for a SolidFire management node. The sfsetproxy command modifies the settings for the following:

Reverse SSH tunnel
apt-get and aptitude (via /etc/profile.d/sf_proxy_settings.sh)
apt-mirror (via wget)
collector (script)

Running sfsetproxy Commands

You can check that the proxy settings are consistent by running sfsetproxy without any arguments. The following example demonstrates the command without arguments.

Example: admin@mnode:~$ sudo sfsetproxy

Example output from this command:

NetApp SolidFire Element OS 10.0 User Guide 141

142 Appendix A Management Node Overview

Proxy host:
Proxy port: 3128

Run the following command to set the host and port arguments when a user name and password are not required on the proxy server:

sfsetproxy [-P ssh_port]

Example: sfsetproxy

Run the following command to set the host and port arguments when a user name and password are required on the proxy server:

sfsetproxy [-P ssh_port] [-u username -p password] ProxyHostnameOrIPAddress ProxyPort (Initial setup of proxy)

Example: sfsetproxy -u testproxy -p solidfire

NOTE: This command does not return output if it completes successfully.

Setting the Management Node Host and Port Arguments

To set the host and port arguments, run the following command:

sfsetproxy [-P ssh_port] [-u username -p password] ProxyHostnameOrIPAddress ProxyPort (Initial setup of proxy)

Example: ./sfsetproxy -u testproxy -p solidfire

NOTE: This command provides no output after completion. Run ./sfsetproxy to see if the proxy has been set.

Setting Up the connection.json File for Active IQ

You need to configure the connection.json file when connecting to SolidFire Active IQ from behind a proxy server. Parameters listed are generally self-explanatory. You may choose to leave out the proxy and certificates steps if they are not required in your environment. The proxyusername and proxypassword are optional even if you specify a proxy server. If you specify a proxyip, you need to specify a proxyport as well.

NOTE: The certificates option may be required if the collector is installed on something other than a SolidFire management node. By default, the certificates option looks for the /etc/ssl/certs/ca-certificates.crt file to get the set of trusted root certificates to validate the remote support server SSL certification. If that file does not exist, you can use the certificates file that is maintained by the curl project.
The certificates file is available from the curl project website. Save the cacert.pem file in a desired location, and point the certificates option to that file.

NetApp SolidFire Element OS 10.0 User Guide 142

143 Appendix A Management Node Overview

1. Open a terminal window and use SSH to connect to your management node.
2. Become the "root" user with the following command: sudo su
3. Change to the following directory: cd /solidfire/collector
4. Change the permissions for the collector.py file to 755 with the following command:
   Example: sudo chmod 755 collector.py
5. View the help to see the options that you can use to configure the connection.json file:
   Example: sudo ./manage-collector.py --help

The following optional arguments are displayed:

-h, --help: Show help message and exit.
--config CONFIG: Collector configuration file to manage (default: ./connection.json).
--save-config: Save the configuration to the collector configuration file. This option is not necessary when calling any of the set commands; the config is saved automatically when using those commands.
--set-username USERNAME: Set the cluster username in the collector configuration file.
--set-password: Set the cluster password in the collector configuration file. NOTE: the new password will be captured at the prompt.
--set-mvip MVIP: Set the cluster MVIP in the collector configuration file.
--set-remotehost REMOTEHOST: Set the remote host in the collector configuration file.
--set-customeruid CUSTOMERUID: Set the customeruid in the collector configuration file.
--get-all: Get all parameters from the collector configuration file.
--get-username: Get the cluster username from the collector configuration file.
--get-password: Get the cluster password from the collector configuration file.
--get-mvip: Get the cluster MVIP from the collector configuration file.
--get-remotehost: Get the remote host from the collector configuration file.
--get-customeruid: Get the customeruid from the collector configuration file.
--debug: Enable debug messages.

NetApp SolidFire Element OS 10.0 User Guide 143

144 Appendix A Management Node Overview

6. Use the following example to set the username, mvip, and customeruid for the collection configuration in the connection.json file. You need to set the password separately; see the next step.
   Example: ./manage-collector.py --set-username <username> --set-mvip <mvip> --set-customeruid <customeruid>
   The script automatically saves the connection.json file for you.
   NOTE: When you enter a password using the --set-password command, you are prompted to enter the password, and then enter it again to confirm the password.
7. Restart the collector service with the following command:
   Example: sudo restart sfcollector
8. Verify the connection is working with the following command:
   Example: tail -f /var/log/sf-collector.log

Accessing Management Node Settings

You can access a management node similar to accessing a storage node.

1. In a browser window, enter the management node MIP followed by :442. Example:
2. In the authentication dialog, enter an admin user name and password, if required. The Node UI opens, where all management node settings can be modified.

NOTE: Only one management node is required for reporting to Active IQ and managing upgrades for the SolidFire cluster. However, it might be necessary to have multiple management nodes to allow multiple NetApp SolidFire vCenter plug-ins to connect to a single SolidFire cluster.

Modifying Management Node Settings

You can configure a management node with new network and cluster settings. Once new settings have been applied to the management node, they can be tested to ensure proper communications. A support bundle for a number of nodes or an entire cluster can also be created through the management node.

Management Node Settings for eth0 Networks

You can modify eth0 network fields for a management node from the Node UI. The network settings for a management node display in the Network Settings tab. The following view is available once you are logged into a management node.
NetApp SolidFire Element OS 10.0 User Guide 144

145 Appendix A Management Node Overview

Network Settings for eth0

On the Network Settings tab of the Node UI for the management node, you can modify the management node network interface fields.

Method: The method used to configure the interface. Valid methods are: loopback (used to define the IPv4 loopback interface), manual (used to define interfaces for which no configuration is done by default), dhcp (may be used to obtain an IP address via DHCP), and static (used to define Ethernet interfaces with statically allocated IPv4 addresses).
IP Address: IP address for the eth0 network.
Subnet Mask: Address subdivisions of the IP network.
Gateway Address: Router network address to send packets out of the local network.
MTU: Largest packet size that a network protocol can transmit. Must be greater than or equal to
DNS Servers: Network interface used for cluster communication.

NetApp SolidFire Element OS 10.0 User Guide 145

146 Appendix A Management Node Overview

Search Domains: Search for additional MAC addresses available to the system.
Status: Can be one of the following: UpAndRunning, Down, Up.
Routes: Static routes to specific hosts or networks via the associated interface the routes are configured to use.

Management Node Cluster Settings

You can view or modify cluster settings from the Cluster Settings tab of the Node UI for the management node.

Management Node Cluster Interface Settings

On the Cluster Settings tab of the Node UI for the management node, you can modify cluster interface fields when a node is in the Available, Pending, PendingActive, or Active state.

Role: Role the management node has in the cluster. Can only be: Management.
Hostname: Name of the management node.
Version: Element OS version running on the cluster.
Default Interface: Default network interface used for management node communication with the SolidFire cluster.

Management Node System Tests

Changes to the network settings for the management node can be tested after the settings are made and you have committed them to the network configuration. You can use the tests from the Management Node System Tests tab.

NetApp SolidFire Element OS 10.0 User Guide 146

147 Appendix A Management Node Overview

The following table describes the system test information available for the management node.

Run All Tests: Starts and runs all test operations.
Test Network Config: Verifies that the configured network settings match the network settings being used on the system.
Test Ping: Pings a specified list of hosts or, if none specified, dynamically builds a list of all registered nodes in the cluster and pings each for simple connectivity.

Running System Utilities on Management Node

You can use tests in the System Utilities tab in the Node UI on a management node to reset node configuration settings, restart networking, and create or delete a cluster support bundle. You must first be logged into the management node for the cluster to run these utilities.

NetApp SolidFire Element OS 10.0 User Guide 147

148 Appendix A Management Node Overview

The following table describes the information on the System Utilities page.

Create Cluster Support Bundle: Creates a support bundle under the management node directory /tmp/bundles. Fields: Bundle Name (name for the bundle); Mvip (MVIP address of the cluster to gather bundles for); Nodes (specific node IDs to gather bundles for; specify either MVIP or node IDs, but not both); Username (admin user name); Password (admin password); Allow Incomplete (allow the gathering process to continue if some of the node bundles cannot be gathered); Extra Args (should be used only at the request of Support).
Delete All Support Bundles: Deletes any current support bundles residing on the management node.
Reset Node: Resets the management node back to a new install image. The network configuration is kept, but all other settings are reset to a default state.
Restart Networking: Restarts all networking services on the management node. Caution should be taken when using this operation, as it causes a temporary loss of networking connectivity.

Creating a Cluster Support Bundle

You can create a support bundle to assist in diagnostic evaluations of one or more nodes, or all the nodes in a cluster. NetApp SolidFire Support can use the bundles created to determine the issues on a node and help provide solutions. You will need to be

NetApp SolidFire Element OS 10.0 User Guide 148


More information

NetApp SolidFire Active IQ User Guide

NetApp SolidFire Active IQ User Guide NetApp SolidFire Active IQ User Guide For Version 4.0 April 2018 Copyright Information Copyright 1994-2018 NetApp, Inc. All Rights Reserved. No part of this document covered by copyright may be reproduced

More information

StorageGRID Webscale NAS Bridge Management API Guide

StorageGRID Webscale NAS Bridge Management API Guide StorageGRID Webscale NAS Bridge 2.0.3 Management API Guide January 2018 215-12414_B0 doccomments@netapp.com Table of Contents 3 Contents Understanding the NAS Bridge management API... 4 RESTful web services

More information

Replacing drives for SolidFire storage nodes

Replacing drives for SolidFire storage nodes NetApp Replacing drives for SolidFire storage nodes You can hot-swap a failed solid-state disk (SSD) drive with a replacement drive. Before you begin You have a replacement drive. You have an electrostatic

More information

NetApp HCI QoS and Mixed Workloads

NetApp HCI QoS and Mixed Workloads Technical Report NetApp HCI QoS and Mixed Workloads Stephen Carl, NetApp October 2017 TR-4632 Abstract This document introduces the NetApp HCI solution to infrastructure administrators and provides important

More information

NFS Client Configuration with VAAI for ESX Express Guide

NFS Client Configuration with VAAI for ESX Express Guide ONTAP 9 NFS Client Configuration with VAAI for ESX Express Guide February 2018 215-11184_F0 doccomments@netapp.com Updated for ONTAP 9.3 Table of Contents 3 Contents Deciding whether to use this guide...

More information

NetApp AltaVault Cloud-Integrated Storage Appliances

NetApp AltaVault Cloud-Integrated Storage Appliances Technical Report NetApp AltaVault Cloud-Integrated Storage Appliances Solution Deployment: AltaVault Christopher Wong, NetApp November 2017 TR-4417 Abstract This solution deployment guide outlines how

More information

SQL Server on NetApp HCI

SQL Server on NetApp HCI Technical Report SQL Server on NetApp HCI Bobby Oommen, NetApp October 2017 TR-4638 Abstract This document introduces the NetApp HCI solution to infrastructure administrators and provides important design

More information

VMware vsphere Virtual Volumes for SolidFire Storage Configuration Guide

VMware vsphere Virtual Volumes for SolidFire Storage Configuration Guide Technical Report VMware vsphere Virtual Volumes for SolidFire Storage Aaron Patten and Andy Banta, NetApp October 2017 TR-4642 TABLE OF CONTENTS 1 Introduction... 4 1.1 Related Documents...4 2 Virtual

More information

NetApp Cloud Volumes Service for AWS

NetApp Cloud Volumes Service for AWS NetApp Cloud Volumes Service for AWS AWS Account Setup Cloud Volumes Team, NetApp, Inc. March 29, 2019 Abstract This document provides instructions to set up the initial AWS environment for using the NetApp

More information

HCI File Services Powered by ONTAP Select

HCI File Services Powered by ONTAP Select Technical Report HCI File Services Powered by ONTAP Select Quick Start Guide Aaron Patten, NetApp March 2018 TR-4669 Abstract NetApp ONTAP Select extends the NetApp HCI product, adding a rich set of file

More information

AltaVault Cloud Integrated Storage Installation and Service Guide for Virtual Appliances

AltaVault Cloud Integrated Storage Installation and Service Guide for Virtual Appliances AltaVault Cloud Integrated Storage 4.4.1 Installation and Service Guide for Virtual Appliances April 2018 215-130007_B0 doccomments@netapp.com Table of Contents 3 Contents System requirements and supported

More information

Volume Disaster Recovery Preparation Express Guide

Volume Disaster Recovery Preparation Express Guide ONTAP 9 Volume Disaster Recovery Preparation Express Guide August 2018 215-11187_F0 doccomments@netapp.com Table of Contents 3 Contents Deciding whether to use this guide... 4 Volume disaster recovery

More information

NetApp Element Software Remote Replication

NetApp Element Software Remote Replication Technical Report NetApp Element Software Remote Replication Feature Description and Deployment Guide Pavani Krishna Goutham Baru, NetApp January 2019 TR-4741 Abstract This document describes different

More information

Migrating Performance Data to NetApp OnCommand Unified Manager 7.2

Migrating Performance Data to NetApp OnCommand Unified Manager 7.2 Technical Report Migrating Performance Data to NetApp OnCommand Unified Manager 7.2 Dhiman Chakraborty, Yuvaraju B, Tom Onacki, NetApp March 2018 TR-4589 Version 1.2 Abstract NetApp OnCommand Unified Manager

More information

NetApp AltaVault Cloud-Integrated Storage Appliances

NetApp AltaVault Cloud-Integrated Storage Appliances Technical Report NetApp AltaVault Cloud-Integrated Storage Appliances Solution Deployment: AltaVault Christopher Wong, NetApp November 2017 TR-4422 Abstract This solution deployment guide outlines how

More information

OnCommand Cloud Manager 3.2 Provisioning NFS Volumes Using the Volume View

OnCommand Cloud Manager 3.2 Provisioning NFS Volumes Using the Volume View OnCommand Cloud Manager 3.2 Provisioning NFS Volumes Using the Volume View April 2017 215-12036_B0 doccomments@netapp.com Table of Contents 3 Contents Logging in to Cloud Manager... 4 Creating NFS volumes...

More information

Performance Characterization of ONTAP Cloud in Azure with Application Workloads

Performance Characterization of ONTAP Cloud in Azure with Application Workloads Technical Report Performance Characterization of ONTAP Cloud in NetApp Data Fabric Group, NetApp March 2018 TR-4671 Abstract This technical report examines the performance and fit of application workloads

More information

Upgrade Express Guide

Upgrade Express Guide ONTAP 9 Upgrade Express Guide December 2017 215-11234_G0 doccomments@netapp.com Updated for ONTAP 9.3 Table of Contents 3 Contents Deciding whether to use this guide... 4 Cluster software update workflow...

More information

Clustered Data ONTAP 8.3

Clustered Data ONTAP 8.3 Clustered Data ONTAP 8.3 Volume Disaster Recovery Preparation Express Guide NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone:

More information

AltaVault Cloud Integrated Storage Installation and Service Guide for Cloud Appliances

AltaVault Cloud Integrated Storage Installation and Service Guide for Cloud Appliances AltaVault Cloud Integrated Storage 4.4.1 Installation and Service Guide for Cloud Appliances March 2018 215-13006_A0 doccomments@netapp.com Table of Contents 3 Contents Introduction to AltaVault cloud-based

More information

MongoDB Database on NetApp AFF8080

MongoDB Database on NetApp AFF8080 Technical Report MongoDB Database on NetApp AFF8080 Customer Blueprint Ranga Sankar, NetApp February 2018 TR-4659 Abstract This document describes the installation of MongoDB database on NetApp AFF8080

More information

NetApp AltaVault Cloud-Integrated Storage Appliances

NetApp AltaVault Cloud-Integrated Storage Appliances Technical Report NetApp AltaVault Cloud-Integrated Storage Appliances Solution Deployment: AltaVault with EMC NetWorker Christopher Wong, NetApp November 2017 TR-4425 Abstract This solution deployment

More information

Volume Disaster Recovery Express Guide

Volume Disaster Recovery Express Guide ONTAP 9 Volume Disaster Recovery Express Guide December 2017 215-11188_E0 doccomments@netapp.com Updated for ONTAP 9.3 Table of Contents 3 Contents Deciding whether to use this guide... 4 Volume disaster

More information

Cluster Switch Setup Guide for Cisco Switches. May _A0_UR006

Cluster Switch Setup Guide for Cisco Switches. May _A0_UR006 Cluster Switch Setup Guide for Cisco Switches May 2018 215-06775_A0_UR006 doccomments@netapp.com Table of Contents 3 Contents Switches supported by ONTAP... 4 Setting up the switches... 5 Required cluster

More information

E-Series Cabling E-Series Hardware

E-Series Cabling E-Series Hardware E-Series Cabling E-Series Hardware September 2018 215-13049_A0 doccomments@netapp.com Table of Contents 3 Contents Overview and requirements... 4 Cabling your storage system... 5 Host cabling... 5 Cabling

More information

Performance Characterization of ONTAP Cloud in Amazon Web Services with Application Workloads

Performance Characterization of ONTAP Cloud in Amazon Web Services with Application Workloads Technical Report Performance Characterization of ONTAP Cloud in Amazon Web Services with Application Workloads NetApp Data Fabric Group, NetApp March 2018 TR-4383 Abstract This technical report examines

More information

Disaster Recovery for Enterprise Applications with ONTAP Cloud

Disaster Recovery for Enterprise Applications with ONTAP Cloud Technical Report Disaster Recovery for Enterprise Applications with ONTAP Cloud Step-by-Step Guide Shmulik Alfandari, NetApp September 2016 TR-4554i Abstract This document describes the required steps

More information

Navigating VSC 6.1 for VMware vsphere

Navigating VSC 6.1 for VMware vsphere Navigating VSC 6.1 for VMware vsphere Virtual Storage Console for VMware vsphere works with the VMware vsphere Web Client and has dropped support for the VMware Desktop Client. This change means that VSC

More information

Big-Data Pipeline on ONTAP and Orchestration with Robin Cloud Platform

Big-Data Pipeline on ONTAP and Orchestration with Robin Cloud Platform Technical Report Big-Data Pipeline on ONTAP and Orchestration with Robin Cloud Platform Ranga Sankar, Jayakumar Chendamarai, Aaron Carter, David Bellizzi, NetApp July 2018 TR-4706 Abstract This document

More information

Volume Move Express Guide

Volume Move Express Guide ONTAP 9 Volume Move Express Guide June 2018 215-11197_G0 doccomments@netapp.com Table of Contents 3 Contents Deciding whether to use this guide... 4 Volume move workflow... 5 Planning the method and timing

More information

NetApp Data ONTAP Edge on SoftLayer

NetApp Data ONTAP Edge on SoftLayer Technical Report NetApp Data ONTAP Edge on SoftLayer Express Setup Guide Jarod Rodriguez, NetApp April 2016 TR-4502 Abstract This document provides instructions on how to quickly install NetApp Data ONTAP

More information

FlexArray Virtualization Implementation Guide for NetApp E-Series Storage

FlexArray Virtualization Implementation Guide for NetApp E-Series Storage ONTAP 9 FlexArray Virtualization Implementation Guide for NetApp E-Series Storage June 2017 215-11151-C0 doccomments@netapp.com Updated for ONTAP 9.2 Table of Contents 3 Contents Where to find information

More information

Nokia Intrusion Prevention with Sourcefire. Appliance Quick Setup Guide

Nokia Intrusion Prevention with Sourcefire. Appliance Quick Setup Guide Nokia Intrusion Prevention with Sourcefire Appliance Quick Setup Guide Part Number N450000567 Rev 001 Published September 2007 COPYRIGHT 2007 Nokia. All rights reserved. Rights reserved under the copyright

More information

Replication between SolidFire Element OS and ONTAP

Replication between SolidFire Element OS and ONTAP ONTAP 9 Replication between SolidFire Element OS and ONTAP August 2018 215-12645_D0 doccomments@netapp.com Table of Contents 3 Contents Deciding whether to use the Replication between SolidFire Element

More information

Replacing a PCIe card

Replacing a PCIe card AFF A700s systems Replacing a PCIe card To replace a PCIe card, you must disconnect the cables from the cards in the riser, remove the riser, replace the riser, and then recable the cards in that riser.

More information

Setting Up Quest QoreStor with Veeam Backup & Replication. Technical White Paper

Setting Up Quest QoreStor with Veeam Backup & Replication. Technical White Paper Setting Up Quest QoreStor with Veeam Backup & Replication Technical White Paper Quest Engineering August 2018 2018 Quest Software Inc. ALL RIGHTS RESERVED. THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES

More information

End-to-End Storage Provisioning for MongoDB

End-to-End Storage Provisioning for MongoDB Technical Report End-to-End Storage Provisioning for MongoDB Deploying NetApp OnCommand Workflow Automation for MongoDB John Elliott, NetApp April 2018 TR-4674 Abstract This technical report explains the

More information

SonicWall Secure Mobile Access SMA 500v Virtual Appliance 8.6. Getting Started Guide

SonicWall Secure Mobile Access SMA 500v Virtual Appliance 8.6. Getting Started Guide SonicWall Secure Mobile Access SMA 500v Virtual Appliance 8.6 Getting Started Guide Copyright 2017 SonicWall Inc. All rights reserved. SonicWall is a trademark or registered trademark of SonicWall Inc.

More information

NFS Client Configuration for ESX Express Guide

NFS Client Configuration for ESX Express Guide ONTAP 9 NFS Client Configuration for ESX Express Guide December 2017 215-11183_E0 doccomments@netapp.com Updated for ONTAP 9.3 Table of Contents 3 Contents Deciding whether to use this guide... 4 NFS

More information

Polycom RealPresence Access Director System, Virtual Edition

Polycom RealPresence Access Director System, Virtual Edition Getting Started Guide Version 4.0 July 2014 3725-78702-002D Polycom RealPresence Access Director System, Virtual Edition Copyright 2014, Polycom, Inc. All rights reserved. No part of this document may

More information

SNMP Configuration Express Guide

SNMP Configuration Express Guide ONTAP 9 SNMP Configuration Express Guide December 2017 215-11190_D0 doccomments@netapp.com Updated for ONTAP 9.3 Table of Contents 3 Contents Deciding whether to use this guide... 4 SNMP configuration

More information

Nondisruptive Operations with SMB File Shares

Nondisruptive Operations with SMB File Shares Technical Report Nondisruptive Operations with SMB File Shares ONTAP 9.x John Lantz, NetApp November 2016 TR-4100 Abstract This technical report details NetApp ONTAP support for nondisruptive operations

More information

OnCommand Cloud Manager 3.2 Updating and Administering Cloud Manager

OnCommand Cloud Manager 3.2 Updating and Administering Cloud Manager OnCommand Cloud Manager 3.2 Updating and Administering Cloud Manager April 2017 215-12037_B0 doccomments@netapp.com Table of Contents 3 Contents Updating Cloud Manager... 4 Enabling automatic updates...

More information

Nokia Intrusion Prevention with Sourcefire Appliance Quick Setup Guide. Sourcefire Sensor on Nokia v4.8

Nokia Intrusion Prevention with Sourcefire Appliance Quick Setup Guide. Sourcefire Sensor on Nokia v4.8 Nokia Intrusion Prevention with Sourcefire Appliance Quick Setup Guide Sourcefire Sensor on Nokia v4.8 Part No. N450000774 Rev 001 Published September 2008 COPYRIGHT 2008 Nokia. All rights reserved. Rights

More information

Replacing a PCIe card

Replacing a PCIe card AFF A800 systems Replacing a PCIe card To replace a PCIe card, you must disconnect the cables from the cards, remove the SFP and QSFP modules from the cards before removing the riser, reinstall the riser,

More information

CloudLink SecureVM. Administration Guide. Version 4.0 P/N REV 01

CloudLink SecureVM. Administration Guide. Version 4.0 P/N REV 01 CloudLink SecureVM Version 4.0 Administration Guide P/N 302-002-056 REV 01 Copyright 2015 EMC Corporation. All rights reserved. Published June 2015 EMC believes the information in this publication is accurate

More information

vrealize Suite Lifecycle Manager 1.0 Installation and Management vrealize Suite 2017

vrealize Suite Lifecycle Manager 1.0 Installation and Management vrealize Suite 2017 vrealize Suite Lifecycle Manager 1.0 Installation and Management vrealize Suite 2017 vrealize Suite Lifecycle Manager 1.0 Installation and Management You can find the most up-to-date technical documentation

More information

VMware Identity Manager Connector Installation and Configuration (Legacy Mode)

VMware Identity Manager Connector Installation and Configuration (Legacy Mode) VMware Identity Manager Connector Installation and Configuration (Legacy Mode) VMware Identity Manager This document supports the version of each product listed and supports all subsequent versions until

More information

The Privileged Appliance and Modules (TPAM) 1.0. Diagnostics and Troubleshooting Guide

The Privileged Appliance and Modules (TPAM) 1.0. Diagnostics and Troubleshooting Guide The Privileged Appliance and Modules (TPAM) 1.0 Guide Copyright 2017 One Identity LLC. ALL RIGHTS RESERVED. This guide contains proprietary information protected by copyright. The software described in

More information

Lenovo ThinkAgile XClarity Integrator for Nutanix Installation and User's Guide

Lenovo ThinkAgile XClarity Integrator for Nutanix Installation and User's Guide Lenovo ThinkAgile XClarity Integrator for Nutanix Installation and User's Guide Version 1.0 Note Before using this information and the product it supports, read the information in Appendix A Notices on

More information

Quest VROOM Quick Setup Guide for Quest Rapid Recovery for Windows and Quest Foglight vapp Installers

Quest VROOM Quick Setup Guide for Quest Rapid Recovery for Windows and Quest Foglight vapp Installers Quest VROOM Quick Setup Guide for Quest Rapid Recovery for Windows and Quest Foglight vapp Installers INTRODUCTION Setup of Quest VROOM requires installation of Rapid Recovery and Foglight for Virtualization

More information

FlexArray Virtualization

FlexArray Virtualization FlexArray Virtualization Implementation Guide for NetApp E-Series Storage NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone:

More information

Inventory Collect Tool 1.4

Inventory Collect Tool 1.4 Inventory Collect Tool 1.4 Host and Storage Information Collection Guide For Transition Assessment NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501

More information

Oracle Enterprise Manager Ops Center E Introduction

Oracle Enterprise Manager Ops Center E Introduction Oracle Enterprise Manager Ops Center Discover an Oracle ZFS Storage Appliance and Configure Storage Libraries 12c Release 2 (12.2.2.0.0) E40770-03 December 2014 This guide provides an end-to-end example

More information

SolidFire. Petr Slačík Systems Engineer NetApp NetApp, Inc. All rights reserved.

SolidFire. Petr Slačík Systems Engineer NetApp NetApp, Inc. All rights reserved. SolidFire Petr Slačík Systems Engineer NetApp petr.slacik@netapp.com 27.3.2017 1 2017 NetApp, Inc. All rights reserved. 1 SolidFire Introduction 2 Element OS Scale-out Guaranteed Performance Automated

More information

iscsi Configuration for ESX Express Guide

iscsi Configuration for ESX Express Guide ONTAP 9 iscsi Configuration for ESX Express Guide December 2017 215-11181_D0 doccomments@netapp.com Updated for ONTAP 9.3 Table of Contents 3 Contents Deciding whether to use this guide... 4 iscsi configuration

More information

VMware vsphere Data Protection Evaluation Guide REVISED APRIL 2015

VMware vsphere Data Protection Evaluation Guide REVISED APRIL 2015 VMware vsphere Data Protection REVISED APRIL 2015 Table of Contents Introduction.... 3 Features and Benefits of vsphere Data Protection... 3 Requirements.... 4 Evaluation Workflow... 5 Overview.... 5 Evaluation

More information

Videoscape Distribution Suite Software Installation Guide

Videoscape Distribution Suite Software Installation Guide First Published: August 06, 2012 Last Modified: September 03, 2012 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800

More information

FlexArray Virtualization

FlexArray Virtualization Updated for 8.3.2 FlexArray Virtualization Implementation Guide for NetApp E-Series Storage NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501

More information

Quest VROOM Quick Setup Guide for Quest Rapid Recovery for Windows and Quest Foglight vapp Installers

Quest VROOM Quick Setup Guide for Quest Rapid Recovery for Windows and Quest Foglight vapp Installers Quest VROOM Quick Setup Guide for Quest Rapid Recovery for Windows and Quest Foglight vapp Installers INTRODUCTION Setup of Quest VROOM requires installation of Rapid Recovery and Foglight for Virtualization

More information

Setting up the DR Series System on Acronis Backup & Recovery v11.5. Technical White Paper

Setting up the DR Series System on Acronis Backup & Recovery v11.5. Technical White Paper Setting up the DR Series System on Acronis Backup & Recovery v11.5 Technical White Paper Quest Engineering November 2017 2017 Quest Software Inc. ALL RIGHTS RESERVED. THIS WHITE PAPER IS FOR INFORMATIONAL

More information

FlexArray Virtualization

FlexArray Virtualization Updated for 8.2.4 FlexArray Virtualization Implementation Guide for NetApp E-Series Storage NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501

More information

SolidFire and AltaVault

SolidFire and AltaVault Technical Report SolidFire and AltaVault Back Up SolidFire Storage to the Cloud by Using AltaVault Mike Braden, NetApp August 2016 TR-4542 Abstract Effectively using storage at scale is how organizations

More information

Polycom RealPresence Platform Director

Polycom RealPresence Platform Director RELEASE NOTES 3.0.0 April 2016 3725-66007-002B Polycom RealPresence Platform Director Contents What s New in Release 3.0... 3 Polycom RealPresence Clariti Support... 3 Support for Appliance Edition...

More information

StorageGRID Webscale Installation Guide. For VMware Deployments. October _B0

StorageGRID Webscale Installation Guide. For VMware Deployments. October _B0 StorageGRID Webscale 11.1 Installation Guide For VMware Deployments October 2018 215-12792_B0 doccomments@netapp.com Table of Contents 3 Contents Installation overview... 5 Planning and preparation...

More information

Administering vrealize Log Insight. September 20, 2018 vrealize Log Insight 4.7

Administering vrealize Log Insight. September 20, 2018 vrealize Log Insight 4.7 Administering vrealize Log Insight September 20, 2018 4.7 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/ If you have comments about this documentation,

More information

Polycom Video Border Proxy (VBP ) 7301

Polycom Video Border Proxy (VBP ) 7301 RELEASE NOTES 14.8.2 January 2017 3725-78311-001I Polycom Video Border Proxy (VBP ) 7301 Release Notes Polycom VBP 7301 Version 14 Current Version: 14.8.2 Release Date: January 2017 Polycom VBP Release

More information

Creating Resources on the ZFS Storage Appliance

Creating Resources on the ZFS Storage Appliance Oracle Enterprise Manager Ops Center Creating Non-Global Zones Using a SAN Storage Library 12c Release 3 (12.3.0.0.0) E65613-01 October 2015 This guide provides an end-to-end example for how to use Oracle

More information

Testing and Restoring the Nasuni Filer in a Disaster Recovery Scenario

Testing and Restoring the Nasuni Filer in a Disaster Recovery Scenario Testing and Restoring the Nasuni Filer in a Disaster Recovery Scenario Version 7.8 April 2017 Last modified: July 17, 2017 2017 Nasuni Corporation All Rights Reserved Document Information Testing Disaster

More information

E-Series Converting the Protocol of E2800 Host Ports (New Systems)

E-Series Converting the Protocol of E2800 Host Ports (New Systems) E-Series Converting the Protocol of E2800 Host Ports (New Systems) October 2016 215-11500_A0 doccomments@netapp.com Table of Contents 3 Contents Converting the Protocol of E2800 Host Ports (New Systems)...

More information

Replacing a Drive in E2660, E2760, E5460, E5560, or E5660 Trays

Replacing a Drive in E2660, E2760, E5460, E5560, or E5660 Trays E-Series Replacing a Drive in E2660, E2760, E5460, E5560, or E5660 Trays The Recovery Guru in SANtricity Storage Manager monitors the drives in the storage array and can notify you of an impending drive

More information

1.0. Quest Enterprise Reporter Discovery Manager USER GUIDE

1.0. Quest Enterprise Reporter Discovery Manager USER GUIDE 1.0 Quest Enterprise Reporter Discovery Manager USER GUIDE 2012 Quest Software. ALL RIGHTS RESERVED. This guide contains proprietary information protected by copyright. The software described in this guide

More information

StorageGRID Webscale Installation Guide. For VMware Deployments. January _B0

StorageGRID Webscale Installation Guide. For VMware Deployments. January _B0 StorageGRID Webscale 11.0 Installation Guide For VMware Deployments January 2018 215-12395_B0 doccomments@netapp.com Table of Contents 3 Contents Installation overview... 5 Planning and preparation...

More information

NSX-T Data Center Migration Coordinator Guide. 5 APR 2019 VMware NSX-T Data Center 2.4

NSX-T Data Center Migration Coordinator Guide. 5 APR 2019 VMware NSX-T Data Center 2.4 NSX-T Data Center Migration Coordinator Guide 5 APR 2019 VMware NSX-T Data Center 2.4 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/ If you

More information

NetApp HCI with Mellanox SN2010 Switch Quick Cabling Guide

NetApp HCI with Mellanox SN2010 Switch Quick Cabling Guide Technical Report NetApp HCI with Mellanox SN2010 Switch Stephen Carl, HCI Tech Solutions, NetApp December 2018 TR-4735-1218 TABLE OF CONTENTS 1 Introduction... 4 2 NetApp HCI Hardware... 4 2.1 Node and

More information

SnapCenter Software 3.0 Importing Data from SnapManager to SnapCenter

SnapCenter Software 3.0 Importing Data from SnapManager to SnapCenter SnapCenter Software 3.0 Importing Data from SnapManager to SnapCenter July 2017 215-12093_A0 doccomments@netapp.com Table of Contents 3 Contents Deciding on whether to read this information... 4 Importing

More information

vcenter Server Appliance Configuration Update 1 Modified on 04 OCT 2017 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5

vcenter Server Appliance Configuration Update 1 Modified on 04 OCT 2017 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 Update 1 Modified on 04 OCT 2017 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 You can find the most up-to-date technical documentation on the VMware Web site at: https://docs.vmware.com/ The VMware

More information

Polycom RealPresence Access Director System

Polycom RealPresence Access Director System RELEASE NOTES Version 4.0.1 August 2014 3725-78700-001D1 Polycom RealPresence Access Director System Polycom, Inc. 1 Document Title Version What s New in Release 4.0.1 The RealPresence Access Director

More information

OnCommand Cloud Manager 3.0 Administration Guide

OnCommand Cloud Manager 3.0 Administration Guide OnCommand Cloud Manager 3.0 Administration Guide June 2016 215-11111_A0 doccomments@netapp.com Table of Contents 3 Contents Deciding whether to use this guide... 4 Backing up Cloud Manager... 5 Removing

More information

SonicWall SMA 8200v. Getting Started Guide

SonicWall SMA 8200v. Getting Started Guide SonicWall SMA 8200v Getting Started Guide Copyright 2017 SonicWall Inc. All rights reserved. SonicWall is a trademark or registered trademark of SonicWall Inc. and/or its affiliates in the U.S.A. and/or

More information

Deploy the ExtraHop Explore 5100 Appliance

Deploy the ExtraHop Explore 5100 Appliance Deploy the ExtraHop Explore 5100 Appliance Published: 2018-09-25 In this guide, you will learn how to configure the rack-mounted EXA 5100 ExtraHop Explore appliance and to join multiple Explore appliances

More information

Copyright. Trademarks. Warranty. Copyright 2018 YEALINK (XIAMEN) NETWORK TECHNOLOGY

Copyright. Trademarks. Warranty. Copyright 2018 YEALINK (XIAMEN) NETWORK TECHNOLOGY Copyright Copyright 2018 YEALINK (XIAMEN) NETWORK TECHNOLOGY Copyright 2018 Yealink (Xiamen) Network Technology CO., LTD. All rights reserved. No parts of this publication may be reproduced or transmitted

More information

Installing and Configuring vcenter Support Assistant

Installing and Configuring vcenter Support Assistant Installing and Configuring vcenter Support Assistant vcenter Support Assistant 6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced

More information

Microsoft Office Groove Server Groove Manager. Domain Administrator s Guide

Microsoft Office Groove Server Groove Manager. Domain Administrator s Guide Microsoft Office Groove Server 2007 Groove Manager Domain Administrator s Guide Copyright Information in this document, including URL and other Internet Web site references, is subject to change without

More information

Polycom RealPresence Resource Manager System

Polycom RealPresence Resource Manager System Upgrade Guide 8.2.0 July 2014 3725-72106-001E Polycom RealPresence Resource Manager System Copyright 2014, Polycom, Inc. All rights reserved. No part of this document may be reproduced, translated into

More information

ForeScout CounterACT. Single CounterACT Appliance. Quick Installation Guide. Version 8.0

ForeScout CounterACT. Single CounterACT Appliance. Quick Installation Guide. Version 8.0 ForeScout CounterACT Single CounterACT Appliance Version 8.0 Table of Contents Welcome to CounterACT Version 8.0... 4 CounterACT Package Contents... 4 Overview... 5 1. Create a Deployment Plan... 6 Decide

More information

Cisco Meeting Management

Cisco Meeting Management Cisco Meeting Management Cisco Meeting Management 1.1 User Guide for Administrators September 19, 2018 Cisco Systems, Inc. www.cisco.com Contents 1 Introduction 4 1.1 The software 4 2 Deployment overview

More information