SUPPORTING CLUSTERED DATA ONTAP: NODE MANAGEMENT


OFFERED BY: CUSTOMER SUCCESS OPERATIONS, KNOWLEDGE MANAGEMENT, ADVANCED TECHNICAL TRAINING

SUPPORTING CLUSTERED DATA ONTAP: NODE MANAGEMENT
EXERCISE GUIDE
Content Version: 0.9


ATTENTION
The information contained in this course is intended only for training. This course contains information and activities that, while beneficial for the purposes of training in a closed, non-production environment, can result in downtime or other severe consequences in a production environment. This course material is not a technical reference and should not, under any circumstances, be used in production environments. To obtain reference materials, refer to the NetApp product documentation that is located at http://now.netapp.com/.

COPYRIGHT
2015 NetApp, Inc. All rights reserved. Printed in the U.S.A. Specifications subject to change without notice. No part of this document covered by copyright may be reproduced in any form or by any means graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system without prior written permission of NetApp, Inc.

U.S. GOVERNMENT RIGHTS
Commercial Computer Software. Government users are subject to the NetApp, Inc. standard license agreement and applicable provisions of the FAR and its supplements.

TRADEMARK INFORMATION
NetApp, the NetApp logo, Go further, faster, ASUP, AutoSupport, Campaign Express, Customer Fitness, CyberSnap, Data ONTAP, DataFort, FilerView, Fitness, Flash Accel, Flash Cache, Flash Pool, FlashRay, FlexCache, FlexClone, FlexPod, FlexScale, FlexShare, FlexVol, GetSuccessful, LockVault, Manage ONTAP, Mars, MetroCluster, MultiStore, OnCommand, ONTAP, ONTAPI, RAID DP, SANtricity, SecureShare, Simplicity, Simulate ONTAP, Snap Creator, SnapCopy, SnapDrive, SnapIntegrator, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect, SnapRestore, Snapshot, SnapValidator, SnapVault, StorageGRID, Tech OnTap, and WAFL are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. Other product and service names might be trademarks of NetApp or other companies. A current list of NetApp trademarks is available on the Web at http://www.netapp.com/us/legal/netapptmlist.aspx.

Table of Contents
GETTING STARTED
MODULE 1: STORAGE HA OVERVIEW
MODULE 2: STORAGE HA TAKEOVER AND GIVEBACK
MODULE 3: AGGREGATE RELOCATION


Getting Started

STUDY AID ICONS
These four icons may be used throughout your exercises to identify steps that require your special attention:

Warning: You should follow all the exercise steps, but misconfiguring steps labeled with this icon might cause later steps to not work properly. Check this step carefully before continuing to the next step.

Attention: Steps or comments labeled with this icon should be reviewed carefully to save time, learn a best practice, or avoid errors.

Information: Comments labeled with this icon provide additional information about the topic or procedure.

Knowledge: Comments labeled with this icon provide reference material that gives additional context that you may find useful.

EXERCISE EQUIPMENT DIAGRAM
Your lab contains these virtual machines:
A Windows 2012 server
A 4-node ONTAP cluster
A 2-node ONTAP cluster
A Red Hat Linux server

When you provision your lab kit, you are first connected via Remote Desktop to the Windows 2012 server. From this Windows desktop, you connect to your Data ONTAP clusters or to the individual nodes in each cluster by opening mRemoteNG.

User Name: admin (case sensitive)
Password: Netapp1!

MODULE 1: STORAGE HA OVERVIEW

EXERCISE 1: CONFIG ADVISOR
In this exercise, you will work with specially configured ONTAP simulators that support high-availability operations.

OBJECTIVES
This exercise focuses on enabling you to do the following:
Use Config Advisor to analyze the configuration of your exercise environment
Verify the results of the analysis

TASK 1: ANSWER QUESTIONS
STEP ACTION
1. Review the Best Practices for HA Pairs information in the Clustered Data ONTAP 8.3 High-Availability Configuration Guide.
2. What are the steps to verify the HA pair configuration?
3. Where can you download Config Advisor?
4. Quickly review the Release Notes for Config Advisor. What special considerations exist for MetroCluster systems?
5. Navigate to the download page for Config Advisor. What plug-ins are available?
6. Download the Installation and Administration Guide. What report formats can Config Advisor create?

TASK 2: RUN CONFIG ADVISOR AGAINST A CLUSTER TO VALIDATE STORAGE HA
STEP ACTION
1. Download the Config Advisor software from the NetApp Support Site to the jumphost system:
   1. Using a web browser, navigate to https://support.netapp.com and log in with your SSO credentials.
   2. Hover the mouse over the Tools tab near the top of the page.
   3. Select Toolchest from the Tools dropdown menu.
   4. Select Config Advisor from the list of tools.
   After you accept the Terms & Conditions, the Config Advisor download page lists the downloadable components. Download these components:
   Client Tool: ConfigAdvisor-X.X.X.exe (where X.X.X is the latest Config Advisor version)
   Quick Start Guide
   Release Notes

2. After the Config Advisor software has downloaded to the jumphost system, install it on the jumphost by following the Installation Guide instructions. During the Config Advisor installation wizard, accept the default values for most of the installation questions. However, answer this question as follows before completing the installation:
   Are you installing Config Advisor in a Secure Site? Yes
3. Once the software installation is complete, start Config Advisor.
4. Before using Config Advisor to check the cluster, gather this information from the cluster:
   The cluster-management LIF IP address for the 4-node cluster. Use this command (an illustrative example of the listing follows Step 5):
   ::> network interface show -role cluster-mgmt
   The admin username and password for the 4-node cluster (given in the Exercise Equipment Diagram section of Getting Started).
5. Click Create a New Data Collection.
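For reference, a cluster-management LIF listing looks similar to the following sketch. The vserver name, LIF name, IP address, node, and port shown here are illustrative placeholders, not values from your lab kit:

::> network interface show -role cluster-mgmt
            Logical      Status     Network            Current       Current Is
Vserver     Interface    Admin/Oper Address/Mask       Node          Port    Home
----------- ------------ ---------- ------------------ ------------- ------- ----
cluster1    cluster_mgmt up/up      192.168.0.101/24   cluster1-01   e0c     true

Record the Network Address value; that is the address you give Config Advisor as the Hostname (or IP) in the next task steps.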

6. In Config Advisor, configure and initiate the data collection from the 4-node cluster. Select or populate these configuration fields, starting from the top:
   1. Choose a Profile: ONTAP
   2. Choose Cluster Switch Model: NetApp CN1610/CN1601
   3. Choose Management Switch Option: Disable Cluster and Management Switch
   4. Hostname (or IP): <cluster-management LIF IP address for the 4-node cluster>
   5. Username: admin
   6. Password: <admin password>
   7. To validate the entries, click Next.
7. Click Next to view the device summary. You can also click the View icon to check all the commands that would be run on the system.
8. Click Save & Collect. Name the report in Save Project as. If prompted, re-enter the login credentials.
9. Config Advisor begins to collect data from the 4-node cluster.
   NOTE: Due to the virtualized configuration of the ONTAP software running in this lab, some failures might be logged during data collection. This is normal because those commands are not available in a virtualized configuration.

10. After data collection is complete, click View & Analyze for a summary of the results. Click Export and select Export to PDF to view the full results.
11. Browse through the various sections of the results.
    NOTE: Due to the virtualized configuration of the ONTAP software running in this lab, the Configuration Check results might flag some sections as High Impact, Medium Impact, or Best Practices.
12. Click the back navigation arrow, and then click the binoculars icon to view the collected data.
13. Observe the results of the HA Config Check and Storage Failover State check that ran against the 4-node cluster. To view these results, click each node in the cf status section of the Viewer.
14. Click the storage failover show command family to see more details about the storage failover status of the cluster.
END OF EXERCISE

MODULE 2: STORAGE HA TAKEOVER AND GIVEBACK

OBJECTIVES
This exercise focuses on enabling you to do the following:
Perform different takeover and giveback types
Create a partial giveback scenario
Examine vetoes
Explore automatic givebacks

TASK 1: PERFORM A USER-INITIATED TAKEOVER/GIVEBACK
STEP ACTION
1. This exercise is performed on the 4-node cluster. Log in as the admin user on the console of all 4 nodes in the cluster.
2. In each console window, verify the name of the node that you are logged in to:
   ::> node show local
3. Verify that the storage failover settings are properly set by running this command on the console of one of the nodes:
   ::> storage failover show -fields enabled,onreboot,onpanic,auto-giveback,auto-giveback-after-panic,delay-seconds
   All the fields should have a value of either true or 600.
4. Determine the current state of storage failover in the cluster (illustrative storage failover show output follows Step 5):
   Note the node and the storage failover partner for each node. Observe the Takeover Possible and State Description fields.
   Question: If we were to perform a manual takeover of node3, which node would take over the storage of node3?
   Answer:
5. Initiate a manual takeover of node3 from any node console window:
   ::> storage failover takeover -ofnode <node3>
   Answer y to the warning question. View the console of node3 and observe the node while it is being taken over.
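For reference, before the takeover in Step 5, healthy storage failover show output on the 4-node cluster looks similar to the following sketch. The node names are illustrative placeholders; your kit uses its own node names:

::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------------------
cluster1-01    cluster1-02    true     Connected to cluster1-02
cluster1-02    cluster1-01    true     Connected to cluster1-01
cluster1-03    cluster1-04    true     Connected to cluster1-04
cluster1-04    cluster1-03    true     Connected to cluster1-03

The two nodes that list each other in the Partner column form an HA pair; a takeover of a node is always performed by that partner.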

6. From any node that is currently up, while node3 is in the process of being rebooted, check the storage failover status:
   Observe the Takeover Possible and State Description fields.
   Run this command to check the takeover status:
   ::> storage failover show-takeover
   Observe the Takeover Status for each aggregate.
   Question: Which node performed the takeover?
   Answer:
7. Observe the console of node3 until it pauses at boot with this message:
   Waiting for giveback...(press Ctrl-C to abort wait)
8. From any other node that is currently up, while node3 is waiting for giveback, check the storage failover status:
   Observe the Takeover Possible and State Description fields.
   Run this command to check the giveback status (an illustrative example follows Step 10):
   ::> storage failover show-giveback
   Observe the Giveback Status for each aggregate.
9. Let node3 remain at the Waiting for giveback prompt for 2 minutes, and then manually perform the giveback using this command from any node that is currently up:
   ::> storage failover giveback -ofnode <node3>
10. After the giveback command is issued, observe the console of node3 and verify that node3 continues to boot.
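For reference, while node3 is still waiting for giveback (Step 8), storage failover show-giveback output resembles the following sketch. The node and aggregate names are illustrative, and the exact column layout and status wording can vary by ONTAP release:

::> storage failover show-giveback
               Partner
Node           Aggregate         Giveback Status
-------------- ----------------- ---------------------------------------------
cluster1-04    CFO Aggregates    Not attempted yet
               aggr1_node3       Not attempted yet

After the giveback command is issued, the CFO (root) aggregate is returned first so that node3 can boot, and the SFO (data) aggregates follow; the Giveback Status values change accordingly as you repeat the check.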

11. From any other node that is currently up, while node3 is continuing to boot, check the storage failover status:
    Observe the Takeover Possible and State Description fields.
    Run this command to check the giveback status:
    ::> storage failover show-giveback
    Observe the Giveback Status for each aggregate.
12. Repeat Step 11 until all the aggregates have been given back. After all aggregates have been given back, the giveback process is complete.
13. Observe the console of node3 and verify that the console is presenting the login prompt.

TASK 2: OBSERVE THE AUTOMATIC TAKEOVER PROCESS AND PERFORM A GIVEBACK AFTER AN AUTOMATIC TAKEOVER
STEP ACTION
1. This exercise is performed on the 4-node cluster. Log in as the admin user on the console of all 4 nodes in the cluster.
2. In each console window, verify the name of the node that you are logged in to:
   ::> node show local

3. Verify that the storage failover settings are properly set by running this command on the console of one of the nodes:
   ::> storage failover show -fields enabled,onreboot,onpanic,auto-giveback,auto-giveback-after-panic,delay-seconds
   The onpanic field should have a value of true to indicate that an automatic takeover will occur when a node panics.
   Determine the current state of storage failover in the cluster:
   Note the node and the storage failover partner for each node. Observe the Takeover Possible and State Description fields.
   Question: If an automatic takeover of node4 were to occur, which node would take over the storage of node4?
   Answer:
4. Initiate a panic on node4 from the console of node4:
   ::> node run -node <node4> priv set diag; panic
   View the console of node4 and observe the node during the panic and reboot.
   NOTE: These steps simulate a software failure on node4. (If the single-line form does not run in your kit, see the equivalent interactive sequence after Step 6.)
5. From any node that is currently up, while node4 is in the process of dumping core and rebooting, check the storage failover status:
   Observe the Takeover Possible and State Description fields.
   NOTE: An automatic takeover should have occurred because of the node4 panic.
   Question: Which node performed the takeover?
   Answer:
6. Observe the console of node4 until it pauses at boot with this message:
   Waiting for giveback...(press Ctrl-C to abort wait)
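The panic in Step 4 can also be triggered interactively from the nodeshell, which is useful if the single-line form does not parse in your environment. This is a sketch; the prompts shown are typical nodeshell prompts and <node4> stands for the node name used in your kit:

::> node run -node <node4>
<node4>> priv set diag
<node4>*> panic

The panic immediately forces a core dump and reboot of node4, and its HA partner performs the automatic takeover.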

7. From any other node that is currently up, while node4 is waiting for giveback, check the storage failover status:
   Observe the Takeover Possible and State Description fields.
   Run this command to check the giveback status:
   ::> storage failover show-giveback
   Observe the Giveback Status for each aggregate.
8. Let node4 remain at the Waiting for giveback prompt for 2 minutes, and then manually perform the giveback using this command from any node that is currently up:
   ::> storage failover giveback -ofnode <node4>
9. After the giveback command is issued, observe the console of node4 and verify that node4 continues to boot.
10. From any other node that is currently up, while node4 is continuing to boot, check the storage failover status:
    Observe the Takeover Possible and State Description fields.
    Run this command to check the giveback status:
    ::> storage failover show-giveback
    Observe the Giveback Status for each aggregate.
11. Repeat Step 10 until all the aggregates have been given back. After all aggregates have been given back, the giveback process is complete.
12. Observe the console of node4 and verify that the console is presenting the login prompt.

TASK 3: PERFORM A CFO-ONLY GIVEBACK (PARTIAL GIVEBACK)
STEP ACTION
1. This exercise is performed on the 4-node cluster. Log in as the admin user on the console of all 4 nodes in the cluster.
2. In each console window, verify the name of the node that you are logged in to:
   ::> node show local
3. Determine the current state of storage failover in the cluster:
   Note the node and the storage failover partner for each node. Observe the Takeover Possible and State Description fields.
4. Initiate a manual takeover of node3 from any node console window:
   ::> storage failover takeover -ofnode <node3>
   Answer y to the warning question. View the console of node3 and observe the node while it is being taken over.
5. From any node that is currently up, while node3 is in the process of being rebooted, check the storage failover status:
   Observe the Takeover Possible and State Description fields.
6. Observe the console of node3 until it pauses at boot with this message:
   Waiting for giveback...(press Ctrl-C to abort wait)
7. From any other node that is currently up, while node3 is waiting for giveback, check the storage failover status:
   Observe the Takeover Possible and State Description fields.

8. Let node3 remain at the Waiting for giveback prompt for 2 minutes, and then manually perform a giveback of only the CFO aggregates using this command from any node that is currently up:
   ::> storage failover giveback -ofnode <node3> -only-cfo-aggregates true
9. After the giveback command is issued, observe the console of node3 and verify that node3 continues to boot.
10. From any other node that is currently up, while node3 continues to boot, check the storage failover status:
    Observe the Takeover Possible and State Description fields. Note the Partial giveback in the State Description on node4.
    Run this command to check the giveback status (an illustrative example follows Step 13):
    ::> storage failover show-giveback
    Observe the Giveback Status for each aggregate. Note which aggregates have been given back and which ones have not.
11. Repeat Step 10 until the node3 console has completed booting. Observe the aggregates that have been given back.
12. Repeat Step 10 again 5 minutes after node3 has completed booting.
    Question: Have there been any changes to the Giveback Status of the non-CFO aggregate(s), and why? What needs to be done for the non-CFO aggregates to be given back?
13. Perform a giveback of all non-CFO aggregates to node3:
    ::> storage failover giveback -ofnode <node3>
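For reference, after the CFO-only giveback (the state you observe in Steps 10 through 12), the show-giveback output resembles this sketch: the root (CFO) aggregate has been returned, but the SFO data aggregates remain with the partner until another giveback is issued. Node and aggregate names are illustrative, and the status wording can vary by release:

::> storage failover show-giveback
               Partner
Node           Aggregate         Giveback Status
-------------- ----------------- ---------------------------------------------
cluster1-04    CFO Aggregates    Done
               aggr1_node3       Not attempted yet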

14. From any other node, check the storage failover status:
    Observe the Takeover Possible and State Description fields.
    Run this command to check the giveback status:
    ::> storage failover show-giveback
    Observe the Giveback Status for each aggregate.
15. After all aggregates have been given back, the giveback process is complete.

TASK 4: PERFORM A USER-INITIATED TAKEOVER AND OBSERVE AUTOMATIC GIVEBACK
STEP ACTION
1. This exercise is performed on the 2-node cluster. Log in as the admin user on the console of both nodes in the cluster.
2. In each console window, verify the name of the node that you are logged in to:
   ::> node show local
3. Verify that the storage failover settings are properly set by running this command on the console of one of the nodes:
   ::> storage failover show -fields enabled,onreboot,onpanic,auto-giveback,auto-giveback-after-panic,delay-seconds
   All the fields should have a value of either true or 600. The auto-giveback and auto-giveback-after-panic fields control the automatic giveback behavior of each node in the cluster. (A reference note on changing these settings follows this step.)
   Determine the current state of storage failover in the cluster:
   Note the node and the storage failover partner for each node. Observe the Takeover Possible and State Description fields.
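The automatic giveback settings are already enabled in this lab kit, so no change is needed for this task. For reference only, if a node ever had automatic giveback disabled, it could be enabled with a command like the following (the wildcard applies the change to every node; do not run this as part of the exercise):

::> storage failover modify -node * -auto-giveback true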

4. Initiate a manual takeover of node2 from any node console window:
   ::> storage failover takeover -ofnode <node2>
   Answer y to the warning question. View the console of node2 and observe the node while it is being taken over.
5. From any node that is currently up, while node2 is in the process of being rebooted, check the storage failover status:
   Observe the Takeover Possible and State Description fields.
   Run this command to check the takeover status:
   ::> storage failover show-takeover
   Observe the Takeover Status for each aggregate.
6. Observe the console of node2 until it pauses at boot with this message:
   Waiting for giveback...(press Ctrl-C to abort wait)
7. From any other node that is currently up, while node2 is waiting for giveback, check the storage failover status:
   Observe the Takeover Possible and State Description fields. Notice when an automatic giveback will be attempted.
   Run this command to check the giveback status:
   ::> storage failover show-giveback
   Observe the Giveback Status for each aggregate.
8. Observe the console of node2 and let it remain at the Waiting for giveback prompt. The node will continue to boot when the automatic giveback is attempted.

9. Use the storage failover show command to observe when the automatic giveback is attempted.
10. After the automatic giveback has begun, observe the console of node2 and verify that node2 continues to boot.
11. From node1, while node2 is continuing to boot, check the storage failover status:
    Observe the Takeover Possible and State Description fields.
    Run this command to check the giveback status:
    ::> storage failover show-giveback
    Observe the Giveback Status for each aggregate.
12. Repeat Step 11 until all the aggregates have been given back. After all aggregates have been given back, the giveback process is complete.
13. Observe the console of node2 and verify that the console is presenting the login prompt.
END OF EXERCISE

MODULE 3: AGGREGATE RELOCATION

OBJECTIVES
This exercise focuses on enabling you to do the following:
Identify ARL requirements and processing
Relocate aggregates between the members of an HA pair
Examine the process of using ARL for a controller upgrade

TASK 1: ANSWER QUESTIONS
STEP ACTION
1. Open TR-4146: Aggregate Relocate Overview and Best Practices for Clustered Data ONTAP. What does it say about offline aggregates?
2. In which phase of ARL does the aggregate move from the source to the destination?
3. Can ARL be used to upgrade from a FAS6030 to a FAS6240?
4. Refer to the High-Availability Configuration Guide, Relocating Aggregate Ownership. During which nine system-level operations should ARL not be initiated?
5. Can ARL be used on All-Flash Optimized FAS80xx-series systems? If so, under what conditions?
END OF EXERCISE

TASK 2: USE AGGREGATE RELOCATION TO RELOCATE AGGREGATES BETWEEN THE MEMBERS OF AN HA PAIR
STEP ACTION
1. This exercise is performed on the 2-node cluster. Log in to the 2-node cluster via SSH, as the admin user, using the cluster-management LIF. (An illustrative connection example follows this step.)
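For reference, the SSH connection can be made from mRemoteNG on the Windows jumphost or from the Red Hat Linux server in the kit. From a Linux shell it looks like the following sketch; the IP address is an illustrative placeholder for your 2-node cluster's cluster-management LIF, and the clustershell prompt shows your cluster's name:

$ ssh admin@192.168.0.102
Password:
cluster2::>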

2. In this exercise, you relocate all of the data aggregates that exist on node2 to node1. Determine the name of the data aggregate(s) on node2 that will be relocated:
   ::> storage aggregate show -node <node2> -ha-policy sfo
   Note the name(s) of the aggregate(s) that are currently hosted on node2.
3. To perform an aggregate relocation, both the source and destination nodes must be in quorum. Determine the health of the cluster using these commands:
   ::> set diagnostic
   ::*> cluster show
   ::*> cluster ring show
   ::*> debug smdb table bcomd_info show
4. Perform the aggregate relocation (an illustrative example with placeholder names follows Step 7):
   ::*> set -privilege admin
   ::> storage aggregate relocation start -node <node2> -destination <node1> -aggregate-list <aggr_name>,<aggr_name>,...
5. Monitor the progress of the aggregate relocation:
   ::> storage aggregate relocation show -node <node2>
6. Once the aggregate relocation is complete, verify that all the data aggregate(s) are now owned by node1:
   ::> storage aggregate show -ha-policy sfo
   NOTE: This is a permanent relocation; node1 now owns the aggregate(s) that used to be owned by node2.
7. Relocate the data aggregates that were relocated to node1 back to node2:
   ::> storage aggregate relocation start -node <node1> -destination <node2> -aggregate-list <comma_separated_aggregate_list_used_in_step_4>
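For reference, a pass through Steps 4 through 6 looks similar to the following sketch. The aggregate and node names are illustrative placeholders, and the storage aggregate show column layout may differ slightly by release:

::> storage aggregate relocation start -node cluster2-02 -destination cluster2-01 -aggregate-list aggr1_node2
::> storage aggregate relocation show -node cluster2-02
::> storage aggregate show -ha-policy sfo
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- -----------
aggr1_node2  7.2GB    6.9GB     4% online       1 cluster2-01      raid_dp,
                                                                   normal

Because the relocation is permanent, the Nodes column for aggr1_node2 now shows the destination node (cluster2-01) even though the aggregate name still refers to its original home.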

8. Monitor the progress of the aggregate relocation:
   ::> storage aggregate relocation show -node <node1>
9. Once the aggregate relocation is complete, verify that the data aggregate(s) that were relocated in Step 7 are now owned by node2:
   ::> storage aggregate show -ha-policy sfo
END OF EXERCISE