SAN Features of Clustered Data ONTAP 8.3.1 October 2015 SL10209 Version 1.1


TABLE OF CONTENTS

1 Introduction
1.1 Lab Objectives
1.2 Prerequisites
2 Lab Environment
2.1 Lab Diagram
3 Lab Activities
3.1 Preparing the Windows Server 2012 R2 Host
3.1.1 Login
3.1.2 View LUN Configuration
3.1.3 Start traffic to LUN
3.2 DataMotion for Volumes
3.2.1 Preparation
3.2.2 Start DataMotion for Volumes
3.3 DataMotion for LUN
3.3.1 Preparation
3.3.2 Start DataMotion for LUN
3.4 Selective LUN Mapping
3.4.1 Preparation
3.4.2 Add Paths
3.4.3 DataMotion for Volumes
3.4.4 Remove Paths
3.5 Connecting to and Preparing the RHEL Host
3.5.1 Login
3.5.2 View LUN configuration
3.5.3 Start traffic to LUN
3.6 DataMotion for Volumes
3.6.1 Preparation
3.6.2 Start DataMotion for Volumes
3.7 DataMotion for LUN
3.7.1 Preparation
3.7.2 Start DataMotion for LUN
3.8 Selective LUN Mapping
3.8.1 Preparation
3.8.2 Add paths
3.8.3 DataMotion for Volumes
3.8.4 Remove paths
4 Lab Limitations
5 Software
6 Version History

1 Introduction

The purpose of this lab is to introduce the SAN features in clustered Data ONTAP 8.3.1, including DataMotion for LUNs and Selective LUN Mapping. The lab also covers MPIO path management in Windows Server 2012 R2 and Red Hat Enterprise Linux Server release 6.5 while using these new SAN features, and demonstrates the differences between DataMotion for LUNs and DataMotion for Volumes, along with how each works with Selective LUN Mapping.

1.1 Lab Objectives

- Introduce and demonstrate SAN features in clustered Data ONTAP 8.3.1.
- Demonstrate these features on Windows Server 2012 R2.
- Demonstrate these features on Red Hat Enterprise Linux Server release 6.5.
- Demonstrate and manage MPIO during data mobility operations.

1.2 Prerequisites

We recommend that you have a basic understanding of the following concepts before you start this lab:

- Clustered Data ONTAP, Windows Server, and Red Hat Enterprise Linux.
- SAN storage concepts, such as the initiator-target model.

2 Lab Environment

This lab uses JUMPHOST to demonstrate clustered Data ONTAP storage features on a Windows host, and RHEL1 to demonstrate the same features on a Linux host. The storage system is cluster1, running clustered Data ONTAP, and is composed of 4 nodes (2 HA pairs). RHEL1 and cluster1 are accessed using SSH through PuTTY sessions, and they authenticate using SSH keys that are already established. Both JUMPHOST and RHEL1 already have 8 iSCSI sessions established, 2 sessions per node for 4 nodes. The 2 sessions on each node are separated onto different subnets, following a dual-subnet design for resilience and redundancy. JUMPHOST and RHEL1 have 2 NICs each in order to reach both subnets.

Hostname                   Description
-------------------------  ----------------------------------------------------------------------
JUMPHOST.demo.netapp.com   Windows Server 2012 R2. Password: Netapp1!
RHEL1.demo.netapp.com      Red Hat Enterprise Linux Server release 6.5. Password not needed; uses SSH key exchange with JUMPHOST.
cluster1.demo.netapp.com   Clustered Data ONTAP. Password not needed; uses SSH key exchange with JUMPHOST.

2.1 Lab Diagram

The following illustration identifies the components of this lab.

Figure 2-1: Lab diagram (not reproduced here).
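The lab's PuTTY sessions are preconfigured, but nothing requires PuTTY specifically; any SSH client can open the same sessions. A minimal sketch (the admin and root user names are taken from the session output shown later in this guide):

ssh admin@cluster1.demo.netapp.com    # clustered Data ONTAP cluster shell
ssh root@rhel1.demo.netapp.com        # RHEL host, key-based login as configured in this lab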

3 Lab Activities

This lab is designed to accomplish the following four major tasks:

- Prepare the host, and understand the initial configuration of the SAN.
- Move a volume containing a LUN, and observe traffic and MPIO changes.
- Demonstrate SFMoD (DataMotion for LUN) on a LUN, observe traffic and MPIO changes, and contrast this with DataMotion for Volumes.
- Demonstrate SLM by moving a volume containing a LUN while maintaining direct paths to the LUN. This involves expanding and contracting the LUN masking on the cluster.

Use the following lab activities to accomplish those tasks:

- Preparing the Windows Server 2012 R2 Host (section 3.1)
- DataMotion for Volumes (section 3.2)
- DataMotion for LUN (section 3.3)
- Selective LUN Mapping (section 3.4)
- Connecting to and Preparing the RHEL Host (section 3.5)
- DataMotion for Volumes (section 3.6)
- DataMotion for LUN (section 3.7)
- Selective LUN Mapping (section 3.8)

3.1 Preparing the Windows Server 2012 R2 Host

This section begins the preparation of the Windows Server 2012 R2 host.

3.1.1 Login

Perform the following activities to log in to the Windows Server 2012 R2 host:

1. Log in to Windows Server 2012 R2 as DEMO\administrator, with the password Netapp1!.
2. Double-click Computer Management on the desktop. You may need to maximize the window.

Figure 3-1:

3.1.2 View LUN Configuration

1. In the left pane, under the Storage section, click Data ONTAP(R) DSM Management and observe the LUN's MPIO configuration. There are 4 paths. Even though you have 8 sessions, there are only 4 paths to lun1 because of Selective LUN Mapping, which is covered later in this guide. The paths to cluster1-1 are direct paths and are labeled Active/Optimized. The paths to cluster1-2, which are used during an HA event, are indirect paths and are labeled Active/Non-Optimized.

Figure 3-2:

Now, you will observe the LUN mapping from the clustered Data ONTAP view.

2. Right-click the PuTTY icon on the taskbar and click cluster1 under Recent Sessions.

Figure 3-3:

3. Issue the set adv command, and enter y when prompted to continue.

Using username "admin".
Authenticating with public key "rsa-key-14813"
cluster1::> set adv
Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
cluster1::*>

4. Issue the lun mapping show -vserver svm_win -fields reporting-nodes command.

cluster1::*> lun mapping show -vserver svm_win -fields reporting-nodes
vserver path              igroup    reporting-nodes
------- ----------------- --------- ---------------------
svm_win /vol/vol_win/lun1 win_iscsi cluster1-1,cluster1-2
svm_win /vol/vol_win/lun2 win_iscsi cluster1-1,cluster1-2
2 entries were displayed.

5. Observe the LUN mapping. The nodes cluster1-1 and cluster1-2 are reporting nodes; they report and permit connections to the LUNs in the list. Minimize the Computer Management window and the PuTTY session window.

Figure 3-4:
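If you want to confirm from the storage side that all 8 iSCSI sessions exist even though DSM shows only 4 paths, the SVM's sessions and connections can be listed directly. A minimal sketch; the exact columns vary by Data ONTAP version:

cluster1::> vserver iscsi session show -vserver svm_win      # one row per iSCSI session (expect 8)
cluster1::> vserver iscsi connection show -vserver svm_win   # the TCP connections behind those sessions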

3.1.3 Start traffic to LUN

Now, you will generate traffic to lun1.

1. Double-click lun1.icf on the desktop. Wait a few seconds for Iometer to initialize.

Figure 3-5:

2. Click the Results Display tab in the Iometer window. This will permit you to see the current traffic levels.
3. Make sure to choose Last Update.
4. Click the green flag button near the top of the Iometer window. This will begin the traffic to lun1.

Figure 3-6:

5. Click Save so that the Iometer traffic begins.

Figure 3-7:

6. Iometer has now started a 4KiB-block random read/write workload on lun1.

Figure 3-8:
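Iometer is preinstalled on the jumphost, but a comparable small-block random workload can be generated with Microsoft's diskspd if you ever need to reproduce this outside the lab. A hypothetical sketch only (diskspd is not part of this lab, and the drive letter, file name, and 50/50 read/write mix are assumptions rather than the exact Iometer profile):

REM 4KiB blocks, random access, 50% writes, 8 outstanding I/Os, 2 threads, 5 minutes
diskspd.exe -b4K -r -w50 -o8 -t2 -d300 -c1G E:\sanlab_test.dat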

3.2 DataMotion for Volumes

This lab activity demonstrates clustered Data ONTAP data mobility by moving a volume containing a LUN that is accessed through iSCSI.

3.2.1 Preparation

Perform the following tasks to prepare for the volume move:

1. Restore the cluster1 PuTTY window by clicking the PuTTY icon on the taskbar.

Figure 3-9:

2. Create two more cluster1 PuTTY sessions. You will use them to monitor traffic on cluster1-1 and cluster1-2.
3. Arrange and resize the windows to prepare to monitor the environment as the vol move occurs.

Figure 3-10:

4. In one of the PuTTY session windows, run the run -node cluster1-1 -command sysstat -i command, and in another PuTTY session window run the run -node cluster1-2 -command sysstat -i command. This allows you to monitor the traffic on cluster1-1 and cluster1-2 as the vol move occurs.

Node cluster1-1:

cluster1::> run -node cluster1-1 -command sysstat -i
(sysstat output: cluster1-1 shows steady network and disk throughput from the Iometer workload)

Node cluster1-2:

cluster1::> run -node cluster1-2 -command sysstat -i
(sysstat output: cluster1-2 is nearly idle)

3.2.2 Start DataMotion for Volumes

Figure 3-11:

1. To start the vol move to cluster1-2, run the vol move start -vserver svm_win -volume vol_win -destination-aggregate aggr2 command in the third PuTTY session window.

cluster1::> vol move start -vserver svm_win -volume vol_win -destination-aggregate aggr2
[Job 8] Job is queued: Move "vol_win" in Vserver "svm_win" to aggregate "aggr2". Use the "volume move show -vserver svm_win -volume vol_win" command to view the status of this operation.

2. To show the status of the vol move, run the vol move show -vserver svm_win -volume vol_win command.

cluster1::> vol move show -vserver svm_win -volume vol_win
  Vserver Name: svm_win
  Volume Name: vol_win
  Actual Completion Time: -
  Bytes Remaining: 771.9MB
  Destination Aggregate: aggr2
  Detailed Status: Transferring data: 9.6MB sent.
  Estimated Time of Completion: Tue Sep 1:31: 14
  Managing Node: cluster1-1
  Percentage Complete: 4%
  Move Phase: replicating
  Estimated Remaining Duration: ::11
  Replication Throughput: 68.6MB/s
  Duration of Move: ::16
  Source Aggregate: aggr1
  Start Time of Move: Tue Sep 1:3:8 14
  Move State: healthy

cluster1::> vol move show -vserver svm_win -volume vol_win
  Vserver Name: svm_win
  Volume Name: vol_win
  Actual Completion Time: Tue Sep 1:31:41 14
  Bytes Remaining: 0
  Destination Aggregate: aggr2
  Detailed Status: Successful
  Estimated Time of Completion: -
  Managing Node: cluster1-1
  Percentage Complete: 100%
  Move Phase: completed
  Estimated Remaining Duration: -
  Replication Throughput: 71.MB/s
  Duration of Move: ::43
  Source Aggregate: aggr1
  Start Time of Move: Tue Sep 1:3:8 14
  Move State: done

3. Observe the status of Iometer and sysstat traffic on cluster1-1 and cluster1-2 as the volume is moved and cutover occurs.

Figure 3-12:

Note: Since there is a constant level of I/O on the volume, cutover may be deferred if ONTAP is unable to quiesce the volume; usually cutover is successful after the first deferral.

cluster1::> vol move show -vserver svm_win -volume vol_win
  Vserver Name: svm_win
  Volume Name: vol_win
  Actual Completion Time: -
  Bytes Remaining: 1.6MB
  Destination Aggregate: aggr2
  Detailed Status: Waiting to Cutover. (1.61GB Sent)::Reason: Preparing source volume for cutover: Volume quiesce failed because there are outstanding file system requests on the volume (Volume can't be quiesced as it did not drain in time.) Volume move job at decision point
  Estimated Time of Completion: Tue Sep 16:14:1 14
  Managing Node: cluster1-1
  Percentage Complete: 98%
  Move Phase: cutover_soft_deferred
  Estimated Remaining Duration: ::4
  Replication Throughput: 8.7MB/s
  Duration of Move: :1:4
  Source Aggregate: aggr1
  Start Time of Move: Tue Sep 16:1:1 14
  Move State: healthy

4. Observe the path changes in Data ONTAP(R) DSM.
Note: You may have to click Refresh in the right pane.

5. Verify that the volume now resides on aggr2.

cluster1::> vol show -vserver svm_win
Vserver   Volume        Aggregate   State    Type  Size  Available  Used%
--------- ------------- ----------- -------- ----- ----- ---------- -----
svm_win   svm_win_root  aggr1       online   RW    MB    18.88MB    %
svm_win   vol_win       aggr2       online   RW    1GB   7.91GB     %
2 entries were displayed.

You have just completed a DataMotion for Volumes operation in an iSCSI SAN environment. Notice how this feature enables non-disruptive data mobility within the cluster, even with load on the data, while maintaining direct (Active/Optimized) SAN paths. Also, with Data ONTAP DSM you can easily see path changes and LUN location in the context of your SVM and cluster.
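Rather than re-running vol move show by hand, the status can be polled from any shell that has key-based SSH access to the cluster. A sketch only; this loop is not part of the lab steps, and it assumes a Linux-style shell using the same SSH keys this lab already has in place:

# poll the move status every 5 seconds; stop with Ctrl-C
while true; do
    ssh admin@cluster1.demo.netapp.com "volume move show -vserver svm_win -volume vol_win"
    sleep 5
done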

3.3 DataMotion for LUN

This section demonstrates a new SAN feature in clustered Data ONTAP 8.3: Single File Move on Demand (SFMoD), which allows you to move and copy LUNs between volumes.

3.3.1 Preparation

1. Now that you have moved the vol_win volume to cluster1-2 and aggr2, you will see a demonstration of DataMotion for LUN, in which you move lun1 from vol_win to a new volume on cluster1-1 and aggr1 while observing traffic and path changes.
2. To create the destination volume dest_vol on aggr1, issue the vol create -vserver svm_win -volume dest_vol -aggregate aggr1 -size 1GB -state online command.

cluster1::> vol create -vserver svm_win -volume dest_vol -aggregate aggr1 -size 1GB -state online
[Job 86] Job succeeded: Successful

3.3.2 Start DataMotion for LUN

1. Initiate the DataMotion for LUN by running the lun move start -vserver svm_win -destination-path /vol/dest_vol/lun1 -source-path /vol/vol_win/lun1 command.

cluster1::> lun move start -vserver svm_win -destination-path /vol/dest_vol/lun1 -source-path /vol/vol_win/lun1
Following LUN moves have been started:
"svm_win:/vol/vol_win/lun1" move to "svm_win:/vol/dest_vol/lun1"

2. To show the status of the LUN move, run the lun move show -vserver svm_win command or, for more detailed output, the lun move show -vserver svm_win -instance command.

cluster1::> lun move show -vserver svm_win
Vserver   Destination Path                Status          Progress
--------- ------------------------------- --------------- --------
svm_win   /vol/dest_vol/lun1              Data            1%

cluster1::> lun move show -vserver svm_win -instance
  Vserver Name: svm_win
  Destination Path: /vol/dest_vol/lun1
  Source Path: /vol/vol_win/lun1
  Is Destination Promoted Late: false
  Maximum Transfer Rate (per sec): 0B
  LUN Move Status: Complete
  LUN Move Progress (%): 100%
  Elapsed Time: hms
  Cutover Time: hms
  Is Snapshot Fenced: false
  Is Destination Ready: true
  Last Failure Reason: -

3. Observe the status of Iometer and sysstat traffic on cluster1-1 and cluster1-2 as the LUN is moved. Notice how cutover occurs immediately with DataMotion for LUN, whereas DataMotion for Volumes cutover occurs after movement is complete.
4. Observe the path changes in Data ONTAP(R) DSM. The node cluster1-1 now holds the direct paths, labeled Active/Optimized.
Note: You may have to click Refresh in the right pane.
5. Verify that the LUN now resides in dest_vol on aggr1.

cluster1::> lun show -vserver svm_win
Vserver   Path                            State    Mapped   Type      Size
--------- ------------------------------- -------- -------- --------- -------
svm_win   /vol/dest_vol/lun1              online   mapped   windows   1.GB
svm_win   /vol/vol_win/lun2               online   mapped   windows   .1gb
2 entries were displayed.

You have just completed a DataMotion for LUN operation in an iSCSI SAN environment. DataMotion for LUN is a new SAN feature in the next major release of clustered Data ONTAP. Notice how this feature enables non-disruptive mobility of sub-volume data within the cluster, even with load on the data, while maintaining direct (Active/Optimized) SAN paths. Also, with Data ONTAP DSM you can easily see path changes and LUN location in the context of your SVM and cluster.
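Condensed, the entire DataMotion for LUN sequence you just ran comes down to three commands (all names exactly as used above):

cluster1::> vol create -vserver svm_win -volume dest_vol -aggregate aggr1 -size 1GB -state online
cluster1::> lun move start -vserver svm_win -destination-path /vol/dest_vol/lun1 -source-path /vol/vol_win/lun1
cluster1::> lun move show -vserver svm_win -instance     # repeat until LUN Move Status reads Complete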

3.4 Selective LUN Mapping

This section demonstrates another new SAN feature in clustered Data ONTAP 8.3. Selective LUN Mapping (SLM) enables LUN masking at the node level. SLM is enabled by default on all newly created LUNs, and it can be used with or without portsets.

3.4.1 Preparation

1. The node cluster1-1 is currently hosting lun1, as shown by running the lun show -vserver svm_win -path /vol/dest_vol/lun1 -fields node command.

cluster1::> lun show -vserver svm_win -path /vol/dest_vol/lun1 -fields node
vserver path               node
------- ------------------ ----------
svm_win /vol/dest_vol/lun1 cluster1-1

2. SLM maps the LUN to the hosting node's HA pair, so in this case nodes cluster1-1 and cluster1-2 present the LUN to its mapped igroup. To show this, enter advanced mode (if not already in it) and run the lun mapping show -vserver svm_win -fields reporting-nodes command.

cluster1::> set adv
Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y

cluster1::*> lun mapping show -vserver svm_win -fields reporting-nodes
vserver path               igroup    reporting-nodes
------- ------------------ --------- ---------------------
svm_win /vol/dest_vol/lun1 win_iscsi cluster1-1,cluster1-2
svm_win /vol/vol_win/lun2  win_iscsi cluster1-1,cluster1-2
2 entries were displayed.

3.4.2 Add Paths

1. You will now prepare the cluster for dest_vol to be moved to aggr3, which is owned by cluster1-3. Since cluster1-3 belongs to the cluster1-3/cluster1-4 HA pair, if you were to move dest_vol to aggr3 now, lun1 would still be mapped only to the cluster1-1/cluster1-2 HA pair, and all paths would become indirect. To avoid this, bring the destination HA pair into the current SLM configuration, add the new paths to Windows, move the volume, and then remove the source HA pair paths once the move is complete. This way, you maintain at least 2 direct paths throughout.

2. To add the destination HA pair to the SLM configuration for lun1, run the lun mapping add-reporting-nodes -vserver svm_win -path /vol/dest_vol/lun1 -igroup win_iscsi -destination-aggregate aggr3 command.

cluster1::*> lun mapping add-reporting-nodes -vserver svm_win -path /vol/dest_vol/lun1 -igroup win_iscsi -destination-aggregate aggr3

3. To verify the new SLM configuration for lun1, run the lun mapping show -vserver svm_win -path /vol/dest_vol/lun1 -fields reporting-nodes command.

cluster1::*> lun mapping show -vserver svm_win -path /vol/dest_vol/lun1 -fields reporting-nodes
vserver path               igroup    reporting-nodes
------- ------------------ --------- -------------------------------------------
svm_win /vol/dest_vol/lun1 win_iscsi cluster1-1,cluster1-2,cluster1-3,cluster1-4

4. Click Disk Management in the left pane of the Computer Management window.

Figure 3-13:

5. Click Action near the top of the window, and then click Rescan Disks. This prompts Windows to scan its SCSI bus and pick up the new paths from the destination HA pair.

Figure 3-14:

6. In the left pane, under the Storage section, click Data ONTAP(R) DSM Management and observe the path changes. There should now be 8 paths shown, 2 paths per node for 4 nodes, so the LUN will have direct paths on its destination HA pair.
Note: You may have to click Refresh in the right pane.

Figure 3-15:

7. To observe traffic as the vol move occurs, click inside the PuTTY session running sysstat on cluster1-2, press Ctrl-D to return to the cluster shell, and then issue the run -node cluster1-3 -command sysstat -i command.

Figure 3-16:

3.4.3 DataMotion for Volumes

Now that Windows has paths to all the nodes, you can move the volume to aggr3, and Data ONTAP(R) DSM will handle the Active/Optimized path changes so that you maintain direct paths to the LUN.

1. Issue the vol move start -vserver svm_win -volume dest_vol -destination-aggregate aggr3 command.

cluster1::*> vol move start -vserver svm_win -volume dest_vol -destination-aggregate aggr3
[Job 868] Job is queued: Move "dest_vol" in Vserver "svm_win" to aggregate "aggr3". Use the "volume move show -vserver svm_win -volume dest_vol" command to view the status of this operation.

2. To show the status of the vol move, issue the vol move show -vserver svm_win -volume dest_vol command.

cluster1::*> vol move show -vserver svm_win -volume dest_vol
  Vserver Name: svm_win
  Volume Name: dest_vol
  Actual Completion Time: -
  Bytes Remaining: 898.8MB
  Specified Action For Cutover: retry_on_failure
  Specified Cutover Time Window: 4
  Destination Aggregate: aggr3
  Destination Node: cluster1-3
  Detailed Status: Transferring data: 463.1MB sent.
  Estimated Time of Completion: Wed Sep 3 16:39:41 14
  Job ID: 868
  Managing Node: cluster1-1
  Percentage Complete: 33%
  Move Phase: replicating
  Prior Issues Encountered: -
  Estimated Remaining Duration: ::1
  Replication Throughput: 7.89MB/s
  Duration of Move: ::1
  Source Aggregate: aggr1
  Source Node: cluster1-1
  Start Time of Move: Wed Sep 3 16:39:16 14
  Move State: healthy
  Move Initiated by Auto Balance Aggregate: false

3. Observe the status of Iometer and sysstat traffic on cluster1-1 and cluster1-3 as the volume is moved and cutover occurs.
Note: Since there is a constant level of I/O on the volume, cutover may be deferred if ONTAP is unable to quiesce the volume; usually cutover is successful after the first deferral.

Figure 3-17:

4. Observe the path changes in Data ONTAP(R) DSM; cluster1-3 should now own both of the direct, Active/Optimized paths.
Note: You may have to click Refresh in the right pane for the changes to be visible.

Figure 3-18:

5. Verify that the volume now resides on aggr3.

cluster1::*> vol show -vserver svm_win
Vserver   Volume        Aggregate   State    Type  Size  Available  Used%
--------- ------------- ----------- -------- ----- ----- ---------- -----
svm_win   dest_vol      aggr3       online   RW    1GB   8.19GB     18%
svm_win   svm_win_root  aggr1       online   RW    MB    18.88MB    %
svm_win   vol_win       aggr2       online   RW    1GB   8.GB      14%
3 entries were displayed.

3.4.4 Remove Paths

1. Now you will remove the previous HA pair from the lun1 masking. Issue the lun mapping remove-reporting-nodes -vserver svm_win -path /vol/dest_vol/lun1 -igroup win_iscsi -remote-nodes true command.

cluster1::> lun mapping remove-reporting-nodes -vserver svm_win -path /vol/dest_vol/lun1 -igroup win_iscsi -remote-nodes true

2. To verify the reporting nodes for lun1, enter advanced mode (if not already in it) and issue the lun mapping show -vserver svm_win -fields reporting-nodes command.

cluster1::> set adv
Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y

cluster1::*> lun mapping show -vserver svm_win -fields reporting-nodes
vserver path               igroup    reporting-nodes
------- ------------------ --------- ---------------------
svm_win /vol/dest_vol/lun1 win_iscsi cluster1-3,cluster1-4
svm_win /vol/vol_win/lun2  win_iscsi cluster1-1,cluster1-2
2 entries were displayed.

3. Observe the path changes in Data ONTAP(R) DSM.
Note: You may have to click Refresh in the right pane.

Figure 3-19:

You have just completed a DataMotion for Volumes operation while utilizing Selective LUN Mapping in an iSCSI SAN environment. Selective LUN Mapping is a new SAN feature in the next major release of clustered Data ONTAP. Notice how Selective LUN Mapping enables greater, more granular control of SAN paths to data across the cluster: it lets you choose which controllers present particular LUNs on their SAN LIFs, and it helps you maintain direct (Active/Optimized) paths to data while performing non-disruptive data mobility operations within the cluster. Also, with Data ONTAP DSM you can easily see path changes and LUN location in the context of your SVM and cluster. Note that using Selective LUN Mapping in conjunction with DataMotion for LUN is also supported.
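For reference, the whole SLM-aware move cycle from this section condenses to the following cluster-side commands, with the host-side rescan noted as a comment (all names exactly as used above):

cluster1::> set adv
cluster1::*> lun mapping add-reporting-nodes -vserver svm_win -path /vol/dest_vol/lun1 -igroup win_iscsi -destination-aggregate aggr3
# on JUMPHOST: Disk Management > Action > Rescan Disks, so the new paths appear in DSM
cluster1::*> vol move start -vserver svm_win -volume dest_vol -destination-aggregate aggr3
cluster1::*> vol move show -vserver svm_win -volume dest_vol     # repeat until Move State: done
cluster1::*> lun mapping remove-reporting-nodes -vserver svm_win -path /vol/dest_vol/lun1 -igroup win_iscsi -remote-nodes true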

3.5 Connecting to and Preparing the RHEL Host

This section illustrates how to connect to and prepare the RHEL host.
Note: If you are starting this section after completing the Windows section, close all currently open windows and begin at step 2 below.

3.5.1 Login

1. Log in to Windows Server 2012 R2 (the jumphost) as DEMO\administrator, with the password Netapp1!.
2. Right-click the PuTTY icon on the taskbar, and click rhel1 under Recent Sessions.
Note: Once the PuTTY session terminal window opens, feel free to adjust the size of the window to your preference, as it will be the main RHEL session window where commands will be run.

Figure 3-20:

3. Open another rhel1 PuTTY session. You will use this secondary session to initiate storage workloads.
4. Right-click the PuTTY icon on the taskbar and click cluster1 under Recent Sessions.

Figure 3-21:

5. Repeat the previous step twice, so that you have a total of three cluster1 PuTTY session windows. You will use two of these windows to monitor traffic, and the third to run cluster commands.

Figure 3-22:

3.5.2 View LUN configuration

1. In the main rhel1 session window, issue the df -h command to show the currently mounted devices. Notice that the partition /dev/mapper/mpathbp1 is mounted at /lun1 and is 1GB in size. That is the LUN you will work with.

[root@rhel1 ~]# df -h
Filesystem                    Size  Used  Avail Use% Mounted on
/dev/mapper/vg_rhel1-lv_root  1G    4.9G  6.G   4%   /
tmpfs                         1.9G  76K   1.9G  1%   /dev/shm
/dev/sda1                     48M   4M    41M   9%   /boot
/dev/mapper/mpathap1          .G    1.1G  83M   6%   /lun2
/dev/mapper/mpathbp1          17M   18M   438M  %    /lun1

2. Next, issue the multipath -ll command and observe the paths for mpathb. Notice that the top two paths have status=active; these are the direct paths. The other two paths, labeled status=enabled, are the indirect paths.

[root@rhel1 ~]# multipath -ll
mpathb (36a98774f6a344dd4631667a63) dm-2 NETAPP,LUN C-Mode
size=1.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 6:0:0:1 sdc 8:32  active ready running
| `- 4:0:0:1 sdb 8:16  active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 7:0:0:1 sdf 8:80  active ready running
  `- 1:0:0:1 sdg 8:96  active ready running
mpatha (36a98774f6a344dd4631667a64) dm-3 NETAPP,LUN C-Mode
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 6:0:0:2 sde 8:64  active ready running
| `- 4:0:0:2 sdd 8:48  active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 1:0:0:2 sdi 8:128 active ready running
  `- 7:0:0:2 sdh 8:112 active ready running

3. To view the current iSCSI sessions, issue the iscsiadm -m session command. There should be 8 sessions, 2 sessions per node for 4 nodes. Even though you have 8 sessions, there are only 4 paths to lun1 because of Selective LUN Mapping.

[root@rhel1 ~]# iscsiadm -m session
tcp: [1] 192.168..3:3260,139 iqn.1992-08.com.netapp:sn.deffb113dc11e4be37699da:vs.6
tcp: [2] 192.168.1.33:3260,14 iqn.1992-08.com.netapp:sn.deffb113dc11e4be37699da:vs.6
tcp: [3] 192.168.1.3:3260,138 iqn.1992-08.com.netapp:sn.deffb113dc11e4be37699da:vs.6
tcp: [4] 192.168..33:3260,141 iqn.1992-08.com.netapp:sn.deffb113dc11e4be37699da:vs.6
tcp: [5] 192.168.1.34:3260,14 iqn.1992-08.com.netapp:sn.deffb113dc11e4be37699da:vs.6
tcp: [6] 192.168.1.31:3260,136 iqn.1992-08.com.netapp:sn.deffb113dc11e4be37699da:vs.6
tcp: [7] 192.168..31:3260,137 iqn.1992-08.com.netapp:sn.deffb113dc11e4be37699da:vs.6
tcp: [8] 192.168..34:3260,143 iqn.1992-08.com.netapp:sn.deffb113dc11e4be37699da:vs.6

4. Issue the snapdrive storage show -devices command. This command is provided by NetApp's SnapDrive for UNIX application. Observe the LUN path on the SVM.

[root@rhel1 ~]# snapdrive storage show -devices
Connected LUNs and devices:

device filename      adapter  path  size  proto  state   clone  lun path                       backing snapshot
-------------------  -------  ----  ----  -----  ------  -----  -----------------------------  ----------------
/dev/mapper/mpathb   -        P     1g    iscsi  online  No     svm_linux:/vol/vol_linux/lun1  -
/dev/mapper/mpatha   -        P     2g    iscsi  online  No     svm_linux:/vol/vol_linux/lun2  -

5. Issue the sanlun lun show command. This command is provided by NetApp's Linux Host Utilities kit and provides additional useful LUN information.

[root@rhel1 ~]# sanlun lun show
controller(7mode)/                          device     host     lun
vserver(Cmode)  lun-pathname                filename   adapter  protocol  size  mode
-------------------------------------------------------------------------------------
svm_linux       /vol/vol_linux/lun2         /dev/sdd   host4    iSCSI     2g    C
svm_linux       /vol/vol_linux/lun2         /dev/sdh   host7    iSCSI     2g    C
svm_linux       /vol/vol_linux/lun1         /dev/sdg   host1    iSCSI     1g    C
svm_linux       /vol/vol_linux/lun1         /dev/sdf   host7    iSCSI     1g    C
svm_linux       /vol/vol_linux/lun2         /dev/sdi   host1    iSCSI     2g    C
svm_linux       /vol/vol_linux/lun2         /dev/sde   host6    iSCSI     2g    C
svm_linux       /vol/vol_linux/lun1         /dev/sdc   host6    iSCSI     1g    C
svm_linux       /vol/vol_linux/lun1         /dev/sdb   host4    iSCSI     1g    C

6. In the primary cluster1 PuTTY session window, issue the set adv command, and enter y when prompted to continue.

Using username "admin".
Authenticating with public key "rsa-key-14813"
cluster1::> set adv
Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
cluster1::*>

7. Issue the lun mapping show -vserver svm_linux -fields reporting-nodes command and observe the current LUN mapping.

cluster1::*> lun mapping show -vserver svm_linux -fields reporting-nodes
vserver   path                 igroup       reporting-nodes
--------- -------------------- -----------  ---------------------
svm_linux /vol/vol_linux/lun1  linux_iscsi  cluster1-3,cluster1-4
svm_linux /vol/vol_linux/lun2  linux_iscsi  cluster1-3,cluster1-4
2 entries were displayed.
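To correlate those sessions with the sd devices that carry each LUN, iscsiadm can print per-session detail, including the attached SCSI devices (standard open-iscsi behavior; the print level 3 output is verbose):

[root@rhel1 ~]# iscsiadm -m session -P 3     # target portal, session state, and attached sdX devices per session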

3.5.3 Start traffic to LUN

1. In the secondary rhel1 session window, start I/O to lun1 using SIO, a NetApp I/O-generation utility. Issue the following command:

[root@rhel1 ~]# sio 4k m 1 /lun1/m.randomfile
Version: 3.
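SIO ships with this lab, but the same kind of load can be generated with the stock fio utility. A hypothetical sketch only (fio is not necessarily installed on rhel1, and the file name, size, and 50/50 read/write mix are assumptions rather than the exact SIO parameters):

# 4KiB random 50/50 read/write against a file on the lun1 file system, for 5 minutes
fio --name=lun1-randrw --filename=/lun1/fio.testfile --size=100m \
    --bs=4k --rw=randrw --rwmixread=50 --direct=1 --time_based --runtime=300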

3.6 DataMotion for Volumes

This section demonstrates clustered Data ONTAP data mobility by moving a volume containing a LUN that is being accessed through iSCSI.

3.6.1 Preparation

1. To observe traffic on cluster1-3 and cluster1-4 as the vol move occurs, use two of the cluster1 PuTTY session windows opened in section 3.5.1. In one window, run the run -node cluster1-3 -command sysstat -i command, and in the other run the run -node cluster1-4 -command sysstat -i command.

Node cluster1-3:

cluster1::> run -node cluster1-3 -command sysstat -i
(sysstat output: cluster1-3 shows steady network and disk throughput from the SIO workload)

Node cluster1-4:

cluster1::> run -node cluster1-4 -command sysstat -i
(sysstat output: cluster1-4 is nearly idle)

3.6.2 Start DataMotion for Volumes

1. To start the vol move to cluster1-4, issue the vol move start -vserver svm_linux -volume vol_linux -destination-aggregate aggr4 command in the primary cluster1 PuTTY session window.

cluster1::> vol move start -vserver svm_linux -volume vol_linux -destination-aggregate aggr4
[Job 94] Job is queued: Move "vol_linux" in Vserver "svm_linux" to aggregate "aggr4". Use the "volume move show -vserver svm_linux -volume vol_linux" command to view the status of this operation.

2. To show the status of the vol move, issue the vol move show -vserver svm_linux -volume vol_linux command.

cluster1::> volume move show -vserver svm_linux -volume vol_linux
  Vserver Name: svm_linux
  Volume Name: vol_linux
  Actual Completion Time: -
  Bytes Remaining: 1.1GB
  Destination Aggregate: aggr4
  Detailed Status: Transferring data: 46.MB sent.
  Estimated Time of Completion: Fri Sep 18::4 14
  Managing Node: cluster1-3
  Percentage Complete: %
  Move Phase: replicating
  Estimated Remaining Duration: ::9
  Replication Throughput: 3.4MB/s
  Duration of Move: ::1
  Source Aggregate: aggr3
  Start Time of Move: Fri Sep 18::4 14
  Move State: healthy

cluster1::> volume move show -vserver svm_linux -volume vol_linux
  Vserver Name: svm_linux
  Volume Name: vol_linux
  Actual Completion Time: Fri Sep 18:3:4 14
  Bytes Remaining: 0
  Destination Aggregate: aggr4
  Detailed Status: Successful
  Estimated Time of Completion: -
  Managing Node: cluster1-3
  Percentage Complete: 100%
  Move Phase: completed
  Estimated Remaining Duration: -
  Replication Throughput: 9.96MB/s
  Duration of Move: :3:
  Source Aggregate: aggr3
  Start Time of Move: Fri Sep 18::4 14
  Move State: done

3. Observe the status of the sysstat traffic on cluster1-3 and cluster1-4 as the volume is moved and cutover occurs.
Note: Since there is a constant high level of I/O on the volume, cutover may be deferred if ONTAP is unable to quiesce the volume; usually cutover is successful after the first deferral.

cluster1::> volume move show -vserver svm_linux -volume vol_linux
  Vserver Name: svm_linux
  Volume Name: vol_linux
  Actual Completion Time: -
  Bytes Remaining: 687.9MB
  Destination Aggregate: aggr4
  Detailed Status: Waiting to Cutover. (1.86GB Sent)::Reason: Preparing source volume for cutover: Volume quiesce failed because there are outstanding file system requests on the volume (Volume can't be quiesced as it did not drain in time.) Volume move job at decision point
  Estimated Time of Completion: Fri Sep 18:: 14
  Managing Node: cluster1-3
  Percentage Complete: 7%
  Move Phase: cutover_soft_deferred
  Estimated Remaining Duration: ::4
  Replication Throughput: 9.96MB/s
  Duration of Move: ::6
  Source Aggregate: aggr3
  Start Time of Move: Fri Sep 18::4 14
  Move State: healthy

4. Observe the path changes by running the multipath -ll command in the primary rhel1 PuTTY session window. Notice how, under mpathb, sdf and sdg are now status=active, and sdc and sdb are now status=enabled, whereas before it was the opposite. This is because the volume is now on aggr4, which is owned by cluster1-4; therefore, the paths to cluster1-4 are now the direct paths, and the paths to cluster1-3 are now the indirect paths.

[root@rhel1 ~]# multipath -ll
mpathb (36a98774f6a344dd4631667a63) dm-2 NETAPP,LUN C-Mode
size=1.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 7:0:0:1 sdf 8:80  active ready running
| `- 1:0:0:1 sdg 8:96  active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 6:0:0:1 sdc 8:32  active ready running
  `- 4:0:0:1 sdb 8:16  active ready running
mpatha (36a98774f6a344dd4631667a64) dm-3 NETAPP,LUN C-Mode
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 7:0:0:2 sdh 8:112 active ready running
| `- 1:0:0:2 sdi 8:128 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 4:0:0:2 sdd 8:48  active ready running
  `- 6:0:0:2 sde 8:64  active ready running

5. Verify that the volume now resides on aggr4.

cluster1::> vol show -vserver svm_linux
Vserver    Volume          Aggregate   State    Type  Size  Available  Used%
---------  --------------  ----------- -------- ----- ----- ---------- -----
svm_linux  svm_linux_root  aggr3       online   RW    MB    18.88MB    %
svm_linux  vol_linux       aggr4       online   RW    1GB   7.6GB      3%
2 entries were displayed.

You have just completed a DataMotion for Volumes operation in an iSCSI SAN environment. Notice how this feature enables non-disruptive data mobility within the cluster, even with load on the data, while maintaining direct (Active/Optimized) SAN paths.

3.7 DataMotion for LUN

This section demonstrates a new SAN feature in clustered Data ONTAP 8.3: Single File Move on Demand (SFMoD), which allows you to move and copy LUNs between volumes.

3.7.1 Preparation

Now that you have moved vol_linux to cluster1-4 and aggr4, this activity demonstrates DataMotion for LUN: you will move lun1 from vol_linux to a new volume on cluster1-3 while observing traffic and path changes.

1. To create the destination volume dest_vol on aggr3, issue the vol create -vserver svm_linux -volume dest_vol -aggregate aggr3 -size 1GB -state online command.

cluster1::> vol create -vserver svm_linux -volume dest_vol -aggregate aggr3 -size 1GB -state online
[Job 943] Job succeeded: Successful

3.7.2 Start DataMotion for LUN

1. Initiate the DataMotion for LUN by issuing the lun move start -vserver svm_linux -destination-path /vol/dest_vol/lun1 -source-path /vol/vol_linux/lun1 command.

cluster1::> lun move start -vserver svm_linux -destination-path /vol/dest_vol/lun1 -source-path /vol/vol_linux/lun1
Following LUN moves have been started:
"svm_linux:/vol/vol_linux/lun1" move to "svm_linux:/vol/dest_vol/lun1"

2. To show the status of the LUN move, issue the lun move show -vserver svm_linux command or, for more detailed output, the lun move show -vserver svm_linux -instance command.

cluster1::> lun move show -vserver svm_linux
Vserver   Destination Path                Status          Progress
--------- ------------------------------- --------------- --------
svm_linux /vol/dest_vol/lun1              Data            34%

cluster1::> lun move show -vserver svm_linux -instance
  Vserver Name: svm_linux
  Destination Path: /vol/dest_vol/lun1
  Source Path: /vol/vol_linux/lun1

  Is Destination Promoted Late: false
  Maximum Transfer Rate (per sec): 0B
  LUN Move Status: Data
  LUN Move Progress (%): 78%
  Elapsed Time: hm1s
  Cutover Time: hms
  Is Snapshot Fenced: false
  Is Destination Ready: true
  Last Failure Reason: -

3. Observe the status of sysstat traffic on cluster1-3 and cluster1-4 as the LUN is moved. Notice how cutover occurs immediately with DataMotion for LUN, whereas vol move cutover occurs after movement is complete.

Node cluster1-3:

(sysstat output: throughput on cluster1-3 ramps up as it takes over hosting lun1)

Node cluster1-4:

(sysstat output: throughput on cluster1-4 falls off correspondingly)

4. Observe the path changes by issuing the multipath -ll command in the primary rhel1 PuTTY session window. Notice how, under mpathb, sdc and sdb are now status=active, and sdf and sdg are now status=enabled, where before it was the opposite. This is because lun1 is now in dest_vol, which resides on aggr3, owned by cluster1-3. Therefore the paths to cluster1-3 are now the direct paths, and the paths to cluster1-4 are now the indirect paths.

[root@rhel1 ~]# multipath -ll
mpathb (36a98774f6a344dd4631667a63) dm-2 NETAPP,LUN C-Mode
size=1.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 6:0:0:1 sdc 8:32  active ready running
| `- 4:0:0:1 sdb 8:16  active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 7:0:0:1 sdf 8:80  active ready running
  `- 1:0:0:1 sdg 8:96  active ready running
mpatha (36a98774f6a344dd4631667a64) dm-3 NETAPP,LUN C-Mode
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 7:0:0:2 sdh 8:112 active ready running
| `- 1:0:0:2 sdi 8:128 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 4:0:0:2 sdd 8:48  active ready running
  `- 6:0:0:2 sde 8:64  active ready running

5. Verify that lun1 now resides in dest_vol on aggr3.

cluster1::> lun show -vserver svm_linux
Vserver    Path                  State    Mapped   Type    Size
---------  --------------------- -------- -------- ------- -----
svm_linux  /vol/dest_vol/lun1    online   mapped   linux   1GB
svm_linux  /vol/vol_linux/lun2   online   mapped   linux   2GB
2 entries were displayed.

You have just completed a DataMotion for LUN operation in an iSCSI SAN environment. DataMotion for LUN is a new SAN feature in the next major release of clustered Data ONTAP. Notice how this feature enables non-disruptive mobility of sub-volume data within the cluster, even with load on the data, while maintaining direct (Active/Optimized) SAN paths.

3.8 Selective LUN Mapping

This section demonstrates another new SAN feature in clustered Data ONTAP 8.3. Selective LUN Mapping (SLM) enables LUN masking at the node level. SLM is enabled by default on all newly created LUNs, and it can be used with or without portsets.

3.8.1 Preparation

1. cluster1-3 is currently hosting lun1. You can view this by issuing the lun show -vserver svm_linux -path /vol/dest_vol/lun1 -fields node command.

cluster1::> lun show -vserver svm_linux -path /vol/dest_vol/lun1 -fields node
vserver   path               node
--------- ------------------ ----------
svm_linux /vol/dest_vol/lun1 cluster1-3

2. SLM maps the LUN to the hosting node's HA pair, so in this case cluster1-3 and cluster1-4 present the LUN to its mapped igroup. To show this, enter advanced mode (if not already in it) and issue the lun mapping show -vserver svm_linux -fields reporting-nodes command.

cluster1::> set adv
Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y

cluster1::*> lun mapping show -vserver svm_linux -fields reporting-nodes
vserver   path                igroup       reporting-nodes
--------- ------------------- -----------  ---------------------
svm_linux /vol/dest_vol/lun1  linux_iscsi  cluster1-3,cluster1-4
svm_linux /vol/vol_linux/lun2 linux_iscsi  cluster1-3,cluster1-4
2 entries were displayed.

3.8.2 Add paths

You will now prepare the cluster for dest_vol to be moved to aggr1, which is owned by cluster1-1. Since cluster1-1 belongs to the cluster1-1/cluster1-2 HA pair, if you were to move dest_vol to aggr1 now, lun1 would still be mapped only to the cluster1-3/cluster1-4 HA pair, and all paths would become indirect.

To avoid this, you will bring the destination HA pair into the current SLM configuration, add the new paths to RHEL, move the volume, and then remove the source HA pair paths once the move is complete. This way, you maintain at least 2 direct paths throughout.

1. To add the destination HA pair to the SLM configuration for lun1, issue the lun mapping add-reporting-nodes -vserver svm_linux -path /vol/dest_vol/lun1 -igroup linux_iscsi -destination-aggregate aggr1 command.

cluster1::*> lun mapping add-reporting-nodes -vserver svm_linux -path /vol/dest_vol/lun1 -igroup linux_iscsi -destination-aggregate aggr1

2. To verify the new SLM configuration for lun1, issue the lun mapping show -vserver svm_linux -path /vol/dest_vol/lun1 -fields reporting-nodes command.

cluster1::*> lun mapping show -vserver svm_linux -path /vol/dest_vol/lun1 -fields reporting-nodes
vserver   path               igroup       reporting-nodes
--------- ------------------ -----------  -------------------------------------------
svm_linux /vol/dest_vol/lun1 linux_iscsi  cluster1-1,cluster1-2,cluster1-3,cluster1-4

3. In the primary RHEL terminal, issue the rescan-scsi-bus.sh command. This prompts RHEL to scan its SCSI bus and pick up the new paths from the destination HA pair.

[root@rhel1 ~]# rescan-scsi-bus.sh
(output: the script lists each host adapter, scans every SCSI target ID and LUN, and reports the devices found, including the new paths presented by cluster1-1 and cluster1-2)

4. Observe the path changes by running the multipath -ll command. There should now be 8 paths shown under mpathb: 2 direct paths and 6 indirect paths. The 2 direct paths are the paths to cluster1-3, and the 6 indirect paths are the paths to cluster1-1, cluster1-2, and cluster1-4 (2 paths per node).

[root@rhel1 ~]# multipath -ll
mpathb (36a98774f6a344dd4631667a63) dm-2 NETAPP,LUN C-Mode
size=1.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 6:0:0:1 sdc 8:32  active ready running
| `- 4:0:0:1 sdb 8:16  active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 7:0:0:1 sdf 8:80  active ready running
  |- 1:0:0:1 sdg 8:96  active ready running
  |- 3:0:0:1 sdj 8:144 active ready running
  |- 5:0:0:1 sdk 8:160 active ready running
  |- 8:0:0:1 sdl 8:176 active ready running
  `- 9:0:0:1 sdm 8:192 active ready running
mpatha (36a98774f6a344dd4631667a64) dm-3 NETAPP,LUN C-Mode
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 7:0:0:2 sdh 8:112 active ready running
| `- 1:0:0:2 sdi 8:128 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 4:0:0:2 sdd 8:48  active ready running
  `- 6:0:0:2 sde 8:64  active ready running

5. To observe traffic as the vol move occurs, click inside the PuTTY session running sysstat on cluster1-4 and press Ctrl-D to return to the cluster shell. Then issue the run -node cluster1-1 -command sysstat -i command.

cluster1::> run -node cluster1-1 -command sysstat -i
(sysstat output: cluster1-1 is nearly idle until the volume move cutover completes)

3.8.3 DataMotion for Volumes

1. Now that RHEL has paths to all the nodes, you can move the volume to aggr1, and RHEL's native MPIO will handle the direct/indirect path changes so that you maintain direct paths to the LUN. Issue the vol move start -vserver svm_linux -volume dest_vol -destination-aggregate aggr1 command.

cluster1::*> vol move start -vserver svm_linux -volume dest_vol -destination-aggregate aggr1
[Job 11] Job is queued: Move "dest_vol" in Vserver "svm_linux" to aggregate "aggr1". Use the "volume move show -vserver svm_linux -volume dest_vol" command to view the status of this operation.

2. To show the status of the vol move, issue the vol move show -vserver svm_linux -volume dest_vol command.

cluster1::*> vol move show -vserver svm_linux -volume dest_vol
  Vserver Name: svm_linux
  Volume Name: dest_vol
  Actual Completion Time: -
  Bytes Remaining: 1.17GB
  Specified Action For Cutover: retry_on_failure
  Specified Cutover Time Window: 4
  Destination Aggregate: aggr1
  Destination Node: cluster1-1
  Detailed Status: Transferring data: 169.4MB sent.
  Estimated Time of Completion: Mon Sep 8 14:3:7 14
  Job ID: 11
  Managing Node: cluster1-3
  Percentage Complete: 1%
  Move Phase: replicating
  Prior Issues Encountered: -
  Estimated Remaining Duration: :1:3
  Replication Throughput: 18.8MB/s
  Duration of Move: ::13
  Source Aggregate: aggr3
  Source Node: cluster1-3
  Start Time of Move: Mon Sep 8 14::11 14
  Move State: healthy
  Move Initiated by Auto Balance Aggregate: false

cluster1::*> vol move show -vserver svm_linux -volume dest_vol
  Vserver Name: svm_linux
  Volume Name: dest_vol
  Actual Completion Time: -
  Bytes Remaining: 63.1MB
  Specified Action For Cutover: retry_on_failure
  Specified Cutover Time Window: 4
  Destination Aggregate: aggr1
  Destination Node: cluster1-1
  Detailed Status: Waiting to Cutover. (1.8GB Sent)::Reason: Preparing source volume for cutover: Volume quiesce failed because there are outstanding file system requests on the volume (Volume can't be quiesced as it did not drain in time.) Volume move job at decision point
  Estimated Time of Completion: Mon Sep 8 14::6 14
  Job ID: 11
  Managing Node: cluster1-3
  Percentage Complete: 66%
  Move Phase: cutover_soft_deferred
  Prior Issues Encountered: 9/8/14 14:3:4 : Preparing source volume for cutover: Volume quiesce failed because there are outstanding file system requests on the volume (Volume can't be quiesced as it did not drain in time.)
  Estimated Remaining Duration: ::4
  Replication Throughput: .33MB/s
  Duration of Move: ::3
  Source Aggregate: aggr3
  Source Node: cluster1-3
  Start Time of Move: Mon Sep 8 14::11 14
  Move State: healthy
  Move Initiated by Auto Balance Aggregate: false

3. Observe the status of the sysstat traffic on cluster1-3 and cluster1-1 as the volume is moved and cutover occurs.
Note: Since there is a constant high level of I/O on the volume, cutover may be deferred if ONTAP is unable to quiesce the volume; usually cutover is successful after the first deferral.

cluster1::*> vol move show -vserver svm_linux -volume dest_vol

cluster1::*> vol move show -vserver svm_linux -volume dest_vol
                 Vserver Name: svm_linux
                  Volume Name: dest_vol
       Actual Completion Time: Mon Sep 08 14::9 2014
              Bytes Remaining: 0B
 Specified Action For Cutover: retry_on_failure
Specified Cutover Time Window: 45
        Destination Aggregate: aggr1
             Destination Node: cluster1-1
              Detailed Status: Successful
 Estimated Time of Completion:
                       Job ID: 11
                Managing Node: cluster1-3
          Percentage Complete: 100%
                   Move Phase: completed
     Prior Issues Encountered: 9/8/14 14:3:4 : Preparing source volume for cutover: Volume quiesce failed because there are outstanding file system requests on the volume (Volume can't be quiesced as it did not drain in time.)
 Estimated Remaining Duration:
       Replication Throughput: 3.3MB/s
             Duration of Move: :3:48
             Source Aggregate: aggr3
                  Source Node: cluster1-3
           Start Time of Move: Mon Sep 08 14::11 2014
                   Move State: done
Move Initiated by Auto Balance Aggregate: false

4. Observe the path changes by running the multipath -ll command in the primary rhel1 PuTTY session window. There should still be 8 paths shown under mpathb, with 2 direct paths and 6 indirect paths. However, the direct paths have changed. Before, sdc/sdb were the direct paths, but now sdm/sdl are the direct paths, because dest_vol (which contains lun1) is now hosted by cluster1-1 on aggr1.

[root@rhel1 ~]# multipath -ll
mpathb (36a98774f6a344dd4631667a63) dm-2 NETAPP,LUN C-Mode
size=1.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 9:0:0:1  sdm 8:192 active ready running
| `- 8:0:0:1  sdl 8:176 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 4:0:0:1  sdb 8:16  active ready running
  |- 6:0:0:1  sdc 8:32  active ready running
  |- 7:0:0:1  sdf 8:80  active ready running
  |- 10:0:0:1 sdg 8:96  active ready running
  |- 3:0:0:1  sdj 8:144 active ready running
  `- 5:0:0:1  sdk 8:160 active ready running
mpatha (36a98774f6a344dd4631667a64) dm-3 NETAPP,LUN C-Mode
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 7:0:0:0  sdh 8:112 active ready running
| `- 10:0:0:0 sdi 8:128 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 4:0:0:0  sdd 8:48  active ready running
  `- 6:0:0:0  sde 8:64  active ready running

5. Verify that the volume now resides on aggr1.

cluster1::*> vol show -vserver svm_linux
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm_linux dest_vol     aggr1        online     RW         10GB     7.99GB   20%
svm_linux svm_linux_root
                       aggr3        online     RW         20MB    18.88MB    5%
svm_linux vol_linux    aggr4        online     RW         10GB      8.4GB   15%
3 entries were displayed.
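If you want to confirm which LIFs the new direct paths use, you can correlate each iSCSI session with the SCSI disks it carries. The one-liner below is an optional cross-check, not a lab step; it relies on iscsi-initiator-utils, which this lab already uses. The portals listed alongside sdl and sdm should be the iSCSI LIF addresses on cluster1-1.

[root@rhel1 ~]# iscsiadm -m session -P 3 | egrep 'Target:|Current Portal:|Attached scsi disk'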

3.8.4 Remove paths

1. To remove the previous HA pair from the LUN mapping for lun1, issue the lun mapping remove-reporting-nodes -vserver svm_linux -path /vol/dest_vol/lun1 -igroup linux_iscsi -remote-nodes true command.

cluster1::> lun mapping remove-reporting-nodes -vserver svm_linux -path /vol/dest_vol/lun1 -igroup linux_iscsi -remote-nodes true

2. To verify the reporting nodes for lun1, enter advanced mode (if you are not already in it) and issue the lun mapping show -vserver svm_linux -path /vol/dest_vol/lun1 -igroup linux_iscsi -fields reporting-nodes command.

cluster1::> set adv
Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y

cluster1::*> lun mapping show -vserver svm_linux -path /vol/dest_vol/lun1 -igroup linux_iscsi -fields reporting-nodes
vserver   path               igroup      reporting-nodes
--------- ------------------ ----------- -----------------------
svm_linux /vol/dest_vol/lun1 linux_iscsi cluster1-1,cluster1-2

3. Observe the path changes by issuing the multipath -ll command in the primary rhel1 PuTTY session window. There should still be 8 paths shown under mpathb, with 2 direct paths, 2 indirect paths, and 4 faulty paths. The 4 faulty paths are the result of removing the cluster1-3/cluster1-4 HA pair from the SLM configuration for lun1.

[root@rhel1 ~]# multipath -ll
Sep 8 14:3:44 sdb: couldn't get target port group
Sep 8 14:3:44 sdc: couldn't get target port group
Sep 8 14:3:44 sdf: couldn't get target port group
Sep 8 14:3:44 sdg: couldn't get target port group
mpathb (36a98774f6a344dd4631667a63) dm-2 NETAPP,LUN C-Mode
size=1.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 9:0:0:1  sdm 8:192 active ready running
| `- 8:0:0:1  sdl 8:176 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 4:0:0:1  sdb 8:16  failed faulty running
  |- 6:0:0:1  sdc 8:32  failed faulty running
  |- 7:0:0:1  sdf 8:80  failed faulty running
  |- 10:0:0:1 sdg 8:96  failed faulty running
  |- 3:0:0:1  sdj 8:144 active ready running
  `- 5:0:0:1  sdk 8:160 active ready running
mpatha (36a98774f6a344dd4631667a64) dm-3 NETAPP,LUN C-Mode
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 7:0:0:0  sdh 8:112 active ready running
| `- 10:0:0:0 sdi 8:128 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 4:0:0:0  sdd 8:48  active ready running
  `- 6:0:0:0  sde 8:64  active ready running

4. To remove these stale paths, use the same utility script, and request a scan of the existing block devices.

[root@rhel1 ~]# /usr/sbin/rescan-scsi-bus.sh -r

5. Observe the path changes by issuing the multipath -ll command in the primary rhel1 PuTTY session window. Notice that sdb, sdc, sdf, and sdg have been removed from the multipath configuration. There should now be 4 paths in total, 2 direct and 2 indirect. The direct paths belong to cluster1-1, and the indirect paths belong to cluster1-2.

[root@rhel1 ~]# multipath -ll
mpathb (36a98774f6a344dd4631667a63) dm-2 NETAPP,LUN C-Mode
size=1.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 8:0:0:1  sdl 8:176 active ready running
| `- 9:0:0:1  sdm 8:192 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:0:1  sdj 8:144 active ready running
  `- 5:0:0:1  sdk 8:160 active ready running
mpatha (36a98774f6a344dd4631667a64) dm-3 NETAPP,LUN C-Mode
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 10:0:0:0 sdi 8:128 active ready running
| `- 7:0:0:0  sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 6:0:0:0  sde 8:64  active ready running
  `- 4:0:0:0  sdd 8:48  active ready running
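As a quick scripted alternative to eyeballing the output, a check like the following confirms that the rescan really cleared the faulty paths. It is a sketch, not a lab step, and uses only commands already shown in this exercise.

#!/bin/bash
# Sketch: verify that no multipath path is still flagged "failed faulty".
if multipath -ll | grep -q "failed faulty"; then
    echo "Stale paths remain; re-run /usr/sbin/rescan-scsi-bus.sh -r"
else
    echo "All remaining paths are healthy"
fi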
You have just completed a DataMotion for Volumes operation while using Selective LUN Mapping in a SAN environment. Selective LUN Mapping is a new SAN feature in the next major release of clustered Data ONTAP. Notice how Selective LUN Mapping enables greater, more granular control of SAN paths to data across the cluster: it lets you choose which controllers present particular LUNs on their SAN LIFs, and it helps you maintain direct (Active/Optimized) paths to data while performing nondisruptive data mobility operations within the cluster.

Note: Selective LUN Mapping can also be used in conjunction with DataMotion for LUN.
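The inverse workflow applies before a move rather than after one: you add the destination HA pair to the reporting nodes, rescan on the host so the new paths appear, and only then start the volume move. A sketch of the cluster-side step is shown below; the destination aggregate aggr4 is an arbitrary example, and the -destination-aggregate convenience parameter is assumed to be available in your release (you can also name the nodes explicitly with -nodes).

cluster1::> lun mapping add-reporting-nodes -vserver svm_linux -path /vol/dest_vol/lun1 -igroup linux_iscsi -destination-aggregate aggr4

After the command completes, run /usr/sbin/rescan-scsi-bus.sh on the host so the new paths are discovered before the move begins.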

4 Lab Limitations

Fibre Channel is not supported.
Performance is limited.

5 Software

Clustered Data ONTAP 8.3
Data ONTAP DSM
SnapDrive
System Manager
PuTTY
Iometer

6 Version History

Version      Date          Document Version History
Version 1.0  October 2014  Initial Release
Version 1.1  October 2015  Corrected linux/windows section swap

Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact product and feature versions described in this document are supported for your specific environment. The NetApp IMT defines the product components and versions that can be used to construct configurations that are supported by NetApp. Specific results depend on each customer's installation in accordance with published specifications.

NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is the customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.

Go further, faster

© 2015 NetApp, Inc. All rights reserved. No portions of this presentation may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp and the NetApp logo are registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.