
ONTAP 9
MetroCluster IP Installation and Configuration Guide
May 2018 | 215-12877_C0
doccomments@netapp.com
Updated for ONTAP 9.4

Contents

Deciding whether to use this guide
Preparing for the MetroCluster installation
    Differences between the ONTAP MetroCluster configurations
    Access to remote storage in MetroCluster IP configurations
    Considerations for MetroCluster IP configuration
    Considerations for ADP systems in ONTAP 9.4
    Considerations for configuring cluster peering
        Prerequisites for cluster peering
        Considerations when using dedicated ports
        Considerations when sharing data ports
    Preconfigured settings for new MetroCluster systems from the factory
    Hardware setup checklist
    Using the Interoperability Matrix Tool to find MetroCluster information
Configuring the MetroCluster hardware components
    Parts of a MetroCluster IP configuration
        Illustration of the local HA pairs in a MetroCluster configuration
        Illustration of the MetroCluster IP and cluster interconnect network
        Illustration of the cluster peering network
    Required MetroCluster IP components and naming conventions
    Installing and cabling MetroCluster components
        Racking the hardware components
        Cabling the IP switches
        Cabling the cluster peering connections
        Cabling the management and data connections
        Configuring the IP switches
Configuring the MetroCluster software in ONTAP
    Gathering required information
        IP network information worksheet for site A
        IP network information worksheet for site B
    Similarities and differences between standard cluster and MetroCluster configurations
    Restoring system defaults on a previously used controller module
    Verifying the ha-config state of components
    Manually assigning drives to pool 0
        Manually assigning drives for pool 0 (ONTAP 9.4)
        Manually assigning drives for pool 0 (ONTAP 9.3)
    Setting up ONTAP
    Configuring the clusters into a MetroCluster configuration
        Disabling automatic drive assignment (if doing manual assignment in ONTAP 9.4)
        Verifying drive assignment of pool 0 drives
        Peering the clusters
        Creating the DR group
        Configuring and connecting the MetroCluster IP interfaces
        Verifying or manually performing pool 1 drives assignment
        Enabling automatic drive assignment (if doing manual assignment in ONTAP 9.4)
        Mirroring the root aggregates
        Creating a mirrored data aggregate on each node
        Implementing the MetroCluster configuration
        Checking the MetroCluster configuration
        Completing ONTAP configuration
    Verifying switchover, healing, and switchback
    Installing the MetroCluster Tiebreaker software
    Protecting configuration backup files
Considerations when removing MetroCluster configurations
Requirements and limitations when using ONTAP in a MetroCluster configuration
    Job schedules in a MetroCluster configuration
    Cluster peering from the MetroCluster site to a third cluster
    Volume creation on a root aggregate
    Networking and LIF creation guidelines for MetroCluster configurations
        IPspace object replication and subnet configuration requirements
        Requirements for LIF creation in a MetroCluster configuration
        LIF replication and placement requirements and issues
    Output for the storage aggregate plex show command is indeterminate after a MetroCluster switchover
    Modifying volumes to set the NVFAIL flag in case of switchover
    Monitoring and protecting the file system consistency using NVFAIL
        How NVFAIL impacts access to NFS volumes or LUNs
        Commands for monitoring data loss events
        Accessing volumes in NVFAIL state after a switchover
        Recovering LUNs in NVFAIL states after switchover
Where to find additional information
Glossary of MetroCluster terms
Copyright information
Trademark information
How to send comments about documentation and receive update notifications

Deciding whether to use the MetroCluster IP Installation and Configuration Guide

This guide describes how to install and configure the MetroCluster IP hardware and software components.

You should use this guide for planning, installing, and configuring a MetroCluster IP configuration under the following circumstances:
- You want to understand the architecture of a MetroCluster IP configuration.
- You want to understand the requirements and best practices for configuring a MetroCluster IP configuration.
- You want to use the command-line interface (CLI), not an automated scripting tool.

General information about ONTAP and MetroCluster configurations is also available.

Related information: ONTAP 9 Documentation Center

Preparing for the MetroCluster installation

As you prepare for the MetroCluster installation, you should understand the MetroCluster hardware architecture and required components.

Differences between the ONTAP MetroCluster configurations

The various MetroCluster configurations have key differences in the required components.

In all configurations, each of the two MetroCluster sites is configured as an ONTAP cluster. In a two-node MetroCluster configuration, each node is configured as a single-node cluster.

Feature | IP configurations | Fabric-attached: four- or eight-node | Fabric-attached: two-node | Stretch: two-node bridge-attached | Stretch: two-node direct-attached
Number of controllers | Four | Four or eight | Two | Two | Two
Uses an FC switch storage fabric | No | Yes | Yes | No | No
Uses an IP switch storage fabric | Yes | No | No | No | No
Uses FC-to-SAS bridges | No | Yes | Yes | Yes | No
Uses direct-attached SAS storage | Yes (local attached only) | No | No | No | Yes
Supports ADP | Yes (starting in ONTAP 9.4) | Yes | Yes | Yes | Yes
Supports local HA | Yes | Yes | No | No | No
Supports automatic switchover | No | Yes | Yes | Yes | Yes
Supports unmirrored aggregates | No | Yes | Yes | Yes | Yes
Supports array LUNs | No | Yes | Yes | Yes | Yes

Access to remote storage in MetroCluster IP configurations

In MetroCluster IP configurations, the only way the local controllers can reach the remote storage pools is via the remote controllers. The IP switches are connected to the Ethernet ports on the controllers; they do not have direct connections to the disk shelves. If the remote controller is down, the local controllers cannot reach their remote storage pools.

This differs from MetroCluster FC configurations, in which the remote storage pools are connected to the local controllers via the FC fabric or the SAS connections. The local controllers still have access to the remote storage even if the remote controllers are down.

Considerations for MetroCluster IP configuration

You should be aware of how the MetroCluster IP addresses and interfaces are implemented in a MetroCluster IP configuration, as well as the associated requirements.

In a MetroCluster IP configuration, replication of storage and nonvolatile cache is performed over high-bandwidth dedicated links in the MetroCluster IP fabric. iSCSI connections are used for storage replication. The IP switches are also used for all intra-cluster traffic within the local clusters. The MetroCluster traffic is kept separate from the intra-cluster traffic by using separate IP subnets and VLANs. The MetroCluster IP fabric is distinct and different from the cluster peering network.

[Figure: the four MetroCluster IP LIFs in cluster_a and cluster_b, connected through IP_switch_A_1/IP_switch_A_2 and IP_switch_B_1/IP_switch_B_2 on subnets 10.1.1/24 and 10.1.2/24.]

The MetroCluster IP configuration requires two IP addresses on each node that are reserved for the back-end MetroCluster IP fabric. The reserved IP addresses are assigned to MetroCluster IP logical interfaces (LIFs) during initial configuration, and have the following requirements:

Note: You must choose the MetroCluster IP addresses carefully because you cannot change them after initial configuration.

- They must fall in a unique IP range. They must not overlap with any IP space in the environment.
- They must reside in one of two IP subnets that separate them from all other traffic.

For example, the nodes might be configured with the following IP addresses:

Node | Interface | IP address | Subnet
node_a_1 | MetroCluster IP interface 1 | 10.1.1.1 | 10.1.1/24
node_a_1 | MetroCluster IP interface 2 | 10.1.2.1 | 10.1.2/24
node_a_2 | MetroCluster IP interface 1 | 10.1.1.2 | 10.1.1/24
node_a_2 | MetroCluster IP interface 2 | 10.1.2.2 | 10.1.2/24
node_b_1 | MetroCluster IP interface 1 | 10.1.1.3 | 10.1.1/24
node_b_1 | MetroCluster IP interface 2 | 10.1.2.3 | 10.1.2/24
node_b_2 | MetroCluster IP interface 1 | 10.1.1.4 | 10.1.1/24
node_b_2 | MetroCluster IP interface 2 | 10.1.2.4 | 10.1.2/24

Characteristics of MetroCluster IP interfaces

The MetroCluster IP interfaces are specific to MetroCluster IP configurations. They have different characteristics from other ONTAP interface types:
- They are created by the metrocluster configuration-settings interface create command as part of the initial MetroCluster configuration. They are not created or modified by the network interface commands.
- They do not appear in the output of the network interface show command.
- They do not fail over, but remain associated with the port on which they were created.

MetroCluster IP configurations use a 40/100-Gbps Ethernet adapter for the IP interfaces to the IP switches used for the MetroCluster IP fabric:
- In MetroCluster IP configurations on AFF A700 and FAS9000 systems, the X91146A-C 40/100-Gbps Ethernet adapter is required in slot 5 of each controller module.
- In MetroCluster IP configurations on AFF A800 systems, the X1146A 40/100-Gbps Ethernet adapter is required in slot 0 and slot 1.
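For illustration, a minimal sketch of assigning the reserved addresses above with the metrocluster configuration-settings interface create command named in the characteristics list (the cluster, node, and port names follow this guide's naming conventions; verify the exact parameters for your release before use):

   cluster_a::> metrocluster configuration-settings interface create
       -cluster-name cluster_a -home-node node_a_1 -home-port e5a
       -address 10.1.1.1 -netmask 255.255.255.0

   cluster_a::> metrocluster configuration-settings interface create
       -cluster-name cluster_a -home-node node_a_1 -home-port e5b
       -address 10.1.2.1 -netmask 255.255.255.0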

Considerations for ADP systems in ONTAP 9.4

Starting with ONTAP 9.4, MetroCluster IP configurations support new installations with AFF systems using ADP (Advanced Drive Partitioning). ONTAP 9.4 includes the following changes for ADP support:
- Pool 0 disk assignments are done at the factory.
- The unmirrored root is created at the factory.
- Data partition assignment is done at the customer site during the setup procedure.

In most cases, drive assignment and partitioning is done automatically during the setup procedures.

Supported configurations for automatic drive assignment

The following table describes the supported configurations for automatic drive assignment and partitioning.

Platform | Drive shelf arrangement | Assignment rules
AFF A700 systems | Four external shelves | Drives are automatically assigned on a shelf-by-shelf basis.
AFF A800 systems | Internal drives only | The internal partitions are divided into four equal groups (quarters). Each quarter is automatically assigned to a separate pool.
AFF A800 systems | Internal drives and four external shelves | The internal partitions are divided into four equal groups (quarters). Each quarter is automatically assigned to a separate pool. The drives on the external shelves are automatically assigned on a shelf-by-shelf basis, with all of the drives on each shelf assigned to one of the four nodes in the MetroCluster configuration.

How shelf-by-shelf automatic assignment works

If there are four external shelves per site, each shelf is assigned to a different node and different pool, as shown in the following example:
- All of the disks on site_a-shelf_1 are automatically assigned to pool 0 of node_a_1
- All of the disks on site_a-shelf_3 are automatically assigned to pool 0 of node_a_2
- All of the disks on site_b-shelf_1 are automatically assigned to pool 0 of node_b_1
- All of the disks on site_b-shelf_3 are automatically assigned to pool 0 of node_b_2
- All of the disks on site_b-shelf_2 are automatically assigned to pool 1 of node_a_1
- All of the disks on site_b-shelf_4 are automatically assigned to pool 1 of node_a_2
- All of the disks on site_a-shelf_2 are automatically assigned to pool 1 of node_b_1
- All of the disks on site_a-shelf_4 are automatically assigned to pool 1 of node_b_2

How manual assignment of a shelf works

Automatic drive assignment does not occur on ADP systems with the following shelf configurations:
- Fewer than four external shelves per site. The drives must be assigned manually to ensure symmetrical assignment of the drives, with each pool having an equal number of drives.
- More than four shelves per site, but the total number of shelves is not a multiple of four. Extra shelves above the nearest multiple of four are left unassigned, and the drives must be assigned manually.

When manually assigning drives, you should assign disks symmetrically, with an equal number of drives assigned to each pool. If the configuration has two storage shelves at each site, you would assign half the drives on each shelf to a different pool.
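For example, a minimal sketch of a symmetric assignment for one site_a shelf, using the standard storage disk assign command (the disk names and the split at bay 12 are hypothetical; the detailed procedures appear later in this guide). Half the drives go to pool 0 of a local node, and the other half to pool 1 of the DR partner at the remote site:

   cluster_a::> storage disk assign -disk 1.10.0 -owner node_a_1 -pool 0
   cluster_a::> storage disk assign -disk 1.10.12 -owner node_b_1 -pool 1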

Related concepts: Required MetroCluster IP components and naming conventions
Related information: Disk and aggregate management

Considerations for configuring cluster peering

Each MetroCluster site is configured as a peer to its partner site. You should be familiar with the prerequisites and guidelines for configuring the peering relationships and for deciding whether to use shared or dedicated ports for those relationships.

Related information: Cluster and SVM peering express configuration

Prerequisites for cluster peering

Before you set up cluster peering, you should confirm that the connectivity, port, IP address, subnet, firewall, and cluster-naming requirements are met.

Connectivity requirements

Intercluster LIFs must have pair-wise full-mesh connectivity: every intercluster LIF on the local cluster must be able to communicate with every intercluster LIF on the remote cluster. Although it is not required, it is typically simpler to configure the IP addresses used for intercluster LIFs in the same subnet. The IP addresses can reside in the same subnet as data LIFs, or in a different subnet. The subnet used in each cluster must meet the following requirements:
- The subnet must have enough IP addresses available to allocate to one intercluster LIF per node. For example, in a six-node cluster, the subnet used for intercluster communication must have six available IP addresses.
- Each node must have an intercluster LIF with an IP address on the intercluster network.

Intercluster LIFs can have an IPv4 address or an IPv6 address.

Port requirements

You can use dedicated ports for intercluster communication, or share ports used by the data network. Ports must meet the following requirements:
- All ports that are used to communicate with a given remote cluster must be in the same IPspace. You can use multiple IPspaces to peer with multiple clusters. Pair-wise full-mesh connectivity is required only within an IPspace.
- The broadcast domain that is used for intercluster communication must include at least two ports per node so that intercluster communication can fail over from one port to another port. Ports added to a broadcast domain can be physical network ports, VLANs, or interface groups (ifgrps).
- All ports must be cabled.
- All ports must be in a healthy state.
- The MTU settings of the ports must be consistent.

Firewall requirements

Firewalls and the intercluster firewall policy must allow the following protocols:
- ICMP service
- TCP to the IP addresses of all the intercluster LIFs over the ports 10000, 11104, and 11105
- HTTPS

The default intercluster firewall policy allows access through the HTTPS protocol and from all IP addresses (0.0.0.0/0). You can modify or replace the policy if necessary.

Cluster requirements

Clusters must meet the following requirements:
- The time on the clusters in a cluster peering relationship must be synchronized within 300 seconds (5 minutes). Cluster peers can be in different time zones.

Considerations when using dedicated ports

When determining whether using a dedicated port for intercluster replication is the correct intercluster network solution, you should consider configurations and requirements such as LAN type, available WAN bandwidth, replication interval, change rate, and number of ports. Consider the following aspects of your network to determine whether using a dedicated port is the best intercluster network solution:
- If the amount of available WAN bandwidth is similar to that of the LAN ports, and the replication interval is such that replication occurs while regular client activity exists, then you should dedicate Ethernet ports for intercluster replication to avoid contention between replication and the data protocols.
- If the network utilization generated by the data protocols (CIFS, NFS, and iSCSI) is above 50 percent, then you should dedicate ports for replication to allow for nondegraded performance if a node failover occurs.
- When physical 10-GbE or faster ports are used for data and replication, you can create VLAN ports for replication and dedicate the logical ports for intercluster replication. The bandwidth of the port is shared between all VLANs and the base port. (A sketch of creating such a VLAN port follows this list.)
- Consider the data change rate and replication interval, and whether the amount of data that must be replicated on each interval requires enough bandwidth that it might cause contention with data protocols if sharing data ports.
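A minimal sketch of creating a dedicated replication VLAN port with the standard network port vlan create command (the node name, base port e0c, and VLAN ID 200 are hypothetical; substitute your own values):

   cluster_a::> network port vlan create -node node_a_1 -vlan-name e0c-200
   cluster_a::> network port show -node node_a_1 -port e0c-200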

Considerations when sharing data ports

When determining whether sharing a data port for intercluster replication is the correct intercluster network solution, you should consider configurations and requirements such as LAN type, available WAN bandwidth, replication interval, change rate, and number of ports. Consider the following aspects of your network to determine whether sharing data ports is the best intercluster connectivity solution:
- For a high-speed network, such as a 40-Gigabit Ethernet (40-GbE) network, a sufficient amount of local LAN bandwidth might be available to perform replication on the same 40-GbE ports that are used for data access.
- In many cases, the available WAN bandwidth is far less than the 10-GbE LAN bandwidth. All nodes in the cluster might have to replicate data and share the available WAN bandwidth, making data port sharing more acceptable.
- Sharing ports for data and replication eliminates the extra port counts required to dedicate ports for replication.
- The maximum transmission unit (MTU) size of the replication network will be the same size as that used on the data network.
- Consider the data change rate and replication interval, and whether the amount of data that must be replicated on each interval requires enough bandwidth that it might cause contention with data protocols if sharing data ports.
- When data ports for intercluster replication are shared, the intercluster LIFs can be migrated to any other intercluster-capable port on the same node to control the specific data port that is used for replication.

Preconfigured settings for new MetroCluster systems from the factory

New MetroCluster nodes are preconfigured with a root aggregate. Additional hardware and software settings are configured using the detailed procedures provided in this guide.

Hardware racking and cabling: Depending on the configuration you ordered, you might need to rack the systems and complete the cabling.

Software configuration of the MetroCluster configuration: Nodes received with the new MetroCluster configuration are preconfigured with a single root aggregate. Additional configuration must be performed using the detailed procedures provided in this guide.

Hardware setup checklist

You need to know which hardware setup steps were completed at the factory and which steps you need to complete at each MetroCluster site.

Step | Completed at factory | Completed by you
Mount components in one or more cabinets. | Yes | No
Position cabinets in the desired location. | No | Yes. Position them in the original order so that the supplied cables are long enough.
Connect multiple cabinets to each other, if applicable. | No | Yes. Use the cabinet interconnect kit if it is included in the order. The kit box is labeled.
Secure the cabinets to the floor, if applicable. | No | Yes. Use the universal bolt-down kit if it is included in the order. The kit box is labeled.
Cable the components within the cabinet. | Yes. Cables 5 meters and longer are removed for shipping and placed in the accessories box. | No
Connect the cables between cabinets, if applicable. | No | Yes. Cables are in the accessories box.
Connect management cables to the customer's network. | No | Yes. Connect them directly or through the CN1601 management switches, if present. Attention: To avoid address conflicts, do not connect management ports to the customer's network until after you change the default IP addresses to the customer's values.
Connect console ports to the customer's terminal server, if applicable. | No | Yes
Connect the customer's data cables to the cluster. | No | Yes
Connect the cabinets to power and power on the components. | No | Yes. Power them on in the following order: 1. PDUs, 2. Disk shelves, 3. Nodes.
Assign IP addresses to the management ports of the cluster switches and to the management ports of the management switches, if present. | No | Yes. Connect to the serial console port of each switch and log in with user name admin with no password. Suggested management addresses are 10.10.10.81, 10.10.10.82, 10.10.10.83, and 10.10.10.84.
Verify cabling by running the Config Advisor tool. | No | Yes

Using the Interoperability Matrix Tool to find MetroCluster information

When setting up the MetroCluster configuration, you can use the Interoperability Matrix Tool (IMT) to ensure that you are using supported software and hardware versions.

NetApp Interoperability Matrix Tool

After opening the Interoperability Matrix, you can use the Storage Solution field to select your MetroCluster solution. You use the Component Explorer to select the components and the ONTAP version to refine your search. You can click Show Results to display the list of supported configurations that match the criteria.

Configuring the MetroCluster hardware components

The MetroCluster components must be physically installed, cabled, and configured at both geographic sites.

Parts of a MetroCluster IP configuration

As you plan your MetroCluster IP configuration, you should understand the hardware components and how they interconnect.

Key hardware elements

A MetroCluster IP configuration includes the following key hardware elements:
- Storage controllers: The storage controllers are configured as two two-node clusters.
- IP network: This back-end IP network provides connectivity for two distinct uses:
  - Standard cluster connectivity for intra-cluster communications. This is the same cluster switch functionality used in non-MetroCluster switched ONTAP clusters.
  - MetroCluster back-end connectivity for replication of storage data and non-volatile cache.
- Cluster peering network: The cluster peering network provides connectivity for mirroring of the cluster configuration, which includes storage virtual machine (SVM) configuration. The configuration of all of the SVMs on one cluster is mirrored to the partner cluster.

[Figure: the hardware in a MetroCluster IP configuration — nodes node_a_1 and node_a_2 with shelves shelf_a_1 and shelf_a_2 in cluster_a, and nodes node_b_1 and node_b_2 with shelves shelf_b_1 and shelf_b_2 in cluster_b, connected through IP switches IP_switch_A_1/IP_switch_A_2 and IP_switch_B_1/IP_switch_B_2 and the cluster peering network.]

Disaster Recovery (DR) groups

A MetroCluster IP configuration consists of one DR group of four nodes. The following illustration shows the organization of nodes in a four-node MetroCluster configuration:

[Figure: DR Group One — node_a_1 is the DR pair of node_b_1, and node_a_2 is the DR pair of node_b_2; node_a_1/node_a_2 and node_b_1/node_b_2 form the HA pairs in cluster_a and cluster_b.]
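As a hypothetical illustration of these relationships, once the configuration is complete the DR group membership can be inspected with the metrocluster node show command (output abbreviated and approximate):

   cluster_a::> metrocluster node show
   DR Group  Cluster    Node      DR Partner
   --------  ---------  --------  ----------
   1         cluster_a  node_a_1  node_b_1
             cluster_a  node_a_2  node_b_2
             cluster_b  node_b_1  node_a_1
             cluster_b  node_b_2  node_a_2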

Illustration of the local HA pairs in a MetroCluster configuration

Each MetroCluster site consists of storage controllers configured as an HA pair. This allows local redundancy so that if one storage controller fails, its local HA partner can take over. Such failures can be handled without a MetroCluster switchover operation.

Local HA failover and giveback operations are performed with the storage failover commands, in the same manner as in a non-MetroCluster configuration.

[Figure: the local HA pairs — node_a_1 and node_a_2 in cluster_a, node_b_1 and node_b_2 in cluster_b — with their shelves, the four IP switches, and the cluster peering network.]

Related information: ONTAP concepts

Illustration of the MetroCluster IP and cluster interconnect network

ONTAP clusters typically include a cluster interconnect network for traffic between the nodes in the cluster. In MetroCluster IP configurations, this network is also used for carrying data replication traffic between the MetroCluster sites.
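For example, a minimal sketch of a local takeover and giveback using the standard storage failover commands mentioned above (node names follow this guide's conventions; these are ordinary ONTAP HA operations, not a MetroCluster switchover):

   cluster_a::> storage failover show
   cluster_a::> storage failover takeover -ofnode node_a_2
   cluster_a::> storage failover giveback -ofnode node_a_2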

[Figure: the MetroCluster IP and cluster interconnect network, connecting the nodes and shelves at both sites through the four IP switches, with the cluster peering network shown separately.]

Each node in the MetroCluster IP configuration has specialized LIFs for connection to the back-end IP network:
- Two MetroCluster IP interfaces
- One intercluster LIF

The following illustration shows these interfaces. The port usage shown is for an AFF A700 or FAS9000 system.

[Figure: port usage on node_a_1 — MetroCluster IP interface 1 on e5a (subnet A) and MetroCluster IP interface 2 on e5b (subnet B), cluster LIF 1 on e4a and cluster LIF 2 on e4b, connected to IP_switch_A_1 and IP_switch_A_2 with the local ISLs and MetroCluster ISLs; the intercluster LIFs connect to the cluster peering network.]

Related concepts: Considerations for MetroCluster IP configuration

Illustration of the cluster peering network

The two clusters in the MetroCluster configuration are peered through a customer-provided cluster peering network. Cluster peering supports the synchronous mirroring of storage virtual machines (SVMs, formerly known as Vservers) between the sites.

Intercluster LIFs must be configured on each node in the MetroCluster configuration, and the clusters must be configured for peering. The ports with the intercluster LIFs are connected to the customer-provided cluster peering network. Replication of the SVM configuration is carried out over this network through the Configuration Replication Service.

[Figure: the cluster peering network connecting cluster_a and cluster_b.]

Related concepts: Considerations for configuring cluster peering
Related tasks: Cabling the cluster peering connections; Peering the clusters
Related information: Cluster and SVM peering express configuration

Required MetroCluster IP components and naming conventions

When planning your MetroCluster IP configuration, you must understand the required and supported hardware and software components. For convenience and clarity, you should also understand the naming conventions used for components in examples throughout the documentation. For example, one site is referred to as Site A and the other site is referred to as Site B.

Supported software and hardware

The hardware and software must be supported for the MetroCluster IP configuration.

NetApp Interoperability Matrix Tool

In the IMT, you can use the Storage Solution field to select your MetroCluster solution. You use the Component Explorer to select the components and the ONTAP version to refine your search. You can click Show Results to display the list of supported configurations that match the criteria.

NetApp Hardware Universe

When using All Flash Optimized systems, all controller modules in the MetroCluster configuration must be configured as All Flash Optimized systems.

Hardware redundancy requirements in a MetroCluster IP configuration

Because of the hardware redundancy in the MetroCluster IP configuration, there are two of each component at each site. The sites are arbitrarily assigned the letters A and B, and the individual components are arbitrarily assigned the numbers 1 and 2.

ONTAP cluster requirements in a MetroCluster IP configuration

MetroCluster IP configurations require two ONTAP clusters, one at each MetroCluster site. Naming must be unique within the MetroCluster configuration. Example names:
- Site A: cluster_a
- Site B: cluster_b

IP switch requirements in a MetroCluster IP configuration

MetroCluster IP configurations require four IP switches. The four switches form two switch storage fabrics that provide the ISL between each of the clusters in the MetroCluster IP configuration. The IP switches also provide cluster communication among the controller modules in each cluster. Naming must be unique within the MetroCluster configuration. Example names:
- Site A (cluster_a): IP_switch_A_1, IP_switch_A_2
- Site B (cluster_b): IP_switch_B_1, IP_switch_B_2

Controller module requirements in a MetroCluster IP configuration

MetroCluster IP configurations require four controller modules. The controller modules at each site form an HA pair. Each controller module has a DR partner at the other site. Each controller module must be running the same ONTAP version. Supported platform models depend on the ONTAP version:
- New MetroCluster IP installations on FAS systems are not supported in ONTAP 9.4.
- Existing MetroCluster IP configurations on FAS systems can be upgraded to ONTAP 9.4.
- Starting with ONTAP 9.4, controller modules configured for ADP are supported.

Example names:
- Site A (cluster_a): controller_a_1, controller_a_2
- Site B (cluster_b): controller_b_1, controller_b_2

40/100-Gbps Ethernet adapter requirements in a MetroCluster IP configuration

MetroCluster IP configurations use a 40/100-Gbps Ethernet adapter for the IP interfaces to the IP switches used for the MetroCluster IP fabric:
- In MetroCluster IP configurations on AFF A700 and FAS9000 systems, the X91146A-C 40/100-Gbps Ethernet adapter is required in slot 5 of each controller module.
- In MetroCluster IP configurations on AFF A800 systems, the X1146A 40/100-Gbps Ethernet adapter is required in slot 0 and slot 1.

SAS disk shelf requirements in a MetroCluster IP configuration

Eight SAS disk shelves are recommended (four shelves at each site) to allow disk ownership on a per-shelf basis. A minimum of four disk shelves is required (two shelves at each site). Shelf IDs must be unique within the MetroCluster IP configuration. Example names:
- Site A: site_a-shelf_1, site_a-shelf_2, site_a-shelf_3, site_a-shelf_4
- Site B: site_b-shelf_1, site_b-shelf_2, site_b-shelf_3, site_b-shelf_4

Drive location considerations for AFF A800 internal drives

For correct implementation of the ADP feature, the AFF A800 system's disk slots must be divided into quarters and the disks must be located symmetrically in the quarters. An AFF A800 system has 48 drive bays. The bays can be divided into quarters:
- Bays 0-11
- Bays 12-23
- Bays 24-35
- Bays 36-47

If this system is populated with 16 drives, they must be symmetrically distributed among the four quarters:
- Four drives in the first quarter: 0, 1, 2, 3
- Four drives in the second quarter: 12, 13, 14, 15
- Four drives in the third quarter: 24, 25, 26, 27
- Four drives in the fourth quarter: 36, 37, 38, 39

Related concepts: Considerations for ADP systems in ONTAP 9.4

Installing and cabling MetroCluster components

The storage controllers must be cabled to the IP switches, and the ISLs must be cabled to link the MetroCluster sites. The storage controllers must also be cabled to the storage, to each other, and to the data and management networks.

Steps
1. Racking the hardware components
2. Cabling the IP switches
3. Cabling the cluster peering connections
4. Cabling the management and data connections
5. Configuring the IP switches

Racking the hardware components

If you have not received the equipment already installed in cabinets, you must rack the components.

About this task
This task must be performed at both MetroCluster sites.

Steps
1. Plan out the positioning of the MetroCluster components. The rack space depends on the platform model of the controller modules, the switch types, and the number of disk shelf stacks in your configuration.
2. Properly ground yourself.
3. Install the controller modules in the rack or cabinet. Each AFF A700 or FAS9000 controller module must have a X91146A-C 40/100-Gbps Ethernet adapter in slot 5. Each AFF A800 controller module must have a X1146A 40/100-Gbps Ethernet adapter in slot 0 and slot 1. (Installation and Setup Instructions for AFF A700 and FAS9000)
4. Install the IP switches in the rack or cabinet.
5. Install the disk shelves, power them on, and set the shelf IDs. (NetApp Documentation: Disk Shelves)

You must power-cycle each disk shelf. Shelf IDs must be unique for each SAS disk shelf within each MetroCluster DR group (including both sites).

Cabling the IP switches

You must cable each IP switch to the local controllers and to the ISLs.

About this task
This task must be repeated for each switch in the MetroCluster configuration. The port usage shown applies to both Cisco 3232 and Cisco 3132 switches. The controller port usage depends on the model of the controller:
- The AFF A700 and FAS9000 systems use the X91146A-C 40-Gbps Ethernet adapter in slot 5 of each controller.
- The AFF A800 systems use two X91146A-C 40-Gbps Ethernet adapters in each controller, one in slot 0 and one in slot 1.

Steps
1. Cable the switches to the local nodes. The node ports used depend on the platform model.

Site A: IP_switch_A_1 local interconnect connections

Switch port | Node | AFF A800 port | AFF A700 and FAS9000 port | Usage
1 | node_a_1 | e0a | e4a | Local cluster interconnect
2 | node_a_2 | e0a | e4a | Local cluster interconnect
3-6 | - | - | - | Unused
7-8 | - | - | - | -
9 | node_a_1 | e0b | e5a | MetroCluster IP interconnect
10 | node_a_2 | e0b | e5a | MetroCluster IP interconnect

Site A: IP_switch_A_2 local interconnect connections

Switch port | Node | AFF A800 port | AFF A700 and FAS9000 port | Usage
1 | node_a_1 | e1a | e4b | Local cluster interconnect
2 | node_a_2 | e1a | e4b | Local cluster interconnect
3-6 | - | - | - | Unused
7-8 | - | - | - | -
9 | node_a_1 | e1b | e5b | MetroCluster IP interconnect
10 | node_a_2 | e1b | e5b | MetroCluster IP interconnect

2. Cable the switch ISL connections. One, two, or three 40-Gbps MetroCluster ISLs are supported, or up to six 10-Gbps MetroCluster ISLs. If using the Cisco 3232C switch in breakout mode, ports 21-24 are used as MetroCluster ISLs; in this case these 40-Gbps ports are split into four 10-Gbps ports. You must use the correct RCF files to support the breakout configuration. The switch cannot be configured with both 40-Gbps and 10-Gbps ports.

Site A: IP_switch_A_1 ISL connections

Switch port | Remote switch | Usage
7 | IP_switch_A_2 | Local cluster ISL
8 | IP_switch_A_2 | Local cluster ISL
9-14 | - | -
15 | IP_switch_B_1 | MetroCluster ISL
16 | IP_switch_B_1 | MetroCluster ISL
17 | IP_switch_B_1 | MetroCluster ISL
18 | IP_switch_B_1 | MetroCluster ISL
19 | IP_switch_B_1 | MetroCluster ISL
20 | IP_switch_B_1 | MetroCluster ISL
21-24 | IP_switch_B_1 | MetroCluster ISL (when using the Cisco 3232C switch in breakout mode)

Site A: IP_switch_A_2 ISL connections

Switch port | Remote switch | Usage
7 | IP_switch_A_1 | Local cluster ISL
8 | IP_switch_A_1 | Local cluster ISL
9-14 | - | -
15 | IP_switch_B_2 | MetroCluster ISL
16 | IP_switch_B_2 | MetroCluster ISL
17 | IP_switch_B_2 | MetroCluster ISL
18 | IP_switch_B_2 | MetroCluster ISL
19 | IP_switch_B_2 | MetroCluster ISL
20 | IP_switch_B_2 | MetroCluster ISL
21-24 | IP_switch_B_2 | MetroCluster ISL (when using the Cisco 3232C switch in breakout mode)

3. Repeat the previous steps on the partner site, using the same cabling.

Cabling the cluster peering connections

You must cable the controller module ports used for cluster peering so that they have connectivity with the cluster on the partner site.

About this task
This task must be performed on each controller module in the MetroCluster configuration. At least two ports on each controller module should be used for cluster peering. The recommended minimum bandwidth for the ports and network connectivity is 1 GbE.

Step
1. Identify and cable at least two ports for cluster peering and verify that they have network connectivity with the partner cluster. Cluster peering can be done on dedicated ports or on data ports. Using dedicated ports provides higher throughput for the cluster peering traffic. (Cluster and SVM peering express configuration)

Related concepts: Considerations for configuring cluster peering
Related information: Cluster and SVM peering express configuration
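For example, a minimal sketch of creating an intercluster LIF on one node, assuming a hypothetical dedicated port e0e and hypothetical addressing (repeat on every node at both sites; the full procedure appears in "Peering the clusters" later in this guide):

   cluster_a::> network interface create -vserver cluster_a -lif intercluster_node_a_1
       -role intercluster -home-node node_a_1 -home-port e0e
       -address 192.168.10.101 -netmask 255.255.255.0

   cluster_a::> network ping -lif intercluster_node_a_1 -vserver cluster_a
       -destination 192.168.10.201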

Cabling the management and data connections

You must cable the management and data ports on each storage controller to the site networks.

About this task
This task must be repeated for each new controller at both MetroCluster sites. You can connect the controller and cluster switch management ports to existing switches in your network or to new dedicated network switches such as NetApp CN1601 cluster management switches.

Step
1. Cable the controller's management and data ports to the management and data networks at the local site. (Installation and Setup Instructions for AFF A700 and FAS9000)

Configuring the IP switches

You must configure the IP switches for use as the cluster interconnect and for back-end MetroCluster IP connectivity.

Steps
1. Copying the switch NX-OS software and RCF files to the MetroCluster IP switches
2. Installing the IP switch software

Copying the switch NX-OS software and RCF files to the MetroCluster IP switches

You must download the switch operating system file and RCF file to each switch in the MetroCluster IP configuration.

Before you begin
You need a transfer protocol, such as FTP, TFTP, SFTP, or SCP, to copy the files to the switches.

About this task
You must use the supported switch software version. (NetApp Interoperability Matrix Tool; see also "Using the Interoperability Matrix Tool to find MetroCluster information".)

There are four RCF files, one for each of the four switches in the MetroCluster IP configuration. You must use the correct RCF files for the switch model you are using.

Switch | RCF file
IP_switch_A_1 | switch-model_RCF_v1.2-MetroCluster-IP-switch-A-1.txt
IP_switch_B_1 | switch-model_RCF_v1.2-MetroCluster-IP-switch-B-1.txt
IP_switch_A_2 | switch-model_RCF_v1.2-MetroCluster-IP-switch-A-2.txt
IP_switch_B_2 | switch-model_RCF_v1.2-MetroCluster-IP-switch-B-2.txt

Steps
1. Download the MetroCluster IP RCF files. (Cisco Cluster and Management Network Switch Reference Configuration File Download for MetroCluster IP)

2. Reset the switch to factory defaults:
   a. Erase the existing configuration:
      write erase
   b. Reload the switch software:
      reload
      The system reboots and enters the configuration wizard. During the boot, if you receive the prompt "Abort Auto Provisioning and continue with normal setup? (yes/no) [n]", you should respond yes to proceed.
   c. In the configuration wizard, enter the basic switch settings:
      - Admin password
      - Switch name
      - Out-of-band management configuration
      - Default gateway
      - SSH service (RSA)
   d. When prompted, enter the user name and password to log in to the switch.

   The following example shows the prompts and system responses when configuring the switch. The angle brackets (<<<) show where you enter the information. After resetting the switch to factory defaults, the configuration wizard should be entered automatically; all fields that need to be entered are marked with <<<:

      ---- System Admin Account Setup ----
      Do you want to enforce secure password standard (yes/no) [y]:y   <<<
      Enter the password for "admin": password   <<<
      Confirm the password for "admin": password   <<<

      ---- Basic System Configuration Dialog VDC: 1 ----
      This setup utility will guide you through the basic configuration of
      the system. Setup configures only enough connectivity for management
      of the system.
      Please register Cisco Nexus3000 Family devices promptly with your
      supplier. Failure to register may affect response times for initial
      service calls. Nexus3000 devices must be registered to receive
      entitled support services.
      Press Enter at anytime to skip a dialog. Use ctrl-c at anytime to
      skip the remaining dialogs.

   In the next set of prompts, you enter basic information including the switch name, management address and gateway, and select SSH with RSA:

      Would you like to enter the basic configuration dialog (yes/no): yes
      Create another login account (yes/no) [n]:
      Configure read-only SNMP community string (yes/no) [n]:
      Configure read-write SNMP community string (yes/no) [n]:
      Enter the switch name : switch-name   <<<
      Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]:
      Mgmt0 IPv4 address : management-ip-address   <<<
      Mgmt0 IPv4 netmask : management-ip-netmask   <<<
      Configure the default gateway? (yes/no) [y]: y   <<<
      IPv4 address of the default gateway : gateway-ip-address   <<<
      Configure advanced IP options? (yes/no) [n]:
      Enable the telnet service? (yes/no) [n]:
      Enable the ssh service? (yes/no) [y]: y   <<<
      Type of ssh key you would like to generate (dsa/rsa) [rsa]: rsa   <<<
      Number of rsa key bits <1024-2048> [1024]:
      Configure the ntp server? (yes/no) [n]:
      Configure default interface layer (L3/L2) [L2]:
      Configure default switchport interface state (shut/noshut) [noshut]:
      Configure CoPP system profile (strict/moderate/lenient/dense) [strict]:

   The final set of prompts completes the configuration:

      The following configuration will be applied:
        password strength-check
        switchname IP_switch_A_1
        vrf context management
          ip route 0.0.0.0/0 10.10.99.1
        exit
        no feature telnet
        ssh key rsa 1024 force
        feature ssh
        system default switchport
        no system default switchport shutdown
        copp profile strict
        interface mgmt0
          ip address 10.10.99.10 255.255.255.0
          no shutdown

      Would you like to edit the configuration? (yes/no) [n]:
      Use this configuration and save it? (yes/no) [y]:

      2017 Jun 13 21:24:43 A1 %$ VDC-1 %$ %COPP-2-COPP_POLICY: Control-Plane is protected with policy copp-system-p-policy-strict.
      [########################################] 100%
      Copy complete.

      User Access Verification
      IP_switch_A_1 login: admin
      Password:
      Cisco Nexus Operating System (NX-OS) Software
      ...
      IP_switch_A_1#

3. Download the supported NX-OS software file. (NetApp Downloads: Cisco Ethernet Switch)

4. Copy the switch software to the switch:
   copy sftp://root@server-ip-address/tftpboot/nx-os-file-name bootflash: vrf management

   In this example, the nxos.7.0.3.I4.6.bin file is copied from SFTP server 10.10.99.99 to the local bootflash:

      IP_switch_A_1# copy sftp://root@10.10.99.99/tftpboot/nxos.7.0.3.I4.6.bin bootflash: vrf management
      root@10.10.99.99's password: sundance
      sftp> progress
      Progress meter enabled
      sftp> get /tftpboot/nxos.7.0.3.I4.6.bin /bootflash/nxos.7.0.3.I4.6.bin
      Fetching /tftpboot/nxos.7.0.3.I4.6.bin to /bootflash/nxos.7.0.3.I4.6.bin
      /tftpboot/nxos.7.0.3.I4.6.bin   100%  666MB  7.2MB/s  01:32
      sftp> exit
      Copy complete, now saving to disk (please wait)...

5. Copy the RCF files to the switches:
   copy sftp://root@ftp-server-ip-address/tftpboot/rcf-filename bootflash: vrf management

   a. Copy the RCF file to the first switch. In this example, the NX3132_RCF_v1.2-MetroCluster-IP-switch-A-1.txt RCF file is copied from the SFTP server at 10.10.99.99 to the local bootflash. You need to use the IP address of your TFTP/SFTP server and the file name of the RCF file that you need to install.

      IP_switch_A_1# copy sftp://root@10.10.99.99/tftpboot/NX3132_RCF_v1.2-MetroCluster-IP-switch-A-1.txt bootflash: vrf management
      root@10.10.99.99's password: sundance
      sftp> progress
      Progress meter enabled
      sftp> get /tftpboot/NX3132_RCF_v1.2-MetroCluster-IP-switch-A-1.txt /bootflash/NX3132_RCF_v1.2-MetroCluster-IP-switch-A-1.txt
      Fetching /tftpboot/NX3132_RCF_v1.2-MetroCluster-IP-switch-A-1.txt to /bootflash/NX3132_RCF_v1.2-MetroCluster-IP-switch-A-1.txt
      /tftpboot/NX3132_RCF_v1.2-MetroCluster-IP-switch-A-1.txt   100%  5141  5.0KB/s  00:00
      sftp> exit
      Copy complete, now saving to disk (please wait)...
      IP_switch_A_1#

   b. Repeat the previous substep for each of the other three switches, being sure to copy the correct RCF file to the corresponding switch.

6. Verify on each switch that the RCF and switch NX-OS files are present in each switch's bootflash directory:
   dir bootflash:

   The following example shows that the files are present on IP_switch_A_1:

      IP_switch_A_1# dir bootflash:
      ...
            5514  Jun 13 22:09:05 2017  NX3132_RCF_v1.2-MetroCluster-IP-switch-A-1.txt
       698629632  Jun 13 21:37:44 2017  nxos.7.0.3.I4.6.bin
      ...
      Usage for bootflash://sup-local
       1779363840 bytes used
      13238841344 bytes free
      15018205184 bytes total
      IP_switch_A_1#

Installing the IP switch software

You must install the supported version of the switch NX-OS operating system.

About this task
This task must be repeated on each switch in the MetroCluster configuration.

Steps
1. Install the switch software:
   install all nxos bootflash:nxos.version-number.bin

   The switch will reload (reboot) automatically after the switch software has been installed. The following example shows the software installation on IP_switch_A_1:

      IP_switch_A_1# install all nxos bootflash:nxos.7.0.3.I4.6.bin
      Installer will perform compatibility check first. Please wait.
      Installer is forced disruptive
      Verifying image bootflash:/nxos.7.0.3.I4.6.bin for boot variable "nxos".
      [####################] 100% -- SUCCESS
      Verifying image type.
      [####################] 100% -- SUCCESS
      Preparing "nxos" version info using image bootflash:/nxos.7.0.3.I4.6.bin.
      [####################] 100% -- SUCCESS
      Preparing "bios" version info using image bootflash:/nxos.7.0.3.I4.6.bin.
      [####################] 100% -- SUCCESS
      Performing module support checks.
      [####################] 100% -- SUCCESS
      Notifying services about system upgrade.
      [####################] 100% -- SUCCESS

      Compatibility check is done:
      Module  bootable  Impact      Install-type  Reason
      ------  --------  ----------  ------------  ------
      1       yes       disruptive  reset         default upgrade is not hitless

      Images will be upgraded according to following table:
      Module  Image  Running-Version(pri:alt)  New-Version         Upg-Required
      ------  -----  ------------------------  ------------------  ------------
      1       nxos   7.0(3)I4(1)               7.0(3)I4(6)         yes
      1       bios   v04.24(04/21/2016)        v04.24(04/21/2016)  no

      Switch will be reloaded for disruptive upgrade.
      Do you want to continue with the installation (y/n)? [n] y

      Install is in progress, please wait.
      Performing runtime checks.
      [####################] 100% -- SUCCESS
      Setting boot variables.
      [####################] 100% -- SUCCESS
      Performing configuration copy.
      [####################] 100% -- SUCCESS
      Module 1: Refreshing compact flash and upgrading bios/loader/bootrom.
      Warning: please do not remove or power off the module at this time.
      [####################] 100% -- SUCCESS
      Finishing the upgrade, switch will reboot in 10 seconds.
      IP_switch_A_1#

2. Wait for the switch to reload and then log in to the switch. After the switch has rebooted, the login prompt is displayed:

      User Access Verification
      IP_switch_A_1 login: admin
      Password:
      Cisco Nexus Operating System (NX-OS) Software
      TAC support: http://www.cisco.com/tac
      Copyright (C) 2002-2017, Cisco and/or its affiliates. All rights reserved.
      ...
      MDP database restore in progress.
      IP_switch_A_1#

   The switch software is now installed.

3. Verify that the switch software has been installed:
   show version

   The following example shows the output:

      IP_switch_A_1# show version
      Cisco Nexus Operating System (NX-OS) Software
      TAC support: http://www.cisco.com/tac
      Copyright (C) 2002-2017, Cisco and/or its affiliates. All rights reserved.
      ...
      Software
        BIOS: version 04.24
        NXOS: version 7.0(3)I4(6)   <<< switch software version
        BIOS compile time: 04/21/2016
        NXOS image file is: bootflash:///nxos.7.0.3.I4.6.bin
        NXOS compile time: 3/9/2017 22:00:00 [03/10/2017 07:05:18]

      Hardware
        cisco Nexus 3132QV Chassis
        Intel(R) Core(TM) i3 CPU @ 2.50GHz with 16401416 kb of memory.
        Processor Board ID FOC20123GPS
        Device name: A1
        bootflash: 14900224 kb
        usb1: 0 kb (expansion flash)

      Kernel uptime is 0 day(s), 0 hour(s), 1 minute(s), 49 second(s)

      Last reset at 403451 usecs after Mon Jun 10 21:43:52 2017
        Reason: Reset due to upgrade
        System version: 7.0(3)I4(1)
        Service:

      plugin
        Core Plugin, Ethernet Plugin
      IP_switch_A_1#

4. Repeat these steps on the remaining three IP switches in the MetroCluster IP configuration.

Configuring the MetroCluster software in ONTAP

You must set up each node in the MetroCluster configuration in ONTAP, including the node-level configurations and the configuration of the nodes into two sites. You must also implement the MetroCluster relationship between the two sites.

[Workflow diagram, in three phases:]

Verify HA state and boot ONTAP software:
- Verify that the HA state is mccip (Maintenance mode)
- Assign pool 0 disks (Maintenance mode)
- Boot to ONTAP

Configure the clusters:
- Run System Setup on the first cluster
- Join the second node to the cluster
- Run System Setup on the partner cluster

Implement the MetroCluster IP configuration:
- Peer the clusters
- Create the DR group
- Create the MetroCluster IP interfaces
- Assign pool 1 disks
- Create and mirror aggregates
- Enable the MetroCluster IP configuration (metrocluster configure command)
- Confirm the configuration

Steps
1. Gathering required information
2. Similarities and differences between standard cluster and MetroCluster configurations
3. Restoring system defaults on a previously used controller module
4. Verifying the ha-config state of components
5. Manually assigning drives to pool 0
6. Setting up ONTAP
7. Configuring the clusters into a MetroCluster configuration
8. Verifying switchover, healing, and switchback
9. Installing the MetroCluster Tiebreaker software
10. Protecting configuration backup files
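As an orientation to the CLI phase of this workflow, a condensed sketch of the main commands involved, with hypothetical names following this guide's conventions (each command is covered in detail in the sections that follow; do not run them without reading those procedures, and verify the exact parameters for your release):

   cluster_a::> cluster peer create -peer-addrs <intercluster-LIF-addresses-of-cluster_b>
   cluster_a::> metrocluster configuration-settings dr-group create
       -partner-cluster cluster_b -local-node node_a_1 -remote-node node_b_1
   cluster_a::> metrocluster configuration-settings interface create
       -cluster-name cluster_a -home-node node_a_1 -home-port e5a
       -address 10.1.1.1 -netmask 255.255.255.0
   cluster_a::> metrocluster configuration-settings connection connect
   cluster_a::> metrocluster configure
   cluster_a::> metrocluster check run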