Clustered Data ONTAP 8.2
NAS Express Setup Guide for 62xx Systems

NetApp, Inc.
495 East Java Drive
Sunnyvale, CA 94089 U.S.
Telephone: +1 (408) 822-6000
Fax: +1 (408) 822-4501
Support telephone: +1 (888) 463-8277
Web: www.netapp.com
Feedback: doccomments@netapp.com

Part number: 215-08358_A0 ur001
September 2013

Contents

Deciding whether to use this guide ... 5
System setup workflow ... 6
Preparing for installation ... 7
    Gathering network information ... 7
    Obtaining licenses ... 10
    Downloading and installing System Manager ... 10
    Downloading and installing Config Advisor ... 11
Unpacking and mounting the components ... 12
    Unpacking the components ... 12
    Locating the disk shelves for each controller ... 12
    Mounting the components ... 14
Cabling the 62xx system ... 16
    Labeling the cables ... 17
    Cabling the disk shelf alternate control path ports ... 17
    Cabling the disk shelf SAS data ports ... 18
    Cabling the HA pair NVRAM ports ... 25
    Cabling the console ports ... 25
    Cabling the management ports ... 26
    Cabling the private cluster interconnect for NetApp CN1610 switches ... 27
    Cabling the private cluster ISLs for NetApp CN1610 switches ... 29
    Cabling the Ethernet data networks ... 29
    Cabling the power supplies ... 30
Setting up the cluster ... 31
    Creating the cluster on the first node ... 31
    Joining a node to the cluster ... 33
    Troubleshooting boot failure of a node ... 34
    Connecting System Manager to the cluster ... 35
    Synchronizing the system time using NTP ... 36
    Configuring AutoSupport ... 36
    Configuring a node for troubleshooting and maintenance ... 37
Verifying system operation ... 39
    Verifying cabling ... 39
    Creating failover groups ... 40
    Testing storage failover ... 44
Where to go next ... 45
Copyright information ... 47
Trademark information ... 48
How to send your comments ... 49
Index ... 50

Deciding whether to use this guide

This guide describes how to complete the initial mounting, cabling, and typical configuration of a NAS configuration using clustered Data ONTAP 8.2 software and 62xx controllers. Most procedures are based on OnCommand System Manager 3.0. The guide covers commonly used options and provides minimal background information. Most examples in this guide depict a six-node cluster, but you can also use the guide to configure two-node, four-node, or eight-node clusters.

This guide is intended for customers who want to install a storage system cluster themselves. It starts with the components delivered to your site and ends with the system tested and ready for CIFS/SMB and NFS protocol configuration.

You should use this guide if you want a standard configuration following NetApp best practices, and you do not want a lot of conceptual background information for the tasks.

- This guide is for switched clusters using NetApp cluster switches. It does not cover two-node switchless clusters or single-node clusters.
- This guide is for clusters using two, four, six, or eight nodes.
- This guide assumes that all nodes in the cluster are the same hardware model and use SAS-connected disk shelves.
- This guide assumes a new installation and not an upgrade from a previous version of Data ONTAP software.

If these assumptions are not correct for your installation, or if you want more conceptual background information, you should see the following documentation instead:

- Installation and Setup Instructions for your hardware platform model
- Universal SAS and ACP Cabling Guide
- Clustered Data ONTAP Software Setup Guide (for new systems)
- Clustered Data ONTAP Upgrade and Revert/Downgrade Guide (for upgrades)
- Clustered Data ONTAP Switch Setup Guide for Cisco Switches
- OnCommand System Manager Online Help (available both from within the product and as a PDF download)

This documentation is available from the Product Documentation section of the NetApp Support Site.

Related information
Documentation on the NetApp Support Site: support.netapp.com

System setup workflow

Setting up a 62xx system involves some preliminary preparation, setting up the physical components, cabling those components, and then setting up the cluster software. After completing these steps, you should verify that the system is operating.

Preparing for installation

Before setting up your cluster, you must gather the required network information, obtain the required licenses, and download and install System Manager and Config Advisor.

1. Gathering network information on page 7
   You must obtain IP addresses and other network information from your network administrator before you begin your configuration.
2. Obtaining licenses on page 10
   Before you set up a cluster, you must obtain a base cluster license key and you should obtain any feature licenses, such as protocol licenses, that you want to add to the cluster.
3. Downloading and installing System Manager on page 10
   You must install OnCommand System Manager 3.0 or later on your local computer to complete the setup of the cluster.
4. Downloading and installing Config Advisor on page 11
   You must install the Config Advisor tool so that you can verify that the system is cabled correctly later in the setup process.

Gathering network information

You must obtain IP addresses and other network information from your network administrator before you begin your configuration.

Switch information
When you cable the system, you require a host name and management IP address for each cluster switch.

Cluster switch    Host name    IP address    Network mask    Default gateway
Interconnect 1
Interconnect 2
Management 1
Management 2

Cluster creation information
When you first create the cluster, you require the following information.

Type of information       Your values
Cluster name
DNS domain
DNS name servers
Location
Administrator password

                      Port    IP address    Network mask    Default gateway
Cluster management

Node information
For each node in the cluster, you require a management IP address, a network mask, and a default gateway.

Node      Port    IP address    Network mask    Default gateway
Node 1
Node 2
Node 3
Node 4
Node 5
Node 6
Node 7
Node 8

Time server information
You must synchronize the time, which will require one or more NTP time servers.

              Host name    IP address    Network mask    Default gateway
NTP server 1
NTP server 2

AutoSupport information
You must configure AutoSupport on each node, which will require the following information.

Type of information                                    Your values
From email address
Mail hosts (IP addresses or names)
Transport protocol (HTTP, HTTPS, or SMTP)
Proxy server
Recipient email addresses or distribution lists
    Full-length messages
    Concise messages
    Partners

Service processor information
You must enable access to the service processor of each node for troubleshooting and maintenance, which requires the following network information for each node.

Node      IP address    Network mask    Default gateway
Node 1
Node 2
Node 3
Node 4
Node 5
Node 6
Node 7
Node 8

Obtaining licenses

Before you set up a cluster, you must obtain a base cluster license key and you should obtain any feature licenses, such as protocol licenses, that you want to add to the cluster.

1. Log in to the NetApp Support Site at support.netapp.com, and click My Support > Software Licenses.
2. Record the cluster base license key for later use.
3. Search for feature license keys, and record them for later use.
4. If you cannot locate your license keys from the NetApp Support Site, contact your sales or support representative.
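The Cluster Setup wizard prompts you for these keys later in the setup process. If you prefer to add feature licenses after the cluster has been created, you can also enter them from the clustershell. The following is a minimal sketch of that approach; the license key shown is a placeholder rather than a real key, and the exact key format depends on your Data ONTAP release.

cluster1::> system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA
cluster1::> system license show

The system license show command lists the packages that are currently licensed, so you can confirm that each key was accepted.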

Downloading and installing System Manager

You must install OnCommand System Manager 3.0 or later on your local computer to complete the setup of the cluster.

Before you begin
You must have a computer running a supported version of Windows or Linux. System requirements are listed on the NetApp Support Site where you download the System Manager software.

1. Download OnCommand System Manager 3.0 or later for your operating system from the NetApp Support Site.
2. Follow the installation instructions in the OnCommand System Manager 3.0 Quick Start Guide to install and launch the software.
   A reboot is not required.

Related information
NetApp Support Site software download: support.netapp.com/now/cgi-bin/software/

Downloading and installing Config Advisor

You must install the Config Advisor tool so that you can verify that the system is cabled correctly later in the setup process.

Before you begin
You must have a laptop that you can connect to the cluster.

About this task
If you need support for the Config Advisor tool, you must follow the procedure in the tool's online Help topic Reporting issues in Config Advisor. The Config Advisor tool is not supported by the typical NetApp support process.

Step
1. Download the Config Advisor Installation and Administration Guide from the NetApp Support Site, and follow the instructions on downloading and installing Config Advisor.

Related information
Config Advisor from NetApp TechNet: tech.netapp.com/configadvisor
Config Advisor from NetApp Support Site: support.netapp.com/eservice/toolchest

Unpacking and mounting the components

The first step in setting up your cluster is unpacking the cluster components. Unless you ordered the cluster in NetApp cabinets, you must locate the disk shelves for each controller and mount the components.

1. Unpacking the components on page 12
   As you unpack the cluster components, it is important to track which disk shelves go with which storage controllers. Keeping track of this information and locating the disk shelves with their storage controllers simplifies cabling and ensures that each controller has its root volume and Data ONTAP software.
2. Locating the disk shelves for each controller on page 12
   Before mounting the components, you must locate the disk shelves that belong to each controller. The factory installs the root volume and Data ONTAP software on one disk shelf for each controller. By cabling the correct disk shelves to each controller, you ensure that each controller has its root volume.
3. Mounting the components on page 14
   You should mount the components in the rack or cabinet in the recommended positions to simplify cabling and service access.

Unpacking the components

As you unpack the cluster components, it is important to track which disk shelves go with which storage controllers. Keeping track of this information and locating the disk shelves with their storage controllers simplifies cabling and ensures that each controller has its root volume and Data ONTAP software.

The factory labels the disk shelves with the storage controller to which they should connect.

Locating the disk shelves for each controller

Before mounting the components, you must locate the disk shelves that belong to each controller. The factory installs the root volume and Data ONTAP software on one disk shelf for each controller.

By cabling the correct disk shelves to each controller, you ensure that each controller has its root volume.

Before you begin
The components must not be in the rack or cabinet so that you can see the labels on the top of the component chassis.

About this task
Each disk shelf has a sticker on the top of its chassis that lists the serial number of the controller to which the shelf connects. The sticker also lists the stack number for configurations with more than one stack of disk shelves per controller.

1. As you unpack the disk shelves and storage controllers, check the labels on the top of each chassis.
   The SYSTEM SN is the identifier of the storage controller and the Chassis SN is the identifier of the disk shelf.
2. Using the label information, group or mark the disk shelves and controllers so that you can cable the disk shelves to the correct storage controllers after the components are mounted.

The following illustration is an example of the label for disk shelves:

SYSTEM SN: 00000011
Loop or Stack #: X
Chassis SN: SHU90956000ED27XXX

Mounting the components

You should mount the components in the rack or cabinet in the recommended positions to simplify cabling and service access.

Note: If you ordered the cluster in NetApp cabinets, the components are already mounted. The disk shelf containing the root volume is mounted next to its storage controller.

Depending on the number of disk shelves and size of the rack or cabinet, you might require multiple racks or cabinets. The general rule is to keep the components of an HA pair together to simplify cabling. Each disk shelf has a label on the top of its chassis that lists the storage controller to which that disk shelf connects. Note that the label is not visible after mounting.

Typical configuration
The following graphic illustrates a typical configuration of a four-node cluster. The main cabinet (1) contains the nodes and switches while the other cabinets (2, 3) contain additional disk shelves. The components are installed as follows:

- The cluster switches (A) are always at the top if present.
- The management switches (B) are optional.
- The FC switches (C) are optional.
- The first disk shelf in each stack (E, F, H, I) is located as close to the nodes as possible.
- Additional disk shelves in the stacks are installed in the main cabinet as space permits (D, J).
- Two or four nodes (G) can be installed in the main cabinet.
- The disk stacks that start in the main cabinet continue in the adjacent cabinets (K).
- Blank plates are installed to cover any empty spaces in the cabinet (L).

Cabling the 62xx system

Correctly cabling the system enables the components to communicate with each other and with the rest of your storage environment in both normal and failure conditions.

1. Labeling the cables on page 17
   You should label the cables to simplify future expansion and to make troubleshooting of the cluster easier.
2. Cabling the disk shelf alternate control path ports on page 17
   The alternate control path (ACP) uses the ACP ports on the disk shelves to connect to the storage controllers. Having a control path that is separate from the data path increases reliability and simplifies troubleshooting.
3. Cabling the disk shelf SAS data ports on page 18
   The serial-attached SCSI (SAS) ports enable each controller in the HA pair to access its own disks and the disks of its partner controller.
4. Cabling the HA pair NVRAM ports on page 25
   The nonvolatile RAM (NVRAM) ports of the two controllers in an HA pair enable the pair to mirror its write cache and perform HA monitoring.
5. Cabling the console ports on page 25
   The console ports enable you to manage the server when a network connection is not available, for example, during software installation or network troubleshooting.
6. Cabling the management ports on page 26
   The storage controller management ports enable you to manage the cluster using the CLI or software such as System Manager. The private cluster switch management ports enable you to manage the switches.
7. Cabling the private cluster interconnect for NetApp CN1610 switches on page 27
   The private cluster interconnect enables the cluster nodes to communicate with each other and to move data within the cluster. The private cluster interconnect uses dedicated 10 GbE switches.
8. Cabling the private cluster ISLs for NetApp CN1610 switches on page 29
   The four inter-switch links (ISLs) between the two private cluster interconnect switches enable high availability of the cluster network.
9. Cabling the Ethernet data networks on page 29
   The Ethernet ports on the storage controllers enable data traffic for NFS and CIFS.
10. Cabling the power supplies on page 30
    The power supplies provide the electrical power needed to operate the system. The storage components should connect to two independent power sources.

Labeling the cables

You should label the cables to simplify future expansion and to make troubleshooting of the cluster easier.

Before you begin
You must have the binder of labels shipped in the accessories box for the cluster.

About this task
You can label the cables at any point during the installation process.

1. Using the labels in the binder supplied with the cluster, label each end of each cable.
   You do not label the power cables.
2. Save the binder containing any remaining labels for future expansion of the cluster.

Cabling the disk shelf alternate control path ports

The alternate control path (ACP) uses the ACP ports on the disk shelves to connect to the storage controllers. Having a control path that is separate from the data path increases reliability and simplifies troubleshooting.

About this task
The ACP cabling uses standard CAT6 Ethernet cables.

1. Connect the private management port e0p of controller 1 in an HA pair to the ACP port with the square symbol on the first disk shelf.
   The e0p port is labeled with a wrench and padlock symbol.
2. Connect the ACP port with the round symbol to the next ACP port with a square symbol. Continue in a daisy-chain until all ACP ports for the shelves used by the HA pair are connected.
3. Connect the final ACP port with the round symbol to the private management port e0p of controller 2 in the HA pair.
4. Repeat for the other HA pairs in the cluster.

ACP cabling example
(Diagram: the e0p port of controller 1 connects to the first ACP port in disk stack 1; the ACP ports of the shelves in stacks 1 and 2 are daisy-chained; the last ACP port connects back to the e0p port of controller 2.)

Cabling the disk shelf SAS data ports

The serial-attached SCSI (SAS) ports enable each controller in the HA pair to access its own disks and the disks of its partner controller.

Before you begin
You must have located and marked which disk shelves go with each storage controller.

About this task
Disk shelf cabling is always isolated within the HA pair that owns the disks. Disk shelves are never connected to other HA pairs in the cluster.

QSFP to QSFP SAS cables are used to connect disk shelves together and to connect disk shelves to the SAS ports on the controller.

Attention: It is possible to insert the QSFP connector incorrectly (upside down). The connector can slide in and even appear to click into place. However, the latch cannot engage unless the connector is inserted correctly. After inserting the cable, pull on the connector to ensure that it is latched.

For disk shelf I/O modules and controller onboard SAS ports, the cables must be inserted with the bottom of the connector facing down, as shown in the following diagram. The bottom of the connector is slightly longer than the top, and the latch release is typically on the bottom.

For controller SAS ports on add-in cards, the orientation depends on the slot in which the SAS card is installed. Insert the connector gently, and be sure that it latches securely. If the connector does not latch, rotate it 180 degrees and try again.

These instructions assume two stacks of disk shelves per HA pair. You can split your disk shelves into additional stacks to increase I/O throughput. Additional stacks require adding additional SAS ports to the storage controllers. See the Universal SAS and ACP Cabling Guide if you want to use additional stacks of disk shelves.

1. Set the disk shelf ID numbers for each disk shelf.
   Note: The disk shelf ID numbers might have been set in the factory.
   The shelf IDs in the first stack should be set to 10, 11, 12, and so on, with the first shelf ID set to 10. The second stack IDs should be 20, 21, 22, and so on. The third stack IDs should start with 30, the fourth stack IDs with 40, the fifth stack IDs with 50, and the sixth stack IDs with 60. This pattern makes it easy to keep track of each stack. IDs must be unique within an HA pair.
   a) Temporarily connect power cords to the disk shelf, and then turn on the power to the disk shelf.
   b) Remove the left ear cover from the shelf.
   c) Press and hold the U-shaped tab above the LEDs (DS4243) or shelf ID button (DS2246) until the first digit blinks.
   d) Press the tab or button until the correct number is displayed.
   e) Repeat steps c and d for the second digit.
   f) Press and hold the tab or button until the second number stops blinking.
      Both numbers blink and the operator display panel fault LED illuminates in about five seconds.
   g) Power-cycle the disk shelf to make the new disk shelf ID take effect.
2. Starting with the first stack of disk shelves, cable the I/O modules of the stacks to each other as follows. If your system has only one disk shelf per stack, skip this step.
   a) Connect the SAS port with the round symbol from the first shelf to the SAS port with the square symbol on the second shelf.
      I/O module A is either on the top or on the left, depending on your disk shelf model.
   b) Continue connecting ports on the remaining disk shelves in the stack, from the ports with a round symbol to the ports with a square symbol.
   c) Connect the SAS port with the round symbol from the first shelf to the SAS port with the square symbol on the second shelf.
      I/O module B is either on the bottom or on the right, depending on your disk shelf model.
   d) Continue connecting ports on the remaining disk shelves in the stack, from the ports with a round symbol to the ports with a square symbol.

   Example
   (Diagram: controller 1 and controller 2, each with SAS ports 11a-11d and 12a-12d, and two stacks of disk shelves: disk stack 1 with shelf IDs 10, 11, and 12, and disk stack 2 with shelf IDs 20, 21, and 22. The shelves within each stack are cabled to each other.)

3. Repeat all previous steps for each stack of disk shelves. If your system has only one disk shelf per stack, skip this step.
4. Connect the first set of controller-to-shelf SAS cables as shown in the following diagram.
   These instructions use two quad SAS cards in slots 11 and 12. If your SAS cards are in different slots, adjust the port numbers for the slots actually used. For two stacks of disks, you do not use two ports on each card. However, you still need two cards to avoid a single point of failure.

   Example
   (Diagram: SAS ports 11a on both controllers cabled to the first shelf in the first stack.)

   a) Connect controller 1 SAS port 11a to the SAS port with the square symbol in I/O module A of the first shelf in the first stack.
   b) Connect controller 2 SAS port 11a to the SAS port with the square symbol in I/O module B of the first shelf in the first stack.
5. Connect the next set of SAS cables as shown in the following diagram:

   Example
   (Diagram: SAS ports 11c on both controllers cabled to the first shelf in the second stack.)

   a) Connect controller 1 SAS port 11c to the SAS port with the square symbol in I/O module A of the first shelf in the second stack.
   b) Connect controller 2 SAS port 11c to the SAS port with the square symbol in I/O module B of the first shelf in the second stack.
6. Connect the next set of SAS cables as shown in the following diagram:

   Example
   (Diagram: SAS ports 12b on both controllers cabled to the last shelf in the first stack.)

   a) Connect controller 1 SAS port 12b to the SAS port with the round symbol in I/O module B of the last shelf in the first stack.
   b) Connect controller 2 SAS port 12b to the SAS port with the round symbol in I/O module A of the last shelf in the first stack.
7. Connect the final set of SAS cables as shown in the following diagram:

   Example
   (Diagram: SAS ports 12d on both controllers cabled to the last shelf in the second stack.)

   a) Connect controller 1 SAS port 12d to the SAS port with the round symbol in I/O module B of the last shelf in the second stack.
   b) Connect controller 2 SAS port 12d to the SAS port with the round symbol in I/O module A of the last shelf in the second stack.
8. Repeat all steps in this topic for the other HA pairs in the cluster.

Cabling the HA pair NVRAM ports

The nonvolatile RAM (NVRAM) ports of the two controllers in an HA pair enable the pair to mirror its write cache and perform HA monitoring.

About this task
This task does not apply to systems with two controllers in the same chassis; the HA pair connection is made internally through the chassis backplane.

You use the supplied QSFP to QSFP cables or SFP modules and optical cables.

1. Connect port 2a (top port of NVRAM8 card in vertical slot 2) on the first controller in the HA pair to port 2a on the other controller.
2. Connect port 2b (bottom port of NVRAM8 card in vertical slot 2) on the first controller in the HA pair to port 2b on the other controller.
3. Repeat for the other HA pairs in the cluster.

HA pair cabling example
(Diagram: the NVRAM8 ports in vertical slot 2 of the two controllers connected to each other to carry HA traffic.)

Cabling the console ports

The console ports enable you to manage the server when a network connection is not available, for example, during software installation or network troubleshooting.

About this task
You can connect the storage controller console ports to a physical terminal or to a terminal or console server. The console ports use an RJ-45 connector. An RJ-45 to DB-9 console adapter is included with the storage controller. You supply the appropriate cable for your terminal server or terminal.

1. Connect a cable from a terminal server or terminal to the console port of the first storage controller.
   The console port is labeled with a serial data symbol.
2. Repeat for each storage controller in the cluster.

After you finish
Record the mapping of terminal server ports to storage controllers.

Cabling the management ports

The storage controller management ports enable you to manage the cluster using the CLI or software such as System Manager. The private cluster switch management ports enable you to manage the switches.

About this task
You can connect the controller and cluster switch management ports to existing switches in your network or to new dedicated network switches, such as NetApp CN1601 cluster management switches.

1. Connect a cable from the management port of the first storage controller in an HA pair to one switch in your Ethernet network.
   The management port is labeled with a wrench symbol and is identified as e0M in commands and output.
2. Connect a cable from the management port of the other storage controller in the HA pair to a different switch in your Ethernet network.
3. Repeat for the other HA pairs in the cluster.
4. Connect a cable from the management port of the first private cluster switch to one switch in your Ethernet network.
5. Connect a cable from the management port of the second private cluster switch to a different switch in your Ethernet network.

Cabling the private cluster interconnect for NetApp CN1610 switches

The private cluster interconnect enables the cluster nodes to communicate with each other and to move data within the cluster. The private cluster interconnect uses dedicated 10 GbE switches.

About this task
The private cluster interconnect uses either 10 GbE copper cables or SFP modules and optical cables. The required cables are supplied with the system.

Additional information about the CN1610 switches is available in the CN1601 and CN1610 Switch Setup and Configuration Guide.

1. Using the supplied cable, connect port e0c of the first storage controller to port 1 of the top cluster switch.
2. Connect port e0e of the first storage controller to port 1 of the bottom cluster switch.
3. Repeat for the other storage controllers in the cluster.
   Connect the second controller to port 2 of each switch, the third controller to port 3, and so on.

Private cluster interconnect cabling for NetApp CN1610 switches
The following example shows six nodes. You connect the cluster interconnect the same way regardless of the number of nodes in the cluster. If you are installing an eight-node cluster, controller 7 connects to port 7, and controller 8 connects to port 8.

(Diagram: ports e0c and e0e of controllers 1 through 6 connected to ports 1 through 6 of the top and bottom CN1610 cluster switches, respectively.)

Related information
CN1601 and CN1610 Switch Setup and Configuration Guide: support.netapp.com/documentation/docweb/index.html?productid=61472

Cabling the private cluster ISLs for NetApp CN1610 switches

The four inter-switch links (ISLs) between the two private cluster interconnect switches enable high availability of the cluster network.

About this task
The ISLs use 10 GbE copper cables. The required cables are supplied with the system.

1. Connect port 13 of the top cluster switch to port 13 of the bottom cluster switch.
2. Repeat for ports 14 through 16.
   There are 4 ISLs.

NetApp CN1610 cluster switch ISL cabling
(Diagram: ports 13 through 16 of the top cluster switch connected to ports 13 through 16 of the bottom cluster switch.)

Cabling the Ethernet data networks

The Ethernet ports on the storage controllers enable data traffic for NFS and CIFS.

About this task
You must connect to two or more separate networks for high availability. The specific controller ports you use depend on the controller model, bandwidth requirements, and the network speed and media. You can use more ports or faster ports for higher bandwidth.

1. Cable one or more Ethernet ports from the first storage controller to an Ethernet switch in your data network.

2. Cable one or more Ethernet ports from the first storage controller to a different Ethernet switch in your data network.
3. Repeat these steps for the other storage controllers in the cluster.
4. Record which storage controller ports connect to which switch ports in your network.

After you finish
See the Clustered Data ONTAP Network Management Guide for detailed network configuration information.

Cabling the power supplies

The power supplies provide the electrical power needed to operate the system. The storage components should connect to two independent power sources.

About this task
The supplied cables should fit the power receptacles in typical enclosures for your country. You might need to provide different cables. All power supplies must be connected to a power source.

1. Connect the first power supply of each cluster component to the first power source in the enclosure or rack.
2. Connect the second power supply of each cluster component to a second power source in the enclosure or rack.

Setting up the cluster

Setting up the cluster involves creating the cluster on the first node, joining any remaining nodes to the cluster, and then configuring a number of features, such as synchronizing the system time, that enable the cluster to operate nondisruptively.

1. Creating the cluster on the first node on page 31
   You use the Cluster Setup wizard to create the cluster on the first node. The wizard helps you to configure the cluster network that connects the nodes (if the cluster consists of two or more nodes), create the cluster admin Vserver, add feature license keys, and create the node management interface for the first node.
2. Joining a node to the cluster on page 33
   After creating a new cluster, you use the Cluster Setup wizard to join each remaining node to the cluster one at a time. The wizard helps you to configure each node's node management interface.
3. Connecting System Manager to the cluster on page 35
   Before you use OnCommand System Manager to manage your cluster, you have to add the cluster to System Manager.
4. Synchronizing the system time using NTP on page 36
   The cluster needs a Network Time Protocol (NTP) server to synchronize the time between the nodes and their clients. You can use the Edit DateTime dialog box in OnCommand System Manager to configure the NTP server.
5. Configuring AutoSupport on page 36
   You must configure how and where AutoSupport information is sent from each node, and then test that the configuration is correct. AutoSupport messages are scanned for potential problems and are available to technical support when they assist customers in resolving issues.
6. Configuring a node for troubleshooting and maintenance on page 37
   The service processor (SP) enables troubleshooting and maintaining a node even if the node is not running. To enable access to the SP of a node, you need to configure the SP network.

Creating the cluster on the first node

You use the Cluster Setup wizard to create the cluster on the first node. The wizard helps you to configure the cluster network that connects the nodes (if the cluster consists of two or more nodes), create the cluster admin Vserver, add feature license keys, and create the node management interface for the first node.

Before you begin
The cluster setup worksheet should be completed, the storage system hardware should be installed and cabled, and the console should be connected to the node on which you intend to create the cluster.

1. Power on the first node.
   The node boots, and the Cluster Setup wizard starts on the console.

   Welcome to the cluster setup wizard.

   You can enter the following commands at any time:
     "help" or "?" - if you want to have a question clarified,
     "back" - if you want to change previously answered questions, and
     "exit" or "quit" - if you want to quit the cluster setup wizard.
     Any changes you made before quitting will be saved.

   You can return to cluster setup at any time by typing "cluster setup".
   To accept a default or omit a question, do not enter a value.

   Do you want to create a new cluster or join an existing cluster?
   {create, join}:

   Note: If a login prompt appears instead of the Cluster Setup wizard, you must start the wizard by logging in using the factory default setting, and then entering the cluster setup command.
2. Create a new cluster:
   create
3. Follow the prompts to complete the Cluster Setup wizard:
   - To accept the default value for a prompt, press Enter.
     The default values are determined automatically based on your platform and network configuration.
   - To enter your own value for the prompt, enter the value, and then press Enter.
4. After the Cluster Setup wizard is completed and exits, verify that the cluster is active and the first node is healthy:
   cluster show

Example
The following example shows a cluster in which the first node (cluster1-01) is healthy and eligible to participate:

cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01           true    true

You can access the Cluster Setup wizard to change any of the values you entered for the admin Vserver or node Vserver by using the cluster setup command.

After you finish
If the cluster consists of two or more nodes, you should join each remaining node to the cluster.

Joining a node to the cluster

After creating a new cluster, you use the Cluster Setup wizard to join each remaining node to the cluster one at a time. The wizard helps you to configure each node's node management interface.

Before you begin
The cluster must be created on the first node.

About this task
You can only join one node to the cluster at a time. When you start to join a node to the cluster, you must complete the join, and the node must be part of the cluster before you can start to join the next node.

1. Power on the node.
   The node boots, and the Cluster Setup wizard starts on the console.

   Welcome to the cluster setup wizard.

   You can enter the following commands at any time:
     "help" or "?" - if you want to have a question clarified,
     "back" - if you want to change previously answered questions, and
     "exit" or "quit" - if you want to quit the cluster setup wizard.
     Any changes you made before quitting will be saved.

   You can return to cluster setup at any time by typing "cluster setup".
   To accept a default or omit a question, do not enter a value.

   Do you want to create a new cluster or join an existing cluster?
   {create, join}:

2. Enter the following command to join the node to the cluster:
   join
3. Follow the prompts to set up the node and join it to the cluster:
   - To accept the default value for a prompt, press Enter.
   - To enter your own value for the prompt, enter the value, and then press Enter.
4. After the Cluster Setup wizard is completed and exits, verify that the node is healthy and eligible to participate in the cluster:
   cluster show

   Example
   The following example shows a cluster after the second node (cluster1-02) has been joined to the cluster:

   cluster1::> cluster show
   Node                  Health  Eligibility
   --------------------- ------- ------------
   cluster1-01           true    true
   cluster1-02           true    true

   You can access the Cluster Setup wizard to change any of the values you entered for the admin Vserver or node Vserver by using the cluster setup command.
5. Repeat this task for each remaining node.

Troubleshooting boot failure of a node

If a node fails to boot when you power it on, you can check for common problems and resolve them.

Before you begin
The system must be cabled and powered on.

About this task
The root aggregate for each node is loaded onto the disk shelf intended for the node. If the wrong disk shelves are connected to the node, or if the correct shelves are not correctly cabled, the node cannot boot. Each node requires one and only one root aggregate to boot.

Troubleshooting boot failures uses the maintenance mode of the node, which can be accessed using the serial console port of the node.

1. Check that all disk shelf cables appear to be connected to the correct disk shelves and that the connectors are all fully seated.
2. Verify that all disk shelves are powered on.
3. Connect a computer with a terminal emulator program to the serial console port of the node that does not boot.
4. Turn off the node that does not boot and then turn it back on, observing the startup messages on the console.
5. When the message Press Ctrl-C for Boot Menu appears, press Ctrl-C to display the boot menu.
6. Select option 5 at the boot menu and continue with the boot in maintenance mode.
   The Maintenance mode command prompt (*>) is displayed.
7. Run the aggr status command, and then verify that exactly one aggregate is shown for the node.
8. Run the sasadmin shelf command, and then verify that the number of disk shelves displayed matches the number of shelves you connected to the node.
   There should be two paths to each shelf, so if there are five shelves, you should see 10 entries.
9. Run the storage show disk -p command, and then verify that each disk is connected to two different ports.
10. If any of these tests fail, correct the disk shelf arrangement or cabling, and then rerun the command.
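As a quick reference, the three maintenance mode checks from steps 7 through 9 can be run back to back at the *> prompt. This is only a sketch of the sequence; the output depends entirely on your shelf and disk configuration, so compare it against the counts described in the steps above.

*> aggr status
*> sasadmin shelf
*> storage show disk -p

If the counts do not match what you cabled (one root aggregate, two paths to every shelf, and two ports for every disk), recheck the SAS cabling before continuing.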

Connecting System Manager to the cluster

Before you use OnCommand System Manager to manage your cluster, you have to add the cluster to System Manager.

1. Double-click the NetApp OnCommand System Manager icon on your desktop.
2. From the home page, click Add.
3. Type the fully qualified DNS host name or the IPv4 address of the cluster.
   You must enter the cluster management interface IP address.
4. Click the More arrow.
5. Select the method for discovering and adding the cluster:
   - SNMP
     You must specify the SNMP community and SNMP version.
   - Credentials
     You must specify the user name and password.
6. Click Add.

Synchronizing the system time using NTP

The cluster needs a Network Time Protocol (NTP) server to synchronize the time between the nodes and their clients. You can use the Edit DateTime dialog box in OnCommand System Manager to configure the NTP server.

1. From the home page, double-click the appropriate storage system.
2. Expand the Cluster hierarchy in the left navigation pane.
3. In the navigation pane, click Configuration > System Tools > DateTime.
4. Click Edit.
5. Select the time zone.
6. Specify the IP addresses of the time servers and click Add.
   You must add an NTP server to the list of time servers. The domain controller can be an authoritative server.
7. Click OK.
8. Verify the changes you made to the date and time settings in the DateTime window.
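If you prefer to configure time services from the clustershell instead of System Manager, the same result can be achieved with the system services ntp server commands. The following is a minimal sketch under the assumption that the cluster runs Data ONTAP 8.2; the NTP command syntax changed in later releases, and ntp1.example.com is a placeholder for your own time server.

cluster1::> system services ntp server create -node cluster1-01 -server ntp1.example.com
cluster1::> system services ntp server show

Repeat the create command for each node in the cluster, and confirm with the show command that every node lists the same time server.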

Configuring AutoSupport

You must configure how and where AutoSupport information is sent from each node, and then test that the configuration is correct. AutoSupport messages are scanned for potential problems and are available to technical support when they assist customers in resolving issues.

1. From the home page, double-click the appropriate storage system.
2. Expand the Nodes hierarchy in the left navigation pane.
3. In the navigation pane, select the node, and then click Configuration > AutoSupport.
4. Click Edit.
5. In the From Email Address field, type the email address from which AutoSupport messages are sent.
6. In the Email Recipients area, add email addresses or distribution lists that will receive the AutoSupport messages and specify what content each recipient receives.
   You should enter at least one recipient outside your network so that you can verify that messages are being received outside your network.
7. In the Mail Hosts area, add the name or IP address of one or more mail hosts at your organization.
8. In the Others tab, select a transport protocol for delivering email messages from the drop-down list and, if you use HTTP or HTTPS, specify the proxy server.
9. Click OK.
10. Test the configuration by performing the following steps:
    a) Click Test, type a subject line that includes the word Test, and click Test.
    b) Confirm that your organization received an automated response from NetApp, which indicates that NetApp received the AutoSupport message.
       When NetApp receives an AutoSupport message with Test in the subject, it automatically sends a reply to the email address it has on file as the system owner.
    c) Confirm that the message was received by the email addresses that you specified.
    For more information about configuring AutoSupport, see the Clustered Data ONTAP System Administration Guide for Cluster Administrators.
11. Repeat this entire procedure for each node in the cluster.
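The same settings can be applied from the clustershell with the system node autosupport commands, which can be convenient when you are configuring many nodes. The following is a hedged sketch rather than the procedure above: the mail host, email addresses, and transport shown are placeholders that you would replace with the values from your AutoSupport worksheet.

cluster1::> system node autosupport modify -node cluster1-01 -mail-hosts mailhost.example.com -from admin@example.com -to support-team@example.com -transport https
cluster1::> system node autosupport invoke -node cluster1-01 -type test

The invoke command with -type test sends a test message in the same way as the Test button in System Manager, so you can confirm delivery before moving on to the next node.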

Configuring a node for troubleshooting and maintenance

The service processor (SP) enables troubleshooting and maintaining a node even if the node is not running. To enable access to the SP of a node, you need to configure the SP network.

About this task
You must configure the SP network using the CLI. For more information about configuring the SP, see the Clustered Data ONTAP System Administration Guide for Cluster Administrators.

1. Configure and enable the SP by using the system node service-processor network modify CLI command.
2. Display the SP network configuration to verify the settings by using the system node service-processor network show command.
3. Repeat for each node in the cluster.

Example of configuring the SP network
The following command configures the SP of a node:

cluster1::> system node service-processor network modify -node cluster1-01
-address-type IPv4 -enable true -ip-address 192.168.123.98
-netmask 255.255.255.0 -gateway 192.168.123.1

Verifying system operation

After your cluster is set up, you should ensure that it is operating correctly by verifying the cabling with the Config Advisor tool, creating a LIF failover group for cabled ports, and testing storage failover.

1. Verifying cabling on page 39
   You can use the Config Advisor tool to verify that the cluster is cabled correctly.
2. Creating failover groups on page 40
   You must define failover groups of cabled network ports for both the cluster management network and the data network and then assign the management failover group to the cluster management LIFs to help ensure that LIFs fail over to ports that can maintain communication.
3. Testing storage failover on page 44
   You should verify that each node can successfully fail over to another node. This helps ensure that the cluster is configured correctly. It also helps to ensure that you can maintain access to data if a real failure occurs.

Verifying cabling

You can use the Config Advisor tool to verify that the cluster is cabled correctly.

Before you begin
The hardware setup must be complete and all the components must be powered on.

About this task
If you require support for the Config Advisor tool, you must follow the procedure in the tool's online Help topic Reporting issues in Config Advisor. The Config Advisor tool is not supported by the typical NetApp support process.

1. Connect the laptop that contains Config Advisor to the management network for the cluster.
2. Optional: Change the IP address of your laptop to an unused address on the subnet for the management network.
3. Start Config Advisor and select the profile Clustered Data ONTAP.
4. Select the cluster switch model.
5. Enter the requested IP addresses and credentials.

6. Click Collect Data.
   The Config Advisor tool displays any problems found. If problems are found, correct them and run the tool again.

Related information
Config Advisor from NetApp Support Site: support.netapp.com/eservice/toolchest

Creating failover groups

You must define failover groups of cabled network ports for both the cluster management network and the data network and then assign the management failover group to the cluster management LIFs to help ensure that LIFs fail over to ports that can maintain communication.

About this task
The order in which the ports are added to a failover group can affect the failover order, depending on the LIF's failover policy.

This procedure assumes that the cluster has a single management network and a single data network. If the cluster is divided into different VLANs or interface groups, see the Clustered Data ONTAP Network Management Guide instead of using this procedure.

You must perform this procedure from the Data ONTAP command line.

1. Create the failover group for the ports in the management network by performing the following actions:
   a) Starting with the first node, create the failover group by using the network interface failover-groups create command with the e0a port on the first node.

      Example
      The following command creates a failover group called mgmt that includes the management port on the first node:

      cluster1::> network interface failover-groups create -failover-group mgmt -node cluster1-01 -port e0a

   b) Add the management port from each additional node in sequence by using the network interface failover-groups create command with the e0a port from each node.

      Example
      The following commands add the management ports from each additional node in a six-node cluster to the mgmt failover group:

      cluster1::> network interface failover-groups create -failover-group mgmt -node cluster1-02 -port e0a
      cluster1::> network interface failover-groups create -failover-group mgmt -node cluster1-03 -port e0a
      cluster1::> network interface failover-groups create -failover-group mgmt -node cluster1-04 -port e0a
      cluster1::> network interface failover-groups create -failover-group mgmt -node cluster1-05 -port e0a
      cluster1::> network interface failover-groups create -failover-group mgmt -node cluster1-06 -port e0a

   c) Verify that the failover group contains the correct ports in the correct order by using the network interface failover-groups show command with the -failover-group parameter.

      Example
      The following command displays the six ports in the mgmt failover group of a six-node cluster:

      cluster1::> network interface failover-groups show -failover-group mgmt
      Failover Group      Node              Port
      ------------------- ----------------- ----------
      mgmt                cluster1-01       e0a
                          cluster1-02       e0a
                          cluster1-03       e0a
                          cluster1-04       e0a
                          cluster1-05       e0a
                          cluster1-06       e0a
      6 entries were displayed.

2. Assign the management failover group to the cluster management LIF by performing the following actions:
   a) Identify the name and node of the cluster management LIF by using the network interface show command with the -lif * and -role cluster-mgmt parameters.

      Example
      cluster1::> network interface show -lif * -role cluster-mgmt
                  Logical      Status     Network            Current       Current Is
      Vserver     Interface    Admin/Oper Address/Mask       Node          Port    Home
      ----------- ----------   ---------- ------------------ ------------- ------- ----
      cluster1    cluster_mgmt up/up      10.53.33.16/18     cluster1-02   e0a     false

   b) Assign the mgmt failover group to the cluster-management LIF by using the network interface modify command with the -vserver * parameter.

      Example
      cluster1::> network interface modify -lif cluster_mgmt -failover-group mgmt -vserver *
      1 entry was modified.

   c) Verify that the cluster-management LIF uses the mgmt failover group by using the network interface show command with the -failover parameter.

      Example
      cluster1::> network interface show -failover
               Logical         Home                  Failover        Failover
      Vserver  Interface       Node:Port             Policy          Group
      -------- --------------- --------------------- --------------- ---------------
      cluster1 cluster_mgmt    cluster1-01:e0a       nextavail       mgmt
                               Failover Targets: cluster1-01:e0a, cluster1-02:e0a,
                               cluster1-03:e0a, cluster1-04:e0a, cluster1-05:e0a,
                               cluster1-06:e0a...

3. Create the failover group for the ports in the data network by performing the following actions:
   a) Starting with the first node, create the failover group by using the network interface failover-groups create command with the e0d port on the first node.

      Example
      The following command creates a failover group called data that contains the first data port on the first node:

      cluster1::> network interface failover-groups create -failover-group data -node cluster1-01 -port e0d

   b) Continuing with the first node, add the other data port to the failover group by using the network interface failover-groups create command with the e0f port on the first node.

      Example
      The following command adds the second data port on the first node to the data failover group:

      cluster1::> network interface failover-groups create -failover-group data -node cluster1-01 -port e0f

   c) Add both data ports from each additional node in sequence by using the network interface failover-groups create command with the e0d and e0f ports from each node, as in the sketch that follows.
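Continuing the pattern of the mgmt failover group commands above, the commands for the remaining nodes of a six-node cluster would look like the following sketch. This is an illustration that mirrors the earlier examples rather than a listing from the original procedure; adjust the node names and ports to match your cluster.

cluster1::> network interface failover-groups create -failover-group data -node cluster1-02 -port e0d
cluster1::> network interface failover-groups create -failover-group data -node cluster1-02 -port e0f
cluster1::> network interface failover-groups create -failover-group data -node cluster1-03 -port e0d
cluster1::> network interface failover-groups create -failover-group data -node cluster1-03 -port e0f

Repeat the pair of commands for each remaining node, and then verify the result with the network interface failover-groups show -failover-group data command.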