Test Range NetApp Build Document


Purpose: This document describes the hardware, software, cabling, and configuration of the NDB Test Range NetApp as built during the NetApp SE onsite support visit. In addition, it covers the configuration of the ESXi hosts and switches that support NetApp connectivity.

NetApp Hardware:
Model: FAS2552, dual controllers
(24) 1.2TB SAS disks

Software:
NetApp Clustered Data ONTAP Release 8.3RC2
This release makes more efficient use of storage space by striping root partitions across multiple disks. With this software version, each disk carries two partitions: one small partition for root and one large partition for data.

Physical Cabling:
Cabling follows the NetApp cabling guide for switchless Clustered Data ONTAP (guide included in the box with the NetApp). All cabling is described in the table below:

NetApp Port      Connected Host/Interface    Description
Node 1 e0a       --                          Unused
Node 1 e0b       --                          Unused
Node 1 e0c       SAN-SW-1 TE1                IP Storage
Node 1 e0d       SAN-SW-2 TE1                IP Storage
Node 1 e0e       Node 2 e0e                  Cluster connection
Node 1 e0f       Node 2 e0f                  Cluster connection
Node 1 e0m       TR-CORE-1 g1/5              Wrench port (for mgmt cx)
Node 1 SAS 0a    Node 2 SAS 0b               For shelf-to-shelf comms
Node 1 SAS 0b    Node 2 SAS 0a               For shelf-to-shelf comms
Node 2 e0a       --                          Unused
Node 2 e0b       --                          Unused
Node 2 e0c       SAN-SW-1 TE2                IP Storage
Node 2 e0d       SAN-SW-2 TE2                IP Storage
Node 2 e0e       Node 1 e0e                  Cluster connection
Node 2 e0f       Node 1 e0f                  Cluster connection
Node 2 e0m       TR-CORE-1 g1/6              Wrench port (for mgmt cx)
Node 2 SAS 0a    Node 1 SAS 0b               For shelf-to-shelf comms
Node 2 SAS 0b    Node 1 SAS 0a               For shelf-to-shelf comms

NetApp Configuration:
Node 1 and Node 2 are configured as an active/passive cluster named cluster01. In an active/passive cluster there is only one large data aggregate, and it is assigned to Node 1; Node 2 takes over the data aggregate if Node 1 fails. Node 1 and Node 2 also each have their own root partition. The NetApp can be managed through SSH or through the web client (https://10.9.8.30).

Networking Configurations:

Create if_groups (2) -- these create the EtherChannel for the IP Storage connections:
- a1a: e0c and e0d on Node 1 are members
- a2a: e0c and e0d on Node 2 are members
- Both if_groups use create policy multimode LACP

Create Broadcast Domains (4):
- Default: physical interfaces e0a and e0b on both nodes are assigned to this broadcast domain. This broadcast domain is just a parking lot for unused interfaces. IPspace Default is assigned to this broadcast domain.
- Cluster: physical interfaces e0e and e0f on both nodes are assigned to this broadcast domain. This broadcast domain is used for cluster communications; MTU is set to 9000. IPspace Cluster is assigned to this broadcast domain; cluster IPs are dynamically assigned.
- IP-Storage: if_group a1a on Node 1 and a2a on Node 2 are assigned to this broadcast domain. MTU is set to 9000 for IP storage optimization. IPspace Default is assigned to this broadcast domain.
- Management: physical interface e0m on both nodes is assigned to this broadcast domain.
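The if_group and broadcast-domain steps above can be sketched with the clustered Data ONTAP CLI. This is a hedged sketch rather than the exact commands run on this system: the node names cluster01-01 and cluster01-02 are assumptions, and option syntax can vary between ONTAP releases.

```shell
# Sketch of the if_group setup above (clustered Data ONTAP 8.3 CLI).
# Node names cluster01-01/cluster01-02 are assumptions.
network port ifgrp create -node cluster01-01 -ifgrp a1a -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node cluster01-01 -ifgrp a1a -port e0c
network port ifgrp add-port -node cluster01-01 -ifgrp a1a -port e0d
network port ifgrp create -node cluster01-02 -ifgrp a2a -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node cluster01-02 -ifgrp a2a -port e0c
network port ifgrp add-port -node cluster01-02 -ifgrp a2a -port e0d

# IP-Storage broadcast domain with jumbo frames, holding both if_groups
network port broadcast-domain create -ipspace Default -broadcast-domain IP-Storage -mtu 9000 -ports cluster01-01:a1a,cluster01-02:a2a
```

`network port show` and `network port broadcast-domain show` can then confirm the port-to-domain assignments.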

This broadcast domain is used for management traffic. IPspace Default is assigned to this broadcast domain.

Create Subnets (1):
- IP-Storage: IP range 10.0.7.30-10.0.7.35. These IPs are automatically allocated to a new datastore when it is created.

Create lifs (4):
- Logical interfaces for the volumes: lif_vmw_ds01 through lif_vmw_ds04 created.
- lifs are needed so that each volume is assigned its own IP. When a volume is moved to a different node, the lif moves to the new node, preventing suboptimal routing (for example, Node 1 -> Node 2 -> Volume).
- Subnet IP-Storage is assigned to the lifs.
- Storage Virtual Machine vmw_svm is assigned to the lifs.

Other Management Network Interfaces:
- Cluster management: 10.9.8.30
- Node 1 management: 10.9.8.31
- Node 2 management: 10.9.8.32
- Node 1 service port: 10.9.8.33 (an OOB console port)
- Node 2 service port: 10.9.8.34 (an OOB console port)

Create Aggregates (2 root and 1 data):
- n1_aggr_root: total space 368.42GB, striped across 10 disks, assigned to Node 1
- n2_aggr_root: total space 368.42GB, striped across 10 disks, assigned to Node 2
- n3_aggr_data: total space 19.59TB, striped across 23 disks, assigned to Node 1 (Node 2 takes over if Node 1 fails)

All aggregates are configured with RAID-DP.
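The subnet and lif steps above can be sketched as follows; the SVM, lif, and subnet names come from this document, while the node name cluster01-01 is an assumption and option syntax may differ slightly by ONTAP release.

```shell
# Sketch: subnet with the IP range above, then one data lif per
# datastore volume (repeat for lif_vmw_ds02 through lif_vmw_ds04).
network subnet create -ipspace Default -subnet-name IP-Storage -broadcast-domain IP-Storage -subnet 10.0.7.0/24 -ip-ranges "10.0.7.30-10.0.7.35"

network interface create -vserver vmw_svm -lif lif_vmw_ds01 -role data -data-protocol nfs -home-node cluster01-01 -home-port a1a -subnet-name IP-Storage
```

Because the lif draws its address from the subnet, no per-lif IP needs to be typed; this is what lets a new datastore pick up the next free address automatically.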

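The aggregate layout above could be created along these lines. This is a simplified sketch: on ONTAP 8.3 with partitioned disks the root aggregates are built from the root partitions during initialization, and disk selection options are omitted here; the node name cluster01-01 is an assumption.

```shell
# Sketch of the data aggregate creation above (RAID-DP, 23 disks,
# owned by Node 1 so Node 2 can take it over on failure).
storage aggregate create -aggregate n3_aggr_data -node cluster01-01 -diskcount 23 -raidtype raid_dp
storage aggregate show
```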
Enter License Keys:
- Cluster base license
- NFS license
- CIFS license
- iSCSI license
- FCP license

Create a Storage Virtual Machine (SVM):
- vmw_svm: a virtual machine that manages the storage. There are 3 main sections to configure.

Create Volumes (4 data and 1 root):
- vmw_ds01 through vmw_ds04 created for data, 4TB each
- Storage efficiency (deduplication) enabled on the data volumes
- Security style = UNIX
- vmwsvm_root is the root volume; storage efficiency is disabled on it

Mount Volumes to Namespace:
- The namespace is rooted at the root volume; mount each volume into it.

Create Export Policies (2):
- Apply the Default export policy to the root volume vmwsvm_root. Default allows everyone read-only access to the root volume via the NFS protocol (Access Details: select all read-only).
- Apply the vmw_policy export policy to the data volumes. vmw_policy allows the IP Storage subnet (10.0.7.0/24) full access to the data volumes via the NFS protocol (Access Details: select UNIX read-write).

Configure Protocol Services:
- Enable NFS for this SVM.
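The volume, export-policy, and NFS steps above can be sketched in the clustered Data ONTAP CLI. A hedged sketch for one data volume; repeat the volume commands for vmw_ds02 through vmw_ds04.

```shell
# Sketch: 4TB UNIX-style data volume junctioned into the namespace,
# with deduplication enabled.
volume create -vserver vmw_svm -volume vmw_ds01 -aggregate n3_aggr_data -size 4TB -security-style unix -junction-path /vmw_ds01
volume efficiency on -vserver vmw_svm -volume vmw_ds01

# Export policy granting the IP Storage subnet read-write NFS access
vserver export-policy create -vserver vmw_svm -policyname vmw_policy
vserver export-policy rule create -vserver vmw_svm -policyname vmw_policy -clientmatch 10.0.7.0/24 -protocol nfs -rorule sys -rwrule sys

# Enable the NFS protocol service on the SVM
vserver nfs create -vserver vmw_svm
```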

TR SAN-SWITCH Configuration:
- Set system MTU 9000 to enable jumbo frames globally. Jumbo frames are recommended by NetApp to optimize performance on the storage network.
- Port-channel 1 created to support EtherChannel for Node 1 IP Storage connectivity:
  Interfaces TE1/1/1 and TE2/1/1
  channel-group 1 mode active
  switchport access vlan 7
  spanning-tree portfast
- Port-channel 2 created to support EtherChannel for Node 2 IP Storage connectivity:
  Interfaces TE1/1/2 and TE2/1/2
  channel-group 2 mode active
  switchport access vlan 7
  spanning-tree portfast

TR-CORE-1 Configuration:
- Ports g1/5 and g1/6 configured in VLAN 98 for NetApp management traffic:
  g1/5: Node 1 management
  g1/6: Node 2 management

TR ESXi Host Configuration:
- vSwitch1: 120 ports, MTU set to 9000
  Active adapters: vmnic2, vmnic3
  Load balancing: route based on originating virtual port
- vmk1: IP-storage networking interface
  MTU set to 9000, VLAN ID 7
  Load balancing: route based on originating virtual port
  Active adapters: vmnic2, vmnic3
- Added (4) NetApp NFS datastores to the ESXi hosts in a cluster named cluster_ds:
  vmw_ds01: 10.0.7.30
  vmw_ds02: 10.0.7.31
  vmw_ds03: 10.0.7.32
  vmw_ds04: 10.0.7.33
  Total storage capacity: 16TB
- Installed the NetApp VSC plugin on TR-monitor (10.9.8.22) and registered it to the vCenter server.
  Optimized NFS settings on the ESXi hosts through VSC.
  Allows management of the NetApp through VMware.
- Installed the NetApp VAAI plugin on each ESXi host via the command line. From an SSH prompt on the ESXi server, type:
  esxcli software sources vib list -d NetAppNasPlugin.zip
  esxcli software vib install -n NetAppNasPlugin -d NetAppNasPlugin.zip
- Enabled hardware acceleration on the datastores through VMware.
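The datastore mounts and VAAI plugin check above can also be driven from the ESXi command line. A hedged sketch for one host and one datastore; repeat the mount for vmw_ds02 through vmw_ds04 (VSC normally performs the mounts across the cluster, so this is an equivalent manual path, not the method used here).

```shell
# Sketch: mount one NFS datastore by its lif address and export path,
# then confirm the mount and the installed NetApp VAAI plugin.
esxcli storage nfs add -H 10.0.7.30 -s /vmw_ds01 -v vmw_ds01
esxcli storage nfs list
esxcli software vib list | grep -i netapp
```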