
IBM System Storage N series Data ONTAP 8.2 Storage Management Guide For 7-Mode SC27-5928-02

Contents

Preface
About this guide
Supported features
Websites
Getting information, help, and service
Before you call
Using the documentation
Hardware service and support
Firmware updates
How to send your comments
Data ONTAP storage architecture overview
Managing disks using Data ONTAP
How Data ONTAP reports disk types
Storage connection types and topologies supported by Data ONTAP
How disks can be combined for the SAS storage connection type
How disks can be combined for the FC-AL storage connection type
Methods of calculating aggregate and system capacity
Disk speeds supported by Data ONTAP
How drive checksum types affect aggregate and spare management
Drive name formats
Loop IDs for FC-AL connected disks
Understanding RAID disk types
How disk sanitization works
Disk sanitization process
When disk sanitization cannot be performed
What happens if disk sanitization is interrupted
How selective disk sanitization works
Tips for creating and backing up aggregates containing data to be sanitized
How Data ONTAP monitors disk performance and health
What happens when Data ONTAP takes disks offline
How Data ONTAP reduces disk failures using Rapid RAID Recovery
How the maintenance center helps prevent drive errors
How Data ONTAP uses continuous media scrubbing to prevent media errors
Increasing storage availability by using ACP
Enabling ACP
How you use SSDs to increase storage performance
How Data ONTAP manages SSD wear life
Capability differences between SSDs and HDDs
Guidelines and requirements for using multi-disk carrier storage shelves
How Data ONTAP avoids RAID impact when a multi-disk carrier must be removed
How to determine when it is safe to remove a multi-disk carrier
Spare requirements for multi-disk carrier disks
Shelf configuration requirements for multi-disk carrier storage shelves
Aggregate requirements for disks in multi-disk carrier storage shelves
Considerations for using disks from a multi-disk carrier storage shelf in an aggregate
Adding disks to a storage system
When you need to update the Disk Qualification Package
Replacing disks that are currently being used in an aggregate
Replacing a self-encrypting disk
Converting a data disk to a hot spare
Removing disks from a storage system
Removing a failed disk
Removing a hot spare disk
Removing a data disk
Using disk sanitization to remove data from disks
Removing data from disks using selective disk sanitization
Stopping disk sanitization
How ownership for disks and array LUNs works
Reasons to assign ownership of disks and array LUNs
What it means for Data ONTAP to own an array LUN
Why you might assign array LUN ownership after installation
How disks and array LUNs become available for use
How automatic ownership assignment works for disks
What automatic ownership assignment does
When automatic ownership assignment is invoked
Examples showing when Data ONTAP can use array LUNs
Managing ownership for disks and array LUNs
Guidelines for assigning ownership for disks
Assigning ownership for unowned disks and array LUNs
Removing ownership from a disk
Configuring automatic ownership assignment of disks
Modifying assignment of spare disks or array LUNs
Verifying the existence of two paths to an array LUN
Verifying the existence of two paths: storage show disk command
Verifying the existence of two paths: storage array show-config command
Verifying path failover for array LUNs
Verifying path failover for array LUNs in a stand-alone system
Verifying path failover for array LUNs in an HA pair
Guidelines for assigning disks or array LUNs to SyncMirror pools
How you use the wildcard character with the disk ownership commands
Managing array LUNs using Data ONTAP
Systems running Data ONTAP that can use array LUNs on storage arrays
Array LUN name format
Guidelines for adding storage to a storage system that uses array LUNs
Checking the checksum type of spare array LUNs
Changing the checksum type of an array LUN
Prerequisites to reconfiguring an array LUN on the storage array
Changing array LUN size or composition
Removing one array LUN from use by Data ONTAP
Preparing array LUNs before removing a system running Data ONTAP from service
Commands to display information about your storage
Commands to display drive and array LUN information
Commands to display space information
Commands to display storage subsystem information
Enabling or disabling a host adapter
Introduction to Storage Encryption
What Storage Encryption is
Purpose of the external key management server
How Storage Encryption works
Disk operations with SEDs
Benefits of using Storage Encryption
Data protection in case of disk loss or theft
Data protection when returning disks to vendors
Data protection when moving disks to end-of-life
Data protection through emergency data shredding
Limitations of Storage Encryption
Managing Storage Encryption
Displaying Storage Encryption disk information
Displaying key management server information
Verifying key management server links
Adding key management servers
Removing key management servers
What happens when key management servers are not reachable during the boot process
Changing the authentication key
Retrieving authentication keys
Deleting an authentication key
SSL issues due to expired certificates
Removing old SSL certificates before installing new ones
Installing replacement SSL certificates on the storage system
Returning SEDs to unprotected mode
Destroying data on disks using Storage Encryption
Sanitizing disks using Storage Encryption before return to vendor
Setting the state of disks using Storage Encryption to end-of-life
Emergency shredding of data on disks using Storage Encryption
How Data ONTAP uses RAID to protect your data and data availability
RAID protection levels for disks
What RAID-DP protection is
What RAID4 protection is
RAID protection for array LUNs
Protection provided by RAID and SyncMirror
Understanding RAID disk types
How RAID groups work
How RAID groups are named
About RAID group size
Considerations for sizing RAID groups for drives
Considerations for Data ONTAP RAID groups for array LUNs
How Data ONTAP works with hot spare disks
How many hot spares you should have
What disks can be used as hot spares
What a matching spare is
What an appropriate hot spare is
About degraded mode
How low spare warnings can help you manage your spare drives
How Data ONTAP handles a failed disk with a hot spare
How Data ONTAP handles a failed disk that has no available hot spare
Considerations for changing the timeout RAID option
How RAID-level disk scrubs verify data integrity
How you schedule automatic RAID-level scrubs
How you run a manual RAID-level scrub
Customizing the size of your RAID groups
Controlling the impact of RAID operations on system performance
Controlling the performance impact of RAID data reconstruction
Controlling the performance impact of RAID-level scrubbing
Controlling the performance impact of plex resynchronization
Controlling the performance impact of mirror verification
What aggregates are
Introduction to 64-bit and 32-bit aggregate formats
Best practices for expanding a 32-bit aggregate to 64-bit
How unmirrored aggregates work
How mirrored aggregates work
What a Flash Pool aggregate is
How Flash Pool aggregates work
Requirements for using Flash Pool aggregates
How Flash Pool aggregates and Flash Cache compare
About read and write caching for Flash Pool aggregates
How the available Flash Pool cache capacity is calculated
Restrictions for using aggregates composed of SSDs
How you can use disks with mixed speeds in the same aggregate
How to control disk selection from heterogeneous storage
Rules for mixing HDD types in aggregates
Rules for mixing drive types in Flash Pool aggregates
How drive checksum types affect aggregate and spare management
Rules for mixing storage in array LUN aggregates
How the checksum type is determined for array LUN aggregates
Understanding the root aggregate
Managing aggregate Snapshot copies
Considerations for increasing the aggregate Snapshot reserve
Managing aggregates
Creating an aggregate
Creating a Flash Pool aggregate
Determining and enabling volume write-caching eligibility
Using the Automated Workflow Analyzer (AWA) feature to optimize Flash Pool cache size
Changing the RAID type of RAID groups in a Flash Pool aggregate
Increasing the size of an aggregate
What happens when you add storage to an aggregate
Forcibly adding disks to aggregates
Expanding an aggregate to 64-bit without adding storage
Taking an aggregate offline
Bringing an aggregate online
Putting an aggregate into restricted state
Changing the RAID level of an aggregate
Changing an aggregate's RAID level from RAID4 to RAID-DP
Changing an aggregate's RAID level from RAID-DP to RAID4
Destroying an aggregate
Restoring a destroyed aggregate
Physically moving an aggregate composed of disks
Moving an aggregate composed of array LUNs
Using volumes
How FlexVol volumes work
Differences between 64-bit and 32-bit FlexVol volumes
Interoperability between 64-bit and 32-bit FlexVol volumes
How traditional volumes work
How the volume language attribute affects data visibility and availability
How file access protocols affect what language to use for your volumes
How to manage duplicate volume names
Volume states and status
How security styles affect data access
Improving client performance with traditional and lease oplocks
How Data ONTAP can automatically provide more space for full FlexVol volumes
Considerations for changing the maximum number of files allowed on a volume
Cautions for increasing the maximum directory size for FlexVol volumes
Understanding the root volume
Recommendations for the root volume
Special system files
General volume operations
Migrating from traditional volumes to FlexVol volumes
Preparing your destination volume
Migrating your data
Completing the migration
Putting a volume into restricted state
Taking a volume offline
Bringing a volume online
Renaming a volume
Destroying a volume
Displaying file or inode usage
Changing the maximum number of files allowed in a volume
Changing the language for a volume
Changing the root volume
FlexVol volume operations
Creating a FlexVol volume
Resizing a FlexVol volume
Displaying the containing aggregate for a FlexVol volume
Traditional volume operations
Creating a traditional volume
Using FlexCache volumes to accelerate data access
How FlexCache volumes serve read requests
FlexCache hardware and software requirements
Limitations of FlexCache volumes
Types of volumes you can use for FlexCache
How the FlexCache Autogrow capability works
How FlexCache volumes use space management
How FlexCache volumes share space with other volumes
Methods to view FlexCache statistics
What happens when connectivity to the origin system is lost
How the NFS export status of the origin volume affects FlexCache access
How FlexCache caching works
What a cached file contains
How data changes affect FlexCache volumes
How cache consistency is achieved
What cache hits and misses are
Typical FlexCache deployments
WAN deployment
LAN deployment
Using FlexCache volumes to cache clustered Data ONTAP volumes
About using LUNs in FlexCache volumes
What FlexCache status messages mean
How FlexCache volumes connect to their origin volume
FlexCache volume operations
Creating FlexCache volumes
Displaying free space for FlexCache volumes
Configuring the FlexCache Autogrow capability
Flushing files from FlexCache volumes
Displaying FlexCache client statistics
Displaying FlexCache server statistics
Displaying FlexCache status
Using FlexClone volumes to create efficient copies of your FlexVol volumes
Understanding FlexClone volumes
FlexClone volumes and space guarantees
How to identify shared Snapshot copies in FlexClone volumes
FlexClone volumes and shared Snapshot copies
How you use volume SnapMirror replication with FlexClone volumes
Considerations for creating a FlexClone volume from a SnapMirror source or destination volume
How splitting a FlexClone volume from its parent works
FlexClone volumes and LUNs
FlexClone volume operations
Creating a FlexClone volume
Splitting a FlexClone volume from its parent
Determining the parent volume and base Snapshot copy for a FlexClone volume
Determining the space used by a FlexClone volume
Moving your volumes nondisruptively
Ways to use volume move
Requirements for performing a volume move
How the setup phase of volume move works
How the data copy phase of volume move works
How the cutover phase of volume move works
Performing the volume move operation
Pausing the volume move operation
Resuming the volume move operation
Monitoring the volume move status
Performing manual cutover of the volume move operation
Canceling the volume move operation
Using FlexClone files and FlexClone LUNs to create efficient copies of files and LUNs
Benefits of FlexClone files and FlexClone LUNs
How FlexClone files and FlexClone LUNs work
Considerations for working with FlexClone files and FlexClone LUNs
Creating a FlexClone file or FlexClone LUN
Viewing the space savings due to FlexClone files and FlexClone LUNs
Features supported with FlexClone files and FlexClone LUNs
How deduplication works with FlexClone files and FlexClone LUNs
How Snapshot copies work with FlexClone files and FlexClone LUNs
How access control lists work with FlexClone files and FlexClone LUNs
How vfiler units work with FlexClone files and FlexClone LUNs
How quotas work with FlexClone files and FlexClone LUNs
How FlexClone volumes work with FlexClone files and FlexClone LUNs
How NDMP works with FlexClone files and FlexClone LUNs
How synchronous SnapMirror works with FlexClone files and FlexClone LUNs
How volume SnapMirror works with FlexClone files and FlexClone LUNs
How qtree SnapMirror and SnapVault work with FlexClone files and FlexClone LUNs
How volume move affects FlexClone files and FlexClone LUNs
How volume copy works with FlexClone files and FlexClone LUNs
How space reservation works with FlexClone files and FlexClone LUNs
How an HA configuration works with FlexClone files and FlexClone LUNs
Using deduplication and data compression to increase storage efficiency
How to set up efficiency operations
Configuring deduplication
How deduplication works
What deduplication metadata is
Guidelines for using deduplication
Enabling deduplication on a volume
Disabling deduplication on a volume
Configuring data compression
How data compression works
How data compression detects incompressible data and saves system resources
Enabling data compression on a volume
Disabling data compression on a volume
Managing volume efficiency operations using schedules
Modifying scheduling of efficiency operations
Running efficiency operations manually
Running efficiency operations depending on the amount of new data written
Using checkpoints to resume efficiency operation
Running efficiency operations manually on existing data
Monitoring volume efficiency operations
Viewing the status of efficiency operations on a FlexVol volume
Viewing efficiency space savings on a FlexVol volume
Stopping volume efficiency operations
Information about removing space savings from a volume
Deduplication interoperability with Data ONTAP features
How fractional reserve works with deduplication
How Snapshot copies work with deduplication
How volume SnapMirror works with deduplication
How qtree SnapMirror works with deduplication
How SnapVault works with deduplication
How tape backup works with deduplication
How SnapRestore works with deduplication
How MetroCluster configurations work with deduplication
How OnCommand Unified Manager server works with deduplication
How volume copy works with deduplication
How deduplication works with data compression
How FlexClone volumes work with deduplication
How HA pairs work with deduplication
How vfiler units work with deduplication
How Data Motion for Volumes works with deduplication
Data compression interoperability with Data ONTAP features
How fractional reserve works with data compression
How Snapshot copies work with data compression
How volume SnapMirror works with data compression
How qtree SnapMirror works with data compression
How SnapVault works with data compression
How tape backup works with data compression
How SnapLock works with data compression
How volume-based SnapRestore works with data compression
How single file SnapRestore works with data compression
How MetroCluster configurations work with data compression
How volume copy works with data compression
How aggregate copy works with data compression
How deduplication works with data compression
How FlexClone volumes work with data compression
How FlexClone files work with data compression
How HA pairs work with data compression
How Performance Acceleration Module and Flash Cache cards work with data compression
How vfiler units work with data compression
How Data Motion for Volumes works with data compression
How Flash Pool aggregates work with data compression
How you use space management capabilities
How volume guarantees work with FlexVol volumes
Enabling guarantees for FlexVol volumes
How the guarantee affects FlexVol volume space requirements
How file and LUN reservations work
Considerations for setting fractional reserve
How Data ONTAP can automatically provide more space for full FlexVol volumes
Selecting the first method to increase space for full FlexVol volumes
Deleting Snapshot copies automatically
How a FlexVol volume can automatically change its size
Configuring a FlexVol volume to automatically change its size
Requirements for enabling both autoshrink and automatic Snapshot copy deletion
How the autoshrink functionality interacts with Snapshot copy deletion
Considerations for using thin provisioning with FlexVol volumes
How to determine space usage in a volume or aggregate
How to determine space usage in an aggregate
How you can determine and control a volume's space usage in the aggregate
How you can determine and control space usage in a volume
How Snapshot copies and Snapshot reserve use space in a volume
When to use the df command and the space usage commands
Methods to create space in a FlexVol volume
Methods to create space in an aggregate
About qtrees
When to use qtrees
How qtrees compare with FlexVol volumes
Qtree name restrictions
Managing qtrees
Creating a qtree
Displaying qtree status
Displaying qtree access statistics
Converting a directory to a qtree
Converting a directory to a qtree using a Windows client
Converting a directory to a qtree using a UNIX client
Deleting a qtree
Renaming a qtree
About quotas
Why you use quotas
Overview of the quota process
Understanding quota notifications
Quota targets and types
Special kinds of quotas
How default quotas work
How you use explicit quotas
How derived quotas work
How you use tracking quotas
How quotas are applied
How quotas work with users and groups
How you specify UNIX users for quotas
How you specify Windows users for quotas
How default user and group quotas create derived quotas
How quotas are applied to the root user
How quotas work with special Windows groups
How quotas are applied to users with multiple IDs
How Data ONTAP determines user IDs in a mixed environment
How quotas with multiple users work
How you link UNIX and Windows names for quotas
How quotas work with qtrees
How tree quotas work
How user and group quotas work with qtrees
How default tree quotas on a volume create derived tree quotas
How default user quotas on a volume affect quotas for the qtrees in that volume
How qtree changes affect quotas
How deleting a qtree affects tree quotas
How renaming a qtree affects quotas
How changing the security style of a qtree affects user quotas
Differences among hard, soft, and threshold quotas
How the quotas file works
The syntax of quota entries
How Data ONTAP reads the quotas file
What character encodings are supported by the quotas file
Sample quotas file
How quotas are activated
When you can use resizing
When a full quota reinitialization is required
How quotas work with vfiler units
How quota reports work
What fields quota reports contain
How quota report options affect quota reports
How the ID field is displayed in quota reports
How you can use the quota report to see what quotas are in effect
Difference in space usage displayed by a quota report and a UNIX client
How a quota report accounts for disk space and file usage
How the ls command accounts for space usage
How the df command accounts for file size
How the du command accounts for space usage
Difference between a quota report and the df command in accounting space usage of CIFS sparse files
Examples of using quotas
Managing quotas
Activating quotas
Reinitializing quotas
Deactivating quotas
Canceling quota initialization
Resizing quotas
Deleting quotas
Deleting a quota by removing resource restrictions
Deleting a quota by removing the quotas file entry
Managing quota message logging
Displaying a quota report
Using the quota report to determine which quotas limit writes to a specific file
Storage limits
Copyright information
Trademark information
Index

Preface

About this guide

This document applies to IBM N series systems running Data ONTAP. If the term 7-Mode is used in the document, it refers to Data ONTAP operating in 7-Mode, which has the same features and functionality found in the prior Data ONTAP 7.1, 7.2, and 7.3 release families.

Starting with Data ONTAP 8.2.1, the capability of using LUNs on a storage array, formerly identified as gateway functionality, has a new name: Data ONTAP FlexArray Virtualization Software. In this document, the term gateway describes IBM N series storage systems prior to Data ONTAP 8.2.1 that were ordered with the FlexArray Virtualization Software functionality (formerly known as gateway functionality). Gateways support various types of storage and are used with third-party disk storage systems, for example, disk storage systems from IBM, HP, Hitachi Data Systems, and EMC. In this case, disk storage for customer data and the RAID controller functionality are provided by the back-end disk storage system. A gateway might also be used with disk storage expansion units specifically designed for the IBM N series models.

Although the gateway product name has been retired as of Data ONTAP 8.2.1, Data ONTAP can access LUNs on all the same storage arrays as before, with the same functionality that the gateway product provided.

The term filer describes IBM N series storage systems that either contain internal disk storage or attach to disk storage expansion units specifically designed for the IBM N series storage systems. New filer storage systems released with Data ONTAP 8.2.1 and later can support third-party disk storage systems if the appropriate license is installed.

Supported features

IBM System Storage N series storage systems are driven by NetApp Data ONTAP software. Some features described in the product software documentation are neither offered nor supported by IBM. Contact your local IBM representative or reseller for further details. Information about supported features can also be found on the N series support website (accessed and navigated as described in Websites on page 18).

Websites

IBM maintains pages on the World Wide Web where you can get the latest technical information and download device drivers and updates. The following web pages provide N series information:

- A listing of currently available N series products and features can be found at the following web page: www.ibm.com/storage/nas/
- The IBM System Storage N series support website requires users to register in order to obtain access to N series support content on the web. To understand how the N series support web content is organized and navigated, and to access the N series support website, refer to the following publicly accessible web page: www.ibm.com/storage/support/nseries/ This web page also provides links to AutoSupport information as well as other important N series product resources.
- IBM System Storage N series products attach to a variety of servers and operating systems. To determine the latest supported attachments, go to the IBM N series interoperability matrix at the following web page: www.ibm.com/systems/storage/network/interophome.html
- For the latest N series hardware product documentation, including planning, installation and setup, and hardware monitoring, service and diagnostics, see the IBM N series Information Center at the following web page: publib.boulder.ibm.com/infocenter/nasinfo/nseries/index.jsp

Getting information, help, and service

If you need help, service, or technical assistance or just want more information about IBM products, you will find a wide variety of sources available from IBM to assist you. This section contains information about where to go for additional information about IBM and IBM products, what to do if you experience a problem with your IBM N series product, and whom to call for service, if it is necessary.

Before you call

Before you call, make sure you have taken these steps to try to solve the problem yourself:

- Check all cables to make sure they are connected.
- Check the power switches to make sure the system is turned on.
- Use the troubleshooting information in your system documentation and use the diagnostic tools that come with your system.
- Refer to the N series support website (accessed and navigated as described in Websites on page 18) for information on known problems and limitations.

Using the documentation

The latest versions of N series software documentation, including Data ONTAP and other software products, are available on the N series support website (accessed and navigated as described in Websites on page 18).

Current N series hardware product documentation is shipped with your hardware product in printed documents or as PDF files on a documentation CD. For the latest N series hardware product documentation PDFs, go to the N series support website.

Hardware documentation, including planning, installation and setup, and hardware monitoring, service, and diagnostics, is also provided in an IBM N series Information Center at the following web page: publib.boulder.ibm.com/infocenter/nasinfo/nseries/index.jsp

Hardware service and support

You can receive hardware service through IBM Integrated Technology Services. Visit the following web page for support telephone numbers: www.ibm.com/planetwide/

Firmware updates

IBM N series product firmware is embedded in Data ONTAP. As with all devices, ensure that you run the latest level of firmware. Any firmware updates are posted to the N series support website (accessed and navigated as described in Websites on page 18).

Note: If you do not see new firmware updates on the N series support website, you are running the latest level of firmware. Verify that the latest level of firmware is installed on your machine before contacting IBM for technical support.

How to send your comments

Your feedback helps us to provide the most accurate and high-quality information. If you have comments or suggestions for improving this document, send them by email to starpubs@us.ibm.com. Be sure to include the following:

- Exact publication title
- Publication form number (for example, GC26-1234-02)
- Page, table, or illustration numbers
- A detailed description of any information that should be changed

Data ONTAP storage architecture overview

Storage architecture refers to how Data ONTAP provides data storage resources to host or client systems and applications. Data ONTAP distinguishes between the physical layer of data storage resources and the logical layer.

The physical layer includes drives, array LUNs, RAID groups, plexes, and aggregates.

Note: A drive (or disk) is the basic unit of storage for storage systems that use Data ONTAP to access native disk shelves. An array LUN is the basic unit of storage that a storage array provides to a storage system that runs Data ONTAP.

The logical layer includes the file systems (volumes, qtrees, and logical unit numbers, or LUNs) and the directories and files that store data.

Note: LUNs are storage target devices in iSCSI and FC networks.

Aggregates provide storage to volumes. Aggregates can be composed of either drives or array LUNs, but not both. Data ONTAP organizes the drives or array LUNs in an aggregate into one or more RAID groups. RAID groups are then collected into one or two plexes, depending on whether RAID-level mirroring (SyncMirror) is in use. Aggregates can have two formats: 32-bit and 64-bit. An aggregate's format affects its maximum size.

Volumes are data containers. Clients can access the data in volumes through the access protocols supported by Data ONTAP. These protocols include Network File System (NFS), Common Internet File System (CIFS), HyperText Transfer Protocol (HTTP), Web-based Distributed Authoring and Versioning (WebDAV), Fibre Channel (FC), and Internet SCSI (iSCSI).

You can partition volumes and control resource usage using qtrees. You can create LUNs for use in a SAN environment, using the FC or iSCSI access protocols. Volumes, qtrees, and LUNs contain directories and files.

Related concepts
Managing disks using Data ONTAP on page 22
Managing array LUNs using Data ONTAP on page 78
How Data ONTAP uses RAID to protect your data and data availability on page 111
What aggregates are on page 132
Using volumes on page 174
About qtrees on page 309
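To make the physical-to-logical layering described above concrete, here is a minimal console sketch that builds each layer in turn. The object names (aggr1, vol1, qt1, lun0), disk count, and sizes are hypothetical; the exact options for each command are covered in the chapters referenced above:

    aggr create aggr1 -t raid_dp 16                    # physical layer: aggregate from 16 drives, organized into RAID groups
    vol create vol1 aggr1 500g                         # logical layer: FlexVol volume backed by the aggregate
    qtree create /vol/vol1/qt1                         # qtree partitions the volume and controls resource usage
    lun create -s 100g -t windows /vol/vol1/qt1/lun0   # LUN served to SAN hosts over FC or iSCSI

The aggr command builds the physical layer; the vol, qtree, and lun commands carve logical containers out of it.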

Managing disks using Data ONTAP

Disks (sometimes also called drives) provide the basic unit of storage for storage systems running Data ONTAP that use native storage shelves. Understanding how Data ONTAP uses and classifies disks will help you manage your storage more effectively.

How Data ONTAP reports disk types

Data ONTAP associates a type with every disk. Data ONTAP reports some disk types differently than the industry standards; you should understand how Data ONTAP disk types map to industry standards to avoid confusion.

When Data ONTAP documentation refers to a disk type, it is the type used by Data ONTAP unless otherwise specified. RAID disk types denote the role a specific disk plays for RAID. RAID disk types are not related to Data ONTAP disk types.

For a specific configuration, the disk types supported depend on the storage system model, the shelf type, and the I/O modules installed in the system. For more information about the types of disks supported by your configuration, see the IBM System Storage N series Introduction and Planning Guide at the N series support website (accessed and navigated as described in Websites on page 18).

The following tables show how Data ONTAP disk types map to industry standard disk types for the SAS and FC storage connection types, and for storage arrays.

Table 1: SAS-connected storage

Data ONTAP disk type | Disk class | Industry standard disk type | Description
BSAS | Capacity | SATA | Bridged SAS-SATA disks with added hardware to enable them to be plugged into a SAS-connected storage shelf
FSAS | Capacity | NL-SAS | Near Line SAS
MSATA | Capacity | SATA | SATA disk in multi-disk carrier storage shelf
SAS | Performance | SAS | Serial-Attached SCSI
SSD | Ultra-performance | SSD | Solid-state drives

Table 2: FC-connected storage

Data ONTAP disk type | Disk class | Industry standard disk type | Description
ATA | Capacity | SATA | -
FCAL | Performance | FC | -

Table 3: Storage arrays

Data ONTAP disk type | Disk class | Industry standard disk type | Description
LUN | N/A | LUN | A logical storage device backed by storage arrays and used by Data ONTAP as a disk. These LUNs are referred to as array LUNs to distinguish them from the LUNs that Data ONTAP serves to clients.

For information about best practices for working with different types of disks, see Technical Report 3437: Storage Subsystem Resiliency Guide.

Note: This technical report contains information about NetApp products that IBM licenses and in some cases customizes. Technical reports might contain information about models and features that are not supported by IBM.

Related concepts
Rules for mixing HDD types in aggregates on page 142

Related information
TR 3437: Storage Subsystem Resiliency Guide
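As a quick way to see how these types are reported on a live system, you can list disk information from the console. This is a sketch; the exact output columns vary by release and platform:

    sysconfig -r        # RAID configuration; each disk line includes a type field such as SAS, BSAS, FCAL, or SSD
    storage show disk   # one line per disk, with vendor, model, and connection details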

Storage connection types and topologies supported by Data ONTAP

Data ONTAP supports two storage connection types: Serial-Attached SCSI (SAS) and Fibre Channel (FC). The FC connection type supports three topologies: arbitrated loop, switched, and point-to-point.

- SAS, BSAS, FSAS, SSD, and MSATA disks use the SAS connection type. SAS-connected storage shelves are connected to the controller on a daisy chain called a stack.
- FC and ATA disks use the FC connection type with an arbitrated-loop topology (FC-AL). FC-connected storage shelves are connected to the controller on a loop.
- Array LUNs use the FC connection type, with either the point-to-point or switched topology.

You cannot combine different connection types in the same loop or stack. However, for MetroCluster configurations, the FC and SAS connection types can be combined in a bridged connection, with FC on the controller side and SAS on the shelf side. The bridged connection can be used in either a direct-attached or switched topology. For more information, see Configuring a MetroCluster system with SAS disk shelves and FibreBridge 6500N bridges.

How disks can be combined for the SAS storage connection type

You can combine SAS-connected storage shelves containing performance disks and SAS-connected storage shelves containing capacity disks within the same stack, although this configuration is not recommended.

Each SAS-connected storage shelf can contain only one class of disk (capacity, performance, or SSDs). The only exception to this rule is if the shelf is being used for a Flash Pool aggregate. In that case, for some SSD sizes and shelf models, you can combine SSDs and HDDs in the same shelf. For more information, see the IBM System Storage N series Introduction and Planning Guide.

How disks can be combined for the FC-AL storage connection type

You cannot combine storage shelves containing FC disks and storage shelves containing ATA disks in the same loop.

Methods of calculating aggregate and system capacity

You use the physical and usable capacity of the drives you employ in your storage systems to ensure that your storage architecture conforms to the overall system capacity limits and the size limits of your aggregates.

To maintain compatibility across different brands of drives, Data ONTAP rounds down (right-sizes) the amount of space available for user data. In addition, the numerical base used to calculate capacity (base 2 or base 10) also impacts sizing information. For these reasons, it is important to use the correct size measurement, depending on the task you want to accomplish:

- For calculating overall system capacity, you use the physical capacity of the drive, and count every drive that is owned by the storage system.
- For calculating how many drives you can put into an aggregate before you exceed its maximum size, you use the right-sized, or usable, capacity of all data drives in that aggregate. Parity, dparity, and cache drives are not counted against the maximum aggregate size.

For example, a drive marketed as 2 TB holds 2 x 10^12 bytes in base 10, but only about 1.82 TiB when measured in base 2 (2 x 10^12 / 2^40), and right-sizing reduces the usable capacity further.

To see the physical and usable capacity for a specific drive, see the IBM System Storage N series Introduction and Planning Guide at the N series support website (accessed and navigated as described in Websites on page 18).

Disk speeds supported by Data ONTAP

For hard disk drives, which use rotating media, speed is measured in revolutions per minute (RPM). Faster drives provide more input/output operations per second (IOPS) and faster response time. It is best to use disks of the same speed in an aggregate.

Data ONTAP supports the following rotational speeds for hard disk drives:

- Performance disks (SAS-connected): 10K RPM, 15K RPM
- Capacity disks (SAS-connected): 7.2K RPM
- Performance disks (FC-connected): 10K RPM, 15K RPM
- Capacity disks (FC-connected): 7.2K RPM

Solid-state drives, or SSDs, are flash media-based devices, and therefore the concept of rotational speed does not apply to them.

For more information about which disks are supported with specific hardware configurations, see the IBM System Storage N series Introduction and Planning Guide at the N series support website (accessed and navigated as described in Websites on page 18).

Related concepts
How you can use disks with mixed speeds in the same aggregate on page 140
What aggregates are on page 132

How drive checksum types affect aggregate and spare management

There are two checksum types available for drives used by Data ONTAP: BCS (block) and AZCS (zoned). Understanding how the checksum types differ and how they impact storage management enables you to manage your storage more effectively.

Both checksum types provide the same resiliency capabilities. BCS optimizes for data access speed, and reserves the smallest amount of capacity for the checksum for drives with 520-byte sectors. AZCS provides enhanced storage utilization and capacity for drives with 512-byte sectors. You cannot change the checksum type of a drive. To determine the checksum type of a specific drive model, see the IBM System Storage N series Introduction and Planning Guide.

Aggregates have a checksum type, which is determined by the checksum type of the drives or array LUNs that compose the aggregate. The following configuration rules apply to aggregates, drives, and checksums:

- Checksum types cannot be combined within RAID groups. This means that you must consider checksum type when you provide hot spare drives.
- When you add storage to an aggregate, if it has a different checksum type than the storage in the RAID group to which it would normally be added, Data ONTAP creates a new RAID group.
- An aggregate can have RAID groups of both checksum types. These aggregates have a checksum type of mixed.
- For mirrored aggregates, both plexes must have the same checksum type.
- Drives of a different checksum type cannot be used to replace a failed drive.
- You cannot change the checksum type of a drive.

Drive name formats

Each drive has a name that differentiates it from all other drives. Drive names have different formats, depending on the connection type (FC or SAS) and how the drive is attached.

The following list shows the various formats for drive names, depending on how they are connected to the storage system.

Note: For internal drives, the slot number is zero, and the internal port number depends on the system model.

SAS, direct-attached
Drive name: <slot><port>.<shelfID>.<bay>
Examples: The drive in shelf 2, bay 11, connected to onboard port 0a, is named 0a.2.11. The drive in shelf 6, bay 3, connected to an HBA in slot 1, port c, is named 1c.6.3.

SAS, direct-attached in multi-disk carrier disk shelf
Drive name: <slot><port>.<shelfID>.<bay>L<carrierPosition>
The carrier position is 1 or 2.

SAS, bridge-attached (FibreBridge, used for MetroCluster configurations)
Drive name: <slot><port>.<loopID>L<LUN>
Example: The drive with LUN 2 behind the bridge connected to port a in slot 3, loop ID 125, is named 3a.125L2.

SAS, bridge-attached through a switch (FibreBridge, used for MetroCluster configurations)
Drive name: <switch_name>:<switch_port>.<loopID>L<LUN>
Example: The drive with LUN 5 behind the bridge connected to port 2 of switch brcd44, loop ID 126, is named brcd44:2.126L5.

FC, direct-attached
Drive name: <slot><port>.<loopID>
Examples: The drive with loop ID 19 (bay 3 of shelf 1) connected to onboard port 0a is named 0a.19. The drive with loop ID 34 connected to an HBA in slot 8, port c, is named 8c.34.

FC, switch-attached
Drive name: <switch_name>.<switch_port>.<loopID>
Example: The drive with loop ID 51 connected to port 3 of switch SW7 is named SW7.3.51.

For information about determining the LUN for drives using the FibreBridge connection architecture, see Configuring a MetroCluster system with SAS disk shelves and FibreBridge 6500N bridges.

Loop IDs for FC-AL connected disks

For disks connected using Fibre Channel-Arbitrated Loop (FC-AL or FC), the loop ID is an integer between 16 and 126. The loop ID identifies the disk within its loop, and is included in the disk name, which identifies the disk uniquely for the entire system.

The loop ID corresponds to the storage shelf number and the bay in which the disk is installed. The lowest loop ID is always in the far right bay of the first storage shelf. The next higher loop ID is in the next bay to the left, and so on. You can view the device map for your storage shelves by using the fcadmin device_map command. For more information about the loop ID map for your storage shelf, see the hardware guide for the storage shelf.

Understanding RAID disk types

Data ONTAP classifies disks as one of four types for RAID: data, hot spare, parity, or dparity. The RAID disk type is determined by how RAID is using a disk; it is different from the Data ONTAP disk type.

Data disk: Holds data stored on behalf of clients within RAID groups (and any data generated about the state of the storage system as a result of a malfunction).
Spare disk: Does not hold usable data, but is available to be added to a RAID group in an aggregate. Any functioning disk that is not assigned to an aggregate but is assigned to a system functions as a hot spare disk.
Parity disk: Stores row parity information that is used for data reconstruction when a single disk drive fails within the RAID group.
dparity disk: Stores diagonal parity information that is used for data reconstruction when two disk drives fail within the RAID group, if RAID-DP is enabled.
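To see these RAID disk types on a running system, you can display the RAID layout and the spare pool from the console (a sketch; the column layout varies by release):

    aggr status -r   # per-aggregate RAID layout; each disk is labeled dparity, parity, or data
    aggr status -s   # lists the hot spare disks available to the system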

How disk sanitization works

Disk sanitization is the process of physically obliterating data by overwriting disks or SSDs with specified byte patterns or random data so that recovery of the original data becomes impossible. You use the sanitization process to ensure that no one can recover the data on the disks.

Related tasks
Using disk sanitization to remove data from disks on page 49
Destroying data on disks using Storage Encryption on page 107

Disk sanitization process

Understanding the basics of the disk sanitization process helps you understand what to anticipate during the sanitization process and after it is complete.

The disk sanitization process uses three successive default or user-specified byte overwrite patterns for up to seven cycles per operation. The random overwrite pattern is repeated for each cycle. Depending on the disk capacity, the patterns, and the number of cycles, the process can take several hours. Sanitization runs in the background. You can start, stop, and display the status of the sanitization process.

The sanitization process contains two phases:

1. Formatting phase
The operation performed for the formatting phase depends on the class of disk being sanitized, as shown in the following table:

Disk class | Formatting phase
Capacity HDDs | Skipped
Performance HDDs | SCSI format operation
SSDs | SCSI sanitize operation

2. Pattern overwrite phase
The specified overwrite patterns are repeated for the specified number of cycles.

When the sanitization process is complete, the specified disks are in a sanitized state. They are not returned to spare status automatically. You must return the sanitized disks to the spare pool before the newly sanitized disks are available to be added to another aggregate.
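As a minimal sketch of the start/monitor/release cycle described above (the disk name, patterns, and cycle count are illustrative; see Using disk sanitization to remove data from disks on page 49 for the complete procedure and its prerequisites):

    disk sanitize start -p 0x55 -p 0xaa -p 0x3c -c 3 0a.2.11   # overwrite with three patterns for three cycles
    disk sanitize status                                        # display the progress of the sanitization
    disk sanitize release 0a.2.11                               # return the sanitized disk to the spare pool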

When disk sanitization cannot be performed

Disk sanitization is not supported for all disk types. In addition, there are times when disk sanitization cannot be performed. You should be aware of the following facts about the disk sanitization process:

- It is not supported on all SSD part numbers. For information about which SSD part numbers support disk sanitization, see the IBM System Storage N series Introduction and Planning Guide at the N series support website (accessed and navigated as described in Websites on page 18).
- It is not supported in takeover mode for systems in an HA pair.
- It cannot be performed on disks that were failed due to readability or writability problems.
- It cannot be performed on disks that belong to an SEC 17a-4-compliant SnapLock volume until the expiration periods on all files have expired; that is, all of the files have reached their retention dates.
- It does not perform its formatting phase on ATA drives.
- If you are using the random pattern, it cannot be performed on more than 100 disks at one time.
- It is not supported on array LUNs.
- If you sanitize both SES disks in the same ESH shelf at the same time, you see errors on the console about access to that shelf, and shelf warnings are not reported for the duration of the sanitization. However, data access to that shelf is not interrupted.

You can perform disk sanitization on disks using Storage Encryption. However, there are other methods to obliterate data on disks using Storage Encryption that are faster and do not require an operational storage system.

What happens if disk sanitization is interrupted

Disk sanitization is a long-running operation. If disk sanitization is interrupted by user intervention or an unexpected event such as a power outage, Data ONTAP takes action to return the disks that were being sanitized to a known state, but you must also take action before the sanitization process can finish.

If the sanitization process is interrupted by power failure, system panic, or manual intervention, the sanitization process must be repeated from the beginning. The disk is not designated as sanitized.

If the formatting phase of disk sanitization is interrupted, Data ONTAP must recover any disks that were corrupted by the interruption. After a system reboot and once every hour, Data ONTAP checks for any sanitization target disk that did not complete the formatting phase of its sanitization. If any such disks are found, Data ONTAP recovers them. The recovery method depends on the type of the disk. After a disk is recovered, you can rerun the sanitization process on that disk; for HDDs, you can use the -s option to specify that the formatting phase is not repeated again.
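For example, a sketch of rerunning sanitization on a recovered HDD without repeating the formatting phase, assuming the -s option mentioned above is given to the start command (the disk name is illustrative):

    disk sanitize start -s 0a.2.11   # rerun sanitization, skipping the formatting phase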

How selective disk sanitization works

Selective disk sanitization consists of physically obliterating data in specified files or volumes while preserving all other data located on the affected aggregate for continued user access. Because a file can be stored on multiple disks, there are three parts to the process.

To selectively sanitize data contained in an aggregate, you must carry out three general tasks:

1. Delete the files, directories, or volumes that contain the data you want to sanitize from the aggregate that contains them.
2. Migrate the data that you want to preserve to a new set of disks in a destination aggregate on the same storage system. You can migrate data using the ndmpcopy command or qtree SnapMirror (a brief sketch appears at the end of this section).
3. Destroy the original aggregate and sanitize all the disks that were RAID group members in that aggregate.

Related tasks
Removing data from disks using selective disk sanitization on page 52

Tips for creating and backing up aggregates containing data to be sanitized

If you are creating or backing up aggregates to contain data that might need to be sanitized, following some simple guidelines will reduce the time it takes to sanitize your data.

- Make sure your aggregates containing sensitive data are not larger than they need to be. If they are larger than needed, sanitization requires more time, disk space, and bandwidth.
- When you back up aggregates containing sensitive data, avoid backing them up to aggregates that also contain large amounts of nonsensitive data. This reduces the resources required to move nonsensitive data before sanitizing sensitive data.

How Data ONTAP monitors disk performance and health

Data ONTAP continually monitors disks to assess their performance and health. When Data ONTAP encounters certain errors or behaviors from a disk, it takes the disk offline temporarily or takes the disk out of service to run further tests.

What happens when Data ONTAP takes disks offline

Data ONTAP temporarily stops I/O activity to a disk and takes a disk offline when Data ONTAP is updating disk firmware in background mode or when disks become non-responsive. While the disk is offline, Data ONTAP performs a quick check on it to reduce the likelihood of forced disk failures.

A disk can be taken offline only if its containing RAID group is in a normal state and the plex or aggregate is not offline.
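Returning to the selective sanitization tasks listed above, the migration step (task 2) can be sketched with the ndmpcopy command; the volume names are hypothetical, and qtree SnapMirror is an alternative:

    ndmpcopy /vol/vol1 /vol/vol1_new   # copy the data to be preserved to a volume on the destination aggregate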