HP IBRIX X9000 Network Storage System File System User Guide


1 HP IBRIX X9000 Network Storage System File System User Guide Abstract This guide describes how to configure and manage X9000 Software file systems and how to use NFS, CIFS, FTP, and HTTP to access file system data. The guide also describes the following file system features: quotas, remote replication, snapshots, data retention and validation, data tiering, and file allocation. The guide is intended for system administrators managing X9300 Network Storage Gateway systems, X9320 Network Storage Systems, X9720 Network Storage Systems, and X9730 Network Storage Systems. For the latest X9000 guides, browse to In the storage section, select NAS Systems and then select HP X9000 Network Storage Systems from the IBRIX Storage Systems section. HP Part Number: TA Published: June 2012 Edition: 8

2 Copyright 2009, 2012 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR and , Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Acknowledgments Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. UNIX is a registered trademark of The Open Group.
Revision History
Edition 1 (November): Initial release of HP X9000 File Serving Software
Edition 2 (December): Updated license and quotas information
Edition 3 (April): Added information about file cloning, CIFS, directory tree quotas, the Statistics tool, and GUI procedures
Edition 4 (July): Removed information about the Statistics tool
Edition 5 (December): Added information about authentication, CIFS, FTP, HTTP, SSL certificates, and remote replication
Edition 6 (April): Updated CIFS, FTP, HTTP, and snapshot information
Edition 7 (September): Added or updated information about data retention and validation, software snapshots, block snapshots, remote replication, HTTP, case insensitivity, quotas
Edition 8 (June): Added or updated information about file systems, file share creation, rebalancing segments, remote replication, user authentication, CIFS, LDAP, data retention, data tiering, file allocation, quotas, Antivirus software

3 Contents 1 Using X9000 Software file systems...8 File system operations...8 File system building blocks...10 Configuring file systems...10 Accessing file systems Creating and mounting file systems...12 Creating a file system...12 Using 32-bit or 64-bit mode...12 Using the New Filesystem Wizard...12 Configuring additional file system options...16 Creating a file system using the CLI...17 File limit for directories...18 Managing mountpoints and mount/unmount operations...18 GUI procedures...18 CLI procedures...20 Mounting and unmounting file systems locally on X9000 clients...21 Limiting file system access for X9000 clients...22 Using Export Control Setting up quotas...24 How quotas work...24 Enabling quotas on a file system and setting grace periods...24 Setting user and group quotas...25 Setting directory tree quotas...27 Using a quotas file...29 Importing quotas from a file...29 Exporting quotas to a file...29 Format of the quotas file...29 Using online quota check...30 Configuring notifications for quota events...31 Deleting quotas...31 Troubleshooting quotas Maintaining file systems...33 Best practices for file system performance...33 Viewing information about file systems and components...33 Viewing physical volume information...34 Viewing volume group information...34 Viewing logical volume information...35 Viewing file system information...35 Viewing disk space information from a Linux X9000 client...37 Extending a file system...37 Rebalancing segments in a file system...38 How rebalancing works...38 Rebalancing segments on the GUI...39 Rebalancing segments from the CLI...40 Tracking the progress of a rebalance task...40 Viewing the status of rebalance tasks...41 Stopping rebalance tasks...41 Disabling 32-bit mode on a file system...41 Deleting file systems and file system components...41 Deleting a file system...41 Contents 3

4 Deleting segments, volume groups, and physical volumes...42 Deleting file serving nodes and X9000 clients...42 Checking and repairing file systems...42 Analyzing the integrity of a file system on all segments...43 Clearing the INFSCK flag on a file system...44 Troubleshooting file systems...44 ibrix_pv -a discovers too many or too few devices...44 Cannot mount on an X9000 client...44 NFS clients cannot access an exported file system...44 User quota usage data is not being updated...44 File system alert is displayed after a segment is evacuated...45 SegmentNotAvailable is reported...45 SegmentRejected is reported...45 ibrix_fs -c failed with "Bad magic number in super-block" Using NFS...48 Exporting a file system...48 Unexporting a file system...51 Using case-insensitive file systems...51 Setting case insensitivity for all users (NFS/Linux/Windows)...51 Viewing the current setting for case insensitivity...52 Clearing case insensitivity (setting to case sensitive) for all users (NFS/Linux/Windows)...52 Log files...52 Case insensitivity and operations affecting directories Configuring authentication for CIFS, FTP, and HTTP...54 Using Active Directory with LDAP ID mapping...54 Using LDAP as the primary authentication method...55 Requirements for LDAP users and groups...55 Configuring LDAP for X9000 software...55 Configuring authentication from the GUI...56 Viewing or changing authentication settings...64 Configuring authentication from the CLI...65 Configuring Active Directory...65 Configuring LDAP...65 Configuring LDAP ID mapping...66 Configuring Local Users and Groups authentication Using CIFS...69 Configuring file serving nodes for CIFS...69 Starting or stopping the CIFS service and viewing CIFS statistics...69 Monitoring CIFS services...70 CIFS shares...71 Configuring CIFS shares with the GUI...71 Configuring SMB signing...75 Managing CIFS shares with the GUI...76 Configuring and managing CIFS shares with the CLI...77 Managing CIFS shares with Microsoft Management Console...78 Linux static user mapping with Active Directory...83 Configuring Active Directory...83 Assigning attributes...85 Consolidating SMB servers with common share names...86 CIFS clients...87 Viewing quota information...87 Differences in locking behavior...88 CIFS shadow copy Contents

5 Permissions in a cross-protocol CIFS environment...90 How the CIFS server handles UIDs and GIDs...90 Permissions, UIDs/GIDs, and ACLs...91 Changing the way CIFS inherits permissions on files accessed from Linux applications...92 Troubleshooting CIFS Using FTP...94 Best practices for configuring FTP...94 Managing FTP from the GUI...94 Configuring FTP...94 Managing the FTP configuration...98 Managing FTP from the CLI...99 Configuring FTP...99 Managing the FTP configuration...99 The vsftpd service Starting or stopping the FTP service manually Accessing shares Using HTTP Best practices for configuring HTTP Managing HTTP from the GUI Configuring HTTP Managing the HTTP configuration Tuning the socket read block size and file write block size Managing HTTP from the CLI Configuring HTTP Managing the HTTP configuration Starting or stopping the HTTP service manually Accessing shares Configuring Windows clients to access HTTP WebDAV shares Troubleshooting HTTP Managing SSL certificates Creating an SSL certificate Adding a certificate to the cluster Exporting a certificate Deleting a certificate Using remote replication Overview Continuous or run-once replication modes Using intercluster replications Using intracluster replications File system snapshot replication Configuring the target export for replication to a remote cluster GUI procedure CLI procedure Configuring and managing replication tasks on the GUI Viewing replication tasks Starting a replication task Pausing or resuming a replication task Stopping a replication task Configuring and managing replication tasks from the CLI Starting a remote replication task to a remote cluster Starting an intracluster remote replication task Starting a run-once directory replication task Stopping a remote replication task Contents 5

6 Pausing a remote replication task Resuming a remote replication task Querying remote replication tasks Replicating WORM/retained files Configuring remote failover/failback Troubleshooting remote replication Managing data retention and validation Overview WORM and WORM-retained files Data retention attributes for a file system Data validation scans Enabling file systems for data retention and validation Viewing the retention profile for a file system Changing the retention profile for a file system Managing WORM and retained files Creating WORM and WORM-retained files Viewing the retention information for a file File administration Running data validation scans Scheduling a validation scan Starting an on-demand validation scan Viewing, stopping, or pausing a scan Viewing validation scan results Viewing and comparing hash sums for a file Handling validation scan errors Creating data retention reports Generating and managing reports Generating reports from the CLI Using hard links with WORM files Using remote replication Backup support for data retention Troubleshooting data retention Configuring Antivirus support Adding or removing external virus scan engines Enabling or disabling Antivirus on X9000 file systems Updating Antivirus definitions Configuring Antivirus settings Viewing Antivirus statistics Antivirus quarantines and software snapshots Creating X9000 software snapshots File system limits for snap trees and snapshots Configuring snapshot directory trees and schedules Modifying a snapshot schedule Managing software snapshots Taking an on-demand snapshot Determining space used by snapshots Accessing snapshot directories Restoring files from snapshots Deleting snapshots Moving files between snap trees Backing up snapshots Creating block snapshots Setting up snapshots Contents

7 Preparing the snapshot partition Registering for snapshots Discovering LUNs in the array Reviewing snapshot storage allocation Automated block snapshots Creating automated snapshots using the GUI Creating an automated snapshot scheme from the CLI Other automated snapshot procedures Managing block snapshots Creating an on-demand snapshot Mounting or unmounting a snapshot Recovering system resources on snapshot failure Deleting snapshots Viewing snapshot information Accessing snapshot file systems Troubleshooting block snapshots Using data tiering Configuring data tiers Assigning segments to tiers Defining the primary tier Creating a tiering policy for a file system Running a migration task Changing the tiering configuration with the GUI Configuring tiers and migrating data using the CLI Changing the tiering configuration with the CLI Writing tiering rules Rule attributes Operators and date/time qualifiers Rule keywords Migration rule examples Ambiguous rules Using file allocation Overview File allocation policies How file allocation settings are evaluated When file allocation settings take effect on X9000 clients Using CLI commands for file allocation Setting file and directory allocation policies Setting file and directory allocation policies from the CLI Setting segment preferences Creating a pool of preferred segments from the CLI Restoring the default segment preference Tuning allocation policy settings Listing allocation policies Support and other resources Contacting HP Related information HP websites Subscription service Documentation feedback Glossary Index Contents 7

8 1 Using X9000 Software file systems File system operations The following diagram highlights the operating principles of the X9000 file system. The topology in the diagram reflects the architecture of the HP X9320, which uses a building block of server pairs (known as couplets) with SAS attached storage. In the diagram: There are four file serving nodes, SS1 through SS4. These nodes are also called segment servers. SS1 and SS2 share access to segments 1-4 through SAS connections to a shared storage array. SS3 and SS4 share access to segments 5-8 through SAS connections to a shared storage array. One client is accessing the name space using NAS protocols. One client is using the proprietary X9000 client. The following steps correspond to the numbering in the diagram: 1. The namespace of the file system is a collection of segments. Each segment is simply a repository for files and directories with no implicit namespace relationships among them. 8 Using X9000 Software file systems

9 (Specifically, a segment need not be a complete, rooted directory tree). Segments can be any size and different segments can be different sizes. 2. The location of files and directories within particular segments in the file space is independent of their respective and relative locations in the namespace. For example, a directory (Dir1) can be located on one segment, while the files contained in that directory (File1 and File2) are resident on other segments. The selection of segments for placing files and directories is done dynamically when the file/directory is created, as determined by an allocation policy. The allocation policy is set by the system administrator in accordance with the anticipated access patterns and specific criteria relevant to the installation (such as performance and manageability). The allocation policy can be changed at any time, even when the file system is mounted and in use. Files can be redistributed across segments using a rebalancing utility. For example, rebalancing can be used when some segments are too full while others have free capacity, or when files need to be distributed across new segments. 3. Segment servers are responsible for managing individual segments of the file system. Each segment is assigned to one segment server and each server may own multiple segments, as shown by the color coding in the diagram. Segment ownership can be migrated between servers with direct access to the storage volume while the file system is mounted. For example, Seg1 can be migrated between SS1 and SS2 but not to SS3 or SS4. Additional servers can be added to the system dynamically to meet growing performance needs, without adding more capacity, by distributing the ownership of existing segments for proper load balancing and utilization of all servers. Conversely, additional capacity can be added to the file system while in active use without adding more servers; ownership of the new segments is distributed among existing servers. Servers can be configured with failover protection, with other servers being designated as standby servers that automatically take control of a server's segments if a failure occurs. 4. Clients run the applications that use the file system. Clients can access the file system either as a locally mounted cluster file system using the X9000 Client or using standard network attached storage (NAS) protocols such as NFS and Common Internet File System (CIFS). 5. Use of the X9000 Client on a client system has some significant advantages over the NAS approach; specifically, the X9000 Client driver is aware of the segmented architecture of the file system and, based on the file/directory being accessed, can route requests directly to the correct segment server, yielding balanced resource utilization and high performance. However, the X9000 Client is available only for a limited range of operating systems. 6. NAS protocols such as NFS and CIFS offer the benefits of multi-platform support and low cost of administration of client software, as the client drivers for these protocols are generally available with the base operating system. When using NAS protocols, a client must mount the file system from one (or more) of the segment servers. As shown in the diagram, all requests are sent to the server from which the share is mounted, which then performs the required routing. 7. Any segment server in the namespace can access any segment. There are three cases: a.
Selected segment is owned by the segment server initiating the operation (for example, SS1 accessing Seg1). b. Selected segment is owned by another segment server but is directly accessible at the block level by the segment server initiating the operation (for example, SS1 accessing Seg3). c. Selected segment is owned by another segment server and is not directly accessible by the segment server initiating the operation (for example, SS1 accessing Seg5). Each case is handled differently. The data paths are shown in heavy red broken lines in the diagram: a. The segment server initiating the operation services the read or write request to the local segment. b. In this case, reads and writes take different routes: File system operations 9

10 1) The segment server initiating the operation can read files directly from the segment across the SAN; this is called a SAN READ. 2) The segment server initiating the operation routes writes over the IP network to the segment server owning the segment. That server then writes data to the segment. c. All reads and writes must be routed over the IP network between the segment servers. 8. Step 7 assumed that the server had to go to a segment to read a file. However, every segment server that reads a file keeps a copy of it cached in its memory regardless of which segment it was read from (in the diagram, two servers have cached copies of File 1). The cached copies are used to service local read requests for the file until the copy is made invalid, for example, because the original file has been changed. The file system keeps track of which servers have cached copies of a file and manages cache coherency using delegations, which are X9000 file system metadata structures used to track cached copies of data and metadata. File system building blocks A file system is created from building blocks. The first block comprises the underlying physical volumes, which are combined in volume groups. Segments (logical volumes) are created from the volume groups. The built-in volume manager handles all space allocation considerations involved in file system creation. Configuring file systems You can configure your file systems to use the following features: Quotas. This feature allows you to assign quotas to individual users or groups, or to a directory tree. Individual quotas limit the amount of storage or the number of files that a user or group can use in a file system. Directory tree quotas limit the amount of storage and the number of files that can be created on a file system located at a specific directory tree. See Setting up quotas (page 24). Remote replication. This feature provides a method to replicate changes in a source file system on one cluster to a target file system on either the same cluster or a second cluster. See Using remote replication (page 121). 10 Using X9000 Software file systems

11 Data retention and validation. Data retention ensures that files cannot be modified or deleted for a specific retention period. Data validation scans can be used to ensure that files remain unchanged. See Managing data retention and validation (page 134). Antivirus support. This feature is used with supported Antivirus software, allowing you to scan files on an X9000 file system. See Configuring Antivirus support (page 152). X9000 software snapshots. This feature allows you to capture a point-in-time copy of a file system or directory for online backup purposes and to simplify recovery of files from accidental deletion. Users can access the file system or directory as it appeared at the instant of the snapshot. See Creating X9000 software snapshots (page 160). Block Snapshots. This feature uses the array capabilities to capture a point-in-time copy of a file system for online backup purposes and to simplify recovery of files from accidental deletion. The snapshot replicates all file system entities at the time of capture and is managed exactly like any other file system. See Creating block snapshots (page 169). Data tiering. This feature allows you to set a preferred tier where newly created files will be stored. You can then create a tiering policy to move files from initial storage, based on file attributes such as modification time, access time, file size, or file type. See Using data tiering (page 181). File allocation. This feature allocates new files and directories to segments according to the allocation policy and segment preferences that are in effect for a client. An allocation policy is an algorithm that determines the segments that are selected when clients write to a file system. See Using file allocation (page 195). Accessing file systems Clients can use the following standard NAS protocols to access file system data: NFS. See Using NFS (page 48) for more information. CIFS. See Using CIFS (page 69) for more information. FTP. See Using FTP (page 94) for more information. HTTP. See Using HTTP (page 103) for more information. You can also use X9000 clients to access file systems. Typically, these clients are installed during the initial system setup. See the HP IBRIX X9000 Network Storage System Installation Guide for more information. Accessing file systems 11

12 2 Creating and mounting file systems This chapter describes how to create file systems and mount or unmount them. Creating a file system You can create a file system using the New Filesystem Wizard provided with the GUI, or you can use CLI commands. The New Filesystem Wizard also allows you to create an NFS export or a CIFS share for the file system. Using 32-bit or 64-bit mode A file system can be created to use either 32-bit or 64-bit mode. In 32-bit mode, clients can run both 32-bit and 64-bit applications. In 64-bit mode, clients can run only 64-bit applications. If all file system clients (NFS, CIFS, and X9000 clients) will run only 64-bit applications, HP recommends that you use 64-bit mode because more inodes will be available per segment for the applications. For information about enabling 32-bit mode, see Configuring additional file system options (page 16). File systems created with 32-bit mode compatibility can be converted later to allow clients to run 64-bit applications (see Disabling 32-bit mode on a file system (page 41)). This is a one-time-only operation and cannot be reversed. If clients may need to run a 32-bit application, do not disable 32-bit mode. Using the New Filesystem Wizard To start the wizard, click New on the Filesystems top panel. The wizard includes several steps and a summary, starting with selecting the storage for the file system. NOTE: For details about the prompts for each step of the wizard, see the GUI online help. On the Select Storage dialog box, select the storage that will be used for the file system. If your cluster includes storage that has not yet been discovered by the X9000 software, click Discover. 12 Creating and mounting file systems

13 On the Configure Options dialog box, enter a name for the file system, and specify the appropriate configuration options. Creating a file system 13

14 If data retention will be used on the file system, enable it and set the retention policy on the WORM/Data Retention dialog box. See Managing data retention and validation (page 134) for more information. The default retention period determines whether you can manage WORM (non-retained) files as well as WORM-retained files. (WORM (non-retained) files can be deleted at any time; WORM-retained files can be deleted only after the file's retention period has expired.) To manage only WORM-retained files, set the default retention period to a non-zero value. WORM-retained files then use this period by default; however, you can assign a different retention period if desired. To manage both WORM (non-retained) and WORM-retained files, uncheck Set Default Retention Period. The default retention period is then set to 0 seconds. When you make a WORM file retained, you will need to assign a retention period to the file. The Set Auto-Commit Period option specifies that files will become WORM or WORM-retained if they are not changed during the specified period. (If the default retention period is set to zero, the files become WORM. If the default retention period is set to a value greater than zero, the files become WORM-retained.) To use this feature, check Set Auto-Commit Period and specify the time period. The minimum value for the autocommit period is five minutes, and the maximum value is one year. If you plan to keep normal files on the file system, do not set the autocommit period. Optionally, check Enable Data Validation to schedule periodic scans on the file system. Use the default schedule, or click Modify to open the Data Validation Scan Schedule dialog box and configure your own schedule. 14 Creating and mounting file systems

15 If you want to create data retention reports, click Enable Report Data Generation. Use the default schedule, or click Modify to open the Report Data Generation Schedule dialog box and configure your own schedule. Creating a file system 15

16 The Default File Shares page allows you to create an NFS export and/or a CIFS share at the root of the file system. The default settings are used. See Using NFS (page 48) and Using CIFS (page 69) for more information. Review the Summary to ensure that the file system is configured properly. If necessary, you can return to a dialog box and make any corrections. Configuring additional file system options The New Filesystem wizard creates the file system with the default settings for several options. You can change these settings on the Modify Filesystem Properties dialog box, and can also configure data tiering and file allocation on the file system. To open the dialog box, select the file system on the Filesystems panel. Select Summary from the lower Navigator, and then click Modify on the Summary panel. The General tab allows you to enable quotas, Export Control, and 32-bit compatibility mode on the file system. When Export Control is enabled on a file system, by default, X9000 clients have no access to the file system. Instead, the system administrator grants the clients access by executing the ibrix_mount command. Enabling Export Control does not affect access from a file serving node to a file system (and thereby, NFS/CIFS client access). File serving nodes always have RW access. 16 Creating and mounting file systems

17 The Data Retention tab allows you to change the data retention configuration. The file system must be unmounted. See Configuring data retention on existing file systems (page 138) for more information. NOTE: Data retention cannot be enabled on a file system created on X9000 software 5.6 or earlier versions. Instead, create a new file system on X9000 software 6.0 or later, and then copy or move files from the old file system to the new file system. The Allocation, Segment Preference, and Host Allocation tabs are used to modify file allocation policies and to specify segment preferences for file serving nodes and X9000 clients. See Using file allocation (page 195) for more information. Creating a file system using the CLI The ibrix_fs command is used to create a file system. It can be used in the following ways: Create a file system with the specified segments (segments are logical volumes): ibrix_fs -c -f FSNAME -s LVLIST [-a] [-q] [-o OPTION1=VALUE1,OPTION2=VALUE2,...] [-t TIERNAME] [-F FMIN:FMAX:FTOTAL] [-D DMIN:DMAX:DTOTAL] Create a file system and assign specific segments to specific file serving nodes: ibrix_fs -c -f FSNAME -S LV1:HOSTNAME1,LV2:HOSTNAME2,... [-a] [-q] [-o OPTION1=VALUE1,OPTION2=VALUE2,...] [-t TIERNAME] [-F FMIN:FMAX:FTOTAL] [-D DMIN:DMAX:DTOTAL] Create a file system from physical volumes in a single step: ibrix_fs -c -f FSNAME -p PVLIST [-a] [-q] [-o OPTION1=VALUE1,OPTION2=VALUE2,...] [-t TIERNAME] [-F FMIN:FMAX:FTOTAL] [-D DMIN:DMAX:DTOTAL] Creating a file system 17
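As a minimal sketch of the first two forms above (the file system, segment, and node names are hypothetical, and the optional bracketed arguments are omitted):
# Create file system fs1 from two existing segments, letting X9000 Software assign ownership
ibrix_fs -c -f fs1 -s ilv1,ilv2
# Alternatively, create fs1 and assign each segment to a specific file serving node
ibrix_fs -c -f fs1 -S ilv1:node1,ilv2:node2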

18 In the commands, the -t option specifies a tier. TIERNAME can be any alphanumeric, case-sensitive, text string. Tier assignment is not affected by other options that can be set with the ibrix_fs command. NOTE: A tier is created whenever a segment is assigned to it. Be careful to spell the name of the tier correctly when you add segments to an existing tier. If you make an error in the name, a new tier is created with the incorrect tier name, and no error is recognized. To enable data retention on the file system, include the following -o options: -o "retenmode=<mode>,retendefperiod=<period>,retenminperiod=<period>, retenmaxperiod=<period>,retenautocommitperiod=<period>" For example: ibrix_fs -o "retenmode=enterprise,retendefperiod=5m,retenminperiod=2,retenmaxperiod=30y, retenautocommitperiod=1d" -c -f ifs1 -s ilv_[1-4] -a Creating a file system manually from physical volumes This procedure is equivalent to using ibrix_fs to create a file system from physical volumes in a single step. Instead of a single command, you build the file system components individually: 1. Discover the physical volumes in the system. Use the ibrix_pv command. 2. Create volume groups from the discovered physical volumes. Use the ibrix_vg command. 3. Create logical volumes (also called segments) from volume groups. Use the ibrix_lv command. 4. Create the file system from the new logical volumes. Use the ibrix_fs command. See the HP IBRIX X9000 Network Storage System CLI Reference Guide for details about these commands. File limit for directories The maximum number of files in a directory depends on the length of the file names, and also the names themselves. The maximum size of a directory is approximately 4GB (double indirect blocks). An average file name length of eight characters allows about 12 million entries. However, because directories are hashed, it is unlikely that a directory can contain this number of entries. Files with a similar naming pattern are hashed into the same bucket. If that bucket fills up, another file cannot be created there, even if free space is available elsewhere in the directory. A file with a different name may still be created successfully, depending on which bucket the new name hashes to. Managing mountpoints and mount/unmount operations GUI procedures When you use the New Filesystem Wizard to create a file system, you can specify a name for the mountpoint and indicate whether the file system should be mounted after it is created. The wizard will create the mountpoint if necessary. The Filesystems panel shows the file systems created on the cluster. To view the mountpoint information for a file system, select the file system on the Filesystems panel, and click Mountpoints in the lower Navigator. The Mountpoints panel shows the hosts that have mounted the file system, the name of the mountpoint, the access (RW or RO) allowed to the host, and whether the file system is mounted. 18 Creating and mounting file systems

19 To mount or remount a file system, select it on the Filesystems panel and click Mount. You can select several mount options on the Mount Filesystem dialog box. To remount the file system, click remount. The available mount options are: atime: Update the inode access time when a file is accessed nodiratime: Do not update the directory inode access time when the directory is accessed nodquotstatfs: Disable file system reporting based on directory tree quota limits You can also view mountpoint information for a particular server. Select that server on the Servers panel, and select Mountpoints from the lower Navigator. To delete a mountpoint, select that mountpoint and click Delete. Managing mountpoints and mount/unmount operations 19

20 CLI procedures The CLI commands are executed immediately on file serving nodes. For X9000 clients, the command intention is stored in the active Fusion Manager. When X9000 software services start on a client, the client queries the active Fusion Manager for any commands. If the services are already running, you can force the client to query the Fusion Manager by executing either ibrix_client or ibrix_lwmount -a on the client, or by rebooting the client. If you have configured hostgroups for your X9000 clients, you can apply a command to a specific hostgroup. For information about creating hostgroups, see the administration guide for your system. Creating mountpoints Mountpoints must exist before a file system can be mounted. To create a mountpoint on file serving nodes and X9000 clients, use the following command. ibrix_mountpoint -c [-h HOSTLIST] -m MOUNTPOINT To create a mountpoint on a hostgroup, use the following command: ibrix_mountpoint -c -g GROUPLIST -m MOUNTPOINT Deleting mountpoints Before deleting mountpoints, verify that no file systems are mounted on them. To delete a mountpoint from file serving nodes and X9000 clients, use the following command: ibrix_mountpoint -d [-h HOSTLIST] -m MOUNTPOINT To delete a mountpoint from specific hostgroups, use the following command: ibrix_mountpoint -d -g GROUPLIST -m MOUNTPOINT Viewing mountpoint information To view mounted file systems and their mountpoints on all nodes, use the following command: ibrix_mountpoint -l Mounting a file system File system mounts are managed with the ibrix_mount command. The command options and the default file system access allowed for X9000 clients depend on whether the optional Export Control feature has been enabled on the file system (see Using Export Control (page 22) for more information). This section assumes that Export Control is not enabled, which is the default. 20 Creating and mounting file systems
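The following is a brief sketch of the mountpoint commands above, assuming HOSTLIST is a comma-separated list and using hypothetical node and mountpoint names:
# Create the mountpoint /mnt/fs1 on two file serving nodes
ibrix_mountpoint -c -h node1,node2 -m /mnt/fs1
# Verify the mounted file systems and mountpoints on all nodes
ibrix_mountpoint -l
# Delete the mountpoint later, after confirming that no file system is mounted on it
ibrix_mountpoint -d -h node1,node2 -m /mnt/fs1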

21 NOTE: A file system must be mounted on the file serving node that owns the root segment (that is, segment 1) before it can be mounted on any other host. X9000 Software automatically mounts a file system on the root segment when you mount it on all file serving nodes in the cluster. The mountpoints must already exist. Mount a file system on file serving nodes and X9000 clients: ibrix_mount -f FSNAME [-o {RW|RO}] [-O MOUNTOPTIONS] -h HOSTLIST -m MOUNTPOINT Mount a file system on a hostgroup: ibrix_mount -f FSNAME [-o {RW|RO}] -g GROUP -m MOUNTPOINT Unmounting a file system Use the following commands to unmount a file system. NOTE: Be sure to unmount the root segment last. Attempting to unmount it while other segments are still mounted will result in failure. If the file system was exported using NFS, you must unexport it before you can unmount it (see Exporting a file system (page 48)). To unmount a file system from one or more file serving nodes, X9000 clients, or hostgroups: ibrix_umount -f FSNAME [-h HOSTLIST | -g GROUPLIST] To unmount a file system from a specific mountpoint on a file serving node, X9000 client, or hostgroup: ibrix_umount -m MOUNTPOINT [-h HOSTLIST | -g GROUPLIST] Mounting and unmounting file systems locally on X9000 clients On both Linux and Windows X9000 clients, you can locally override a mount. For example, if the Fusion Manager configuration database has a file system marked as mounted for a particular client, that client can locally unmount the file system. Linux X9000 clients To mount a file system locally, use the following command on the Linux X9000 client. A Fusion Manager name (fmname) is required only if this X9000 client is registered with multiple Fusion Managers. ibrix_lwmount -f [fmname:]fsname -m mountpoint [-o options] To unmount a file system locally, use one of the following commands on the Linux X9000 client. The first command detaches the specified file system from the client. The second command detaches the file system that is mounted on the specified mountpoint. ibrix_lwumount -f [fmname:]fsname ibrix_lwumount -m MOUNTPOINT Windows X9000 clients Use the Windows X9000 client GUI to mount file systems locally. Click the Mount tab on the GUI and select the cluster name from the list (the cluster name is the Fusion Manager name). Then, enter the name of the file system, select a drive, and click Mount. If you are using Remote Desktop to access the client and the drive letter is not displayed, log out and log back in. This is a known limitation of Windows Terminal Services when exposing new drives. To unmount a file system on the Windows X9000 client GUI, click the Umount tab, select the file system, and then click Umount. Managing mountpoints and mount/unmount operations 21
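A short sketch of these mount and unmount commands, using hypothetical file system, node, and mountpoint names:
# Mount fs1 read-write on two file serving nodes (see the NOTE above about the root segment)
ibrix_mount -f fs1 -o RW -h node1,node2 -m /mnt/fs1
# Unmount fs1 from one node only
ibrix_umount -f fs1 -h node2
# On a Linux X9000 client, mount and then unmount the same file system locally
ibrix_lwmount -f fs1 -m /mnt/fs1
ibrix_lwumount -f fs1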

22 Limiting file system access for X9000 clients By default, all X9000 clients can mount a file system after a mountpoint has been created. To limit access to specific X9000 clients, create an access entry. When an access entry is in place for a file system (or a subdirectory of the file system), it enters secure mode, and mount access is restricted to clients specified in the access entry. All other clients are denied mount access. Select the file system on the Filesystems top panel, and then select Client Exports in the lower Navigator. On the Create Client Export(s) dialog box, select the clients or hostgroups that will be allowed access to the file system or a subdirectory of the file system. To remove a client access entry, select the affected file system on the GUI, and then select Client Exports from the lower Navigator. Select the access entry from the Client Exports display, and click Delete. On the CLI, use the ibrix_exportfs command to create an access entry: ibrix_exportfs -c -f FSNAME -p CLIENT:/PATHNAME,CLIENT2:/PATHNAME,... To see all access entries that have been created, use the following command: ibrix_exportfs -c -l To remove an access entry, use the following command: ibrix_exportfs -c -U -f FSNAME -p CLIENT:/PATHNAME, CLIENT2:/PATHNAME,... Using Export Control When Export Control is enabled on a file system, by default, X9000 clients have no access to the file system. Instead, the system administrator grants the clients access by executing the ibrix_mount command. Enabling Export Control does not affect access from a file serving node to a file system (and thereby, NFS/CIFS client access). File serving nodes always have RW access. To determine whether Export Control is enabled, run ibrix_fs -i or ibrix_fs -l. The output indicates whether Export Control is enabled. To enable Export Control, include the -C option in the ibrix_fs command: ibrix_fs -C -E -f FSNAME To disable Export Control, execute the ibrix_fs command with the -C and -D options: ibrix_fs -C -D -f FSNAME To mount a file system that has Export Control enabled, include the ibrix_mount -o {RW|RO} option to specify that all clients have either RO or RW access to the file system. The default is RO. 22 Creating and mounting file systems
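As an illustrative sequence (file system, client, and path names are hypothetical), the commands above can be combined as follows:
# Restrict mount access for the /fs1/projects subdirectory to a single X9000 client
ibrix_exportfs -c -f fs1 -p client1.example.com:/fs1/projects
# List the access entries now in place, then remove the entry again
ibrix_exportfs -c -l
ibrix_exportfs -c -U -f fs1 -p client1.example.com:/fs1/projects
# Separately, enable Export Control on fs1 and grant clients read-write access at mount time
ibrix_fs -C -E -f fs1
ibrix_mount -f fs1 -o RW -h client1.example.com -m /mnt/fs1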

23 In addition, when specifying a hostgroup, the root user can be limited to RO access by adding the root_ro parameter. Using Export Control 23

24 3 Setting up quotas Quotas can be assigned to individual users or groups, or to a directory tree. Individual quotas limit the amount of storage or the number of files that a user or group can use in a file system. Directory tree quotas limit the amount of storage and the number of files that can be created on a file system located at a specific directory tree. Note the following: Although it is best to set up quotas when you create a file system, you can configure them at any time. Configuring quotas later on requires that you unmount the file system, which impacts system availability. You can assign quotas to a user, group, or directory on the GUI or from the CLI. You can also import quota information from a file. If a user has a user quota and a group quota for the same file system, the first quota reached takes precedence. Nested directory quotas are not supported. You cannot configure quotas on a subdirectory differently than the parent directory. The existing quota configuration can be exported to a file at any time. How quotas work NOTE: HP recommends that you export the quota configuration and save the resulting file whenever you update quotas on your cluster. A quota is delimited by hard and soft storage limits defined either in megabytes of storage or as a number of files. The hard limit is the maximum storage (in terms of file size and number of files) allotted to a user or group. The soft limit specifies the number of megabytes or files that, when reached, starts a countdown timer that runs until the hard storage limit is reached or the grace period elapses, whichever happens first. (The default grace period is seven days.) When the timer stops for either reason, the user or group cannot store any more data and the system issues quota exceeded messages at each write attempt. NOTE: Quota statistics are updated on a regular basis (at one-minute intervals). At each update, the file and storage usage for each quota-enabled user, group, or directory tree is queried, and the result is distributed to all file serving nodes. Users or groups can temporarily exceed their quota if the allocation policy in effect for a file system causes their data to be written to different file serving nodes during the statistics update interval. In this situation, it is possible for the storage usage visible to each file serving node to be below or at the quota limit while the aggregate storage use exceeds the limit. There is a delay of several minutes between the time a command to update quotas is executed and when the results are displayed by the ibrix_edquota -l command. This is normal behavior. Enabling quotas on a file system and setting grace periods Before you can set quota limits, quotas must be enabled on the file system. You can enable quotas when you create the file system or at a later time. 24 Setting up quotas

25 On the GUI, select the file system and then select Quotas from the lower Navigator. On the Quota Summary bottom panel, click Modify. To enable quotas from the CLI, run the following command: ibrix_fs -q -E -f FSNAME Setting user and group quotas Before configuring quotas, the quota feature must be enabled on the file system and the file system must be mounted. NOTE: For the purpose of setting quotas, no UID or GID can exceed 2,147,483,647. Setting user quotas to zero removes the quotas. GUI procedure To configure a user quota, select the file system where the quotas will be configured. Next, select Quotas > User Quotas from the lower Navigator, and then, on the User Quota Usage Limits bottom panel, click Set. User quotas can be specified by either the user name or ID. Specifying quota limits is optional. Setting user and group quotas 25

26 To configure a group quota, select the file system where the quotas will be configured. Next, select Quotas > Group Quotas from the lower Navigator, and then, on the Group Quota Usage Limits bottom panel, click Set. Group quotas can be identified by either the group name or GID. Specifying quota limits is optional. To change user or group quotas, select the appropriate user or group on the Quota Usage Limits bottom panel, and then select Modify. 26 Setting up quotas

27 CLI procedure Use the following commands to set quotas for users and groups: Set a quota for a single user: ibrix_edquota -s -u USER -f FSNAME [-M SOFT_MEGABYTES] [-m HARD_MEGABYTES] [-I SOFT_FILES] [-i HARD_FILES] Set a quota for a single group: ibrix_edquota -s -g GROUP -f FSNAME [-M SOFT_MEGABYTES] [-m HARD_MEGABYTES] [-I SOFT_FILES] [-i HARD_FILES] Enclose the user or group name in single or double quotation marks. Setting directory tree quotas Directory tree quotas limit the amount of storage and the number of files that can be created on a file system located at a specific directory tree. Before configuring directory tree quotas, the quota feature must be enabled on the file system and the file system must be mounted. NOTE: When you create a directory tree quota, the system also runs the ibrix_onlinequotacheck command in DTREE_CREATE mode. GUI procedure To configure a directory tree quota, select the file system where the quotas will be configured. Next, select Quotas > Directory Quotas from the lower Navigator, and then, on the Directory Tree Quota Usage Limits bottom panel, click Create to open the Create Directory Tree Alias dialog box. For Name (Alias), enter a unique name for the directory tree quota. The name cannot contain a comma (,) character. Setting directory tree quotas 27

28 To change a directory tree quota, select the directory tree on the Quota Usage Limits bottom panel, and then click Modify. CLI procedure To create a directory tree quota and assign usage limits, use the following command: ibrix_edquota -s -d NAME -p PATH -f FSNAME -M SOFT_MEGABYTES -m HARD_MEGABYTES -I SOFT_FILES -i HARD_FILES The -f FSNAME option specifies the name of the file system. The -p PATH option specifies the pathname of the directory tree. If the pathname includes a space, enclose the portion of the pathname that includes the space in single quotation marks, and enclose the entire pathname in double quotation marks. For example: -p "/fs48/data/'quota 4'" The -d NAME option specifies a unique name for the directory tree quota. The name cannot contain a comma (,) character. Use -M SOFT_MEGABYTES and -m HARD_MEGABYTES to specify soft and hard limits for the megabytes of storage allowed on the directory tree. Use -I SOFT_FILES and -i HARD_FILES to specify soft and hard limits for the number of files allowed on the directory tree. If you are creating multiple directory tree quotas, you can import the quotas from a file. The system then uses batch processing to create the quotas. If you add the quotas individually, you will need to wait for ibrix_onlinequotacheck to finish after creating each quota. 28 Setting up quotas
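The following sketch shows the user, group, and directory tree quota commands together; the user, group, file system, and path names are hypothetical:
# Give user jsmith a soft limit of 800 MB and a hard limit of 1000 MB on fs1,
# plus soft and hard limits of 90,000 and 100,000 files
ibrix_edquota -s -u "jsmith" -f fs1 -M 800 -m 1000 -I 90000 -i 100000
# Give the group eng a 40 GB soft limit and a 50 GB hard limit (values are in megabytes)
ibrix_edquota -s -g "eng" -f fs1 -M 40960 -m 51200
# Create a directory tree quota named projects on /fs1/projects with an
# 80 GB soft limit, a 100 GB hard limit, and 800,000/1,000,000 file limits
ibrix_edquota -s -d projects -p /fs1/projects -f fs1 -M 81920 -m 102400 -I 800000 -i 1000000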

29 Using a quotas file Quota limits can be imported into the cluster from the quotas file, and existing quotas can be exported to the file. See Format of the quotas file (page 29) for the format of the file. Importing quotas from a file From the GUI, select the file system, select Quotas from the lower Navigator, and then click Import. From the CLI, use the following command to import quotas from a file, where PATH is the path to the quotas file: ibrix_edquota -t -p PATH -f FSNAME Exporting quotas to a file From the GUI, select the file system, select Quotas from the lower Navigator, and then click Export. From the CLI, use the following command to export the existing quotas information to a file, where PATH is the pathname of the quotas file: ibrix_edquota -e -p PATH -f FSNAME Format of the quotas file The quotas file contains a line for each user, group, or directory tree assigned a quota. When you add quota entries, the lines must use one of the following formats. The A format specifies a user or group ID. The B format specifies a user or group name, or a directory tree that has already been assigned an identifier name. The C format specifies a directory tree, where the path exists, but the identifier name for the directory tree will not be created until the quotas are imported. Using a quotas file 29

30 A,{type},{block_hardlimit},{block_soft-limit},{inode_hardlimit},{inode_softlimit},{id} B,{type},{block_hardlimit},{block_soft-limit},{inode_hardlimit},{inode_softlimit},"{name}" C,{type},{block_hardlimit},{block_soft-limit},{inode_hardlimit},{inode_softlimit}, "{name}","{path}" The fields in each line are: {type} Either 0 for a user quota; 1 for a group quota; 2 for a directory tree quota. {block_hardlimit} The maximum number of 1K blocks allowed for the user, group, or directory tree. (1 MB = 1024 blocks). {block_soft-limit} The number of 1K blocks that, when reached, starts the countdown timer. {inode_hardlimit} The maximum number of files allowed for the user, group, or directory tree. {inode_softlimit} The number of files that, when reached, starts the countdown timer. {id} The UID for a user quota or the GID for a group quota. {name} A user name, group name, or directory tree identifier. {path} The full path to the directory tree. The path must already exist. NOTE: When a quotas file is imported, the quotas are stored in a different, internal format. When a quotas file is exported, it contains lines using the internal format. However, when adding entries, you must use the A, B, or C format. Using online quota check Online quota checks are used to rescan quota usage, initialize directory tree quotas, and remove directory tree quotas. There are three modes: FILESYSTEM_SCAN mode. Use this mode in the following scenarios: You turned quotas off for a user, the user continued to store data in a file system, and you now want to turn quotas back on for this user. You are setting up quotas for the first time for a user who has previously stored data in a file system. You renamed a directory on which quotas are set. You moved a subdirectory into another parent directory that is outside of the directory having the directory tree quota. DTREE_CREATE mode. After setting quotas on a directory tree, use this mode to take into account the data used under the directory tree. DTREE_DELETE mode. After deleting a directory tree quota, use this mode to unset quota IDs on all files and folders in that directory. CAUTION: When ibrix_onlinequotacheck is started in DTREE_DELETE mode, it removes quotas for the specified directory. Be sure not to use this mode on directories that should retain quota information. 30 Setting up quotas
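For example, a quotas file might contain the following entries (the IDs, names, and path are hypothetical; block values are 1K blocks, so 1048576 blocks equals 1 GB):
A,0,2097152,1048576,200000,100000,1001
B,1,5242880,4194304,500000,400000,"eng"
C,2,10485760,8388608,1000000,800000,"projects","/fs1/projects"
Saving these lines to a file such as /tmp/quotas.txt and then running ibrix_edquota -t -p /tmp/quotas.txt -f fs1 creates a user quota (UID 1001), a group quota (group eng), and a directory tree quota (projects) in one batch operation.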

31 To run an online quota check from the GUI, select the file system and then select Online quota check from the lower Navigator. On the Task Summary panel, select Start to open the Start Online quota check dialog box and select the appropriate mode. The Task Summary panel displays the progress of the scan. If necessary, select Stop to stop the scan. To run an online quota check in FILESYSTEM_SCAN mode from the CLI, use the following command: ibrix_onlinequotacheck -s -S -f FSNAME To run an online quota check in DTREE_CREATE mode, use this command: ibrix_onlinequotacheck -s -c -f FSNAME -p PATH To run an online quota check in DTREE_DELETE mode, use this command: ibrix_onlinequotacheck -s -d -f FSNAME -p PATH The command must be run from a file serving node that has the file system mounted. Configuring notifications for quota events If you would like to be notified when certain quota events occur, you can set up notification for those events. On the GUI, select Configuration. On the Events Notified by panel, select the appropriate events and specify the addresses to be notified. Deleting quotas To delete quotas from the GUI, select the quota from the appropriate Quota Usage Limits panel and then click Delete. To delete quotas from the CLI, use the following commands. To delete quotas for a user, use the following command: ibrix_edquota -D -u UID [-f FSNAME] To delete quotas for a group, use the following command: ibrix_edquota -D -g GID [-f FSNAME] To delete the entry and quota limits for a directory tree quota, use the following command: ibrix_edquota -D -d NAME -f FSNAME The -d NAME option specifies the name of the directory tree quota. Configuring notifications for quota events 31
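For example (the file system and quota names are hypothetical):
# Rescan quota usage for the entire file system fs1
ibrix_onlinequotacheck -s -S -f fs1
# Delete the user quota for UID 1001 and the directory tree quota named projects
ibrix_edquota -D -u 1001 -f fs1
ibrix_edquota -D -d projects -f fs1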

32 Troubleshooting quotas Recreated directory does not appear in directory tree quota If you create a directory tree quota on a specific directory and delete the directory (for example, with rmdir/rm -rf) and then recreate it on the same path, the directory does not count as part of the directory tree, even though the path is the same. Consequently, the ibrix_onlinequotacheck command does not report on the directory. Moving directories After moving a directory into or out of a directory containing quotas, run the ibrix_onlinequotacheck command as follows: After moving a directory from a directory tree with quotas (the source) to a directory without quotas (the destination), take these steps: 1. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the source directory tree to remove the usage information for the moved directory. 2. Run ibrix_onlinequotacheck in DTREE_DELETE mode on the directory that was moved to delete residual quota information. After moving a directory from a directory without quotas (the source) to a directory tree with quotas (the destination), take this step: 1. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the destination directory tree to add the usage for the moved directory. After moving a directory from one directory tree with quotas (the source) to another directory tree with quotas (the destination), take these steps: 1. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the source directory tree to remove the usage information for the moved directory. 2. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the destination directory tree to add the usage for the moved directory. 32 Setting up quotas
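As a sketch of the first scenario above (names and paths are hypothetical), suppose a directory is moved from /fs1/src, which has a directory tree quota, to /fs1/dest, which does not:
# Recalculate usage for the source directory tree so the moved directory is no longer counted
ibrix_onlinequotacheck -s -c -f fs1 -p /fs1/src
# Clear residual quota information from the directory in its new location
ibrix_onlinequotacheck -s -d -f fs1 -p /fs1/dest/moved_dir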

33 4 Maintaining file systems This chapter describes how to extend a file system, rebalance segments, delete a file system or file system component, and check or repair a file system. The chapter also includes file system troubleshooting information. Best practices for file system performance It is important to monitor the space used in the segments making up the file system. If segments are filled to 90% or greater and the segments are actively being used based on the file system allocation policy, performance degradation is likely because of extra housekeeping tasks incurred in the file system. Also, at this point, automatic write behavior changes can cause all new creates to go to the segment with the most available capacity, causing a slowdown. To maintain file system performance, follow these recommendations: If segments are approaching 85% full, either expand the file system with new segments or clean up the file system. If only a few segments are between 85% and 90% and other segments are much lower, run a rebalance task. However, if those few segments are at 90% or higher, it is best to adjust the file system allocation policy to exclude the full segments from being used. Then initiate a rebalance task to balance the full segments out onto other segments with more available space. When the rebalance task is complete and all segments are below the 85% threshold, you can reapply the original file system allocation policy. The GUI displays the space used in each segment. Select the file system, and then select Segments from the lower Navigator. Viewing information about file systems and components The Filesystems top panel on the GUI displays comprehensive information about a file system and its components. This section describes how to view the same information from the command line. Best practices for file system performance 33

34 Viewing physical volume information Use the following command to view information about physical volumes: ibrix_pv -l The following table lists the output fields for ibrix_pv -l.
PV_Name: Physical volume name. Regular physical volume names begin with the letter d. The names of physical volumes that are part of a mirror device begin with the letter m. Both are numbered sequentially.
Size (MB): Physical volume size, in MB.
VG name: Name of volume group created on this physical volume, if any.
RAID type: Not applicable for this release.
RAID host: Not applicable for this release.
RAID device: Not applicable for this release.
Network host: Not applicable for this release.
Network port: Not applicable for this release.
Viewing volume group information To display summary information about all volume groups, use the ibrix_vg -l command: ibrix_vg -l The VG_FREE field indicates the amount of group space that is not allocated to any logical volume. The VG_USED field reports the percentage of available space that is allocated to a logical volume. To display detailed information about volume groups, use the ibrix_vg -i command. The -g VGLIST option restricts the output to the specified volume groups. ibrix_vg -i [-g VGLIST] The following table lists the output fields for ibrix_vg -i.
Name: Volume group name.
Size (MB): Volume group size in MB.
Free (MB): Free (unallocated) space, in MB, available on this volume group.
Used (percentage): Percentage of total space in the volume group allocated to logical volumes.
File System Name: File system to which this logical volume belongs.
Physical Volume Name: Name of the physical volume used to create this volume group.
Physical Volume Size: Size, in MB, of the physical volume used to create this volume group.
Logical Volume Name: Names of logical volumes created from this volume group.
Logical Volume Size: Size, in MB, of each logical volume created from this volume group.
File System Generation: Number of times the structure of the file system has changed (for example, new segments were added).
Segment Number: Number of this segment (logical volume) in the file system.
Host Name: File serving node that owns this logical volume.
State: Operational state of the file serving node. See the administration guide for your system for a list of the states.
34 Maintaining file systems

35 Viewing logical volume information To view information about logical volumes, use the ibrix_lv -l command. The following table lists the output fields for this command. Field LV_NAME LV_SIZE FS_NAME SEG_NUM VG_NAME OPTIONS Description Logical volume name. Logical volume size, in MB. File system to which this logical volume belongs. Number of this segment (logical volume) in the file system. Name of the volume group created on this physical volume, if any. Linux lvcreate options that have been set on the volume group. Viewing file system information To view information about all file systems, use the ibrix_fs -l command. This command also displays information about any file system snapshots. The following table lists the output fields for ibrix_fs -l. Field FS_NAME STATE CAPACITY (GB) USED% Files FilesUsed% GEN NUM_SEGS Description File system name. State of the file system (for example, Mounted). Total space available in the file system, in GB. Amount of space used in the file system. Number of files that can be created in this file system. Percentage of total storage used by files and directories. Number of times the structure of the file system has changed (for example, new segments were added). Number of file system segments. To view detailed information about file systems, use the ibrix_fs -i command. To view information for all file systems, omit the -f FSLIST argument. ibrix_fs -i [-f FSLIST] The following table lists the file system output fields reported by ibrix_fs -i. Field Total Segments STATE Mirrored? Compatible? Generation FS_ID FS_NUM Description Number of segments. State of the file system (for example, Mounted). Not applicable for this release. Yes indicates that the file system is 32-bit compatible; the maximum number of segments (maxsegs) allowed in the file system is also specified. No indicates a 64-bit file system. Number of times the structure of the file system has changed (for example, new segments were added). File system ID for NFS access. Unique X9000 Software internal file system identifier. Viewing information about file systems and components 35

36 Field EXPORT_CONTROL_ENABLED QUOTA_ENABLED RETENTION DEFAULT_BLOCKSIZE CAPACITY FREE AVAIL USED PERCENT FILES FFREE Prealloc Readahead NFS Readahead Default policy Default start segment File replicas Dir replicas Mount Options Root Segment Hint Root Segment Replica(s) Hint Snap FileSystem Policy Description Yes if enabled; No if not. Yes if enabled; No if not. If data retention is enabled, the retention policy is displayed. Default block size, in KB. Capacity of the file system. Amount of free space on the file system. Space available for user files. Percentage of total storage occupied by user files. Number of files that can be created in this file system. Number of unused file inodes available in this file system. Number of KB a file system preallocates to a file; default: 1,024 KB. Number of KB that X9000 Software will pre-fetch; default: 512 KB. Number of KB that X9000 Software pre-fetches under NFS; default: 256 KB. Allocation policy assigned on this file system. Defined policies are: ROUNDROBIN, STICKY, DIRECTORY, LOCAL, RANDOM, and NONE. See File allocation policies (page 195) for information on these policies. The first segment to which an allocation policy is applied in a file system. If a segment is not specified, allocation starts on the segment with the most storage space available. NA. NA. Possible root segment inodes. This value is used internally. Current root segment number, if known. This value is used internally. Possible segment numbers for root segment replicas. This value is used internally. Snapshot strategy, if defined. The following table lists the per-segment output fields reported by ibrix_fs -i. Field SEGMENT OWNER LV_NAME STATE BLOCK_SIZE CAPACITY (GB) FREE (GB) AVAIL (GB) FILES FFREE USED% Description Number of segments. The host that owns the segment. Logical volume name. The current state of the segment (for example, OK or UsageStale). Default block size, in KB. Size of the segment, in GB. Free space on this segment, in GB. Space available for user files, in GB. Inodes available on this segment. Free inodes available on this segment. Percentage of total storage occupied by user files. 36 Maintaining file systems

37 Field BACKUP TYPE TIER LAST_REPORTED HOST_NAME MOUNTPOINT PERMISSION Root_RO Description Backup host name. Segment type. MIXED means the segment can contain both files and directories. Tier to which the segment was assigned. Last time the segment state was reported. Host on which the file system is mounted. Host mountpoint. File system access privileges: RO or RW. Specifies whether the root user is limited to read-only access, regardless of the access setting. Lost+found directory When browsing the contents of X9000 Software file systems, you will see a directory named lost+found. This directory is required for file system integrity and should not be deleted. Viewing disk space information from a Linux X9000 client Because file systems are distributed among segments on many file serving nodes, disk space utilities such as df must be provided with collated disk space information about those nodes. The Fusion Manager collects this information periodically and collates it for df. X9000 software includes a disk space utility, ibrix_df, that enables Linux X9000 clients to obtain utilization data for a file system. Execute the following command on any Linux X9000 client: ibrix_df The following table lists the output fields for ibrix_df. Field Name CAPACITY FREE AVAIL USED PERCENT FILES FFREE Description File system name. Number of blocks in the file system. Number of unused blocks of storage. Number of blocks available for user files. Percentage of total storage occupied by user files. Number of files that can be created in the file system. Number of unused file inodes in the file system. Extending a file system You can extend a file system from the GUI or the CLI. NOTE: If a continuous remote replication (CRR) task is running on a file system, the file system cannot be extended until the CRR task is complete. If the file system uses tiers, verify that no tiering task is running before executing the file system expansion commands. If a tiering task is running, the expansion takes priority and the tiering task is terminated. Select the file system on the Filesystems top panel, and then select Extend on the Summary bottom panel. The Extend Filesystem dialog box allows you to select the storage to be added to the file system. If data tiering is used on the file system, you can also enter the name of the appropriate tier. Extending a file system 37

38 On the CLI, use the ibrix_fs command to extend a file system. Segments are added to the file serving nodes in a round-robin manner. If tiering rules are defined for the file system, the -t option is required. Avoid expanding a file system while a tiering job is running. The expansion takes priority and the tiering job is terminated. Extend a file system with the logical volumes (segments) specified in LVLIST: ibrix_fs -e -f FSNAME -s LVLIST [-t TIERNAME] Extend a file system with segments created from the physical volumes in PVLIST: ibrix_fs -e -f FSNAME -p PVLIST [-t TIERNAME] Extend a file system with specific logical volumes on specific file serving nodes: ibrix_fs -e -f FSNAME -S LV1:HOSTNAME1,LV2:HOSTNAME2... Extend a file system with the listed tiered segment/owner pairs: ibrix_fs -e -f FSNAME -S LV1:HOSTNAME1,LV2:HOSTNAME2,... -t TIERNAME Rebalancing segments in a file system Segment rebalancing involves redistributing files among segments in a file system to balance segment utilization and server workload. For example, after adding new segments to a file system, you can rebalance all segments to redistribute files evenly among the segments. Usually, you will want to rebalance all segments, possibly as a cron job. In special situations, you might want to rebalance specific segments. Segments marked as bad (that is, segments that cannot be activated for some reason) are not candidates for rebalancing. A file system must be mounted when you rebalance its segments. If necessary, you can evacuate segments (or logical volumes) located on storage that will be removed from the cluster, moving the data on the segments to other segments in the file system. You can evacuate a segment with the GUI or the ibrix_evacuate command. For more information, see the HP IBRIX X9000 Network Storage System CLI Reference Guide or the administrator guide for your system. How rebalancing works During a rebalance operation on a file system, files are moved from source segments to destination segments. X9000 Software calculates the average aggregate utilization of the selected source 38 Maintaining file systems

39 segments, and then moves files from sources to destinations to bring each candidate source segment as close as possible to the calculated utilization threshold. The final absolute percent usage in the segments depends on the average file size for the target file system. If you do not specify any sources or destinations for a rebalance task, candidate segments are sorted into sources and destinations and then rebalanced as evenly as possible. If you specify sources, all other candidate segments in the file system are tagged as destinations, and vice versa if you specify destinations. Following the general rule, X9000 Software will calculate the utilization threshold from the sources, and then bring the sources as close as possible to this value by evenly distributing their excess files among all destinations. If you specify sources, only those segments are rebalanced, and the overflow is distributed among all remaining candidate segments. If you specify destinations, all segments except the specified destinations are rebalanced, and the overflow is distributed only to the destinations. If you specify both sources and destinations, only the specified sources are rebalanced, and the overflow is distributed only among the specified destinations. If there is not enough aggregate room in destination segments to hold the files that must be moved from source segments in order to balance the sources, X9000 Software issues an error message and does not move any files. The more restricted the number of destinations, the higher the likelihood of this error. When rebalancing segments, note the following: To move files out of certain overused segments, specify source segments. To move files into certain underused segments, specify destination segments. To move files out of certain segments and place them in certain destinations, specify both source and destination segments. Rebalancing segments on the GUI Select the file system on the GUI, expand Active Tasks in the lower Navigator, and select Rebalancer. Select New on the Task Summary panel to open the Start Rebalancing dialog box. The Rebalancer can determine how to distribute the data, or, if necessary, you can select the source segments, destination segments, or both for the rebalancing task. Rebalancing segments in a file system 39

40 Rebalancing segments from the CLI To rebalance all segments, use the following command. Include the -a option to run the rebalance operation in analytical mode. ibrix_rebalance -r -f FSNAME To rebalance by specifying specific source segments, use the following command: ibrix_rebalance -r -f FSNAME [[-s SRCSEGMENTLIST] [-S SRCLVLIST]] For example, to rebalance segments 2 and 3 only and to specify them by segment name: ibrix_rebalance -r -f ifs1 -s 2,3 To rebalance segments 1 and 2 only and to specify them by their logical volume names: ibrix_rebalance -r -f ifs1 -S ilv1,ilv2 To rebalance by specifying specific destination segments, use the following command: ibrix_rebalance -r -f FSNAME [[-d DESTSEGMENTLIST] [-D DESTLVLIST]] For example, to rebalance segments 3 and 4 only and to specify them by segment name: ibrix_rebalance -r -f ifs1 -d 3,4 To rebalance segments 3 and 4 only and to specify them by their logical volume names: ibrix_rebalance -r -f ifs1 -D ilv3,ilv4 Tracking the progress of a rebalance task You can use the GUI or CLI to track the progress of a rebalance task. As a rebalance task progresses, usage approaches an average value across segments, excluding bad segments that are not candidates for rebalancing or segments containing files that are in heavy use during the operation. To track the progress of a rebalance task on the GUI, select the file system, and then select Rebalancer from the lower Navigator. The Task Summary displays details about the rebalance task. Also examine Used (%) on the Segments panel for the file system. To track rebalance job progress from the CLI, use the ibrix_fs -i command. The output lists detailed information about the file system. The USED% field shows usage per segments. 40 Maintaining file systems
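The commands above can be combined into a simple unattended run, for example from a cron job. The following sketch starts a rebalance of all candidate segments and waits for it to finish. It assumes that ibrix_rebalance -i (described in the next section) reports the task as running while it is in progress; the grep pattern is illustrative and may need adjusting to your software version's output.

#!/bin/bash
# Sketch: rebalance all candidate segments of a file system and wait for completion.
FSNAME=${1:?usage: $0 FSNAME}

ibrix_rebalance -r -f "$FSNAME"              # start the rebalance task

# Poll until no running rebalance task is reported for this file system.
while ibrix_rebalance -i -f "$FSNAME" | grep -qi running; do
    sleep 60
done

ibrix_rebalance -l -f "$FSNAME"              # summary of the completed task
ibrix_fs -i -f "$FSNAME"                     # check that USED% is now even across segments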

41 Viewing the status of rebalance tasks Use the following commands to view status for jobs on all file systems or only on the file systems specified in FSLIST: ibrix_rebalance -l [-f FSLIST] ibrix_rebalance -i [-f FSLIST] The first command reports summary information. The second command lists jobs by task ID and file system and indicates whether the job is running or stopped. Jobs that are in the analysis (Coordinator) phase are listed separately from those in the implementation (Worker) phase. Stopping rebalance tasks You can stop running or stalled rebalance tasks. If Fusion Manager cannot stop the task for some reason, you can force the task to stop. Stopping a task poses no risks for the file system. The system completes any file migrations that are in process when you issue the stop command. Depending on when you stop a task, segments might contain more or fewer files than before the operation started. To stop a rebalance task on the GUI, select the file system, and then select Rebalancer from the lower Navigator. Click Stop on the Task Summary to stop the task. To stop a task from the CLI, first execute ibrix_rebalance -i to obtain the TASKID, and then execute the following command: ibrix_rebalance -k -t TASKID [-F] To force the task to stop, include the -F option. Disabling 32-bit mode on a file system If your cluster clients are converting from 32-bit to 64-bit applications, you can disable 32-bit mode on the file system, which enables 64-bit mode. (For information about 64-bit mode, see Using 32-bit or 64-bit mode (page 12).) To determine whether 64-bit mode is enabled on a file system, execute the command ibrix_fs -i. If the output reports Compatible? : No, 64-bit mode is enabled. NOTE: A file system using 64-bit mode cannot be changed to use 32-bit mode. If there is a chance that clients will need to run a 32-bit application, do not disable 32-bit mode. To disable 32-bit mode, complete these steps: 1. Unmount the file system. 2. On the GUI, select the file system and click Modify on the Summary bottom panel. On the Modify Filesystems Properties dialog box, select Disable 32 Bit Compatibility Mode. From the CLI, execute the following command: ibrix_fs -w -f FSNAME 3. Remount the file system. Deleting file systems and file system components Deleting a file system Before deleting a file system, unmount it from all file serving nodes and clients. (See Unmounting a file system (page 21).) Also delete any exports. CAUTION: When a file system is deleted from the configuration database, its data becomes inaccessible. To avoid unintended service interruptions, be sure you have specified the correct file system. Disabling 32-bit mode on a file system 41

To delete a file system, use the following command:
ibrix_fs -d [-R] -f FSLIST
For example, to delete file systems ifs1 and ifs2:
ibrix_fs -d -f ifs1,ifs2
If data retention is enabled on the file system, include the -R option in the command. For example:
ibrix_fs -d -R -f ifs2
Deleting segments, volume groups, and physical volumes
When deleting segments, volume groups, or physical volumes, be aware of the following:
- A segment cannot be deleted until the file system to which it belongs is deleted.
- A volume group cannot be deleted until all segments that were created on it are deleted.
- A physical volume cannot be deleted until all volume groups created on it are deleted.
If you delete physical volumes but do not remove the physical storage from the network, the volumes might be rediscovered the next time you perform a discovery scan on the cluster.
To delete segments:
ibrix_lv -d -s LVLIST
For example, to delete segments ilv1 and ilv2:
ibrix_lv -d -s ilv1,ilv2
To delete volume groups:
ibrix_vg -d -g VGLIST
For example, to delete volume groups ivg1 and ivg2:
ibrix_vg -d -g ivg1,ivg2
To delete physical volumes:
ibrix_pv -d -p PVLIST [-h HOSTLIST]
For example, to delete physical volumes d1, d2, and d3:
ibrix_pv -d -p d[1-3]
Deleting file serving nodes and X9000 clients
Before deleting a file serving node, unmount all file systems from it and migrate any segments it owns to a different server. Ensure that the file serving node is not serving as a failover standby and is not involved in network interface monitoring.
To delete a file serving node, use the following command:
ibrix_server -d -h HOSTLIST
For example, to delete file serving nodes s1.hp.com and s2.hp.com:
ibrix_server -d -h s1.hp.com,s2.hp.com
To delete X9000 clients, use the following command:
ibrix_client -d -h HOSTLIST
Checking and repairing file systems
The ibrix_fsck command analyzes inconsistencies in a file system.
CAUTION: Do not run ibrix_fsck in corrective mode without the direct guidance of HP Support. If run improperly, the command can cause data loss and file system damage.
CAUTION: Do not run e2fsck (or any other off-the-shelf fsck program) on any part of a file system. Doing so can damage the file system.

43 The ibrix_fsck command can detect and repair file system inconsistencies. File system inconsistencies can occur for many reasons, including hardware failure, power failure, switching off the system without proper shutdown, and failed migration. The command runs in four phases and has two running modes: analytical and corrective. You must run the phases in order and you must run all of them: Phase 0 checks host connectivity and the consistency of segment byte blocks and repairs them in corrective mode. Phase 1 checks segments and repairs them in corrective mode. Results are stored locally. Phase 2 checks the file system and repairs it in corrective mode. Results are stored locally. Phase 3 moves files from lost+found on each segment to the global lost+found directory on the root segment of the file system. If a file system shows evidence of inconsistencies, contact HP Support. A representative will ask you to run ibrix_fsck in analytical mode and, based on the output, will recommend a course of action and assist in running the command in corrective mode. HP strongly recommends that you use corrective mode only with the direct guidance of HP Support. Corrective mode is complex and difficult to run safely. Using it improperly can damage both data and the file system. Analytical mode is completely safe, by contrast. NOTE: During an ibrix_fsck run, an INFSCK flag is set on the file system to protect it. If an error occurs during the job, you must explicitly clear the INFSCK flag (see Clearing the INFSCK flag on a file system (page 44)), or you will be unable to mount the file system. Analyzing the integrity of a file system on all segments Observe the following requirements when executing ibrix_fsck: Unmount the file system for phases 0 and 1 and mount the file system for phases 2 and 3. Turn off automated failover by executing ibrix_host -m -U -h SERVERNAME. Unmount all NFS clients and stop NFS on the servers. Use the following procedure to analyze file system integrity: Runs phase 0 in analytic mode: ibrix_fsck -p 0 -f FSNAME [-s LVNAME] [-c] The command can be run on the specified file system or optionally only on the specified segment LVNAME. Run phase 1 in analytic mode: ibrix_fsck -p 1 -f FSNAME [-s LVNAME] [-c] [-B BLOCKSIZE] [-b ALTSUPERBLOCK] The command can be run on file system FSNAME or optionally only on segment LVNAME. This phase can be run with a specified block size and an alternate superblock number. For example: ibrix_fsck -p 1 -f ifs1 -B b NOTE: If phase 1 is run in analytic mode on a mounted file system, false errors can be reported. Checking and repairing file systems 43

44 Run phase 2: ibrix_fsck -p 2 -f FSNAME [-s LVNAME] [-c] [-o "options"] The command can be run on the specified file system or optionally only on segment LVNAME. Use -o to specify any options. Run phase 3: ibrix_fsck -p 3 -f FSNAME [-c] Clearing the INFSCK flag on a file system To clear the INFSCK flag, use the following command: ibrix_fsck -f FSNAME -C Troubleshooting file systems ibrix_pv -a discovers too many or too few devices This situation occurs when file serving nodes see devices multiple times. To prevent this, modify the LVM2 filter in /etc/lvm/lvm.conf to filter only on devices used by X9000 Software. This will change the output of lvmdiskscan. By default, the following filter finds all devices: filter = [ "a/.*/" ] The following filter finds all sd devices: filter = [ "a ^/dev/sd.* ", "r ^.* " ] Contact HP Support if you need assistance. Cannot mount on an X9000 client Verify the following: The file system is mounted and functioning on the file serving nodes. The mountpoint exists on the X9000 client. If not, create the mountpoint locally on the client. Software management services have been started on the X9000 client (see Starting and stopping processes in the administrator guide for your system). NFS clients cannot access an exported file system An exported file system has been unmounted from one or more file serving nodes, causing X9000 software to automatically disable NFS on those servers. Fix the issue causing the unmount and then remount the file system. User quota usage data is not being updated Restart the quota monitor service to force a read of all quota usage data and update usage counts to the file serving nodes in your cluster. Use the following command: ibrix_qm restart 44 Maintaining file systems
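For reference, the following sketch strings the four analytic phases together for a single file system. It uses only the ibrix_fsck syntax shown above and never enables corrective mode. The unmount and remount steps are left as comments because they depend on your mount configuration (see the chapter on mounting file systems); stopping NFS and disabling automated failover are assumed to have been done beforehand, as stated in the requirements.

#!/bin/bash
# Sketch: full analytic (read-only) ibrix_fsck run on one file system.
set -e
FSNAME=${1:?usage: $0 FSNAME}

# Unmount the file system on all servers before running phases 0 and 1.

ibrix_fsck -p 0 -f "$FSNAME"    # phase 0: host connectivity and segment byte blocks
ibrix_fsck -p 1 -f "$FSNAME"    # phase 1: segment checks; results are stored locally

# Remount the file system before running phases 2 and 3.

ibrix_fsck -p 2 -f "$FSNAME"    # phase 2: file system check; results are stored locally
ibrix_fsck -p 3 -f "$FSNAME"    # phase 3: move files to the global lost+found directory

# If a phase fails, clear the INFSCK flag before mounting: ibrix_fsck -f FSNAME -C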

File system alert is displayed after a segment is evacuated
When a segment is successfully evacuated, a segment unavailable alert is displayed in the GUI and attempts to mount the file system will fail. There are several options at this point:
- Mark the evacuated segment as bad (retired), using the following command. The file system state changes to okay and the file system can then be mounted. However, marking the segment as bad cannot be reversed.
ibrix_fs -B -f FSNAME {-n RETIRED_SEGNUMLIST | -s RETIRED_LVLIST}
- Keep the evacuated segment in the file system. Take one of the following steps to enable mounting the file system:
Use the force option (-X) when mounting the file system:
ibrix_mount -f myfilesystem -m /mymountpoint -X
Clear the unavailable segment flag on the file system with the ibrix_fsck command and then mount the file system normally:
ibrix_fsck -f FSNAME -C -s LVNAME_OF_EVACUATED_SEG
SegmentNotAvailable is reported
When writes to a segment do not succeed, the segment status may change to SegmentNotAvailable on the GUI and an alert message may be generated. To correct this situation, take the following steps:
1. Identify the file serving node that owns the segment. This information is reported on the Filesystem Segments panel on the GUI.
2. Fail over the file serving node to its standby. See the administration guide for your system for more information about this procedure.
3. Reboot the file serving node.
4. When the file serving node is up, verify that the segment, or LUN, is available. If the segment is still not available, contact HP Support.
SegmentRejected is reported
This alert is generated by a client call for a segment that is no longer accessible through the segment owner or file serving node specified in the client's segment map. The alert is logged to the Iad.log and messages files. It usually indicates an out-of-date or stale segment map for the affected file system and is caused by a network condition. Other possible causes are rebooting the node, unmounting the file system on the node, segment migrations, and, in a failover scenario, a stale IAD, an unresponsive kernel, or a network RPC condition.
To troubleshoot this alert, check network connectivity among the nodes, ensuring that the network is optimal and that any recent network conditions have been resolved. From the file system perspective, verify segment maps by comparing the file system generation numbers and the ownership of the segments being rejected by the clients. Use the following commands to compare the file system generation number on the local file serving nodes and the clients logging the error:
/usr/local/ibrix/bin/rtool enumseg <FSNAME> <SEGNUMBER>
For example:
rtool enumseg ibfs1 3
segnum=3 of fsid... 7b3ea a5e-9b08-daf9f9f4c027
fsname... ibfs1
device_name... /dev/ivg3/ilv3
host_id... 1e9e3a6e-74e a843-c0abb6fec3a6
host_name... ib50-87 <-- Verify owner of segment
ref_counter...

46 state_flags... SEGMENT_LOCAL SEGMENT_PREFERED SEGMENT_DHB <SEGMENT_ ORPHAN_LIST_CREATED (0x ) write_wm K-blocks (387 Mbytes) create_wm K-blocks (3097 Mbytes) spillover_wm K-blocks (3485 Mbytes) generation quota... usr,grp,dir f_blocks K-blocks (== K-blocks, M) f_bfree K-blocks (== K-blocks, M) f_bused K-blocks (== K-blocks, 431 M) f_bavail K-blocks (== K-blocks, M) f_files f_ffree used files (f_files - f_ffree) Segment statistics for seconds : n_reads=0, kb_read=0, n_writes=0, kb_written=0, n_creates=2, n_removes=0 Also run the following command: /usr/local/ibrix/bin/rtool enumfs <FSNAME> For example: rtool enumfs ibfs1 1: fsname... ibfs1 fsid... 7b3ea a5e-9b08-daf9f9f4c027 fsnum... 1 fs_flags... operational total_number_of_segments... 4 mounted... TRUE ref_counter... 6 generation < FS generation number for comparison alloc_policy... RANDOM dir_alloc_policy... NONE cur_segment... 0 sup_ap_on... NONE local_segments... 3 quota... usr,grp,dir f_blocks K-blocks (== K-blocks) f_bfree K-blocks (== K-blocks) f_bused K-blocks (== K-blocks) f_bavail K-blocks (== K-blocks) f_files f_ffree used files (f_files - f_free) FS statistics for 0.0 seconds : n_reads=0, kb_read=0, n_writes=0, kb_written=0, n_creates=0, n_removes=0 Use the output to determine whether the FS generation number is in sync and whether the file serving nodes agree on the ownership of the rejected segments. In the rtool enumseg output, check the state_flags field for SEGMENT_IN_MIGRATION, which indicates that the segment is stuck in migration because of a failover. Typically, if the segment has a healthy state flag on the file serving node that owns the segment and all file serving nodes agree on the owner of the segment, this is not a file system or file serving node issue. If a state flag is stale or indicates that a segment is in migration, call HP Support for a recovery procedure. Otherwise, the alert indicates a file system generation mismatch. Take the following steps to resolve this situation: 1. From the active Fusion Manager, run the following command to propagate a new file system segment map throughout the cluster. This step takes a few minutes. ibrix_dbck -I -f <FSNAME> 2. If problems persist, try restarting the client's IAD: /usr/local/ibrix/init/ibrix_iad restart 46 Maintaining file systems
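To collect this information from several nodes at once, a small script can run rtool on each file serving node and extract the generation and ownership fields. The following sketch assumes passwordless SSH from the node where it runs; the node names are placeholders and the grep patterns match the field names shown in the sample output above.

#!/bin/bash
# Sketch: compare the file system generation number and segment ownership across nodes.
FSNAME=${1:?usage: $0 FSNAME SEGNUM}
SEGNUM=${2:?usage: $0 FSNAME SEGNUM}
NODES="node1 node2 node3"        # replace with your file serving nodes

for node in $NODES; do
    echo "=== $node ==="
    ssh "$node" "/usr/local/ibrix/bin/rtool enumfs $FSNAME" | grep -i generation
    ssh "$node" "/usr/local/ibrix/bin/rtool enumseg $FSNAME $SEGNUM" | grep -Ei 'host_name|state_flags|generation'
done

# All nodes should agree on the generation number and the segment owner.
# If they do not, propagate a new segment map with: ibrix_dbck -I -f FSNAME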

47 ibrix_fs -c failed with "Bad magic number in super-block" If a file system creation command fails with an error such as the following, the command may have failed to preformat the LUN. # ibrix_fs -c -f fs1 -s seg1_4 Calculated owner for seg1_4 : glory22 failed command (/usr/local/ibrix/bin/tuneibfs -F 3e2a9657-fc8b-46b2-96b0-1dc27e8002f3 -H glory2 -G 1 -N 1 -S fs1 -R 1 /dev/vg1_4/seg1_4 2>&1) status (1) output: (/usr/local/ibrix/bin/tuneibfs: Bad magic number in super-block while trying to open /dev/vg1_4/seg1_4 Couldn't find valid filesystem superblock. /usr/local/ibrix/bin/tuneibfs Rpc Version:5 Rpc Ports base=ibrix_ports_base (Using EXT2FS Library version ) [ipfs1_open] reading superblock from blk 1 ) Iad error on host glory2 To work around the problem, recreate the segment on the failing LUN. To identify the LUN associated with the failure, run a command such as the following on the first server in the system: # ibrix_pv -l -h glory2 PV_NAME SIZE(MB) VG_NAME DEVICE RAIDTYPE RAIDHOST RAIDDEVICE d vg1_1 /dev/mxso/dev4a d vg1_2 /dev/mxso/dev5a d vg1_3 /dev/mxso/dev6a d vg1_5 /dev/mxso/dev8a d vg1_4 /dev/mxso/dev7a The Device column identifies the LUN number. In this example, the volume group vg1_4 is created from LUN 7. Recreate the segment and then run the file system creation command again. Troubleshooting file systems 47

5 Using NFS
To allow NFS clients to access an X9000 file system, the file system must be exported. You can export a file system using the GUI or CLI. By default, X9000 file systems and directories follow POSIX semantics and file names are case-sensitive for Linux/NFS users. If you prefer to use Windows semantics for Linux/NFS users, you can make a file system or subdirectory case-insensitive.
Exporting a file system
Exporting a file system makes local directories available for NFS clients to mount. The Fusion Manager manages the table of exported file systems and distributes the information to the /etc/exports files on the file serving nodes. All entries are automatically re-exported to NFS clients and to the file serving node standbys unless you specify otherwise.
On the exporting file serving node, configure the number of NFS server threads based on the expected workload. The default is 8 threads. If the node will service many clients, you can increase the value to 16 or 64. To configure server threads, use the following command to change the default value of RPCNFSDCOUNT in the /etc/sysconfig/nfs file from 8 to 16 or 64:
ibrix_host_tune -C -h HOSTS -o nfsdcount=64
A file system must be mounted before it can be exported.
NOTE: When configuring options for an NFS export, do not use the no_subtree_check option. This option is not compatible with the X9000 software.
Export a file system using the GUI
Use the Add a New File Share Wizard to export a file system. Select File Shares from the Navigator, and click Add on the File Shares panel to open the wizard. On the File Share window, select the file system to be exported, select NFS as the file sharing protocol, and enter the export path.

49 Use the Settings window to specify the clients allowed to access the share. Also select the permission and privilege levels for the clients, and specify whether the export should be available from a backup server. The Advanced Settings window allows you to set NFS options on the share. On the Host Servers window, select the servers that will host the NFS share. By default, the share is hosted by all servers that have mounted the file system. Exporting a file system 49

50 The Summary window shows the configuration of the share. You can go back and revise the configuration if necessary. When you click Finish, the export is created and appears on the File Shares panel. Export a file system using the CLI To export a file system from the CLI, use the ibrix_exportfs command: ibrix_exportfs -f FSNAME -h HOSTNAME -p CLIENT1:PATHNAME1,CLIENT2:PATHNAME2,.. [-o "OPTIONS"] [-b] The options are as follows: Option f FSNAME -h HOSTNAME -p CLIENT1:PATHNAME1, CLIENT2:PATHNAME2,.. -o "OPTIONS" -b Description The file system to be exported. The file serving node containing the file system to be exported. The clients that will access the file system can be a single file serving node, file serving nodes represented by a wildcard, or the world (:/PATHNAME). Note that world access omits the client specification but not the colon (for example, :/usr/ src). The default Linux exportfs mount options are used unless specific options are provided. The standard NFS export options are supported. Options must be enclosed in double quotation marks (for example, -o "ro"). Do not enter an FSID= or sync option; they are provided automatically. By default, the file system is exported to the NFS client s standby. This option excludes the standby for the file serving node from the export. For example, to provide NFS clients *.hp.com with read-only access to file system ifs1 at the directory /usr/src on file serving node s1.hp.com: ibrix_exportfs -f ifs1 -h s1.hp.com -p *.hp.com:/usr/src -o "ro" To provide world read-only access to file system ifs1 located at /usr/src on file serving node s1.hp.com: ibrix_exportfs -f ifs1 -h s1.hp.com -p :/usr/src -o "ro" 50 Using NFS
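The following end-to-end example shows an export created from the CLI and then mounted from a Linux NFS client. The file system, node, directory, and client names are examples only; the options string accepts the standard NFS export options, as noted above.

# On the cluster: export /usr/src from file system ifs1 on node s1.hp.com,
# read-write, to clients matching *.hp.com.
ibrix_exportfs -f ifs1 -h s1.hp.com -p "*.hp.com:/usr/src" -o "rw"

# On a Linux NFS client: create a mountpoint and mount the exported directory.
mkdir -p /mnt/src
mount -t nfs s1.hp.com:/usr/src /mnt/src

# For a persistent mount, a client /etc/fstab entry such as the following could be used:
#   s1.hp.com:/usr/src  /mnt/src  nfs  defaults  0 0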

51 Unexporting a file system A file system should be unexported before it is unmounted. On the GUI, select the file system, select NFS Exports from the lower Navigator, and then select Unexport. On the CLI, use the following command: ibrix_exportfs -f FSNAME -U -h HOSTNAME -p CLIENT:PATHNAME [-b] Using case-insensitive file systems By default, X9000 file systems and directories follow POSIX semantics and file names are case-sensitive for Linux/NFS users. (File names are always case-insensitive for Windows clients.) If you prefer to use Windows semantics for Linux/NFS users, you can make a file system or subdirectory case-insensitive. Doing this prevents a Linux/NFS user from creating two files that differ only in case (such as foo and FOO). If Windows users are accessing the directory, two files with the same name but different case might be confusing, and the Windows users may be able to access only one of the files. CAUTION: Caution is advised when using this feature. It breaks POSIX semantics and can cause problems for Linux utilities and applications. Before enabling the case-insensitive feature, be sure the following requirements are met: The file system or directory must be created under the X9000 File Serving Software 6.0 or later release. The file system must be mounted. Setting case insensitivity for all users (NFS/Linux/Windows) The case-insensitive setting applies to all users of the file system or directory. Select the file system on the GUI, expand Active Tasks in the lower Navigator, and select Case Insensitivity On the Task Summary bottom panel, click New to open the New Case Insensitivity Task dialog box. Select the appropriate action to change case insensitivity. NOTE: When specifying a directory path, the best practice is to change case insensitivity at the root of a CIFS share and to avoid mixed case insensitivity in a given share. Using case-insensitive file systems 51

52 To set case insensitivity from the CLI, use the following command: ibrix_caseinsensitive -s -f FSNAME -c [ON OFF] -p PATH Viewing the current setting for case insensitivity Select Report Current Case Insensitivity Setting on the New Case Insensitivity Task dialog box to view the current setting for a file system or directory. Click Perform Recursively to see the status for all descendent directories of the specified file system or directory. From the CLI, use the following command to determine whether case-insensitivity is set on a file system or directory: ibrix_caseinsensitive -i -f FSNAME -p PATH [-r] The -r option includes all descendent directories of the specified path. Clearing case insensitivity (setting to case sensitive) for all users (NFS/Linux/Windows) Log files When you set the directory tree to be case insensitive OFF, the directory and all recursive subdirectories are again case sensitive, restoring the POSIX semantics for Linux users. A new task is created when you change case insensitivity or query its status recursively. A log file is created for each task and an ID is assigned to the task. The log file is placed in the directory /usr/local/ibrix/log/case_insensitive on the server specified as the coordinating server for the task. Check that server for the log file. NOTE: To verify the coordinating server, select File System > Inactive Tasks. Then select the task ID from the display and select Details. The log file names have the format IDtask.log, such as ID26.log. The following sample log file is for a query reporting case insensitivity: 0:0:26275:Reporting Case Insensitive status for the following directories 1:0:/fs_test1/samename-T: TRUE 52 Using NFS

53 2:0:/fs_test1/samename-T/samename: TRUE 2:0:DONE The next sample log file is for a change in case insensitivity: 0:0:31849:Case Insensitivity is turned ON for the following directories 1:0:/fs_test2/samename-true 2:0:/fs_test2/samename-true/samename 3:0:/fs_test2/samename-true/samename/samename-snap 3:0:DONE The first line of the output contains the PID for the process and reports the action taken. The first column specifies the number of directories visited. The second column specifies the number of errors found. The third column reports either the results of the query or the directories where case insensitivity was turned on or off. Displaying and terminating a case insensitivity task To display a task, use the following command: # ibrix_task -l For example: # ibrix_task -l TASK ID TYPE FILE SYSTEM SUBMITTED BY TASK STATUS IS COMPLETED? EXIT STATUS STARTED AT ENDED AT caseins_237 caseins fs_test1 root from Local Host STARTING No Jun 17, :31:38 To terminate a task, run the following command and specify the task ID: # ibrix_task -k -n <task ID> For example: # ibrix_task -k -n caseins_237 Case insensitivity and operations affecting directories A newly created directory retains the case-insensitive setting of its parent directory. When you use commands and utilities that create a new directory, that directory has the case-insensitive setting of its parent. This situation applies to the following: Windows or Mac copy and paste tar/untar compress/uncompress cp -R rsync Remote replication xcopy robocopy Restoring directories and folders from snapshots The case-insensitive setting of the source directories is not retained on the destination directories. Instead, the setting for the destination file system is applied. However, if you use a command such as the Linux mv command, a Windows drag and drop operation, or a Mac uncompress operation, a new directory is not created, and the affected directory retains its original case-insensitive setting. Using case-insensitive file systems 53
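The following example ties the CLI commands in this chapter together: it enables case insensitivity at the root of a CIFS share, reports the setting recursively, and lists the resulting task. The file system and path names are placeholders.

# Enable case insensitivity at the root of a CIFS share (and, per the behavior
# described above, for the directory tree below it).
ibrix_caseinsensitive -s -f ifs1 -c ON -p /ifs1/cifs_share1

# Report the setting for the share root and all descendent directories.
ibrix_caseinsensitive -i -f ifs1 -p /ifs1/cifs_share1 -r

# The recursive query runs as a task; list tasks to find its ID and status, and check
# the corresponding ID<task>.log file on the coordinating server if needed.
ibrix_task -l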

54 6 Configuring authentication for CIFS, FTP, and HTTP X9000 software supports several services for authenticating users accessing shares on X9000 file systems: Active Directory (supported for CIFS, FTP, and HTTP) Active Directory with LDAP ID mapping as a secondary lookup source (supported for CIFS) LDAP (supported for CIFS) Local Users and Groups (supported for CIFS, FTP, and HTTP) Local Users and Groups can be used with Active Directory or LDAP. NOTE: Active Directory and LDAP cannot be used together. You can configure authentication from the GUI or CLI. When you configure authentication with the GUI, the selected authentication services are configured on all servers. The CLI commands allow you to configure authentication differently on different servers. Using Active Directory with LDAP ID mapping When LDAP ID mapping is a secondary lookup method, the system reads CIFS client UIDs and GIDs from LDAP if it cannot locate the needed ID in an AD entry. The name in LDAP must match the name in AD without respect for case or pre-appended domain. If the user configuration differs in LDAP and Windows AD, the LDAP ID mapping feature uses the AD configuration. For example, the following AD configuration specifies that the primary group for user1 is Domain Users, but in LDAP, the primary group is group1. AD configuration LDAP Configuration user: user1 uid: user1 primary group: Domain Users uidnumber: 1010 UNIX uid: not specified gidnumber: 1001 (group1) UNIX gid: not specified cn: Domain Users gidnumber: 1111 The Linux id command returns the primary group specified in LDAP: user: user1 primary group: group1 (1001) LDAP ID mapping uses AD as the primary source for identifying the primary group and all supplemental groups. If AD does not specify a UNIX GID for a user, LDAP ID mapping looks up the GID for the primary group assigned in AD. In the example, the primary group assigned in AD is Domain Users, and LDAP ID mapping looks up the GID of that group in LDAP. The lookup operation returns: user: user1 primary group: Domain Users (1111) AD does not force the supplied primary group to match the supplied UNIX GID. The supplemental groups assigned in AD do not need to match the members assigned in LDAP. LDAP ID mapping uses the members list assigned in AD and ignores the members list configured in LDAP. 54 Configuring authentication for CIFS, FTP, and HTTP
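When Active Directory with LDAP ID mapping is in effect, it can be useful to confirm which UID and GID a user actually resolves to on a file serving node. The commands below are standard Linux lookups; user1 and group1 are placeholders, and whether getent reflects the mapped entries depends on how name-service lookups are configured on the node, so treat the id output as the primary check.

# Run on a file serving node after authentication is configured.
id user1                 # uid, primary gid, and supplemental groups as resolved on the node
getent passwd user1      # the mapped passwd entry (uid, gid, home directory, shell), if exposed
getent group group1      # the resolved membership of a mapped group, if exposed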

55 Using LDAP as the primary authentication method Requirements for LDAP users and groups X9000 supports only OpenLDAP. If you are using LDAP or LDAP ID mapping for authentication, follow these requirements when setting up users and groups: UID and GID values cannot be set to less than 1. Use the uid schema attribute to add user account names. Use the cn schema attribute to add group account names. UID and GIDs must be stored in UidNumber and GidNumber schema attributes. Configuring LDAP for X9000 software To configure LDAP, complete the following steps: 1. Update a configuration template on the remote LDAP server. 2. Run the configuration script on the remote LDAP server. 3. Configure LDAP authentication on the cluster nodes. Update the template on the remote LDAP server OpenLDAP ships with three configuration templates: customized-schema-template.conf samba-schema-template.conf posix-schema-template.conf Make a copy of the template corresponding to the schema your LDAP server supports, and update the copy with your configuration information. Customized template. Provide values (equivalent names) for all virtual attributes in the configuration. For example: mandatory; virtual; uid; your-schema-equivalent-of-uid optional; virtual; homedirectory; your-schema-equivalent-of-homedirectory Samba template. Enter the required attributes for Samba/POSIX templates. You can use the default values specified in the Map (mandatory) variables and Map (Optional) variables sections of the template. POSIX template. Enter the required attributes for Samba/POSIX templates. Also remove or comment out the following virtual attributes: # mandatory; virtual; SID;sambaSID # mandatory; virtual; PrimaryGroupSID;sambaPrimaryGroupSID # mandatory; virtual; sambagroupmapping;sambagroupmapping Required attributes for Samba/POSIX templates Nonvirtual attribute name VERSION LDAPServerHost LdapConfigurationOU Value Any arbitrary string IP Address string Writable OU name string Description Helps identify the configuration version uploaded. Potentially used for reports, audit history, and troubleshooting. A FQDN or IP. Typically, it is a front-ended switch or an IP LDAP proxy/balancer name/address for multiple backend high-availability LDAP servers. The LDAP OU (organizational unit) to which configuration entries can be written. This OU must exist on the server and must be readable and writable using LDAPWriteDN. Using LDAP as the primary authentication method 55

56 Nonvirtual attribute name LdapWriteDN LDAPWritePassword schematype Value DN name string Unencrypted password string. LDAP encrypts the string on storage. Samba, posix, or user defined schema Description Limited write DN credentials. HP recommends that you do not use cn=manager credentials. Instead, use an account DN with very restricted write permissions to the LdapConfigurationOU and beneath. Password for the LdapWriteDN account. Supported schema for the OpenLDAP server. Run the configuration script on the remote LDAP server The X9000 gen_ldap-lwtools.sh script performs the configuration based on the template you updated (UserConf.conf in the examples). Run the following command to validate your changes: sh /opt/likewise/bin/gen_ldap-lwtools.sh UserConf.conf v If the configuration looks okay, run the command with added security by removing all temporary files: sh /opt/likewise/bin/gen_ldap-lwtools.sh UserConf.conf -rm If you need to troubleshoot the configuration, run the command as follows: sh /opt/likewise/bin/gen_ldap-lwtools.sh UserConf.conf Configure LDAP authentication on the cluster nodes You can configure LDAP authentication from the GUI, as described in the next section, or by using the ibrix_ldapconfig command (see Configuring LDAP (page 65). Configuring authentication from the GUI You can use the Authentication Wizard to perform the initial configuration or to modify it at a later time. Select Cluster Configuration > File Sharing Authentication from the Navigator to open the File Sharing Authentication Settings panel. This panel shows the current authentication configuration on each server. Click Authentication Wizard to start the wizard. On the Configure Options page, select the authentication service to be applied to the servers in the cluster. 56 Configuring authentication for CIFS, FTP, and HTTP

57 The wizard displays the configuration pages corresponding to the option you selected. Active Directory. See Active Directory (page 57). LDAP. See LDAP (page 59). LDAP ID Mapping. See LDAP ID mapping (page 58). Local Groups. See Local Groups (page 61). Local Users. See Local Users (page 62). Share Administrators. See Windows Share Administrators (page 64). Summary. See Summary (page 64). Active Directory Enter your domain name, the Auth Proxy username (an AD domain user with privileges to join the specified domain; typically a Domain Administrator), and the password for that user. These credentials are used only to join the domain and do not persist on the cluster nodes. Optionally, you can enable Linux static user mapping; for more information see Linux static user mapping with Active Directory (page 83). NOTE: When you successfully configure Active Directory authentication, the machine is part of the domain until you remove it from the domain, either with the ibrix_auth -n command or with Windows tools. Because Active Directory authentication is a one-time event, it is not necessary to update authentication if you change the proxy user information. Configuring authentication from the GUI 57

58 If you want to use LDAP ID mapping as a secondary lookup for Active Directory, select Enabled with LDAP ID Mapping and AD in the Linux Static User Mapping field. When you click Next, the LDAP ID Mapping dialog box appears. LDAP ID mapping If the system cannot locate a UID/GID in Active Directory, it searches for the UID/GID in LDAP. On the LDAP ID Mapping dialog box, specify the appropriate search parameters. 58 Configuring authentication for CIFS, FTP, and HTTP

59 Enter the following information on the dialog box: LDAP Server Host Port Base of Search Bind DN Password Max Entries Max Wait Time LDAP Scope Namesearch Case Sensitivity Enter the server name or IP address of the LDAP server host. Enter the LDAP server port (TCP port 389 for unencrypted or TLS encrypted; 636 for SSL encrypted). Enter the LDAP base for searches. This is normally the root suffix of the directory, but you can provide a base lower down the tree for business rules enforcement, ACLs, or performance reasons. For example, ou=people,cd=enx,dc=net. Enter the LDAP user account used to authenticate to the LDAP server to read data. This account must have privileges to read the entire directory. Write credentials are not required. For example, scn=hpx9000-readonly-user,dc=enxt,dc=net. Enter the password for the LDAP user account. Enter the maximum number of entries to return from the search (the default is 10). Enter 0 (zero) for no limit. Enter the local maximum search time-out value in seconds. This value determines how long the client will wait for search results. Select the level of entries to search: base: search the base level entry only sub: search the base level entry and all entries in sub-levels below the base entry one: search all entries in the first level below the base entry, excluding the base entry If LDAP searches should be case sensitive, check this box. LDAP Enter the server name or IP address of the LDAP server host and the password for the LDAP user account. NOTE: LDAP cannot be used with Active Directory. Configuring authentication from the GUI 59

60 Enter the following information in the remaining fields: Bind DN Write OU Base of Search NetBIOS Name Enter the LDAP user account used to authenticate to the LDAP server to read data, such as cn=hpx9000-readonly-user,dc=enxt,dc=net. This account must have privileges to read the entire directory. Write credentials are not required. Enter the OU (organizational unit) on the LDAP server to which configuration entries can be written. This OU must be pre-provisioned on the remote LDAP server. The LDAPBindDN credentials must be able to read (but not write) from the LDAPWriteOU. For example, ou=x9000config,ou=configuration,dc=enxt,dc=net. This is normally the root suffix of the directory, but you can provide a base lower down the tree for business rules enforcement, ACLs, or performance reasons. For example, ou=people,cd=enx,dc=net. Enter any string that identifies the X9000 host, such as X9000. If your LDAP configuration requires a certificate for secure access, click Edit to open the LDAP dialog box. You can enter a TLS or SSL certificate. When no certificate is used, the Enable SSL field shows Neither TLS or SSL. 60 Configuring authentication for CIFS, FTP, and HTTP

61 NOTE: If LDAP is the primary authentication service, Windows clients such as Explorer or MMC plugins cannot be used to add new users. Local Groups Specify local groups allowed to access shares. On the Local Groups page, enter the group name and, optionally, the GID and RID. If you do not assign a GID and RID, they are generated automatically. Click Add to add the group to the list of local groups. Repeat this process to add other local groups. When naming local groups, you should be aware of the following: Group names must be unique. The new name cannot already be used by another user or group. The following names cannot be used: administrator, guest, root. Configuring authentication from the GUI 61

62 NOTE: If Local Users and Groups is the primary authentication service, Windows clients such as Explorer or MMC plugins cannot be used to add new users. Local Users Specify local users allowed to access shares. On the Local Users page, enter a user name and password. Click Add to add the user to the Local Users list. When naming local users, you should be aware of the following: User names must be unique. The new name cannot already be used by another user or group. The following names cannot be used: administrator, guest, root. 62 Configuring authentication for CIFS, FTP, and HTTP

63 To provide account information for the user, click Advanced. The default home directory is /home/ <username> and the default shell program is /bin/false. Configuring authentication from the GUI 63

64 NOTE: If Local Users and Groups is the primary authentication service, Windows clients such as Explorer or MMC plugins cannot be used to add new users. Windows Share Administrators If you will be using the Windows Share Management MMC plug-in to manage CIFS shares, enter your share administrators on this page. You can skip this page if you will be managing shares entirely from the X9000 Management Console. To add an Active Directory or LDAP share administrator, enter the administrator name (such as domain\user1 or domain\group1) and click Add to add the administrator to the Windows Share Administrators list. To add an existing Local User as a share administrator, select the user and click Add. Summary The Summary page shows the authentication configuration. You can go back and revise the configuration if necessary. When you click Finish, authentication is configured, and the details appear on the File Sharing Authentication panel. Viewing or changing authentication settings Expand File Sharing Authentication in the lower Navigator, and then select an authentication service to display the current configuration for that service. On each panel, you can start the Authentication Wizard and modify the configuration if necessary. 64 Configuring authentication for CIFS, FTP, and HTTP

You cannot change the UID or RID for a Local User account. If it is necessary to change a UID or RID, first delete the account and then recreate it with the new UID or RID. The Local Users and Local Groups panels allow you to delete the selected user or group.
Configuring authentication from the CLI
You can configure Active Directory, LDAP, LDAP ID mapping, or Local Users and Groups.
Configuring Active Directory
To configure Active Directory authentication, use the following command:
ibrix_auth -n DOMAIN_NAME -A AUTH_PROXY_USER_NAME@domain_name [-P AUTH_PROXY_PASSWORD] [-S SETTINGLIST] [-h HOSTLIST]
RFC2307 is the protocol that enables Linux static user mapping with Active Directory. To enable RFC2307 support, use the following command:
ibrix_cifsconfig -t [-S SETTINGLIST] [-h HOSTLIST]
Enable RFC2307 in the SETTINGLIST as follows:
rfc2307_support=rfc2307
For example:
ibrix_cifsconfig -t -S "rfc2307_support=rfc2307"
To disable RFC2307, set rfc2307_support to unprovisioned. For example:
ibrix_cifsconfig -t -S "rfc2307_support=unprovisioned"
IMPORTANT: After making configuration changes with the ibrix_cifsconfig -t -S command, use the following command to restart the CIFS services on all nodes affected by the change:
ibrix_server -s -t cifs -c restart [-h SERVERLIST]
Clients will experience a temporary interruption in service during the restart.
Configuring LDAP
Use the ibrix_ldapconfig command to configure LDAP as the authentication service for CIFS shares.
IMPORTANT: Before using ibrix_ldapconfig to configure LDAP on the cluster nodes, you must configure the remote LDAP server. For more information, see Configuring LDAP for X9000 software (page 55).
Add an LDAP configuration and enable LDAP:

66 ibrix_ldapconfig -a -h LDAPSERVERHOST [-P LDAPSERVERPORT] -b LDAPBINDDN -p LDAPBINDDNPASSWORD -w LDAPWRITEOU -B LDAPBASEOFSEARCH -n NETBIOS -E ENABLESSL [-f CERTFILEPATH] [-c CERTFILECONTENTS] The options are: -h LDAPSERVERHOST -P LDAPSERVERPORT -b LDAPBINDDN -p LDAPBINDDNPASSWORD -w LDAPWRITEOU -B LDAPBASEOFSEARCH -n NETBIOS -E ENABLESSL -f CERTFILEPATH -c CERTFILECONTENTS The LDAP server host (server name or IP address). The LDAP server port. The LDAP bind Distinguished Name. For example: cn=hpx9000-readonly-user,dc=enxt,dc=net. The LDAP bind password. The LDAP write Organizational Unit, or OU (for example, ou=x9000config,,ou=configuration,dc=enxt,dc=net). The LDAP base for searches (for example, ou=people,cd=enx,dc=net). The NetBIOS name, such as X9000. The type of certificate required. Enter 0 for no certificate, 1 for TLS, or 2 for SSL. The path to the TLS or SSL certificate file, such as /usr/local/ibrix/ldap/ key.pem. The contents of the certificate file. Copy the contents and paste them between quotes. Modify an LDAP configuration: ibrix_ldapconfig -m -h LDAPSERVERHOST [-P LDAPSERVERPORT] [e D] [-b LDAPBINDDN] [-p LDAPBINDDNPASSWORD] [-w LDAPWRITEOU] [-B LDAPBASEOFSEARCH] [-n NETBIOS] [-E ENABLESSL] [-f CERTFILEPATH] [-c CERTFILECONTENTS] View the LDAP configuration: ibrix_ldapconfig -i Delete LDAP settings for an LDAP server host: ibrix_ldapconfig -d -h LDAPSERVERHOST Enable LDAP: ibrix_ldapconfig -e -h LDAPSERVERHOST Disable LDAP: ibrix_ldapconfig -D -h LDAPSERVERHOST Configuring LDAP ID mapping Use the ibrix_ldapidmapping command to configure LDAP ID mapping as a secondary lookup source for Active Directory. LDAP ID mapping can be used only for CIFS shares. Add an LDAP ID mapping: ibrix_ldapidmapping -a -h LDAPSERVERHOST -B LDAPBASEOFSEARCH [-P LDAPSERVERPORT] [-b LDAPBINDDN] [-p LDAPBINDDNPASSWORD] [-m MAXWAITTIME] [-M MAXENTRIES] [-n] [-s] [-o] [-u] This command automatically enables LDAP RFC 2307 ID Mapping. The options are: -h LDAPSERVERHOST -B LDAPBASEOFSEARCH -P LDAPSERVERPORT -b LDAPBINDDN The LDAP server host (server name or IP address). The LDAP base for searches (for example, ou=people,cd=enx,dc=net). The LDAP server port (TCP port 389). The LDAP bind Distinguished Name (the default is anonymous). For example: cn=hpx9000-readonly-user,dc=enxt,dc=net. 66 Configuring authentication for CIFS, FTP, and HTTP

67 -p LDAPBINDDNPASSWORD -m MAXWAITTIME -M MAXENTRIES -n -s -o -u The LDAP bind password. The maximum amount of time to allow the search to run. The maximum number of entries (the default is 10). Case sensitivity for name searches (the default is false, or case-insensitive). Search the LDAP scope base (search the base level entry only). LDAP scope one (search all entries in the first level below the base entry, excluding the base entry). LDAP scope sub (search the base-level entries and all entries below the base level). Display information for LDAP ID mapping: ibrix_ldapidmapping -i Enable an existing LDAP ID mapping: ibrix_ldapidmapping -e -h LDAPSERVERHOST Disable an existing LDAP ID mapping: ibrix_ldapidmapping -d -h LDAPSERVERHOST Configuring Local Users and Groups authentication Use ibrix_auth to configure Local Users authentication. Use ibrix_localusers and ibrix_localgroups to manage user and group accounts. Configure Local Users authentication: ibrix_auth -N [-h HOSTLIST] Be sure to create a local user account for each user that will be accessing CIFS, FTP, or HTTP shares, and create at least one local group account for the users. The account information is stored internally in the cluster. Add a Local User account: ibrix_localusers -a -u USERNAME -g DEFAULTGROUP -p PASSWORD [-h HOMEDIR] [-s SHELL] [-i USERINFO] [-U USERID] [-S RID] [-G GROUPLIST] Modify a Local User account: ibrix_localusers -m -u USERNAME [-g DEFAULTGROUP] [-p PASSWORD] [-h HOMEDIR] [-s SHELL] [-i USERINFO] [-G GROUPLIST] View information for all Local User accounts: ibrix_localusers -L View information for a specific Local User account: ibrix_localusers -l -g USERNAME Delete a Local User account: ibrix_localusers -d -u USERNAME Add a Local Group account: ibrix_localgroups -a -g GROUPNAME [-G GROUPID] [-S RID] Modify a Local Group account: ibrix_localgroups -m -g GROUPNAME [-G GROUPID] [-S RID] View information about all Local Group accounts: ibrix_localgroups -L View information for a specific Local Group account: ibrix_localgroups -l -g GROUPNAME Configuring authentication from the CLI 67

68 Delete a Local Group account: ibrix_localgroups -d -g GROUPNAME 68 Configuring authentication for CIFS, FTP, and HTTP
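A minimal sketch of a Local Users setup with these commands follows. The host names, group, user, and password are hypothetical examples, not required values; adapt them to your environment.
# Enable Local Users authentication on two file serving nodes (hypothetical host names)
ibrix_auth -N -h node1,node2
# Create a local group, then a local user that belongs to it
ibrix_localgroups -a -g eng
ibrix_localusers -a -u jdoe -g eng -p Secret123
# Verify the new accounts
ibrix_localgroups -L
ibrix_localusers -l -u jdoe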

69 7 Using CIFS The IBRIX CIFS server implementation allows you to create file shares for data stored on the cluster. The CIFS server provides a true Windows experience for Windows clients. A user accessing a file share on an X9000 system will see the same behavior as on a Windows server. IMPORTANT: CIFS and X9000 Windows clients cannot be used together because of incompatible AD user to UID mapping. You can use either CIFS or X9000 Windows clients, but not both at the same time. IMPORTANT: Before configuring CIFS, select an authentication method. See Configuring authentication for CIFS, FTP, and HTTP (page 54) for more information. Configuring file serving nodes for CIFS To enable file serving nodes to provide CIFS services, you will need to configure the resolv.conf file. On each node, the /etc/resolv.conf file must include a DNS server that can resolve SRV records for your domain. For example: # cat /etc/resolv.conf search mycompany.com nameserver To verify that a file serving node can resolve SRV records for your AD domain, run the Linux dig command. (In the following example, the Active Directory domain name is mydomain.com.) % dig SRV _ldap._tcp.mydomain.com In the output, verify that the ANSWER SECTION contains a line with the name of a domain controller in the Active Directory domain. Following is some sample output: ; <<>> DiG P1 <<>> SRV _ldap._tcp.mydomain.com ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 2 ;; QUESTION SECTION: ;_ldap._tcp.mydomain.com. IN SRV ;; ANSWER SECTION: _ldap._tcp.mydomain.com. 600 IN SRV adctrlr.mydomain.com. ;; ADDITIONAL SECTION: adctrlr.mydomain.com IN A ;; Query time: 0 msec ;; SERVER: #53( ) ;; WHEN: Tue Mar 16 09:56: ;; MSG SIZE rcvd: 113 For more information, see the Linux resolv.conf(5) man page. Starting or stopping the CIFS service and viewing CIFS statistics IMPORTANT: You will need to start the CIFS service initially on the file serving nodes. Subsequently, the service is started automatically when a node is rebooted. Use the CIFS panel on the GUI to start, stop, or restart the CIFS service on a particular server, or to view CIFS activity statistics for the server. Select Servers from the Navigator and then select the appropriate server. Select CIFS in the lower Navigator to display the CIFS panel, which shows CIFS activity statistics on the server. You can start, stop, or restart the CIFS service by clicking the appropriate button. Configuring file serving nodes for CIFS 69

NOTE: Click CIFS Settings to configure SMB signing on this server. See Configuring SMB signing (page 75) for more information.
To start, stop, or restart the CIFS service from the CLI, use the following command: ibrix_server -s -t cifs -c {start|stop|restart}
Monitoring CIFS services
The ibrix_cifsmonitor command configures monitoring for the following CIFS services: lwreg, dcerpc, eventlog, lsass, lwio, netlogon, srvsvc. If the monitor finds that a service is not running, it attempts to restart the service. If the service cannot be restarted, that particular service is not monitored. The command can be used for the following tasks.
Start the CIFS monitoring daemon and enable monitoring: ibrix_cifsmonitor -m [-h HOSTLIST]
Display the health status of the CIFS services: ibrix_cifsmonitor -l
The command output reports status as follows:
Up: All monitored CIFS services are up and running.
Degraded: The lwio service is running but one or more of the other services are down.
Down: The lwio service is down and one or more of the other services are down.
Not Monitored: Monitoring is disabled.
N/A: The active Fusion Manager could not communicate with other file serving nodes in the cluster.
Disable monitoring and stop the CIFS monitoring daemon: ibrix_cifsmonitor -u [-h HOSTLIST]
Restart CIFS service monitoring: ibrix_cifsmonitor -c [-h HOSTLIST]
CIFS shares
Windows clients access file systems through CIFS shares. You can use the X9000 GUI or CLI to manage shares, or you can use the Microsoft Management Console interface. The CIFS service must be running when you add shares.
IMPORTANT: When working with CIFS shares, you should be aware of the following:
The permissions on the directory exporting a CIFS share govern the access rights that are given to the Everyone user as well as to the owner and group of the share. Consequently, the Everyone user may have more access rights than necessary. The administrator should set ACLs on the CIFS share to ensure that users have only the appropriate access rights. Alternatively, permissions can be set more restrictively on the directory exporting the CIFS share.
When the cluster and Windows clients are not joined in a domain, local users are not visible when you attempt to add ACLs on files and folders in a CIFS share.
A directory tree on a CIFS share cannot be copied if there are more than 50 ACLs on the share. Also, because of technical constraints in the CIFS service, you cannot create subfolders in a directory on a CIFS share having more than 50 ACLs.
When configuring a CIFS share, you can specify IP addresses or ranges that should be allowed or denied access to the share. However, if your network includes packet filters, a NAT gateway, or routers, this feature cannot be used because the client IP addresses are modified while in transit.
Configuring CIFS shares with the GUI
Use the Add New File Share Wizard to configure CIFS shares. You can then view or modify the configuration as necessary. On the GUI, select File Shares from the Navigator to open the File Shares panel, and then click Add to start the Add New File Share Wizard. On the File Share page, select CIFS as the File Sharing Protocol. Select the file system, which must be mounted, and enter a name, directory path, and description for the share. Note the following:
Do not include any of the following special characters in a share name. If the name contains any of these special characters, the share might not be set up properly on all nodes in the cluster. ' & ( [ { $ `, / \
Do not include any of the following special characters in the share description. If a description contains any of these special characters, the description might not propagate correctly to all nodes in the cluster. * % + & `

72 72 Using CIFS On the Permissions page, specify permissions for users and groups allowed to access the share.

Click Add to open the New User/Group Permission Entry dialog box, where you can configure permissions for a specific user or group. The completed entries appear in the User/Group Entries list on the Permissions page. On the Client Filtering page, specify IP addresses or ranges that should be allowed or denied access to the share.
NOTE: This feature cannot be used if your network includes packet filters, a NAT gateway, or routers.
Click Add to open the New Client IP Address Entry dialog box, where you can allow or deny access to a specific IP address or a range of addresses. Enter a single IP address, or include a bitmask to specify entire subnets of IP addresses, such as /25. The valid range for the

74 bitmask is The completed entry appears on the Client IP Filters list on the Client Filtering page. On the Advanced Settings page, enable or disable Access Based Enumeration and specify the default create mode for files and directories created in the share. The Access Based Enumeration option allows users to see only the files and folders to which they have access on the file share. On the Host Servers page, select the servers that will host the share. 74 Using CIFS
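After the wizard completes, you may want to confirm that the new share is visible on the servers you selected. A minimal check, using the ibrix_cifs command described later in this chapter (the host names are hypothetical), is:
# List CIFS share information on the selected file serving nodes
ibrix_cifs -i -h node1,node2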

Configuring SMB signing
The SMB signing feature specifies whether clients must support SMB signing to access CIFS shares. You can apply the setting to all servers, or to a specific server. To apply the same setting to all servers, select File Shares from the Navigator and click Settings on the File Shares panel. To apply a setting to a specific server, select that server on the GUI, select CIFS from the lower Navigator, and click Settings. The dialog is the same for both selection methods.

When configuring SMB signing, note the following: SMB2 is always enabled. Use the Required check box to specify whether SMB signing (with either SMB1 or SMB2) is required. The Disabled check box applies only to SMB1. Use this check box to enable or disable SMB signing with SMB1. You should also be aware of the following:
The File Share Settings dialog box does not display whether SMB signing is currently enabled or disabled. Use the following command to view the current setting for SMB signing: ibrix_cifsconfig -i
SMB signing must not be required to support connections from Mac OS X 10.5 and 10.6 clients.
It is possible to configure SMB signing differently on individual servers. Backup CIFS servers should have the same settings to ensure that clients can connect after a failover.
The SMB signing settings specified here are not affected by Windows domain group policy settings when joined to a Windows domain.
Configuring SMB signing from the CLI
To configure SMB signing from the command line, use the following command: ibrix_cifsconfig -t -S SETTINGLIST
You can specify the following values in the SETTINGLIST:
smb signing enabled
smb signing required
Use commas to separate the settings, and enclose the list in quotation marks. For example, the following command sets SMB signing to enabled and required: ibrix_cifsconfig -t -S "smb signing enabled=1,smb signing required=1"
To disable SMB signing, enter settingname= with no value. For example: ibrix_cifsconfig -t -S "smb signing enabled=,smb signing required="
IMPORTANT: After making configuration changes with the ibrix_cifsconfig -t -S command, use the following command to restart the CIFS services on all nodes affected by the change: ibrix_server -s -t cifs -c restart [-h SERVERLIST] Clients will experience a temporary interruption in service during the restart.
Managing CIFS shares with the GUI
To view existing CIFS shares on the GUI, select File Shares > CIFS from the Navigator. The CIFS Shares panel shows the file system being shared, the hosts (or servers) providing access, the name of the share, the export path, and the options applied to the share.
NOTE: When externally managed appears in the option list for a share, that share is being managed with the Microsoft Management Console interface. The X9000 Management Console GUI or CLI cannot be used to change the permissions for the share.

On the CIFS Shares panel, click Add or Modify to open the File Shares wizard, where you can create a new share or modify the selected share. Click Delete to remove the selected share. Click CIFS Settings to configure global file share settings; see Configuring SMB signing (page 75) for more information. You can also view CIFS shares for a specific file system. Select that file system on the GUI, and then select CIFS Shares from the lower Navigator.
Configuring and managing CIFS shares with the CLI
Adding, modifying, or deleting shares
Use the ibrix_cifs command to add, modify, or delete shares. For detailed information, see the HP IBRIX X9000 Network Storage System CLI Reference Guide.
NOTE: Be sure to use the ibrix_cifs command located in <installdirectory>/bin. The ibrix_cifs command located in /usr/local/bin/init is used internally by X9000 Software and should not be run directly.
Add a share: ibrix_cifs -a -f FSNAME -s SHARENAME -p SHAREPATH [-D SHAREDESCRIPTION] [-S SETTINGLIST] [-A ALLOWCLIENTIPSLIST] [-E DENYCLIENTIPSLIST] [-F FILEMODE] [-M DIRMODE] [-h HOSTLIST]

Use the -A ALLOWCLIENTIPSLIST or -E DENYCLIENTIPSLIST options to list client IP addresses allowed or denied access to the share. Use commas to separate the IP addresses, and enclose the list in quotes. You can include an optional bitmask to specify entire subnets of IP addresses (for example, ibrix_cifs -A , /16 ). The default is "", which allows (or denies) all IP addresses. The -F FILEMODE and -M DIRMODE options specify the default mode for newly created files or directories, in the same manner as the Linux chmod command. The range of values is The default is To see the valid settings for the -S option, use the following command: ibrix_cifs -L
View share information: ibrix_cifs -i [-h HOSTLIST]
Modify a share: ibrix_cifs -m -s SHARENAME [-D SHAREDESCRIPTION] [-S SETTINGLIST] [-A ALLOWCLIENTIPSLIST] [-E DENYCLIENTIPSLIST] [-F FILEMODE] [-M DIRMODE] [-h HOSTLIST]
Delete a share: ibrix_cifs -d -s SHARENAME [-h HOSTLIST]
Managing user and group permissions
Use the ibrix_cifsperms command to manage share-level permissions for users and groups.
Add a user or group to a share and assign share-level permissions: ibrix_cifsperms -a -s SHARENAME -u USERNAME -t TYPE -p PERMISSION [-h HOSTLIST]
For -t TYPE, specify either allow or deny. For -p PERMISSION, specify one of the following: fullcontrol, change, or read.
For example, the following command gives everyone read permission on share1: ibrix_cifsperms -a -s share1 -u Everyone -t allow -p read
Modify share-level permissions for a user or group: ibrix_cifsperms -m -s SHARENAME -u USERNAME -t TYPE -p PERMISSION [-h HOSTLIST]
Delete share-level permissions for a user or group: ibrix_cifsperms -d -s SHARENAME [-u USERNAME] [-t TYPE] [-h HOSTLIST]
Display share-level permissions: ibrix_cifsperms -i -s SHARENAME [-t TYPE] [-h HOSTLIST]
Managing CIFS shares with Microsoft Management Console
The Microsoft Management Console (MMC) can be used to add, view, or delete CIFS shares. Administrators running MMC must have X9000 Software share management privileges.

79 NOTE: To use MMC to manage CIFS shares, you must be authenticated as a user with share modification permissions. NOTE: If you will be adding users with the MMC, the primary authentication method must be Active Directory. NOTE: The permissions for CIFS shares managed with the MMC cannot be changed with the X9000 Management Console GUI or CLI. Connecting to cluster nodes When connecting to cluster nodes, use the procedure corresponding to the Windows operating system on your machine. Windows XP, Windows 2003 R2: Complete the following steps: 1. Open the Start menu, select Run, and specify mmc as the program to open. 2. On the Console Root window, select File > Add/Remove Snap-in. 3. On the Add/Remove Snap-in window, click Add. 4. On the Add Standalone Snap-in window, select Shared Folders and click Add. 5. On the Shared Folders window, select Another computer as the computer to be managed, enter or browse to the computer name, and click Finish. 6. Click Close > OK to exit the dialogs. 7. Expand Shared Folders (\\<address>). 8. Select Shares and manage the shares as needed. CIFS shares 79

80 Windows Vista, Windows 2008, Windows 7: Complete the following steps: 1. Open the Start menu and enter mmc in the Start Search box. You can also enter mmc in a DOS cmd window. 2. On the User Account Control window, click Continue. 3. On the Console 1 window, select File > Add/Remove Snap-in. 4. On the Add or Remove Snap-ins window, select Shared Folders and click Add. 5. On the Shared Folders window, select Another computer as the computer to be managed, enter or browse to the computer name, and click Finish. 6. Click OK to exit the Add or Remove Snap-ins window. 7. Expand Shared Folders (\\<address>). 8. Select Shares and manage the shares as needed. Saving MMC settings You can save your MMC settings to use when managing shares on this server in later sessions. Complete these steps: 1. On the MMC, select File > Save As. 2. Enter a name for the file. The name must have the suffix.msc. 3. Select Desktop as the location to save the file, and click Save. 4. Select File > Exit. Granting share management privileges Use the following command to grant administrators X9000 Software share management privileges. The users you specify must already exist. Be sure to enclose the user names in square brackets. ibrix_auth -t -S 'share admins=[domainname\username,domainname\username]' 80 Using CIFS

The following example gives share management privileges to a single user: ibrix_auth -t -S 'share admins=[domain\user1]' If you specify multiple administrators, use commas to separate the users. For example: ibrix_auth -t -S 'share admins=[domain\user1, domain\user2, domain\user3]'
Adding CIFS shares
CIFS shares can be added with the MMC, using the share management plug-in. When adding shares, you should be aware of the following:
The share path must include the X9000 file system name. For example, if the file system is named data1, you could specify C:\data1\folder1.
NOTE: The Browse button cannot be used to locate the file system.
The directory to be shared will be created if it does not already exist. The permissions on the shared directory will be set to 777. It is not possible to change the permissions on the share.
Do not include any of the following special characters in a share name. If the name contains any of these special characters, the share might not be set up properly on all nodes in the cluster. ' & ( [ { $ `, / \
Do not include any of the following special characters in the share description. If a description contains any of these special characters, the description might not propagate correctly to all nodes in the cluster. * % + & `
The management console GUI or CLI cannot be used to alter the permissions for shares created or managed with Windows Share Management. The permissions for these shares are marked as externally managed on the GUI and CLI.
Open the MMC with the Shared Folders snap-in that you created earlier. On the Select Computer dialog box, enter the IP address of a server that will host the share. The Computer Management window shows the shares currently available from the server.

82 To add a new share, select Shares > New Share and run the Create A Shared Folder Wizard. On the Folder Path panel, enter the path to the share, being sure to include the file system name. When you complete the wizard, the new share appears on the Computer Management window. 82 Using CIFS

83 Deleting CIFS shares To delete a CIFS share, select the share on the Computer Management window, right-click, and select Delete. Linux static user mapping with Active Directory Linux static user mapping (also called UID/GID mapping or RFC2307 support) allows you to use LDAP as a Network Information Service. Linux static user mapping must be enabled when you configure Active Directory for user authentication (see Configuring authentication for CIFS, FTP, and HTTP (page 54)). If you configure LDAP ID mapping as the secondary authentication service, authentication uses the IDs assigned in AD if they exist. If an ID is not found in an AD entry, authentication looks in LDAP for a user or group of the same name and uses the corresponding ID assigned in LDAP. The primary group and all supplemental groups are still determined by the AD configuration. You can also assign UIDs, GIDs, and other POSIX attributes such as the home directory, primary group and shell to users and groups in Active Directory. To add static entries to Active Directory, complete these steps: Configure Active Directory. Assign POSIX attributes to users and groups in Active Directory. NOTE: Mapping UID 0 and GID 0 to any AD user or group is not compatible with CIFS static mapping. Configuring Active Directory Your Windows Domain Controller machines must be running Windows Server 2003 R2 or Windows Server 2008 R2. Configure the Active Directory domain as follows: Install Identity Management for UNIX. Activate the Active Directory Schema MMC snap-in. Add the uidnumber and gidnumber attributes to the partial-attribute-set of the AD global catalog. You can perform these procedures from any domain controller. However, the account used to add attributes to the partial-attribute-set must be a member of the Schema Admins group. Linux static user mapping with Active Directory 83

84 Installing Identity Management for UNIX To install Identity Management for UNIX on a domain controller running Windows Server 2003 R2, see the following Microsoft TechNet Article: To install Identity Management for UNIX on a domain controller running Windows Server 2008 R2, see the following Microsoft TechNet article: Activating the Active Directory Schema MMC snap-in Use the Active Directory Schema MMC snap-in to add the attributes. To activate the snap-in, complete the following steps: 1. Click Start, click Run, type mmc, and then click OK. 2. On the MMC Console menu, click Add/Remove Snap-in. 3. Click Add, and then click Active Directory Schema. 4. Click Add, click Close, and then click OK. Adding uidnumber and gidnumber attributes to the partial-attribute-set To make modifications using the Active Directory Schema MMC snap-in, complete these steps: 1. Click the Attributes folder in the snap-in. 2. In the right panel, scroll to the desired attribute, right-click the attribute, and then click Properties. Select Replicate this attribute to the Global Catalog, and click OK. The following dialog box shows the properties for the uidnumber attribute: The next dialog box shows the properties for the gidnumber attribute. 84 Using CIFS

85 The following article provides more information about modifying attributes in the Active Directory global catalog: Assigning attributes To set POSIX attributes for users and groups, start the Active Directory Users and Computers GUI on the Domain Controller. Open the Administrator Properties dialog box, and go to the UNIX Attributes tab. For users, you can set the UID, login shell, home directory, and primary group. For groups, set the GID. Linux static user mapping with Active Directory 85

86 Consolidating SMB servers with common share names 86 Using CIFS If your SMB servers previously used the same share names, you can consolidate the servers without changing the share name requested on the client side. For example, you might have three SMB servers, SRV1, SRV2, and SRV3, that each have a share named DATA. SRV3 points to a shared drive that has the same path as \\SRV1\DATA; however, users accessing SRV3 have different permissions on the share. To consolidate the three servers, we will take these steps: 1. Assign Vhost names SRV1, SRV2, and SRV3. 2. Create virtual interfaces (VIF) for the IP addresses used by the servers. For example, Vhost SRV1 has VIF and Vhost SRV2 has VIF Map the old share names to new share names. For example, map \\SRV1\DATA to new share srv1-data, map \\SRV2\DATA to new share srv2-data, and map \\SRV3\DATA to srv3-data. 4. Create the new shares on the cluster storage and assign each share the appropriate path. For example, assign srv1-data to /srv1/data, and assign srv2-data to /srv2/data. Because SRV3 originally pointed to the same share as SRV1, we will assign the share srv3-data the same path as srv1-data, but set the permissions differently. 5. Optionally, create a share having the original share name, DATA in our example. Assign a path such as /ERROR/DATA and place a file in it named SHARE_MAP_FAILED. Doing this ensures that if a user configuration error occurs or the map fails, clients will not gain access to the wrong shares. The file name notifies the user that their access has failed. When this configuration is in place, a client request to access share \\srv1\data will be translated to share srv1-data at /srv1/data on the file system. Client requests for \\srv3\data will also be translated to /srv1/data, but the clients will have different permissions. The client requests for \\srv2\data will be translated to share srv2-data at /srv2/data.
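As an illustration of step 3, these mappings could be written as entries in the /etc/likewise/vhostmap file, whose format is described in the next section. The VIF addresses shown are hypothetical placeholders; substitute the addresses assigned to your Vhosts.
192.168.10.1  DATA  srv1-data
192.168.10.2  DATA  srv2-data
192.168.10.3  DATA  srv3-data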

Client utilities such as net use will report the requested share name, not the new share name.
Mapping old share names to new share names
Mappings are defined in the /etc/likewise/vhostmap file. Use a text editor to create and update the file. Each line in the file contains a mapping in the following format: VIF (or VhostName) oldsharename newsharename If you enter a VhostName, it will be changed to a VIF internally. The oldsharename is the user-requested share name from the client that needs to be translated into a unique name. This unique name (the newsharename) is used when establishing a mount point for the share. Following are some entries from a vhostmap file: salesd q1salesd salesd q2salesd salessrv salesq q3salesd
When editing the /etc/likewise/vhostmap file, note the following:
All VIF oldsharename pairs must be unique.
The following characters cannot be used in a share name: / \ [ ] < > + : ;,? * =
Share names are case insensitive, and must be unique with respect to case.
The oldsharename and newsharename do not need to exist when creating the file; however, they must exist for a connection to be established to the share. If a client specifies a share name that is not in the file, the share name will not be translated.
Care should be used when assigning share names longer than 12 characters. Some clients impose a limit of 12 characters for a share name.
Verify that the IP addresses specified in the file are legal and that Vhost names can be resolved to an IP address. IP addresses must be in IPv4 format, which limits the addresses to 15 characters.
IMPORTANT: When you update the vhostmap file, the changes take effect a few minutes after the map is saved. If a client attempts a connection before the changes are in effect, the previous map settings will be used. To avoid any delays, make your changes to the file when the CIFS service is down. After creating or updating the vhostmap file, copy the file manually to the other servers in the cluster.
CIFS clients
CIFS clients access shares on the X9000 Software cluster in the same way they access shares on a Windows server.
Viewing quota information
When user or group quotas are set on a file system exported as a CIFS share, users accessing the share can see the quota information on the Quotas tab of the Properties dialog box. Users cannot modify quota settings from the client end.

CIFS users cannot view directory tree quotas.
Differences in locking behavior
When CIFS clients access a share from different servers, as in the X9000 Software environment, the behavior of byte-range locks differs from the standard Windows behavior, where clients access a share from the same server. You should be aware of the following: Zero-length byte-range locks acquired on one file serving node are not observed on other file serving nodes. Byte-range locks acquired on one file serving node are not enforced as mandatory on other file serving nodes. If a shared byte-range lock is acquired on a file opened with write-only access on one file serving node, that byte-range lock will not be observed on other file serving nodes. ("Write-only access" means the file was opened with GENERIC_WRITE but not GENERIC_READ access.) If an exclusive byte-range lock is acquired on a file opened with read-only access on one file serving node, that byte-range lock will not be observed on other file serving nodes. ("Read-only access" means the file was opened with GENERIC_READ but not GENERIC_WRITE access.)
CIFS shadow copy
Users who have accidentally lost or changed a file can use the CIFS shadow copy feature to retrieve or copy the previous version of the file from a file system snapshot. X9000 software supports CIFS shadow copy operations as follows.
Access Control Lists (ACLs)
X9000 CIFS shadow copy behaves in the same manner as Windows shadow copy with respect to ACL restoration. When a user restores a deleted file or folder using CIFS shadow copy, the ACLs applied on the individual files or folders are not restored. Instead, the files and folders inherit the permissions from the root of the share or from the parent directory where they were restored. When a user restores an existing file or folder with CIFS shadow copy, the ACLs applied on the individual file or folder are not restored. The ACLs applied on the individual file or folder remain as they were before the restore.

89 Restore operations If a file has been deleted from a directory that has Previous Versions, the user can recover a previous version of the file by performing a Restore of the parent directory. However, the Properties of the restored file will no longer list those Previous Versions. This condition is due to the X9000 snapshot infrastructure; after a file is deleted, a new file in the same location is a new inode and will not have snapshots until a new snapshot is subsequently created. However, all pre-existing previous versions of the file continue to be available from the Previous Versions of the parent directory. For example, folder Fold1 contains files f1 and f2. There are two snapshots of the folder at timestamps T1 and T2, and the Properties of Fold1 show Previous Versions T1 and T2. The Properties of files f1 and f2 also show Previous Versions T1 and T2 as long as these files have never been deleted. If the file f1 is now deleted, you can restore its latest saved version from Previous Version T2 on Fold1. From that point on, the Previous Versions of \Fold1\f1 no longer show timestamps T1 and T2. However, the Previous Versions of \Fold1 continue to show T1 and T2, and the T1 and T2 versions of file f1 continue to be available from the folder. Windows Clients Behavior Users should have full access on files and folders to restore them with CIFS shadow copy. If the user does not have adequate permission, an error appears and the user is prompted to skip that file or folder when the failover is complete. After the user skips the file or folder, the restore operation may or may not continue depending on the Windows client being used. For Windows Vista, the restore operation continues by skipping the folder or file. For other Windows clients (Windows 2003, XP, 2008), the operation stops abruptly or gives an error message. Testing has shown that Windows Vista is an ideal client for CIFS shadow copy support. X9000 software does not have any control over the behavior of other clients. NOTE: HP recommends that the share root is not at the same level as the file system root, and is instead a subdirectory of the file system root. This configuration reduces access and other permissions-related issues, as there are many system files (such as lost+found, quota subsystem files, and so on) at the root of the file system. CIFS clients 89
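Following that recommendation, the share root can be created as a subdirectory of the mounted file system before the share is defined. This is a minimal sketch; the mount point, directory names, owner, group, and mode are hypothetical and should be adapted to your environment.
# Create a subdirectory of the file system root to use as the CIFS share root
mkdir -p /ifs1/shares/projects
chown projadmin:projusers /ifs1/shares/projects
chmod 775 /ifs1/shares/projects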

90 CIFS shadow copy restore during node failover If a node fails over while a CIFS shadow copy restore is in progress, the user may see a disruption in the restore operation. After the failover is complete, the user must skip the file that could not be accessed. The restore operation then proceeds. The file will not be restored and can be manually copied later, or the user can cancel the restore operation and then restart it. Permissions in a cross-protocol CIFS environment The manner in which the CIFS server handles permissions affects the use of files by both Windows and Linux clients. Following are some considerations. How the CIFS server handles UIDs and GIDs The CIFS server provides a true Windows experience for Windows users. Consequently, it must be closely aligned with Windows in the way it handles permissions and ownership on files. Windows uses ACLs to control permissions on files. The CIFS server puts a bit-for-bit copy of the ACLs on the Linux server (in the files on the X9000 file system), and validates file access through these permissions. ACLs are tied to Security Identifiers (SIDs) that uniquely identify users in the Windows environment, and which are also stored on the file in the Linux server as a part of the ACLs. SIDs are obtained from the authenticating authority for the Windows client (in X9000 Software, an Active Directory server). However, Linux does not understand Windows-style SIDs; instead, it has its own permissions control scheme based on UID/GID and permissions bits (mode bits, sticky bits). Since this is the native permissions scheme for Linux, the CIFS server must make use of it to access files on behalf of a Windows client; it does this by mapping the SID to a UID/GID and impersonating that UID/GID when accessing files on the Linux file system. From a Windows standpoint, all of the security for the X9000 Software-resident files is self-consistent; Windows clients understand ACLs and SIDs, and understand how they work together to control access to and security for Windows clients. The CIFS server maintains the ACLs as requested by the Windows clients, and emulates the inheritance of ACLs identically to the way Windows servers maintain inheritance. This creates a true Windows experience around accessing files from a Windows client. This mechanism works well for pure Linux environments, but (like the CIFS server) Linux applications do not understand any permissions mechanisms other than their own. Note that a Linux application can also use POSIX ACLs to control access to a file; POSIX ACLs are honored by the CIFS server, 90 Using CIFS

but will not be inherited or propagated. The CIFS server also does not map POSIX ACLs to be compatible with Windows ACLs on a file. These permission mechanisms have some ramifications for setting up shares, and for cross-protocol access to files on an X9000 system. The details of these ramifications follow.
Permissions, UIDs/GIDs, and ACLs
The X9000 Software CIFS server does not attempt to maintain two permission/access schemes on the same file. The CIFS server is concerned with maintaining ACLs, so it performs ACL inheritance and honors ACLs. The UID/GIDs and permission bits for files on a directory tree are peripheral to this activity, and are used only as much as necessary to obtain access to files on behalf of a Windows client. The various cases the CIFS server can encounter while accessing files and directories, and what it does with UID/GID and permission bits in that access, are considered in the following sections.
Pre-existing directories and files
A pre-existing Linux directory will not have ACLs associated with it. In this case, the CIFS server will use the permission bits and the mapped UID/GID of the CIFS user to determine whether it has access to the directory contents. If the directory is written by the CIFS server, the inherited ACLs from the directory tree above that directory (if there are any) will be written into the directory so future CIFS access will have the ACLs to guide it. Pre-existing files are treated like pre-existing directories. The CIFS server uses the UID/GID of the CIFS user and the permission bits to determine the access to the file. If the file is written to, the ACLs inherited from the containing directory for the file are applied to the file using the standard Windows ACL inheritance rules.
Working with pre-existing files and directories
Pre-existing file treatment has ramifications for cross-protocol environments. If, for example, files are deposited into a directory tree using NFS and then accessed using CIFS clients, the directory tree will not have ACLs associated with it, and access to the files will be moderated by the NFS UID/GID and permissions bits. If those files are then modified by a CIFS client, they will take on the UID/GID of the CIFS client (the new owner) and the NFS clients may lose access to those files.
New directories and files
New directories created in a tree by the Windows client inherit the ACLs of the parent directory. The ACLs are created with the UID/GID of the Windows user (the UID/GID that the SID for the Windows user is mapped to) and they have a Linux permission bit mask of 700. This translates to Linux applications (which do not understand the Windows ACLs) having owner and group (users with the same group ID) with read, write, execute permissions, and everyone else having just read and execute permissions. New files are handled the same way as directories. The files inherit the ACLs of the parent directory according to the Windows rules for ACL inheritance, and they are created with a UID/GID of the Windows user as mapped from the SID. They are assigned a permissions mask of 700.
Working with new files and directories
The inheritance rules of Windows assume that all directories are created on a Windows machine, where they inherit ACLs from their parent; the top level of a directory tree (the root of the file system) is assigned ACLs by the file system formatting process from the defaults for the system. This process is not in place on file serving nodes.
Instead, when you create a share on a node, the share does not have any inherited ACLs from the root of the file system in which it is created. This leads to strange behavior when a Windows client attempts to use permissions to control access to a file in such a directory. The usual CREATOR/OWNER and EVERYBODY ACLs (which are a part of the typical Windows ACL inheritance set) do not exist on the containing directory for

the share, and are not inherited downward into the share directory tree. For true Windows-like behavior, the creator of a share must access the root of the share and set the desired ACLs on it manually (using Windows Explorer or a command line tool such as ICACLS). This process is somewhat unnatural for Linux administrators, but should be fairly normal for Windows administrators. Generally, the administrator will need to create a CREATOR/OWNER ACL that is inheritable on the share directory, and then create an inheritable ACL that controls default access to the files in the directory tree.
Changing the way CIFS inherits permissions on files accessed from Linux applications
To avoid the CIFS server modifying file permissions on directory trees that a user wants to access from Linux applications (so keeping permissions other than 700 on a file in the directory tree), a user can set the setgid bit in the Linux permissions mask on the directory tree. When the setgid bit is set, the CIFS server honors that bit, and any new files in the directory inherit the permission bits and group of the parent directory. This maintains group access for new files created in that directory tree until setgid is turned off in the tree. That is, Linux-style permissions semantics are kept on the files in that tree, allowing CIFS users to modify files in the directory while NFS users maintain their access through their normal group permissions. For example, if a user wants all files in a particular tree to be accessible by a set of Linux users (say, through NFS), the user should set the setgid bit (through local Linux mechanisms) on the top level directory for a share (in addition to setting the desired group permissions, for example 770). Once that is done, new files in the directory will be accessible to the group that created the directory, and the permission bits on files in that directory tree will not be modified by the CIFS server. Files that existed in the directory before the setgid bit was set are not affected by the change in the containing directory; the user must manually set the group and permissions on files that already existed in the directory tree. This capability can be used to facilitate cross-protocol sharing of files. Note that this does not affect the permissions inheritance and settings on the CIFS client side. Using this mechanism, a Windows user can set the files to be inaccessible to the CIFS users of the directory tree while opening them up to the Linux users of the directory tree.
Troubleshooting CIFS
Changes to user permissions do not take effect immediately
The CIFS implementation maintains an authentication cache that is set to four hours. If a user is authenticated to a share, and the user's permissions are then changed, the old permissions will remain in effect until the cache expires, at four hours after the authentication. The next time the user is encountered, the new, correct value will be read and written to the cache for the next four hours. This is not a common occurrence. However, to avoid the situation, use the following guidelines when changing user permissions: After a user is authenticated to a share, wait four hours before modifying the user's permissions. Conversely, it is safe to modify the permissions of a user who has not been authenticated in the previous four hours.
Robocopy errors occur during node failover or failback If Robocopy is in use on a client while a file serving node is failed over or failed back, the application repeatedly retries to access the file and reports the error The process cannot access the file because it is being used by another process. These errors 92 Using CIFS

93 occur for 15 to 20 minutes. The client's copy will then continue without error if the retry timeout has not expired. To work around this situation, take one of these steps: Stop and restart the Likewise process on the affected file serving node: # /opt/likewise/bin/lwsm stop lwreg && /etc/init.d/lwsmd stop # /etc/init.d/lwsmd start && /opt/likewise/bin/lwsm start srvsvc Power down the file serving node before failing it over, and do failback operations only during off hours. The following xcopy and robocopy options are recommended for copying files from a client to a highly available CIFS server: xcopy: include the option /C; in general, /S /I /Y /C are good baseline options. robocopy: include the option /ZB; in general, /S /E /COPYALL /ZB are good baseline options. Copy operations interrupted by node failback If a node failback occurs while xcopy or robocopy is copying files to a CIFS share, the copy operation might be interrupted and need to be restarted. Active Directory users cannot access CIFS shares If any AD user is set to UID 0 in Active Directory, you will not be able to connect to CIFS shares and errors will be reported. Be sure to assign a UID other than 0 to your AD users. UID for CIFS Guest account conflicts with another user If the UID for the Guest account conflicts with another user, you can delete the Guest account and recreate it with another UID. Use the following command to delete the Guest account, and enter yes when you are prompted to confirm the operation: /opt/likewise/bin/lw-del-user Guest Recreate the Guest account, specifying a new UID: /opt/likewise/bin/lw-add-user -force --uid <UID_number> Guest To have the system generate the UID, omit the --uid <UID_number> option. Troubleshooting CIFS 93

94 8 Using FTP The FTP feature allows you to create FTP file shares for data stored on the cluster. Clients access the FTP shares using standard FTP and FTPS protocol services. IMPORTANT: Before configuring FTP, select an authentication method (either Local Users or Active Directory). See Configuring authentication for CIFS, FTP, and HTTP (page 54) for more information. An FTP configuration consists of one or more configuration profiles and one or more FTP shares. A configuration profile defines global FTP parameters and specifies the file serving nodes on which the parameters are applied. The vsftpd service starts on these nodes when the cluster services start. Only one configuration profile can be in effect on a particular node. An FTP share defines parameters such as access permissions and lists the file system to be accessed through the share. Each share is associated with a specific configuration profile. The share parameters are added to the profile's global parameters on the file serving nodes specified in the configuration profile. You can create multiple shares having the same physical path, but with different sets of properties, and then assign users to the appropriate share. Be sure to use a different IP address or port for each share. You can configure and manage FTP from the GUI or CLI. Best practices for configuring FTP When configuring FTP, follow these best practices: If an SSL certificate will be required for FTPS access, add the SSL certificate to the cluster before creating the shares. See Managing SSL certificates (page 117) for information about creating certificates in the format required by X9000 Software and then adding them to the cluster. When configuring a share on a file system, the file system must be mounted. If the directory path to the share includes a subdirectory, be sure to create the subdirectory on the file system and assign read/write/execute permissions to it. (X9000 Software does not create the subdirectory if it does not exist, and instead adds a /pub/ directory to the share path.) For High Availability, when specifying IP addresses for accessing a share, use IP addresses for VIFs having VIF backups. See the administrator guide for your system for information about creating VIFs. The allowed ports are 21 (FTP) and 990 (FTPS). Managing FTP from the GUI Use the Add New File Share Wizard to configure FTP. You can then view or modify the configuration as necessary. Configuring FTP On the GUI, select File Shares from the Navigator to open the File Shares panel, and then click Add to start the Add New File Share Wizard. On the File Share page, select FTP as the File Sharing Protocol. Select the file system, which must be mounted, and enter the default directory path for the share. If the directory path includes a subdirectory, be sure to create the subdirectory on the file system and assign read/write/execute permissions to it. (X9000 software does not create the subdirectory if it does not exist, and instead adds a /pub/ directory to the share path.) 94 Using FTP

95 On the Config Profile page, select an existing configuration profile or create a new profile, specifying a name and defining the appropriate parameters. Managing FTP from the GUI 95

96 On the Host Servers page, select the servers that will host the configuration profile. On the Settings page, configure the FTP parameters that apply to the share. The parameters are added to the file serving nodes hosting the configuration profile. Also enter the IP addresses and ports that clients will use to access the share. For High Availability, specify the IP address of a VIF having a VIF backup. NOTE: The allowed ports are 21 (FTP) and 990 (FTPS). NOTE: If you need to allow NAT connections to the share, use the Modify FTP Share dialog box after the share is created. 96 Using FTP

97 On the Users page, specify the users to be given access to the share. IMPORTANT: Ensure that all users who are given read or write access to shares have sufficient access permissions at the file system level for the directories exposed as shares. Managing FTP from the GUI 97

98 To define permissions for a user, click Add to open the Add User to Share dialog box. Managing the FTP configuration Select File Shares > FTP from the Navigator to display the current FTP configuration. The FTP Config Profiles panel lists the profiles that have been created. The Shares panel shows the FTP shares associated with the selected profile. 98 Using FTP

Use the buttons on the panels to modify or delete the selected configuration profile or share. You can also add another FTP share to the selected configuration profile. Use the Modify FTP Share dialog box if you need to allow NAT connections on the share.
Managing FTP from the CLI
FTP is managed with the ibrix_ftpconfig and ibrix_ftpshare commands. For detailed information, see the HP IBRIX X9000 Network Storage System CLI Reference Guide.
Configuring FTP
To configure FTP, first add a configuration profile, and then add an FTP share:
Add a configuration profile: ibrix_ftpconfig -a PROFILENAME [-h HOSTLIST] [-S SETTINGLIST]
For the -S option, use a comma to separate the settings, and enclose the settings in quotation marks, such as passive_enable=true,maxclients=200. To see a list of available settings for the profile, use the following command: ibrix_ftpconfig -L
Add an FTP share: ibrix_ftpshare -a SHARENAME -c PROFILENAME -f FSNAME -p dirpath -I IP-Address:Port [-u USERLIST] [-S SETTINGLIST]
For the -S option, use a comma to separate the settings, and enclose the settings in quotation marks, such as browseable=true,readonly=true. For the -I option, use a semicolon to separate the IP address:port settings and enclose the settings in quotation marks, such as ip1:port1;ip2:port2;.... To list the available settings for the share, use the following command: ibrix_ftpshare -L
Managing the FTP configuration
Use the following commands to view, modify, or delete the FTP configuration. In the commands, use -v 1 to display detailed information.
View configuration profiles: ibrix_ftpconfig -i -h HOSTLIST [-v level]

Modify a configuration profile: ibrix_ftpconfig -m PROFILENAME [-h HOSTLIST] [-S SETTINGLIST]
Delete a configuration profile: ibrix_ftpconfig -d PROFILENAME
View an FTP share: ibrix_ftpshare -i SHARENAME -c PROFILENAME [-v level]
List FTP shares associated with a specific profile: ibrix_ftpshare -l -c PROFILENAME [-v level]
List FTP shares associated with a specific file system: ibrix_ftpshare -l -f FSNAME [-v level]
Modify an FTP share: ibrix_ftpshare -m SHARENAME -c PROFILENAME [-f FSNAME -p dirpath] -I IP-Address:Port [-u USERLIST] [-S SETTINGLIST]
Delete an FTP share: ibrix_ftpshare -d SHARENAME -c PROFILENAME
The vsftpd service
When the cluster services are started on a file serving node, the vsftpd service starts automatically if the node is included in a configuration profile. Similarly, when the cluster services are stopped, the vsftpd service also stops. If necessary, use the Linux command ps -ef | grep vsftpd to determine whether the service is running. If you do not want vsftpd to run on a particular node, remove the node from the configuration profile.
IMPORTANT: For FTP share access to work properly, the vsftpd service must be started by X9000 software. Ensure that the chkconfig of vsftpd is set to OFF (chkconfig vsftpd off).
Starting or stopping the FTP service manually
Start the FTP service: /usr/local/ibrix/ftpd/etc/vsftpd start /usr/local/ibrix/ftpd/hpconf/
Stop the FTP service: /usr/local/ibrix/ftpd/etc/vsftpd stop /usr/local/ibrix/ftpd/hpconf/
Restart the FTP service: /usr/local/ibrix/ftpd/etc/vsftpd restart /usr/local/ibrix/ftpd/hpconf/
NOTE: When the FTP configuration is changed with the GUI or CLI, the FTP daemon is restarted automatically.
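A minimal end-to-end sketch of these commands follows. The profile name, host names, file system, directory path, VIF address, and user are hypothetical; the settings shown follow the examples in this section.
# Create a configuration profile on two file serving nodes
ibrix_ftpconfig -a ftpprofile1 -h node1,node2 -S "passive_enable=true,maxclients=200"
# Create an FTP share on file system ifs1, served at a VIF on port 21
ibrix_ftpshare -a ftpshare1 -c ftpprofile1 -f ifs1 -p /ifs1/ftp -I "192.168.20.5:21" -u ftpuser1 -S "browseable=true,readonly=true"
# Confirm the share configuration
ibrix_ftpshare -i ftpshare1 -c ftpprofile1 -v 1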

101 Accessing shares Clients can access an FTP share by specifying a URL in their browser (Internet Explorer or Mozilla Firefox). In the following URLs, IP_address:port is the IP (or virtual IP) and port configured for the share. For a share configured with an IP-based virtual host and the anonymous parameter is set to true, use the following URL: ftp://ip_address:port/ For a share configured with a userlist and having the anonymous parameter set to false, use the following URL: ftp://<addomain\username>@ip_address:port/ NOTE: When a file is uploaded into an FTP share, the file is owned by the user who uploaded the file to the share. If a user uploads a file to an FTP share and specifies a subdirectory that does not already exist, the subdirectory will not be created automatically. Instead, the user must explicitly use the mkdir ftp command to create the subdirectory. The permissions on the new directory are set to 777. If the anonymous user created the directory, it is owned by ftp:ftp. If a non-anonymous user created the directory, the directory is owned by user:group. You can also use curl commands to access an FTP share. (The default SSL port is 990.) For anonymous users: Upload a file using FTP protocol: curl -T <filename> -k ftp://ip_address/pub/ -u anonymous Upload a file using FTPS protocol: curl -T <filename> -k --ftp-ssl-reqd ftp://ip_address:990/pub/ -u ftp Download a file using FTP protocol: curl -k ftp://ip_address/pub/<filename> -u anonymous Download a file using FTPS protocol: curl -k --ftp-ssl-reqd ftp://ip_address:990/pub/<file_name> -u ftp The following example shows an anonymous client accessing a share. Accessing shares 101

For Active Directory users (specify the user as in this example: ASM2k3.com\\ib1):
Upload a file using FTP protocol: curl -T <filename> -k ftp://ip_address/pub/ -u <ADuser>
Upload a file using FTPS protocol: curl -T <filename> -k --ftp-ssl-reqd ftp://ip_address:990/pub/ -u <ADuser>
Download a file using FTP protocol: curl -k ftp://ip_address/<filename> -u <ADuser>
Download a file using FTPS protocol: curl -k --ftp-ssl-reqd ftp://ip_address:990/<filename> -u <ADuser>
Shares can be accessed from any Fusion Manager that has FTP clients: ftp <Virtual_IP>
For FTPS, use the following command from the active Fusion Manager: lftp -u <user_name> -p <ssl port> -e 'set ftp:ssl-force true' <share_ip>

9 Using HTTP
The HTTP feature allows you to create HTTP file shares for data stored on the cluster. Clients access the HTTP shares using standard HTTP and HTTPS protocol services.
IMPORTANT: Before configuring HTTP, select an authentication method (either Local Users or Active Directory). See Configuring authentication for CIFS, FTP, and HTTP (page 54) for more information.
The HTTP configuration consists of a configuration profile, a virtual host, and an HTTP share. A profile defines global HTTP parameters that apply to all shares associated with the profile. The virtual host identifies the IP addresses and ports that clients will use to access shares associated with the profile. A share defines parameters such as access permissions and lists the file system to be accessed through the share. HTTP is administered from the GUI or CLI. On the GUI, select HTTP from the File Shares list in the Navigator. The HTTP Config Profiles panel lists the current HTTP configuration, including the existing configuration profiles and the virtual hosts configured on the selected profile.
Best practices for configuring HTTP
When configuring HTTP, follow these best practices: If an SSL certificate will be required for HTTPS access, add the SSL certificate to the cluster before creating the shares. See Managing SSL certificates (page 117) for information about creating certificates in the format required by X9000 software and then adding them to the cluster. When configuring a share on a file system, the file system must be mounted. If the directory path to the share includes a subdirectory, be sure to create the subdirectory on the file system and assign read/write/execute permissions to it. (X9000 software does not create the subdirectory if it does not exist, and instead adds a /pub/ directory to the share path.)

Ensure that all users who are given read or write access to HTTP shares have sufficient access permissions at the file system level for the directories exposed as shares. For High Availability, when specifying IP addresses for accessing a share, use IP addresses for VIFs having VIF backups. See the administrator guide for your system for information about creating VIFs.
Managing HTTP from the GUI
Configuring HTTP
Use the Add New File Share Wizard to configure HTTP. You can then view or modify the configuration as necessary. On the GUI, select File Shares from the Navigator to open the File Shares panel, and then click Add to start the Add New File Share Wizard. On the File Share page, select HTTP as the File Sharing Protocol. Select the file system, which must be mounted, and enter a share name and the default directory path for the share. On the Config Profile page, select an existing profile or configure a new profile, specifying a name and the appropriate parameters for the profile.

105 On the Host Servers page, select the servers that will host the configuration profile. Managing HTTP from the GUI 105

106 106 Using HTTP On the Virtual Host page, enter a name for the virtual host and specify an SSL certificate and domain name if used. Also add one or more IP addresses:ports for the virtual host. For High Availability, specify a VIF having a VIF backup.

On the Settings page, set the appropriate parameters for the share. Note the following: When specifying the URL Path, do not include http://<IP address> or any variation of this in the URL path. For example, /reports/ is a valid URL path. When the WebDAV feature is enabled, the HTTP share becomes a readable and writable medium with locking capability. The primary user can make edits, while other users can only view the resource in read-only mode. The primary user must unlock the resource before another user can make changes. Set the Anonymous field to false only if you want to restrict access to specific users.

108 On the Users page, specify the users to be given access to the share. IMPORTANT: Ensure that all users who are given read or write access to shares have sufficient access permissions at the file system level for the directories exposed as shares. 108 Using HTTP

109 To allow specific users read access, write access, or both, click Add. On the Add Users to Share dialog box, assign the appropriate permissions to the user. When you complete the dialog, the user is added to the list on the Users page. The Summary panel presents an overview of the HTTP configuration. You can go back and modify any part of the configuration if necessary. When the wizard is complete, users can access the share from a browser. For example, if you configured the share with the anonymous user, specified an IP address on the Create Vhost dialog box, and specified /reports/ as the URL path on the Add HTTP Share dialog box, users can access the share using a URL of the form http://<vhost IP address>/reports/. Managing HTTP from the GUI 109

110 The users will see an index of the share (if the browseable property of the share is set to true), and can open and save files. For more information about accessing shares and uploading files, see Accessing shares (page 113). Managing the HTTP configuration Select File Shares > HTTP from the Navigator to display the current HTTP configuration. The HTTP Config Profiles panel lists the profiles that have been created. The Vhosts panel shows the virtual hosts associated with the selected profile. Use the buttons on the panels to modify or delete the selected configuration profile or virtual host. To view HTTP shares on the GUI, select the appropriate profile on the HTTP Config Profiles top panel, and then select the appropriate virtual host from the lower navigator. The Shares bottom panel shows the shares configured on that virtual host. Click Add Share to add another share to the virtual host. For example, you could create multiple shares having the same physical path, but with different sets of properties, and then assign users to the appropriate share. Tuning the socket read block size and file write block size By default, the socket read block size and file write block size used by Apache are set to 8192 bytes. If necessary, you can adjust the values with the ibrix_httpconfig command. The values must be between 8KB and 2GB. 110 Using HTTP

111 ibrix_httpconfig -a profile1 -h node1,node2 -S wblocksize=<value>,rblocksize=<value> You can also set the values on the Modify HTTP Profile dialog box: Managing HTTP from the CLI On the command line, HTTP is managed by the ibrix_httpconfig, ibrix_httpvhost, and ibrix_httpshare commands. For detailed information, see the HP IBRIX X9000 Network Storage System CLI Reference Guide. Configuring HTTP Add a configuration profile: ibrix_httpconfig -a PROFILENAME [-h HOSTLIST] [-S SETTINGLIST] For the -S option, use a comma to separate the settings, and enclose the settings in quotation marks, such as keepalive=true,maxclients=200,.... To see a list of available settings for the share, use ibrix_httpconfig -L. Add a virtual host: ibrix_httpvhost -a VHOSTNAME -c PROFILENAME -I IP-Address:Port [-S SETTINGLIST] Add an HTTP share: ibrix_httpshare -a SHARENAME -c PROFILENAME -t VHOSTNAME -f FSNAME -p dirpath -P urlpath [-u USERLIST] [-S SETTINGLIST] For the -S option, use a comma to separate the settings, and enclose the settings in quotation marks, such as davmethods=true,browseable=true,readonly=true. For example, to create a new HTTP share and enable the WebDAV property on that share: # ibrix_httpshare -a share3 -c cprofile1 -t dav1vhost1 -f ifs1 -p /ifs1/dir1 -P url3 -S davmethods=true To see all of the valid settings for an HTTP share, use the following command: ibrix_httpshare -L Managing HTTP from the CLI 111
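The following sequence is a minimal end-to-end sketch of configuring HTTP from the CLI using the commands above. The profile, virtual host, share, file system, IP address, and settings shown are hypothetical placeholders, not required values:
# ibrix_httpconfig -a profile1 -h node1,node2 -S "keepalive=true,maxclients=200"
# ibrix_httpvhost -a vhost1 -c profile1 -I 192.0.2.10:80
# ibrix_httpshare -a share1 -c profile1 -t vhost1 -f ifs1 -p /ifs1/http -P httpshare -S "browseable=true,readonly=false"
Assuming the file system ifs1 is mounted and the directory /ifs1/http exists with appropriate permissions, the share should then be reachable at a URL of the form http://192.0.2.10:80/httpshare/.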

112 Managing the HTTP configuration View a configuration profile: ibrix_httpconfig -i PROFILENAME [-v level] Modify a configuration profile: ibrix_httpconfig -m PROFILENAME [-h HOSTLIST] [-S SETTINGLIST] Delete a configuration profile: ibrix_httpconfig -d PROFILENAME View a virtual host: ibrix_httpvhost -i VHOSTNAME -c PROFILENAME [-v level] Modify a virtual host: ibrix_httpvhost -m VHOSTNAME -c PROFILENAME -I IP-Address:Port [-S SETTINGLIST] Delete a virtual host: ibrix_httpvhost -d VHOSTNAME -c PROFILENAME View an HTTP share: ibrix_httpshare -i SHARENAME -c PROFILENAME -t VHOSTNAME [-v level] Modify an HTTP share: ibrix_httpshare -m SHARENAME -c PROFILENAME -t VHOSTNAME [-f FSNAME -p dirpath] [-P urlpath] [-u USERLIST] [-S SETTINGLIST] The following example modifies an HTTP share, enabling WebDAV: # ibrix_httpshare -m share1 -c cprofile1 -t dav1vhost1 -S "davmethods=true" Delete an HTTP share: ibrix_httpshare -d SHARENAME -c PROFILENAME -t VHOSTNAME Starting or stopping the HTTP service manually Start the HTTP service: /usr/local/ibrix/httpd/bin/apachectl -k start -f /usr/local/ibrix/httpd/conf/httpd.conf Stop the HTTP service: /usr/local/ibrix/httpd/bin/apachectl -k stop -f /usr/local/ibrix/httpd/conf/httpd.conf Restart the HTTP service: /usr/local/ibrix/httpd/bin/apachectl -k restart -f /usr/local/ibrix/httpd/conf/httpd.conf NOTE: When the HTTP configuration is changed with the GUI or CLI, the HTTP daemon is restarted automatically. 112 Using HTTP
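After a manual start or restart, you can confirm that the HTTP service is responding. The following commands are a quick, informal check rather than a documented procedure; the IP address, port, and URL path are hypothetical:
# ps -ef | grep [h]ttpd
# curl -I http://192.0.2.10:80/httpshare/
The first command lists the running httpd processes, and the second should return an HTTP response header (for example, 200 OK for a browseable share, or 401 if the share requires authentication).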

113 Accessing shares Clients access an HTTP share by specifying a URL in their browser (Internet Explorer or Mozilla Firefox). In the following URLs, IP_address:port is the IP (or virtual IP) and port configured for the share. For a share configured with an IP-based virtual host and the anonymous parameter set to true, use a URL of the form http://<IP_address:port>/<url_path>/. For a share configured with a userlist and having the anonymous parameter set to false, use the same form of URL; enter your user name and password when prompted. NOTE: When a file is uploaded into an HTTP share, the file is owned by the user who uploaded the file to the share. If a user uploads a file to an HTTP share and specifies a subdirectory that does not already exist, the subdirectory will be created. For example, you could have a share mapped to the directory /ifs/http/ and using the url http_url. A user could upload a file into the share: curl -T file http://<IP_address:port>/http_url/new_dir/ If the directory new_dir does not exist under http_url, the http service automatically creates the directory /ifs/http/new_dir/ and sets the permissions to 777. If the anonymous user performed the upload, the new_dir directory is owned by daemon:daemon. If a non-anonymous user performed the upload, the new_dir directory is owned by user:group. You can also use curl commands to access an HTTP share. For anonymous users: Upload a file using HTTP protocol: curl -T <filename> http://<IP_address:port>/<url_path>/ Upload a file using HTTPS protocol: curl --cacert <cacert_file> -T <filename> https://<IP_address:port>/<url_path>/ Download a file using HTTP protocol: curl -o <path to download>/<filename> http://<IP_address:port>/<url_path>/<filename> Download a file using HTTPS protocol: curl --cacert <cacert_file> -o <path to download>/<filename> https://<IP_address:port>/<url_path>/<filename> For Active Directory users (specify the user as in this example: mycompany.com\\user1): Upload a file using HTTP protocol: curl -T <filename> -u <ADuser> http://<IP_address:port>/<url_path>/ Upload a file using HTTPS protocol: curl --cacert <cacert_file> -T <filename> -u <ADuser> https://<IP_address:port>/<url_path>/ Accessing shares 113

114 Download a file using HTTP protocol: curl -u <ADuser> -o <path to download>/<filename> http://<IP_address:port>/<url_path>/<filename> Download a file using HTTPS protocol: curl --cacert <cacert_file> -u <ADuser> -o <path to download>/<filename> https://<IP_address:port>/<url_path>/<filename> Configuring Windows clients to access HTTP WebDAV shares Complete the following steps to set up and access WebDAV enabled shares: Verify the entry in the Windows hosts file. Before mapping a network drive in Windows, verify that an entry exists in the c:\windows\System32\drivers\etc\hosts file. For example, if an IP address is assigned to a Vhost named vhost1 and the Vhost name is not being used to map the network drive, the client must be able to resolve the corresponding domain name (in reference to domain name-based virtual hosts). Verify the characters in the Windows hosts file. The Windows c:\windows\system32\drivers\etc\hosts file specifies IP versus hostname mapping. Verify that the hostname in the file includes alpha-numeric characters only. Verify that the WebClient Service is started. The WebClient Service must be started on Windows-based clients attempting to access the WebDAV share. The WebClient service is missing by default on Windows Server. To install the WebClient service, the Desktop Experience package must be installed. See technet.microsoft.com/en-us/library/cc aspx for more information. Update the Windows registry. When using WebDAV shares in Windows Explorer, you must edit the Windows registry if there are many files in the WebDAV shares or the files are large. Launch the Windows registry editor using the regedit command. Go to: Computer\HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\services\WebClient\Parameters Change the value of FileSizeLimitInBytes from its default value to 2147483648 (the value of 2 GB in bytes). Also increase the value of FileAttributesLimitInBytes from its default value. Enable debug logging on the server. Edit the /usr/local/ibrix/httpd/conf/httpd.conf file and change the line LogLevel warn to LogLevel debug. Next, restart Apache on the file serving nodes: Use the following command to stop Apache: /usr/local/ibrix/httpd/bin/apachectl stop Use the following command to start Apache: /usr/local/ibrix/httpd/bin/apachectl start Save documents during node failovers. During a failover, MS Office 2010 restores the connection when the connection is lost on the server, but you must wait until you are asked to refresh the document being edited. In MS Office 2003 and 2007, you must save the document locally. After the failover is successful, you must re-map the drive and save the document on the WebDAV share. 114 Using HTTP

115 When creating certificates, verify that the hostname matches the Vhost name. When creating a certificate, the hostname should match the Vhost name or the domain name issued when mapping a network drive or opening the file directly using a URL such as storage.hp.com/share/foo.docx. Consider the assigned IP address when mapping a network drive on Windows. When mapping a network drive in Windows, if a numeric IP address is assigned to the Vhost, there should be a corresponding entry in the Windows hosts file. Instead of using the IP address in the mapping, use the name specified in the hosts file. For example, the Vhost IP address can be mapped to the name srv1vhost1, and you can then issue the URL srv1vhost1/share when mapping the network drive. Unlock locked files. Use a tool such as BitKinex to unlock locked files if the files do not unlock before closing the application. Remove zero byte files created by Microsoft Excel. Microsoft Excel creates 0 byte files on the WebDAV shares. For example, after editing the file foo.xlsx and saving it more than once, a file such as ~$foo.xlsx is created with 0 bytes in size. Delete this file using a tool such as BitKinex, or remove the file on the file system. For example, if the file system is mounted at /ifs1 and the share directory is /ifs1/dir1, remove the file /ifs1/dir1/~$foo.xlsx. Use the correct URL path when mapping WebDAV shares on Windows. When mapping WebDAV shares on Windows 2003, the URL should not end with a trailing slash (/). For example, a share URL without a trailing slash can be mapped, but the same URL ending in a slash (such as storage.hp.com/) cannot be mapped. Also, certain URLs cannot be mapped because of limitations with Windows. Delete read-only files through Windows Explorer. If you map a network drive for a share that includes files designated as read-only on the server, and you then attempt to delete one of those files, the file appears to be deleted. However, when you refresh the folder (using the REFRESH command), the deleted file reappears in the folder in Windows Explorer. This behavior is expected in Windows Explorer. NOTE: Symbolic links are not implemented in the current WebDAV implementation (Apache's mod_dav module). NOTE: After mapping a network drive of a WebDAV share on Windows, Windows Explorer reports an incorrect folder size or available free space on the WebDAV share. Troubleshooting HTTP HTTP WebDAV share is inaccessible through Windows Explorer when files greater than 10k are created When files greater than 10k are created, the HTTP WebDAV share is inaccessible through Windows Explorer and the following error appears: Windows cannot access this disc: This disc might be corrupt. This condition is seen in various Windows clients such as Windows 2008, Windows 7, and Windows Vista. The condition persists even if the share is disconnected and re-mapped through Windows Explorer. The files are accessible on the file serving node and through BitKinex. Use the following workaround to resolve this condition: Troubleshooting HTTP 115

116 1. Disconnect the network drive. 2. In Windows, select Start > Run and enter regedit. 3. Increase FileAttributesLimitInBytes to 10 times its default value. 4. Similarly, increase FileSizeLimitInBytes to 10 times its default value (add one extra zero). 5. Save the registry and quit. 6. Reboot the Windows system. 7. Map the network drive to allow you to access the WebDAV share containing large files. HTTP WebDAV share fails when downloading a large file from a mapped network drive When downloading or copying a file greater than 800 MB in Windows Explorer, the HTTP WebDAV share fails. Use the following workaround to resolve this condition: 1. In Windows, go to Start > Run and type regedit to open the Windows registry editor. 2. Navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters NOTE: This hierarchy exists only if WebClient is installed on Windows Vista or Windows 7. 3. Change the registry parameter values to allow for the increased file size. a. Increase the value of FileAttributesLimitInBytes (entered in decimal). b. Set the value of FileSizeLimitInBytes to 2147483648 in decimal, which equals 2 GB. Mapping HTTP WebDAV share as AD or local user through Windows Explorer fails if the HTTP Vhost IP address is used Mapping the HTTP WebDAV share to a network drive as Active Directory or local user through Windows Explorer fails on Windows 2008 if the HTTP Vhost IP address is used. To resolve this condition, add the Vhost names and IP addresses in the hosts file on the Windows clients. 116 Using HTTP
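For example, a hosts file entry that maps a hypothetical Vhost IP address to its Vhost name might look like the following (both the address and the name are placeholders):
192.0.2.10    srv1vhost1
After adding the entry, map the network drive using the name (for example, srv1vhost1/share) rather than the IP address.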

117 10 Managing SSL certificates Servers accepting FTPS and HTTPS connections typically provide an SSL certificate that verifies the identity and owner of the web site being accessed. You can add your existing certificates to the cluster, enabling file serving nodes to present the appropriate certificate to FTPS and HTTPS clients. X9000 Software supports PEM certificates. When you configure the FTP share or the HTTP vhost, select the appropriate certificate. You can manage certificates from the GUI or the CLI. On the GUI, select Certificates from the Navigator to open the Certificates panel. The Certificate Summary shows the parameters for the selected certificate. Creating an SSL certificate Before creating a certificate, OpenSSL must be installed and must be included in your PATH variable (in RHEL5, the path is /usr/bin/openssl). There are two parts to a certificate: the certificate contents (specified in a .crt file) and a private key (specified in a .key file). Certificates added to the cluster must meet these requirements: The certificate contents (the .crt file) and the private key (the .key file) must be concatenated into a single file. The concatenated certificate file must include the headers and footers from the .crt and .key files. The concatenated certificate file cannot contain any extra spaces. Before creating a real certificate, you can create a self-signed SSL certificate and test access with it. Complete the following steps to create a test certificate that meets the requirements for use in an X9000 cluster: Creating an SSL certificate 117

118 1. Generate a private key: openssl genrsa -des3 -out server.key 1024 You will be prompted to enter a passphrase. Be sure to remember the passphrase. 2. Remove the passphrase from the private key file (server.key). When you are prompted for a passphrase, enter the passphrase you specified in step 1. cp server.key server.key.org openssl rsa -in server.key.org -out server.key rm -f server.key.org 3. Generate a Certificate Signing Request (CSR): openssl req -new -key server.key -out server.csr 4. Self-sign the CSR: openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt 5. Concatenate the signed certificate and the private key: cat server.crt server.key > server.pem When adding a certificate to the cluster, use the concatenated file (server.pem in our example) as the input for the GUI or CLI. The following example shows a valid PEM encoded certificate that includes the certificate contents, the private key, and the headers and footers: -----BEGIN CERTIFICATE----- MIICUTCCAboCCQCIHW1FwFn2ADANBgkqhkiG9w0BAQUFADBtMQswCQYDVQQGEwJV UzESMBAGA1UECBMJQmVya3NoaXJlMRAwDgYDVQQHEwdOZXdidXJ5MQwwCgYDVQQK EwNhYmMxDDAKBgNVBAMTA2FiYzEcMBoGCSqGSIb3DQEJARYNYWRtaW5AYWJjLmNv btaefw0xmdeymtewndq0mddafw0xmteymtewndq0mddamg0xczajbgnvbaytalvt MRIwEAYDVQQIEwlCZXJrc2hpcmUxEDAOBgNVBAcTB05ld2J1cnkxDDAKBgNVBAoT A2FiYzEMMAoGA1UEAxMDYWJjMRwwGgYJKoZIhvcNAQkBFg1hZG1pbkBhYmMuY29t MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDdrjHH/W93X7afTIUOrllCHw21 u31tinmdbzzi+r18r9sz/muuyvg4kjcbooqnohuir/s4aaeulaonf4mvqlfzlkbe 25HgT+ImshLzyHqPImuxTEXvjG5H1sEDLNuQkHvl8hF9Wxao1tv4eL8TL5KqK1W6 8juMVAw2cFDHxji2GQIDAQABMA0GCSqGSIb3DQEBBQUAA4GBAKvYJK8RXKMObCKk ae6oj36fekdl/achcw0nxk/vmr4dv9lik8dv8sdyuuqhkname2yoari190c5bwsa MjhSjOOqUmmgmeDYlAu+ps3/1Fte5yl4ZV8VCu7bHCWx2OSy46Po03MMOu99JXrB /GCKE8fO8Fhyq/7LjFDR5GeghmSw -----END CERTIFICATE BEGIN RSA PRIVATE KEY----- MIICXgIBAAKBgQDdrjHH/W93X7afTIUOrllCHw21u31tinMDBZzi+R18r9SZ/muu yvg4kjcbooqnohuir/s4aaeulaonf4mvqlfzlkbe25hgt+imshlzyhqpimuxtexv jg5h1sedlnuqkhvl8hf9wxao1tv4el8tl5kqk1w68jumvaw2cfdhxji2gqidaqab AoGBAMXPWryKeZyb2+np7hFbompOK32vAA1vLZHUwFoI0Tch7yQ60vv2PBvlZCQf 4y06ik5xmkqLA+tsGxarx8DnXKUy0PHJ3hu6mTocIJdqqN0n+KO4tG2dvDPdSE7l phx2sy9mvt4x/qn3enb/f3chjnm9byer0by3mtkkxz61jzabakea+m3pproywvs6 P8m4DenZh6ehsu4u/ycjmW/ujdp/PcRd5HBAWJasTXTezF5msugHnnNBe8F1i1q4 9PfL0C+kuQJBAOQXjrmPZxDc8YA/V45MUKv4eHHN0E03p84budtblHQ70BCLaO41 n267t3drzfw+vtsvdvbmja4uhobasgv3rgecqqcildr6k2ymbd+og/xlerd6ww+o G96S/bvpNa7t6qFrj/cHmTxOgCDLv+RVHHG/B2lsGo7Dig2oeL30LU9aoUjZAkBV KSqDw7PyitusS3oQShQQsTufGf385pvDi3yQFxhNcYuUschisCivumyaP3mZEBDz yv9ollz1uvqi79pspfphakeaxsqebd1ymqr2wi0rnktmhfdcb3ywlpi57kc+lgrk LUlxawhTzDwzTWJ9m4gQqRlAaXoIElfk6ITwW0g9Th5Ouw== -----END RSA PRIVATE KEY----- NOTE: When you are ready to create a real SSL certificate, consult the following site for a description of the procedure: Managing SSL certificates
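Before adding the concatenated file to the cluster, it can be useful to confirm that both parts of the certificate are intact. The following commands are an optional sanity check using standard OpenSSL options (server.pem is the concatenated file from the previous steps):
# openssl x509 -in server.pem -noout -subject -enddate
# openssl rsa -in server.pem -check -noout
The first command should print the certificate subject and expiration date, and the second should report that the RSA key is valid. If either command fails, re-create the concatenated file and verify that the headers and footers were copied completely and that no extra spaces were introduced.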

119 Adding a certificate to the cluster To add an existing certificate to the cluster, click Add on the Certificates panel. On the Add Certificate dialog box, enter a name for the certificate. Use a Linux command such as cat to display your concatenated certificate file. For example: cat server.pem Copy the contents of the file to the Certificate Content section of the dialog box. The copied text must include the certificate contents and the private key in PEM encoding. It must also include the proper headers and footers, and cannot contain any extra spaces. NOTE: You can add only one certificate at a time. The certificate is saved on all file serving nodes in the directory /usr/local/ibrix/pki. To add a certificate from the CLI, use the following command: ibrix_certificate -a -c CERTNAME -p CERTPATH For example: # ibrix_certificate -a -c mycert -p server.pem Run the command from the active Fusion Manager. To add a certificate for a different node, copy that certificate to the active Fusion Manager and then add it to the cluster. For example, if node ib87 is hosting the active Fusion Manager and you have generated a certificate for node ib86, copy the certificate to ib87: scp server.pem ib87:/tmp Then, on node ib87, add the certificate to the cluster: ibrix_certificate -a -c cert86 -p /tmp/server.pem Adding a certificate to the cluster 119

120 Exporting a certificate If necessary, you can display a certificate and then copy and save the contents for future use. This step is called exporting. Select the certificate on the Certificates panel and click Export. To export a certificate from the CLI, use this command: ibrix_certificate -e -c CERTNAME Deleting a certificate To delete a certificate from the GUI, select the certificate on the Certificates panel, click Delete, and confirm the operation. To delete a certificate from the CLI, use this command: ibrix_certificate -d -c CERTNAME 120 Managing SSL certificates
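If you have exported a certificate and saved its contents to a file, you can check its validity period before reusing it. This is an optional check with a standard OpenSSL command; exported.pem is a hypothetical file name for the saved contents:
# openssl x509 -in exported.pem -noout -enddate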

121 11 Using remote replication Overview This chapter describes how to configure and manage the Continuous Remote Replication (CRR) service. The CRR service provides a method to replicate changes in a source file system on one cluster to a target file system on either the same cluster (intra-cluster replication) or a second cluster (inter-cluster replication). Both files and directories are replicated with remote replication, and no special configuration of segments is needed. A remote replication task includes the initial synchronization of the source and target file systems. When selecting file systems for remote replication, you should be aware of the following: One, multiple, or all file systems in a single cluster can be replicated. Remote replication is a one-way process. Bidirectional replication of a single file system is not supported. The mountpoint of the source file system can be different from the mountpoint on the target file system. Remote replication has minimal impact on these cluster operations: Cluster expansion (adding a new server) is allowed as usual on both the source and target. File systems can be exported over NFS, CIFS, FTP, or HTTP. Source or target file systems can be rebalanced while a remote replication job is in progress. File system policies (ibrix_fs_tune) can be set on both the source and target without any restrictions. The Fusion Manager initializes remote replication. However, each file serving node runs its own replication and synchronization processes, independent of and in parallel with other file serving nodes. The individual daemons running on the file serving nodes perform the actual file system replication. The source-side Fusion Manager monitors the replication and reports errors, failures, and so on. Continuous or run-once replication modes CRR can be used in two modes: continuous or run-once. Continuous replication. This method tracks changes on the source file system and continuously replicates these changes to the target file system. The changes are tracked for the entire file system and are replicated in parallel by each file serving node. There is no strict order to replication at either the file system or segment level. The continuous remote replication program tries to replicate on a first-in, first-out basis. When you configure continuous remote replication, you must specify a file system as the source. (A source directory cannot be specified.) File systems specified as the replication source or target must already exist. The replication starts at the root of the source file system (the mount point). Run-once replication. This method replicates a single directory sub-tree or an entire file system from the source file system to the target file system. Run-once is a single-pass replication of all files and subdirectories within the specified directory or file system. All changes that have occurred since the last replication task are replicated from the source file system to the target file system. File systems specified as the replication source or target must exist. If a directory is specified as the replication source, the directory must exist on the source cluster under the specified source file system. Overview 121

122 NOTE: Run-once can also be used to replicate a single software snapshot. This must be done on the GUI. You can replicate to a remote cluster (an intercluster replication) or the same cluster (an intracluster replication). Using intercluster replications Intercluster configurations can be continuous or run-once: Continuous: asynchronously replicates the initial state of a file system and any changes to it. Snapshots cannot be replicated. Run-once: replicates the current state of a file system, folder, or file system snapshot. The examples in the configuration rules use three X9000 clusters: C1, C2, and C3: C1 has two file systems, c1ifs1 and c1ifs2, mounted as /c1ifs1 and /c1ifs2. C2 has two file systems, c2ifs1 and c2ifs2, mounted as /c2ifs1 and /c2ifs2. C3 has two file systems, c3ifs1 and c3ifs2, mounted as /c3ifs1 and /c3ifs2. In the examples, <cluster name>:<target path> designates a replication target such as C1:/c1ifs1/target1. The following rules apply to intercluster replications: Only one continuous Remote Replication task can run per file system. It must replicate from the root of the file system; you cannot continuously replicate a subdirectory of a file system. A continuous Remote Replication task can replicate to only one target cluster. Replication targets are directories in an X9000 file system and can be: The root of a file system such as /c3ifs1. A subdirectory such as /c3ifs1/target1. Targets must be explicitly exported using CRR commands to make them available to CRR replication tasks. A subdirectory created beneath a CRR export can be used as a target by a replication task without being explicitly exported in a separate operation. For example, if the exported target is /c3ifs1/target1, you can replicate to folder /c3ifs1/target1/subtarget1 if the folder already exists. Directories exported as targets cannot overlap. For example, if C1 is replicating /c1ifs1 to C2:/c2ifs1/target1, C3 cannot replicate /c3ifs1 to C2:/c2ifs1/target1/target2. A cluster can be a target for one replication task at the same time that it is replicating data to another cluster. For example, C1 can replicate /c1ifs1 to C2:/c2ifs1/target1 and C2 can replicate /c2ifs2 to C1:/c1ifs2/target2, with both replications occurring at the same time. A cluster can be a target for multiple replication tasks. For example, C1 can replicate /c1ifs1 to C3:/c3ifs1/target1 and C2 can replicate /c2ifs1 to C3:/c3ifs1/target2, with both replications occurring at the same time. Continuous Remote Replication tasks can be linked. For example: C1 replicates /c1ifs1 to C2:/c2ifs1/target1. C2 replicates /c2ifs1/target1 to C3:/c3ifs2/target2. 122 Using remote replication

123 NOTE: If a different file system is used for the target, the linkage can go back to the original cluster. To replicate a directory or snapshot on a file system covered by continuous replication, first pause the continuous task and then initiate a run-once replication task. For information about configuring intercluster replications, see Configuring the target export for replication to a remote cluster (page 123). Using intracluster replications There are two forms of intracluster replication: The same cluster and a different file system. Configure either continuous or run-once replication. You will need to specify a target file system and optionally a target directory (the default is the root of the file system or the mount point). The same cluster and the same file system. Configure run-once replication. You will need to specify a file system, a source directory, and a target directory. Be sure to specify two different, non-overlapping subdirectories as the source and target. For example, the following replication is not allowed: From <fs_root>dir1 to <fs_root>dir1/dir2 However, the following replication is allowed: From <fs_root>dir1 to <fs_root>dir3/dir4 File system snapshot replication You can use the run-once replication mode to replicate a single file system snapshot. If a snapshot replication is not explicitly configured, snapshots and all related metadata are ignored/filtered out during remote replications. Replication is not supported for block snapshots. Configuring the target export for replication to a remote cluster Use the following procedure to configure a target export for remote replication. In this procedure, target export refers to the target file system and directory (the default is the root of the file system) exported for remote replication. NOTE: These steps are not required when configuring intracluster replication. Register source and destination clusters. The source and target clusters of a remote replication configuration must be registered with each other before remote replication tasks can be created. Create a target export. This step identifies the target file system and directory for replication and associates it with the source cluster. Before replication can take place, you must create a mapping between the source cluster and the target export that receives the replicated data. This mapping ensures that only the specified source cluster can write to the target export. Identify server assignments to use for remote replication. Select the servers and corresponding NICs to handle replication requests, or use the default assignments. The default server assignment is to use all servers that have the file system mounted. NOTE: Do not add or change files on the target system outside of a replication operation. Doing this can prevent replication from working properly. GUI procedure This procedure must be run from the target cluster, and is not required or applicable for intracluster replication. Configuring the target export for replication to a remote cluster 123

124 Select the file system on the GUI, and then select Remote Replication Exports from the lower Navigator. On the Remote Replication Exports bottom panel, select Add. The Create Remote Replication Export dialog box allows you to specify the target export for the replication. The mount point of the file system is displayed as the default export path. You can add a directory to the target export. The Server Assignments section allows you to specify server assignments for the export. Check the box adjacent to Server to use the default assignments. If you choose to assign particular servers to handle replication requests, select those servers and then select the appropriate NICs. If the remote cluster does not appear in the selection list for Export To (Cluster), you will need to register the cluster. Select New to open the Add Remote Cluster dialog box and then enter the requested information. If the remote cluster is running an earlier version of X9000 software, you will be asked to enter the clustername for the remote cluster. This name appears on the Cluster Configuration page on the GUI for the remote cluster. 124 Using remote replication

125 The Remote Replication Exports panel lists the replication exports you created for the file system. Expand Remote Replication Exports in the lower Navigator and select the export to see the configured server assignments for the export. You can modify or remove the server assignments and the export itself. CLI procedure NOTE: This procedure does not apply to intracluster replication. Use the following commands to configure the target file system for remote replication: 1. Register the source and target clusters with each other using the ibrix_cluster -r command if needed. To list the known remote clusters, run ibrix_cluster -l on the source cluster. 2. Create the export on the target cluster. Identify the target export and associate it with the source cluster using the ibrix_crr_export command. 3. Identify server assignments for the replication export using the ibrix_crr_nic command. The default assignments are: Use all servers that have the file system mounted. Use the cluster NIC on each server. Registering source and target clusters Run the following command on both the target cluster and the source cluster to register the clusters with each other. It is necessary to run the command only once per source or target. ibrix_cluster -r -C CLUSTERNAME -H REMOTE_FM_HOST CLUSTERNAME is the name of the Fusion Manager for a cluster. For the -H option, enter the name or IP address of the host where the remote cluster's Fusion Manager is running. For high availability, use the virtual IP address of the Fusion Manager. To list clusters registered with the local cluster, use the following command: ibrix_cluster -l To unregister a remote replication cluster, use the following command: ibrix_cluster -d -C CLUSTERNAME Creating the target export To create a mapping between the source cluster and the target export that receives the replicated data, execute the following command on the target cluster: ibrix_crr_export -f FSNAME [-p DIRECTORY] -C SOURCE_CLUSTER [-P] Configuring the target export for replication to a remote cluster 125

126 FSNAME is the target file system to be exported. The -p option exports a directory located under the root of the specified file system (the default is the root of the file system). The -C option specifies the source cluster containing the file system to be replicated. Include the -P option if you do not want this command to set the server assignments. You will then need to identify the server assignments manually with ibrix_crr_nic, as described in the next section. To list the current remote replication exports, use the following command on the target cluster: ibrix_crr_export -l To unexport a file system for remote replication, use the following command: ibrix_crr_export -U -f TARGET_FSNAME [-p DIRECTORY] Identifying server assignments for remote replication To identify the servers that will handle replication requests and, optionally, a NIC for replication traffic, use the following command: ibrix_crr_nic -a -f FSNAME [-p directory] -h HOSTLIST [-n IBRIX_NIC] When specifying resources, note the following: Specify servers by their host name or IP address (use commas to separate the names or IP addresses). A host is any server on the target cluster that has the target file system mounted. Specify the network using the X9000 Software network name (NIC). Enter a valid user NIC or the cluster NIC. The NIC assignment is optional. If it is not specified, the host name (or IP) is used to determine the network. A previous server assignment for the same export must not exist, or must be removed before a new assignment is created. The listed servers receive remote replication data over the specified NIC. To increase capacity, you can expand the number of preferred servers by executing this command again with another list of servers. You can also use the ibrix_crr_nic command for the following tasks: Restore the default server assignments for remote replication: ibrix_crr_nic -D -f FSNAME [-p directory] View server assignments for remote replication. The output lists the target exports and associated server assignments on this cluster. The assigned servers and NIC are listed with a corresponding ID number that can be used in commands to remove assignments. ibrix_crr_nic -l Remove a server assignment: ibrix_crr_nic -r -P ASSIGNMENT_ID1[,...,ASSIGNMENT_IDn] To obtain the ID for a particular server, use ibrix_crr_nic -l. Configuring and managing replication tasks on the GUI NOTE: When configuring replication tasks, be sure to follow the guidelines described in Overview (page 121). Viewing replication tasks To view replication tasks for a particular file system, select that file system on the GUI and then select Active Tasks > Remote Replication in the lower Navigator. The Remote Replication Tasks bottom panel lists any replication tasks currently running or paused on the file system. 126 Using remote replication

127 Additional reports are available for the active replication tasks. In the lower Navigator, expand Active Tasks > Remote Replication to see a list of active tasks (crr-25 in the following example). Select Overall Status to see a status summary. Select Server Tasks to display the state of the task and other information for the servers where the task is running. Starting a replication task To start a replication task, click New on the Remote Replication Tasks panel and then use the New Remote Replication Task dialog box to configure the replication. Select the target for replication (a remote cluster, the same cluster, or the same cluster and file system), and specify whether this is a continuous or run-once replication. The selections you make for the target and type determine the information you will be asked for on the dialog box. Remote cluster replications Remote cluster replications can be configured for continuous or run-once mode. Configuring and managing replication tasks on the GUI 127

128 For a run-once replication, either specify the source directory or click Use a snapshot and then select the appropriate Snap Tree and snapshot. For both continuous and run-once replications, supply the target side information. Select the target cluster and target export, which must already be configured. If the remote cluster is not in the Target Cluster selection list, you will need to register the cluster. Select New to open the Add Remote Cluster dialog box. (See Configuring the target export for replication to a remote cluster (page 123) for more information.) You can also specify an optional target directory under the target export. For example, you could configure the following replication, which does not include an optional target directory: Source directory: /srcfs/a/b/c Exported file system and directory on target: /destfs/1/2/3 The contents of /srcfs/a/b/c are replicated to destfs/1/2/3/{contents_under_c}. If you also specify the target directory a/b/c, the replication goes to /destfs/1/2/3/a/b/c{contents_under_c}. The Replication Destination shows the location you have configured. The Continuous example above shows a completed Target Side section. Same cluster replications Same cluster replications can be configured for continuous or run-once mode. 128 Using remote replication

129 For a run-once replication, either specify the source directory or click Use a snapshot and then select the appropriate Snap Tree and snapshot. For both continuous and run-once replications, supply the target side information. Select the appropriate target file system and optionally enter a target directory in that file system. IMPORTANT: If you specify a target directory, be sure that it does not overlap with a previous replication using the same target export. Same cluster and file system replications Same cluster and file system replications can be configured only for run-once mode. Either specify the source directory to be replicated, or click Use a snapshot and select the appropriate Snap Tree and snapshot. Then specify the target directory to receive the replication. Configuring and managing replication tasks on the GUI 129

130 Pausing or resuming a replication task To pause a task, select it on the Remote Replication Tasks panel and click Pause. When you pause a task, the status changes to PAUSED. Pausing a task that involves continuous data capture does not stop the data capture. You must allocate space on the disk to avoid running out of space because the data is captured but not moved. To resume a paused replication task, select the task and click Resume. The status of the task then changes to RUNNING and the task continues from the point where it was paused. Stopping a replication task To stop a task, select that task on the Remote Replication Tasks panel and click Stop. To view stopped tasks, select Inactive Tasks from the lower Navigator. You can delete one or more tasks, or see detailed information about the selected task. Configuring and managing replication tasks from the CLI NOTE: When configuring replication tasks, be sure to follow the guidelines described in Overview (page 121). Starting a remote replication task to a remote cluster Use the following command to start a continuous or run-once replication task to a remote cluster. The command is executed from the source cluster. ibrix_crr -s -f SRC_FSNAME [-o] [-S SRCDIR] -C TGT_CLUSTERNAME -F TGT_FSNAME [-X TGTEXPORT] [-P TGTDIR] [-R] Use the -s option to start a continuous remote replication task. The applicable options are:
-f SRC_FSNAME: The source file system to be replicated.
-C TGT_CLUSTERNAME: The remote target cluster.
-F TGT_FSNAME: The remote target file system.
-X TGTEXPORT: The remote replication target (exported directory). The default is the root of the file system. NOTE: This option is used only for replication to a remote cluster. The file system specified with -F and the directory specified with -X must both be exported from the target cluster (target export).
-P TGTDIR: A directory under the remote replication target export (optional). This directory must exist on the target, but does not need to be exported.
-R: Bypass retention compatibility checking.
Omit the -o option to start a continuous replication task. A continuous replication task does an initial full synchronization and then continues to replicate any new changes made on the source. Continuous replication tasks continue to run until you stop them manually. Use the -o option for run-once tasks. This option synchronizes single directories or entire file systems on the source and target in a single pass. If you do not specify a source directory with the -S 130 Using remote replication

131 option, the replication starts at the root of the file system. The run-once job terminates after the replication is complete; however, the job can be stopped manually, if necessary. Use -P to specify an optional target directory under the target export. For example, you could configure the following replication, which does not include the optional target directory: Source directory: /srcfs/a/b/c Exported file system and directory on target: /destfs/1/2/3 The replication command is: ibrix_crr -s -o -f srcfs -S a/b/c -C tcluster -F destfs -X 1/2/3 The contents of /srcfs/a/b/c are replicated to destfs/1/2/3/{contents_under_c}. When the same command includes the -P option to specify the target directory a/b/c: ibrix_crr -s -o -f srcfs -S a/b/c -C tcluster -F destfs -X 1/2/3 -P a/b/c The replication now goes to /destfs/1/2/3/a/b/c{contents_under_c}. Starting an intracluster remote replication task Use the following command to start a continuous or run-once intracluster replication task for the specified file system: ibrix_crr -s -f SRC_FSNAME [-o [-S SRCDIR]] -F TGT_FSNAME [-P TGTDIR] The -F option specifies the name of the target file system (the default is the same as the source file system). The -P option specifies the target directory under the target file system (the default is the root of the file system). Use the -o option to start a run-once task. The -S option specifies a directory under the source file system to synchronize with the target directory. Starting a run-once directory replication task Use the following command to start a run-once directory replication for file system SRC_FSNAME. The -S option specifies the directory under the source file system to synchronize with the target directory. The -P option specifies the target directory. ibrix_crr -s -f SRC_FSNAME -o -S SRCDIR -P TGTDIR Stopping a remote replication task Use the following command to stop a continuous or run-once replication task. Use the ibrix_task -l command to obtain the appropriate ID. ibrix_crr -k -n TASKID The stopped replication task is moved to the inactive task list. Use ibrix_task -l -c to view the inactive task list. To forcefully stop a replication task, use the following command: ibrix_crr -k -n TASKID The stopped task is removed from the list of inactive tasks. Pausing a remote replication task Use the following command to pause a continuous replication or run-once replication task with the specified task ID. Use the ibrix_task -l command to obtain the appropriate ID. ibrix_crr -p -n TASKID Configuring and managing replication tasks from the CLI 131

132 Resuming a remote replication task Use the following command to resume a continuous or run-once replication task with the specified task ID. Use the ibrix_task -l command to obtain the appropriate ID. ibrix_crr -r -n TASKID Querying remote replication tasks Use the following command to list all active replication tasks in the cluster, optionally restricted by the specified file system and servers. ibrix_crr -l [-f SRC_FSNAME] [-h HOSTNAME] [-C SRC_CLUSTERNAME] To see more detailed information, run ibrix_crr with the -i option. The display shows the status of tasks on each node, as well as task summary statistics (number of files in the queue, number of files processed). The query also indicates whether scanning is in progress on a given server and lists any error conditions. ibrix_crr -i [-f SRC_FSNAME] [-h HOSTNAME] [-C SRC_CLUSTERNAME] The following command prints detailed information about replication tasks matching the specified task IDs. Use the -h option to limit the output to the specified server. ibrix_crr -i -n TASKIDS [-h HOSTNAME] [-C SRC_CLUSTERNAME] Replicating WORM/retained files When using remote replication for file systems enabled for data retention, the following requirements must be met: The source and target file systems must use the same data retention mode (Enterprise or Relaxed). The default, maximum, and minimum retention periods must be the same on the source and target file systems. A clock synchronization tool such as ntpd must be used on the source and target clusters. If the clock times are not in sync, file retention periods might not be handled correctly. Also note the following: Multiple hard links on retained files on the replication source are not replicated. Only the first hard link encountered by remote replication is replicated, and any additional hard links are not replicated. (The retainability attributes on the file on the target prevent the creation of any additional hard links.) For this reason, HP strongly recommends that you do not create hard links on files that will be retained. For continuous remote replication, if a file is replicated as retained, but later its retainability is removed on the source file system (using data retention management commands), the file's new attributes and any additional changes to that file will fail to replicate. This is because of the retainability attributes that the file already has on the target, which cause the file system on the target to prevent remote replication from changing it. When a legal hold is applied to a file, the legal hold is not replicated on the target. If the file on the target should have a legal hold, you will also need to set the legal hold on that file. If a file has been replicated to a target and you then change the file's retention expiration time with the ibrix_reten_adm -e command, the new expiration time is not replicated to the target. If necessary, also change the file's retention expiration time on the target. Configuring remote failover/failback When remote replication is configured from a local cluster to a remote cluster, you can fail over the local cluster to the remote cluster: 132 Using remote replication

133 1. Stop write traffic to the local site. 2. Wait for all remote replication queues to drain. 3. Stop remote replication on the local site. 4. Reconfigure shares as necessary on the remote site. The cluster name and IP addresses (or VIFs) are different on the remote site, and changes are needed to allow clients to continue to access shares. 5. Redirect write traffic to the remote site. When the local cluster is healthy again, take the following steps to perform a failback from the remote site: 1. Stop write traffic to the remote site. 2. Set up Run-Once remote replication, with the remote site acting as the source and the local site acting as the destination. 3. When the Run-Once replication is complete, restore shares to their original configuration on the local site, and verify that clients can access the shares. 4. Redirect write traffic to the local site. Troubleshooting remote replication Continuous remote replication fails when a private network is used Continuous remote replication will fail if the configured cluster interface and the corresponding cluster Virtual Interface (VIF) for the Fusion Manager are in a private network on either the source or target cluster. By default, continuous remote replication uses the cluster interface and the Cluster VIF (the ibrixinit -C and -v options, respectively) for communication between the source cluster and the target cluster. To work around potential continuous remote replication communication errors, it is important that the ibrixinit -C and -v arguments correspond to a public interface and a public cluster VIF, respectively. If necessary, the ibrix_crr_nic command can be used to change the server assignments (the server/NIC pairs that handle remote replication requests). Troubleshooting remote replication 133
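For example, the following commands sketch how replication traffic might be reassigned to servers and a NIC on a public network. The assignment IDs, file system, server, and NIC names are hypothetical; the syntax follows the ibrix_crr_nic usage shown earlier in this chapter, and the commands are run on the target cluster:
# ibrix_crr_nic -l
# ibrix_crr_nic -r -P 1,2
# ibrix_crr_nic -a -f ifs1 -h node3,node4 -n eth4
The first command lists the current server assignments and their ID numbers, the second removes the assignments that use the private network, and the third assigns servers node3 and node4 with user NIC eth4 to handle replication requests for file system ifs1.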

134 12 Managing data retention and validation Overview The data retention and validation feature is intended for sites that need to archive read-only files for business purposes. Data retention ensures that files cannot be modified or deleted for a specific retention period. Data validation scans can be used to ensure that files remain unchanged. Data retention must be enabled on a file system. When you enable data retention, you can specify a retention profile that includes minimum, maximum, and default retention periods that specify how long a file must be retained. WORM and WORM-retained files The files in the file system can be in the following states: Normal. The file is created read-only or read-write, and can be modified or deleted at any time. A checksum is not calculated for normal files and they are not managed by data retention. Write-Once Read-Many (WORM). The file cannot be modified, but can be deleted at any time. A checksum is calculated for WORM files and they can be managed by data retention. WORM-retained. A WORM file becomes WORM-retained when a retention period is applied to it. The file cannot be modified, and cannot be deleted until the retention period expires. A checksum is calculated for WORM-retained files and they can be managed by data retention. NOTE: You can apply a legal hold to a WORM or WORM-retained file. The file then cannot be deleted until the hold is released, even if the retention period has expired. For WORM and WORM-retained files, the file's contents and the following file attributes cannot be modified: File name (the file cannot be renamed or moved) User and group owners File access permissions File modification time Also, no new hard links can be made to the file and the extended attributes cannot be added, modified, or removed. The following restrictions apply to directories in a file system enabled for data retention: A directory cannot be moved or renamed unless it is empty (even if it contains only normal files). You can delete directories containing only WORM and normal files, but you cannot delete directories containing retained files. Data retention attributes for a file system The data retention attributes configured on a file system are called a retention profile. The profile includes the following: 134 Managing data retention and validation

135 Default retention period. If a specific retention period is not applied to a file, the file will be retained for the default retention period. The setting for this period determines whether you can manage WORM (non-retained) files as well as WORM-retained files: To manage both WORM (non-retained) files and WORM-retained files, set the default retention period to zero. To make a file WORM-retained, you will need to set the atime to a date in the future. To manage only WORM-retained files, set the default retention period to a non-zero value. Minimum and maximum retention periods. Retained files cannot be deleted until their retention period expires, regardless of the file system retention policy. You can set a specific retention period for a file; however, it must be within the minimum and maximum retention periods associated with the file system. If you set a time that is less than the minimum retention period, the expiration time of the period will be adjusted to match the minimum retention period. Similarly, if the new retention period exceeds the maximum retention period, the expiration time will be adjusted to match the maximum retention period. If you do not set a retention period for a file, the default retention period is used. If that default is zero, the file will not be retained. Autocommit period. Files that are not changed during this period automatically become WORM or WORM-retained when the period expires. (If the default retention period is set to zero, the files become WORM. If the default retention period is set to a value greater than zero, the files become WORM-retained.) The autocommit period is optional and should not be set if you want to keep normal files in the file system. IMPORTANT: For a file to become WORM, its ctime and mtime must be older than the autocommit period for the file system. On Linux, ctime means any change to the file, either its contents or any metadata such as owner, mode, times, and so on. The mtime is the last modified time of the file's contents. Retention mode. Controls how the expiration time for the retention period can be adjusted: Enterprise mode. The expiration date of the retention period can be extended to a later date. Relaxed mode. The expiration date of the retention period can be moved in or extended to a later date. The autocommit and default retention periods determine the steps you will need to take to make a file WORM or WORM-retained. See Creating WORM and WORM-retained files (page 140) for more information. Data validation scans To ensure that WORM and retained files remain unchanged, it is important to run a data validation scan periodically. Circumstances such as the following can cause a file to change unexpectedly: System hardware errors, such as write errors Degrading of on-disk data over time, which can change the stored bit values, even if no accesses to the data are performed Malicious or accidental changes made by users A data validation scan computes hash sum values for the WORM, WORM-retained, and WORM-hold files in the scanned file system or subdirectory and compares them with the values originally computed for the files. If the scan identifies changes in the values for a particular file, an alert is generated on the GUI. You can then replace the bad file with an unchanged copy from an earlier backup or from a remote replication. NOTE: Normal files are not validated. The time required for a data scan depends on the number of files in the file system or subdirectory. 
If there are a large number of files, the scan could take up to a few weeks to verify all content on Overview 135

136 storage. A scheduled scan will quit immediately if it detects that a scan of the same file system is already running. You can schedule periodic data validation scans, and you can also run on-demand scans. Enabling file systems for data retention and validation You can enable a new or an existing file system for data retention and, optionally, validation. When you enable a file system, you can define a retention profile that specifies the retention mode and the default, minimum, and maximum retention periods. New file systems The New Filesystem Wizard includes a WORM/Data Retention dialog box that allows you to enable data retention and define a retention profile for the file system. You can also enable and define schedules for data validation scans and data collection for reports. The default retention period determines whether you can manage WORM (non-retained) files as well as WORM-retained files. To manage only WORM-retained files, set the default retention period. WORM-retained files then use this period by default; however, you can assign a different retention period if desired. To manage both WORM (non-retained) and WORM-retained files, uncheck Set Default Retention Period. The default retention period is then set to 0 seconds. When you make a WORM file retained, you will need to assign a retention period to the file. The Set Auto-Commit Period option specifies that files will become WORM or WORM-retained if they are not changed during the specified period. (If the default retention period is set to zero, the files become WORM. If the default retention period is set to a value greater than zero, the files become WORM-retained.) To use this feature, check Set Auto-Commit Period and specify the time period. The minimum value for the autocommit period is five minutes, and the maximum value is one year. If you plan to keep normal files on the file system, do not set the autocommit period. 136 Managing data retention and validation

Check Enable Data Validation to schedule periodic scans on the file system. Use the default schedule, or select Modify to open the Data Validation Scan Schedule dialog box and configure your own schedule.
To schedule periodic collection of report data, use the default schedule, or select Modify to open the Report Data Generation Schedule dialog box and configure your own schedule.
Enabling data retention from the CLI
You can also enable data retention when creating a new file system from the CLI. Use ibrix_fs -c and include the following -o options:
-o "retenmode=<mode>,retendefperiod=<period>,retenminperiod=<period>,retenmaxperiod=<period>,retenautocommitperiod=<period>"

The retenmode option is required and is either enterprise or relaxed. You can specify any, all, or none of the period options. retendefperiod is the default retention period, retenminperiod is the minimum retention period, and retenmaxperiod is the maximum retention period. The retenautocommitperiod option specifies that files become WORM or WORM-retained if they are not changed during the specified period. (If the default retention period is set to zero, the files become WORM. If the default retention period is set to a value greater than zero, the files become WORM-retained.) The minimum value for the autocommit period is five minutes, and the maximum value is one year. If you plan to keep normal files on the file system, do not set the autocommit period.
When using a period option, enter a decimal number, optionally followed by one of these characters: s (seconds), m (minutes), h (hours), d (days), w (weeks), M (months), y (years). If you do not include a character specifier, the decimal number is interpreted as seconds.
The following example creates a file system with Enterprise mode retention, with a default retention period of 1 month, a minimum retention period of 3 days, a maximum retention period of 5 years, and an autocommit period of 1 hour:
ibrix_fs -o "retenmode=enterprise,retendefperiod=1M,retenminperiod=3d,retenmaxperiod=5y,retenautocommitperiod=1h" -c -f ifs1 -s ilv_[1-4] -a
Configuring data retention on existing file systems
NOTE: Data retention cannot be enabled on a file system created on X9000 software 5.6 or earlier versions. Instead, create a new file system on X9000 software 6.0 or later, and then copy or move files from the old file system to the new file system.
To enable or change the data retention configuration on an existing file system, first unmount the file system. Select Active Tasks > WORM/Data Retention from the lower Navigator, and then click Modify on the WORM/Data Retention panel. You do not need to unmount the file system to change the configuration for data validation or report data generation.

To enable data retention on an existing file system using the CLI, run this command:
ibrix_fs -W -f FSNAME -o "retenmode=<mode>,retendefperiod=<period>,retenminperiod=<period>,retenmaxperiod=<period>"
To use the autocommit feature on an existing file system, first upgrade the file system to enable autocommit:
ibrix_reten_adm -u -f FSNAME
Then set the autocommit period on the file system with the -o "retenautocommitperiod=<period>" option.
Viewing the retention profile for a file system
To view the retention profile for a file system, select the file system on the GUI, and then select WORM/Data Retention from the lower Navigator. The WORM/Data Retention panel shows the retention profile.
To view the retention profile from the CLI, use the ibrix_fs -i command, as in the following example:
ibrix_fs -i -f ifs1
FileSystem: ifs1
=========================
RETENTION : Enterprise [default=15d,minimum=1d,maximum=5y]
Changing the retention profile for a file system
The file system must be unmounted when you make changes to the retention profile. After unmounting the file system, click Modify on the WORM/Data Retention panel to open the Modify WORM/Data Retention dialog box and then make your changes.
To change the configuration from the CLI, use the following command:
ibrix_fs -W -f FSNAME -o "retenmode=<mode>,retendefperiod=<period>,retenminperiod=<period>,retenmaxperiod=<period>,retenautocommitperiod=<period>"
Managing WORM and retained files
You can change a file to the WORM or WORM-retained state, view the retention information associated with a file, and use administrative tools to manage individual files, including setting or removing a legal hold, setting or removing a retention period, and administratively deleting a file.
Creating WORM and WORM-retained files
The autocommit and default retention periods determine the steps you will need to take:
Autocommit period is set and default retention period is zero seconds: To make a WORM file retained, set the atime to a time in the future.
Autocommit period is set and default retention period is non-zero: Files remaining unchanged during the autocommit period automatically become WORM-retained and use the default retention period. You can assign a different retention period to a file if necessary.
Autocommit period is not set and default retention period is zero seconds: To create a WORM file, run a command to make the file read-only. To make a WORM file retained, set the atime to a time in the future.
Autocommit period is not set and default retention period is non-zero: To create a WORM-retained file, run a command to make the file read-only. By default, the file uses the default retention period. To assign a different retention period to the WORM-retained file, set the atime to a time in the future.
NOTE: If you are not using autocommit, files must explicitly be made read-only. Typically, you can configure your application to do this.
Making a file read-only
Linux. Use chmod to make the file read-only. For example:
chmod 444 myfile.txt
Windows. Use the attrib command to make the file read-only:
C:\> attrib +r myfile.txt
Setting the atime
Linux. Use a command such as touch to set the access time to the future:
touch -a -d "30 minutes" myfile.txt
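The following minimal sketch puts these steps together for a Linux client on a file system where autocommit is not set and the default retention period is zero. The file system name, directory, and retention date are illustrative only:

# write the file normally on the retention-enabled file system
cp report.pdf /ifs1/records/report.pdf
# make the file read-only; it becomes a WORM file
chmod 444 /ifs1/records/report.pdf
# set the atime to a future date; the WORM file becomes retained until that date
touch -a -d "2015-12-31 00:00" /ifs1/records/report.pdf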

See the touch(1) documentation for the time/date formats allowed with the -d option. You can also enter the following on a Linux command line to see the acceptable date/time strings for the touch command:
info "Date input formats"
Windows. Windows does not include a touch command. Instead, use a third-party tool such as Cygwin or FileTouch to set the access time to the future.
NOTE: For CIFS users setting the access time manually for a file, the maximum retention period is 100 years from the date the file was retained. For NFS users setting the access time manually for a file, the retention expiration date must be before February 5, 2106.
The access time has the following effect on the retention period:
If the access time is set to a future date, the retention period of the file is set so that retention expires at that date.
If the access time is not set, the file inherits the default retention period for the file system. Retention expires at that period in the future, starting from the time the file is set read-only.
If the access time is not set and the default retention period is zero, the file becomes WORM but not retained, and can be deleted.
You can change the retention period if necessary; see Changing a retention period (page 143).
Viewing the retention information for a file
To view the retention information for a file, run the following command:
ibrix_reten_adm -l -f FSNAME -P PATHLIST
For example:
# ibrix_reten_adm -l -f sales_fs -P /sales_fs/dir1/contacts.txt
/sales_fs/dir1/contacts.txt: state={retained} retain-to:{2011-nov-10 15:55:06} [period: 182d15h]
In this example, contacts.txt is a retained file, its retention period expires on November 10, 2011, and the length of the retention period is 182 days, 15 hours.
File administration
To administer files from the GUI, select File Administration on the WORM/Data Retention panel. Select the action you want to perform on the WORM/Data Retention File Administration dialog box.

To administer files from the CLI, use the ibrix_reten_adm command.
IMPORTANT: Do not use the ibrix_reten_adm command on a file system that is not enabled for data retention.
Specifying path lists
When using the GUI or the ibrix_reten_adm command, you need to specify paths for the files affected by the retention action. The following rules apply when specifying path lists:
A path list can contain one or more entries, separated by commas.
Each entry can be a fully qualified path, such as /myfs1/here/a.txt. An entry can also be relative to the file system mount point. For example, if myfs1 is mounted at /myfs1, the path here/a.txt is a valid entry. A relative path cannot begin with a slash (/). Relative paths are always relative to the mount point; they cannot be relative to the user's current directory, unlike other UNIX commands.
A directory cannot be specified in a path list. Directories themselves have no retention settings, and the command returns an error message if a directory is entered. To apply an action to all files in a directory, you need to specify the paths to the files.
You can use wildcards in the pathnames, such as /my/path/*,/my/path/.??*. The command does not apply the action recursively; you must specify each subdirectory.
To apply a command to all files in all subdirectories of a tree, you can wrap the ibrix_reten_adm command in a find script (or other similar script) that calls the command for every directory in the tree. For example, the following command sets a legal hold on all files in the specified directory tree:
find /myfs1/here/usr_local_src/matplotlib-1.0.0/agg24 -type d -exec ibrix_reten_adm -h -f myfs1 -P {}/* \;
The following script also includes files beginning with a dot, such as .this. (This includes files uploaded to the file system, but not internal file system files such as the .archiving tree.)
find /myfs1/here/usr_local_src/matplotlib-1.0.0/agg24 -type d -exec ibrix_reten_adm -h -f myfs1 -P {}/*,{}/.??* \;
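The same wrapper pattern can be applied to the other file administration actions described in the sections that follow. For example, assuming the same directory tree, a sketch that removes legal holds recursively (the -r option is described in the next section) might look like this:

find /myfs1/here/usr_local_src/matplotlib-1.0.0/agg24 -type d -exec ibrix_reten_adm -r -f myfs1 -P {}/*,{}/.??* \;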

Setting or removing a legal hold
When a legal hold is set on a retained or WORM file, the file cannot be deleted until the hold is released, even if the retention period has expired.
On the WORM/Data Retention File Administration dialog box, select Set a Legal Hold and specify the appropriate file. To remove a legal hold from a file, select Remove a Legal Hold and specify the appropriate file. When the hold is removed, the file is again under the control of its original retention policy.
To set a legal hold from the CLI, use this command:
ibrix_reten_adm -h -f FSNAME -P PATHLIST
To remove a legal hold from the CLI, use this command:
ibrix_reten_adm -r -f FSNAME -P PATHLIST
Changing a retention period
If necessary, you can change the length of the current retention period. For example, you might want to assign a different retention period to a retained file currently using the default retention period. This is done by resetting the expiration time of the period. If the retention mode is Enterprise, the new expiration time must be later than the current expiration time. If the retention mode is Relaxed, the new expiration time can be earlier or later than the current expiration time.
On the WORM/Data Retention File Administration dialog box, select Reset Expiration Time and specify the appropriate file. When you set the new expiration time, the length of the retention period is adjusted accordingly. If you specify a time that is less than the minimum retention period for the file system, the expiration time is adjusted to match the minimum retention period. Similarly, if the new time exceeds the maximum retention period, the expiration time is adjusted to match the maximum retention period.
To reset the expiration time using the CLI:
ibrix_reten_adm -e expire_time -f FSNAME -P PATHLIST
If you specify an interval such as 20m (20 minutes) for the expire_time, the retention expiration time is set to that amount of time in the future starting from now, not that amount of time from the original start of retention. If you specify an exact date/time such as 19:20:02 or 2/16/2012 for the expire_time, the command sets the retention expiration time to that exact time. If the file system is in Relaxed retention mode (not Enterprise), the exact date/time can be in the past, in which case the file immediately expires from retention and becomes WORM but no longer retained. See the Linux date(1) man page for a description of the valid date/time formats for the expire_time parameter.
Removing the retention period
When you remove the retention period from a retained file, the file becomes a WORM file. On the WORM/Data Retention File Administration dialog box, select Remove Retention Period and specify the appropriate file.
To remove the retention period using the CLI:
ibrix_reten_adm -c -f FSNAME -P PATHLIST
Deleting a file administratively
This option allows you to delete a file that is under the control of a data retention policy. On the WORM/Data Retention File Administration dialog box, select Administrative Delete and specify the appropriate file.
CAUTION: Deleting files administratively removes them from the file system, regardless of the data retention policy.
To delete a file using the CLI:
ibrix_reten_adm -d -f FSNAME -P PATHLIST
Running data validation scans
Scheduling a validation scan
When you use the GUI to enable a file system for data validation, you can set up a schedule for validation scans. You might want to run additional scans of the file system at other times, or you might want to scan particular directories in the file system.
NOTE: Although you can schedule multiple scans of a file system, only one scan can run at a time for a given file system.
To schedule a validation scan, select the file system on the GUI, and then select Active Tasks from the lower Navigator. Select New to open the Starting a New Task dialog box. Select Data Validation as the Task Type. When you click OK, the Start a new Validation Scan dialog box appears. Change the path to be scanned if necessary.

Go to the Schedule tab to specify when you want to run the scan.
Starting an on-demand validation scan
You can run a validation scan at any time. Select the file system on the GUI, and then select Active Tasks from the lower Navigator. Click New to open the Starting a New Task dialog box. Select Data Validation as the Task Type. When you click OK, the Start a new Validation Scan dialog box appears. Change the path to be scanned if necessary and click OK.

To start an on-demand validation scan from the CLI, use the following command:
ibrix_datavalidation -s -f FSNAME [-d PATH]
Viewing, stopping, or pausing a scan
Scans in progress are listed on the Active Tasks panel on the GUI. If you need to halt the scan, click Stop or Pause on the Active Tasks panel. Click Resume to resume the scan.
To view the progress of a scan from the CLI, use the ibrix_task command. The -s option lists scheduled tasks.
ibrix_task -i [-f FILESYSTEMS] [-h HOSTNAME]
To stop a scan, use this command:
ibrix_task -k -n TASKID [-F] [-s]
To pause a scan, use this command:
ibrix_task -p -n TASKID
To resume a scan, use this command:
ibrix_task -r -n TASKID
Viewing validation scan results
While a validation scan is running, it is listed on the Active Tasks panel on the GUI (select the file system, and then select Active Tasks from the lower Navigator). Information about completed scans is listed on the Inactive Tasks panel (select the file system, and then select Inactive Tasks from the lower Navigator). On the Inactive Tasks panel, select a validation task and then click Details to see more information about the scan.
A unique validation summary file is also generated for each scan. The files are located in the root directory of the file system at {filesystem root}/.archiving/validation/history. The validation summary files are named <ID>-<n>.sum, such as 1-0.sum, 2-0.sum, and so on. The ID is the task ID assigned by X9000 Software when the scan was started. The second number is 0 unless there is an existing summary file with the same task ID, in which case the second number is incremented to make the filename unique.
Viewing and comparing hash sums for a file
If a validation scan summary file reports inconsistent hash sums for a file and you want to investigate further, use the showsha and showvms commands to compare the current hash sums with the hash sums that were originally calculated for the file.
The showsha command calculates and displays the hash sums for a file. For example:
# /usr/local/ibrix/sbin/showsha rhnplugin.py
Path hash: f4b82f4da9026ba4aa db46ffda7b
Meta hash: 80f68a53bb4a49d0ca19af1dec18e2ff0cf965da
Data hash: d64492d19786dddf50b5a7c3bebd3fc8930fc493
The showvms command displays the hash sums stored for the file. For example:
# /usr/local/ibrix/sbin/showvms rhnplugin.py
VMSQuery returned 0
Path hash: f4b82f4da9026ba4aa db46ffda7b
Meta hash: 80f68a53bb4a49d0ca19af1dec18e2ff0cf965da
Data hash: d64492d19786dddf50b5a7c3bebd3fc8930fc493
last attempt: Wed Dec 31 17:00:00 1969
last success: Wed Dec 31 17:00:00 1969
changed: 0
In this example, the hash sums match and there are no inconsistencies. The 1969 dates appearing in the showvms output mean that the file had not yet been validated.
Handling validation scan errors
When a validation scan detects files having hash values inconsistent with their original values, it displays an alert in the events section of the GUI. However, the alert lists only the first inconsistent file detected. It is important to check the validation summary report to identify all inconsistent files that were flagged during the scan.
To replace an inconsistent file, follow these steps:
1. Obtain a good version of the file from a backup or a remote replication.
2. If the file is retained, remove the retention period for the file, using the GUI or the ibrix_reten_adm -c command.
3. Delete the file administratively using the GUI or the ibrix_reten_adm -d command.
4. Copy the good version of the file to the data-retained file system or directory. If you recover the file using an NDMP backup application, the proper retention expiration period is applied from the backup copy of the file. If you copy the file another way, you will need to set the atime and read-only status.
Creating data retention reports
Three reports are available: data retention, utilization, and validation. The reports can show results either for the entire file system or for individual tiers. To generate a tiered report, the file system must include at least one tier.
You can display reports as PDF, CSV (CLI only), or HTML (GUI only). The latest files in each format are saved in /usr/local/ibrix/reports/output/<report type>/. When you generate a report, the system creates a CSV file containing the data for the report. The latest CSV file is also stored in /usr/local/ibrix/reports/output/<report type>/.
NOTE: Older report files are not saved. If you need to keep report files, save them in another location before you generate new reports.
The data retention report lists ranges of retention periods and specifies the number of files in each range. The Number of Files reported on the graph scales automatically and is reported as individual files, thousands of files, or millions of files. The following example shows a data retention report for an entire file system.

The utilization report summarizes how storage is utilized across retention states and free space. The next example shows the first page of a utilization report broken out by tiers. The results for each tier appear on a separate page. The total size scales automatically and is reported as MB, GB, or TB, depending on the size of the file system or tier.
A data validation report shows when files were last validated and reports any mismatches. A mismatch can be in either content or metadata. The Number of Files scales automatically and is reported as individual files, thousands of files, or millions of files.

Generating and managing reports
To run an unscheduled report from the GUI, select Filesystems in the upper Navigator and then select WORM/Data Retention in the lower Navigator. On the WORM/Data Retention panel, click Run a Report. On the Run a WORM/Data Protection Summary Report dialog box, select the type of report to view, and then specify the output format.
If an error occurs during report generation, a message appears in red text on the report. Simply run the report again.
Generating reports from the CLI
You can generate reports at any time using the ibrix_reports command. Scheduled reports can be configured only on the GUI.
First run the following command to scan the file system and collect the data to be used in the reports:
ibrix_reports -s -f FILESYSTEM
Then run the following command to generate the specified report:
ibrix_reports -g -f FILESYSTEM -n NAME -o OUTPUT_FORMAT
Use the -n option to specify the type of report, where NAME is one of the following:
retention
retention_by_tier
validation
validation_by_tier
utilization
utilization_by_tier
The output format specified with -o can be csv or pdf. (A combined usage example appears later in this chapter.)
Using hard links with WORM files
You can use the Linux ln command without the -s option to create a hard link to a normal (non-WORM) file on a retention-enabled file system. If you later make the file a WORM file, the following restrictions apply until the file is deleted:
You cannot make any new hard links to the file. Doing so would increment the link count in the file's inode metadata, which is not allowed under WORM rules.
You can delete hard links (the original file system entry or a hard-link entry) without deleting the other file system entries or the file itself. WORM rules allow the link count to be decremented.
Using remote replication
When using remote replication for file systems enabled for retention, the following requirements must be met:
The source and target file systems must use the same retention mode (Enterprise or Relaxed).
The default, maximum, and minimum retention periods must be the same on the source and target file systems.
A clock synchronization tool such as ntpd must be used on the source and target clusters. If the clock times are not in sync, file retention periods might not be handled correctly.
Also note the following:
Multiple hard links on retained files on the replication source are not replicated. Only the first hard link encountered by remote replication is replicated; any additional hard links are not replicated. (The retainability attributes on the file on the target prevent the creation of any additional hard links.) For this reason, HP strongly recommends that you do not create hard links on retained files.
For continuous remote replication, if a file is replicated as retained but its retainability is later removed on the source file system (using data retention management commands), the new file attributes and any additional changes to that file will fail to replicate. This is because the retainability attributes that the file already has on the target cause the file system on the target to prevent remote replication from changing it.
When a legal hold is applied to a file, the legal hold is not replicated on the target. If the file on the target should have a legal hold, you will also need to set the legal hold on that file.
If a file has been replicated to a target and you then change the file's retention expiration time with the ibrix_reten_adm -e command, the new expiration time is not replicated to the target. If necessary, also change the file's retention expiration time on the target.
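Returning to the reporting commands described earlier in this chapter, the following minimal sequence collects report data and then generates a retention report in PDF format; the file system name ifs1 is illustrative only:

ibrix_reports -s -f ifs1
ibrix_reports -g -f ifs1 -n retention -o pdf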

Backup support for data retention
The supported method for backing up and restoring WORM/retained files is to use NDMP with DMA applications. Other backup methods will back up the file data but will lose the retention configuration.
Troubleshooting data retention
Attempts to edit retained files can create empty files
If you attempt to edit a WORM file in the retained state, applications such as the vi editor will be unable to edit the file, but they can leave empty temp files on the file system.
Applications such as vi can appear to update WORM files
If you use an application such as vi to edit a WORM file that is not in the retained state, the file will be modified, and it will be retained with the default retention period. This is the expected behavior. The file modification occurs because the editor edits a temporary copy of the file and tries to rename it to the real file; when that fails, it deletes the original file and then performs the rename, which succeeds (because unretained WORM files are allowed to be deleted).
Cannot enable data retention on a file system with a bad segment
Data retention must be set on all segments of a file system to ensure that all files can be managed properly. File systems with bad segments cannot be enabled for data retention. If a file system has a bad segment, evacuate or remove the segment first, and then enable the file system.

13 Configuring Antivirus support
The X9000 Antivirus feature can be used with supported Antivirus software, which must be run on systems outside the cluster. These systems are called external virus scan engines.
To configure the Antivirus feature on an X9000 cluster, complete these steps:
1. Add the external virus scan engines to be used for virus scanning. You can schedule periodic updates of virus definitions from the virus scan engines to the cluster nodes.
2. Enable Antivirus on file systems.
3. Configure Antivirus settings as appropriate for your cluster.
For file sharing protocols other than CIFS, when Antivirus is enabled on a file system, scans are triggered when a file is first read. Subsequent reads of the file do not trigger a scan unless the file has been modified or the virus definitions have changed. For CIFS, you must specify the file operations that trigger a scan (open, close, or both). The scans are forwarded to an external scan engine, which blocks the operation until the scan is complete. When the scan completes, if the file is found to be infected, the system reports a permission denied error as the result of the file operation. If the file is clean, the file operation is allowed to proceed.
You can define Antivirus exclusions on directories in a file system to exclude files from being scanned. When you define an exclusion rule for a directory, all files and folders in that directory hierarchy are excluded from Antivirus scans based on the rule.
Antivirus support can be configured from the GUI or the CLI. On the GUI, select Cluster Configuration from the Navigator, and then select Antivirus from the lower Navigator. The Antivirus Settings panel displays the current configuration.

On the CLI, use the ibrix_avconfig command to configure Antivirus support. Use the ibrix_av command to update Antivirus definitions or view statistics.
Adding or removing external virus scan engines
The Antivirus software runs on external virus scan engines. You will need to add these systems to the Antivirus configuration.
IMPORTANT: HP recommends that you add a minimum of two virus scan engines to provide load balancing for scan requests and to prevent loss of scanning if one virus scan engine becomes unavailable.
On the GUI, select Virus Scan Engines from the lower Navigator to open the Virus Scan Engines panel, and then click Add on that panel. On the Add dialog box, enter the IP address of the external scan engine and the ICAP port number configured on that system.
NOTE: The default port number for ICAP is 1344. HP recommends that you use this port unless it is already in use by another activity. You may need to open this port for TCP/UDP in your firewall.
To remove an external virus scan engine from the configuration, select that system on the Virus Scan Engines panel and click Delete.
To add an external virus scan engine from the CLI, use the following command:
ibrix_avconfig -a -S -I IPADDR -p PORTNUM
The port number specified here must match the ICAP port number configured on the virus scan engines.
Use the following command to remove an external virus scan engine:
ibrix_avconfig -r -S -I IPADDR
Enabling or disabling Antivirus on X9000 file systems
On the GUI, select AV Enable/Disable FileSystems from the lower Navigator to open the AV Enable Disable panel, which lists the file systems in the cluster. Select the file system to be enabled, click Enable, and confirm the operation. To disable Antivirus, click Disable.
The CLI commands are as follows:
Enable Antivirus on all file systems in the cluster:
ibrix_avconfig -e -F
Enable Antivirus on specific file systems:
ibrix_avconfig -e -f FSLIST
If you specify more than one file system, use commas to separate the file systems.
Disable Antivirus on all file systems:
ibrix_avconfig -d -F
Disable Antivirus on specific file systems:
ibrix_avconfig -d -f FSLIST
Updating Antivirus definitions
You should update the virus definitions on the cluster nodes periodically. On the GUI, click Update ClusterWide ISTag on the Antivirus Settings panel. The cluster then connects with the external virus scan engines and synchronizes the virus definitions on the cluster nodes with the definitions on the external virus scan engines.
NOTE: All virus scan engines should have the same virus definitions. Inconsistencies in virus definitions can cause files to be rescanned. Be sure to coordinate the schedules for updates to virus definitions on the virus scan engines and updates of virus definitions on the cluster nodes.
On the CLI, use the following commands:
Schedule cluster-wide updates of virus definitions:
ibrix_av -t [-S CRON_EXPRESSION]
The CRON_EXPRESSION specifies the time for the virus definition update. For example, the expression "0 0 12 * * ?" executes this command at noon every day.
View the current schedule:
ibrix_av -l -T
Configuring Antivirus settings
Defining the Antivirus unavailable policy
This policy determines how targeted file operations are handled when an external virus scan engine is not available. The policies are:
Allow. All operations triggering scans are allowed to run to completion.
Deny. All operations triggering scans are blocked and returned with an error. This policy ensures that a virus is not returned when Antivirus is not available. This is the default.
The following are examples of situations that can cause Antivirus to be unavailable:
All configured virus scan engines are unreachable.
The cluster nodes cannot communicate with the virus scan engines because of network issues.
The number of incoming scan requests exceeds the threads available on the cluster nodes to process the requests.
The Antivirus Settings panel shows the current setting for this policy. To toggle the policy, click Configure AV Policy.

To set the policy from the CLI, use this command:
ibrix_avconfig -u -g A|D
Defining protocol-specific policies
For certain file sharing protocols (currently only CIFS), you can specify the file operations that trigger a scan (open, close, or both). There are three policies:
OPEN. Scan on open.
CLOSE. Scan on close.
BOTH. Scan on open and close.
To set the policy, select Protocol Scan Settings from the lower Navigator. The AV Protocol Settings panel then displays the current setting. To set or change the setting, click Set/Modify on the panel and then select the appropriate setting from the Action dialog box.
To set the policy from the CLI, use this command:
ibrix_avconfig -u -k PROTOCOL -G O|C|B
Defining exclusions
Exclusions specify files to be skipped during Antivirus scans. Excluding files can improve performance, as files meeting the exclusion criteria are not scanned. You can exclude files based on their file extension or size. To configure exclusions on the GUI, click Exclusion on the AV Enable Disable panel. On the Exclusion Property dialog box, select the file system and then specify the directory path where the exclusion is to be applied.
By default, when exclusions are set on a particular directory, all of its child directories inherit those exclusions. You can override those exclusions for a child directory by explicitly setting exclusions on the child directory or by using the No rule option to stop exclusion inheritance on the child directory.
Select the appropriate type of rule:
Inherited Rule/Remove Rule. Use this option to reset or remove exclusions that were explicitly set on the child directory. The child directory will then inherit exclusions from its parent directory. You should also use this option to remove exclusions on the top-most directory where exclusion rules have been set.
No rule. Use this option to remove or stop exclusions at the child directory. The child directory will no longer inherit the exclusions from its parent directory.
Custom rule. Use this option to exclude files having specific file extensions or exceeding a specific size. If you specify multiple file extensions, use commas to separate the extensions. To exclude all types of files from scans, enter an asterisk (*) in the file extension field. You can specify either file extensions or a file size (or both).
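From the CLI, an equivalent custom rule might be added with a command like the following; the corresponding options are described immediately below, and the file system name, directory path, extensions, and size threshold shown here are illustrative only:

ibrix_avconfig -a -E -f ifs1 -P /ifs1/media -x .mp3,.jpg -s 500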

On the CLI, use the following options to specify exclusions with the ibrix_avconfig command:
-x FILE_EXTENSION   Excludes all files having the specified extension, such as .jpg. If you specify multiple extensions, use commas to separate the extensions.
-s FILE_SIZE   Excludes all files larger than the specified size (in MB).
-N   Does not exclude any files in the directory hierarchy.
Add an exclusion to a directory:
ibrix_avconfig -a -E -f FSNAME -P DIR_PATH {-N | [-x FILE_EXTENSION] [-s FILE_SIZE]}
View exclusions on a specific directory:
ibrix_avconfig -l -E -f FSNAME -P DIR_PATH
Remove all exclusions from a directory:
ibrix_avconfig -r -E -f FSNAME -P DIR_PATH
Viewing Antivirus statistics
Antivirus statistics are accumulated whenever a scan is run. To view statistics, select Statistics from the lower Navigator. Click Clear Stats to clear the current statistics and start accumulating them again.

The CLI commands are:
View statistics from all cluster nodes:
ibrix_av -l -s
Delete statistics from all nodes:
ibrix_av -d -s
Antivirus quarantines and software snapshots
The quarantine utility has the following limitations when used with snap files.
Limitation 1: When the following sequence of events occurs:
A virus file is created inside the snap root
A snap is taken
The original file is renamed or moved to another path
The original file is read
The quarantine utility cannot locate the snap file because the link was formed with the new filename assigned after the snap was taken.
Limitation 2: When the following sequence of events occurs:
A virus file is created inside the snap root
A snap is taken
The original file is renamed or moved to another path
The snap file is read
The quarantine utility cannot track the original file because the link was not created with its name. That file cannot be listed, reset, moved, or deleted by the quarantine utility.
Limitation 3: When the following sequence of events occurs:
A virus file is created inside the snap root
The original file is read
A snap is taken
The original file is renamed or moved to another path
The quarantine utility displays both the snap name (which still has the original name) and the new filename, although they are the same file.

14 Creating X9000 software snapshots
The X9000 software snapshot feature allows you to capture a point-in-time copy of a file system or directory for online backup purposes and to simplify recovery of files from accidental deletion. Software snapshots can be taken of the entire file system or of selected directories. Users can access the file system or directory as it appeared at the instant of the snapshot.
NOTE: To accommodate software snapshots, the inode format was changed in the X9000 software 6.0 release. Consequently, files used for snapshots must either be created on X9000 File Serving Software 6.0 or later, or the pre-6.0 file system containing the files must be upgraded for snapshots. To upgrade a file system, use the upgrade60.sh utility. For more information, see the HP IBRIX X9000 Network Storage System CLI Reference Guide.
Before taking snapshots of a file system or directory, you must enable the directory tree for snapshots. An enabled directory tree is called a snap tree. You can then define a schedule for taking periodic snapshots of the snap tree, and you can also take on-demand snapshots.
Users can access snapshots using NFS or CIFS. All users with access rights to the root of the snapshot directory tree can navigate, view, and copy all or part of a snapshot.
NOTE: Snapshots are read only and cannot be modified, moved, or renamed. However, they can be copied.
NOTE: You can use either the software method or the block method to take snapshots on a file system. Using both snapshot methods simultaneously on the same file system is not supported.
File system limits for snap trees and snapshots
A file system can have a maximum of 1024 snap trees. Each snap tree can have a maximum of 1024 snapshots.
Configuring snapshot directory trees and schedules
You can enable a directory tree for snapshots using either the GUI or the CLI; however, the GUI must be used to configure a snapshot schedule.
On the GUI, select Snapshots from the Navigator. The Snap Trees panel lists all directory trees currently enabled for snapshots. The Schedule Details panel shows the snapshot schedule for the selected directory tree.

To enable a directory tree for snapshots, click Add on the Snap Trees panel. You can create a snapshot directory tree for an entire file system or for a directory in that file system. When entering the directory path, do not specify a directory that is a parent or child of another snapshot directory tree. For example, if directory /dir1/dir2 is a snapshot directory tree, you cannot create another snapshot directory tree at /dir1 or /dir1/dir2/dir3.
The snapshot schedule can include any combination of hourly, daily, weekly, and monthly snapshots. Also specify the number of snapshots to retain on the system. When that number is reached, the oldest snapshot is deleted.
All weekly and monthly snapshots are taken at the same time of day. The default time is 9 pm. To change the time, click the time shown on the dialog box, and then select a new time on the Modify Weekly/Monthly Snapshot Creation Time dialog box.
To enable a directory tree for snapshots using the CLI, run the following command:
ibrix_snap -m -f FSNAME -p SNAPTREEPATH
SNAPTREEPATH is the full directory pathname, starting at the root of the file system. For example:
ibrix_snap -m -f ifs1 -p /ifs1/dir1/dir2
IMPORTANT: A snapshot reclamation task is required for each file system containing snap trees that have scheduled snapshots. If a snapshot reclamation task does not already exist, you will need to configure the task. See Reclaiming file system space previously used for snapshots (page 165).

Modifying a snapshot schedule
You can change the snapshot schedule at any time. On the Snap Trees panel, select the appropriate snap tree, select Modify, and make your changes on the Modify Snap Tree dialog box.
Managing software snapshots
To view the snapshots for a specific directory tree, select the appropriate directory tree on the Snap Trees panel, and then select Snapshots from the lower Navigator. The Snapshots panel lists snapshots for the directory tree and allows you to take a new snapshot or delete an existing snapshot. Use the filter at the bottom of the panel to select the snapshots you want to view.
The following CLI commands display information about snapshots and snapshot directory trees:
List all snapshots, or only the snapshots on a specific file system or snapshot directory tree:
ibrix_snap -l -s [-f FSNAME [-P SnapTreePath]]
List all snapshot directory trees, or only the snapshot directory trees on a specific file system:
ibrix_snap -l [-f FSNAME]
Taking an on-demand snapshot
To take an on-demand snapshot of a directory tree, select the directory tree on the Snap Trees panel and then click Create on the List of Snapshots panel.

To take a snapshot from the CLI, use the following command:
ibrix_snap -c -f FSNAME -P SNAPTREEPATH -n NAMEPATTERN
SNAPTREEPATH is the full directory path starting from the root of the file system. The name that you specify is appended to the date of the snapshot. The following words cannot be used in the name, as they are reserved for scheduled snapshots: Hourly, Daily, Weekly, Monthly.
You will need to manually delete on-demand snapshots when they are no longer needed.
Determining space used by snapshots
Space used by snapshots counts towards the used capacity of the file system and towards user quotas. Standard file system space reporting utilities work as follows:
The ls and du commands report the size of a file depending on the version you are viewing. If you are looking at a snapshot, the commands report the size of the file when it was snapped. If you are looking at the current version, the commands report the current size.
The df command reports the total space used in the file system by files and snapshots.
Accessing snapshot directories
Snapshots are stored in a read-only directory named .snapshot located under the directory tree. For example, snapshots for directory tree /ibfs1/users are stored in the /ibfs1/users/.snapshot directory. Each snapshot is a separate directory beneath the .snapshot directory.
Snapshots are named using the ISO 8601 date and time format, plus a custom value. For example, a snapshot created on June 1, 2011 at 9 am will be named 2011-06-01T090000_<name>. For snapshots created automatically, <name> is hourly, daily, weekly, or monthly, depending on the snapshot schedule. If you create a snapshot on demand, you can specify the <name>.
The following example lists snapshots created on an hourly schedule for snap tree /ibfs1/users. Using ISO 8601 naming ensures that the snapshot directories are listed in order according to the time they were taken.
[root@x9000n1 ~]# cd /ibfs1/users/.snapshot/
[root@x9000n1 .snapshot]# ls
T110000_hourly T190000_hourly T030000_hourly
T120000_hourly T200000_hourly T040000_hourly
T130000_hourly T210000_hourly T050000_hourly
T140000_hourly T220000_hourly T060000_hourly
T150000_hourly T230000_hourly T070000_hourly
T160000_hourly T000000_hourly T080000_hourly
T170000_hourly T010000_hourly T090000_hourly
T180000_hourly T020000_hourly
Users having access to the root of the snapshot directory tree (in this example, /ibfs1/users/) can navigate the /ibfs1/users/.snapshot directory, view snapshots, and copy all or part of a snapshot. If necessary, users can copy a snapshot and overlay the present copy to achieve a manual rollback.
NOTE: Access to .snapshot directories is limited to administrators and NFS and CIFS users.
Accessing snapshots using NFS
Access over NFS is similar to local X9000 access except that the mount point will probably be different. In this example, NFS export /ibfs1/users is mounted as /users1 on an NFS client.
[root@rhel5vm1 ~]# cd /users1/.snapshot
[root@rhel5vm1 .snapshot]# ls
T110000_hourly T150000_hourly T190000_hourly
T120000_hourly T160000_hourly T200000_hourly
T130000_hourly T170000_hourly
T140000_hourly T180000_hourly
Accessing snapshots using CIFS
Over CIFS, Windows users can use Explorer to navigate to the .snapshot folder and view files. In the following example, /ibfs1/users/ is mapped to the Y drive on a Windows system.
Restoring files from snapshots
Users can restore files from snapshots by navigating to the appropriate snapshot directory and copying the file or files to be restored, assuming they have the appropriate permissions on those files. If a large number of files need to be restored, you may want to use Run Once remote replication to copy files from the snapshot directory to a local or remote directory (see Starting a replication task (page 127)).
Deleting snapshots
Scheduled snapshots are deleted automatically according to the retention schedule specified for the snapshot tree; however, you can delete a snapshot manually if necessary. You also need to delete on-demand snapshots manually. Deleting a snapshot does not free the file system space that was used by the snapshot; you will need to reclaim the space.
IMPORTANT: Before deleting a directory that contains snapshots, take these steps:
Delete the snapshots (use ibrix_snap).
Reclaim the file system space used by the snapshots (use ibrix_snapreclamation).
Remove snapshot authorization for the snap tree (use ibrix_snap).
Deleting a snapshot manually
To delete a snapshot from the GUI, select the appropriate snapshot on the List of Snapshots panel, click Delete, and confirm the operation.
To delete the snapshot from the CLI, use the following command:
ibrix_snap -d -f FSNAME -P SNAPTREEPATH -n SNAPSHOTNAME
If you are unsure of the name of the snapshot, use the following command to locate it:
ibrix_snap -l -s [-f FSNAME] [-P SNAPTREEPATH]
Reclaiming file system space previously used for snapshots
Snapshot reclamation tasks are used to reclaim file system space previously used by snapshots that have been deleted.
IMPORTANT: A snapshot reclamation task is required for each file system containing snap trees that have scheduled snapshots.
Using the GUI, you can schedule a snapshot reclamation task to run at a specific time on a recurring basis. The reclamation task runs on an entire file system, not on a specific snapshot directory tree within that file system. If a file system includes two snapshot directory trees, space is reclaimed in both snapshot directory trees.
To start a new snapshot reclamation task, select the appropriate file system from the Filesystems panel and then select Active Tasks > Snapshot Space Reclamation from the lower Navigator.

Select New on the Task Summary panel to open the New Snapshot Space Reclamation Task dialog box. On the General tab, select a reclamation strategy:
Maximum Space Reclaimed. The reclamation task recovers all snapped space eligible for recovery. It takes longer and uses more system resources than Maximum Speed. This is the default.
Maximum Speed of Task. The reclamation task reclaims only the most easily recoverable snapped space. This strategy reduces the amount of runtime required by the reclamation task, but leaves some space potentially unrecovered (that space is still eligible for later reclamation). You cannot create a schedule for this type of reclamation task.
If you are using the Maximum Space Reclaimed strategy, you can schedule the task to run periodically. On the Schedule tab, click Schedule this task and select the frequency and time to run the task.

To stop a running reclamation task, click Stop on the Task Summary panel.
Managing reclamation tasks from the CLI
To start a reclamation task from the CLI, use the following command:
ibrix_snapreclamation -r -f FSNAME [-s {maxspeed|maxspace}] [-v]
The reclamation task runs immediately; you cannot create a recurring schedule for it.
To stop a reclamation task, use the following command:
ibrix_snapreclamation -k -t TASKID [-F]
The following command shows summary status information for all reclamation tasks or only the tasks on the specified file systems:
ibrix_snapreclamation -l [-f FSLIST]
The following command provides detailed status information:
ibrix_snapreclamation -i [-f FSLIST]
Removing snapshot authorization for a snap tree
Before removing snapshot authorization from a snap tree, you must delete all snapshots in the snap tree and reclaim the space previously used by the snapshots. Complete the following steps:
1. Disable any schedules on the snap tree. Select the snap tree on the Snap Trees panel, select Modify, and remove the Frequency settings on the Modify Snap Tree dialog box.
2. Delete the existing snapshots of the snap tree. See Deleting snapshots (page 165).
3. Reclaim the space used by the snapshots. See Reclaiming file system space previously used for snapshots (page 165).
4. Delete the snap tree. On the Snap Trees panel, select the appropriate snap tree, click Delete, and confirm the operation.
To disable snapshots on a directory tree using the CLI, run the following command:
ibrix_snap -m -U -f FSNAME -P SnapTreePath
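As a rough guide, the CLI portion of this procedure might look like the following for a snap tree /ifs1/dir1/dir2 on file system ifs1; the names are illustrative only, and any snapshot schedules should first be disabled on the GUI as described in step 1:

# list the remaining snapshots for the snap tree
ibrix_snap -l -s -f ifs1 -P /ifs1/dir1/dir2
# delete each snapshot by name
ibrix_snap -d -f ifs1 -P /ifs1/dir1/dir2 -n <snapshot name>
# reclaim the space previously used by the deleted snapshots
ibrix_snapreclamation -r -f ifs1 -s maxspace
# remove snapshot authorization from the snap tree
ibrix_snap -m -U -f ifs1 -P /ifs1/dir1/dir2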

Moving files between snap trees
Files created on, copied to, or moved to a snap tree directory can be moved to any other snap tree or non-snap tree directory on the same file system, provided they are not snapped. After a snapshot is taken and the files have become part of that snapshot, they cannot be moved to any other snap tree or directory on the same file system. However, the files can be moved to any snap tree or directory on a different file system.
Backing up snapshots
Snapshots are stored in a .snapshot directory under the directory tree. For example:
# ls -alR /fs2/dir.tst
/fs2/dir.tst:
drwxr-xr-x 4 root root 4096 Feb 8 09:11 dir.dir
-rwxr-xr-x 1 root root Jan 31 09:33 file.0
-rwxr-xr-x 1 root root Jan 31 09:33 file.1
drwxr-xr-x 2 root root 4096 Apr 6 15:55 .snapshot
/fs2/dir.tst/.snapshot:
lrwxrwxrwx 1 root root 15 Apr 6 15: T15:39:57_ -> ../.@
lrwxrwxrwx 1 root root 15 Apr 6 15: T15:55:07_tst1 -> ../.@
/fs2/dir.tst/dir.dir:
-rwxr-xr-x 1 root root Jan 31 09:34 file.1
NOTE: The links beginning with .@ are used internally by the snapshot software and cannot be accessed.
To back up the snapshots, use the procedure corresponding to your backup method.
Backups using NDMP
By default, NDMP does not back up the .snapshot directory. For example, if you specify a backup of the /fs2/dir.tst directory, NDMP backs up the directory but excludes /fs2/dir.tst/.snapshot and its contents.
To back up the snapshot of the directory, specify the path /fs2/dir.tst/.snapshot/ T15:55:07_tst1. You can then use the snapshot (a point-in-time copy) to restore its associated directory. For example, use /fs2/dir.tst/.snapshot/ T15:55:07_tst1 to restore /fs2/dir.tst.
Backups without NDMP
DMA applications cannot back up a snapshot directory tree using a path such as /fs2/dir.tst/.snapshot/time-stamp-name. Instead, mount the snapshot using the mount -o bind option and then back up the mount point. For example, using a mount point such as /mnt-time_stamp-name, use the following command to mount the snapshot:
mount -o bind /fs2/dir.tst/.snapshot/time-stamp-name /mnt-time_stamp-name
Then configure the DMA to back up /mnt-time_stamp-name.
Backups with the tar utility
The tar symbolic link (-h) option can be used to copy snapshots. For example, the following command copies the /snapfs1/test3 directory associated with the point-in-time snapshot:
tar cvfh /snapfs1/test3/.snapshot/ t044500_hourly
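Putting the non-NDMP approach together, a minimal sketch might look like the following. The snapshot directory name, mount point, and archive path are hypothetical, and using tar as the backup tool here is only one possible choice:

# bind-mount the snapshot so the backup tool sees an ordinary directory tree
mkdir /mnt-2012-02-07T044500_hourly
mount -o bind /fs2/dir.tst/.snapshot/2012-02-07T044500_hourly /mnt-2012-02-07T044500_hourly
# archive the mounted snapshot, following symbolic links (-h)
tar cvfh /backup/dir.tst-2012-02-07.tar -C /mnt-2012-02-07T044500_hourly .
# clean up the temporary mount point
umount /mnt-2012-02-07T044500_hourly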

15 Creating block snapshots
The block snapshot feature allows you to capture a point-in-time copy of a file system for online backup purposes and to simplify recovery of files from accidental deletion. The snapshot replicates all file system entities at the time of capture and is managed exactly like any other file system.
NOTE: You can use either the software method or the block method to take snapshots on a file system. Using both snapshot methods simultaneously on the same file system is not supported.
The block snapshot feature is supported as follows:
HP X9320 Network Storage System: supported on the HP P2000 G3 MSA Array System or HP 2000 Modular Smart Array G2 provided with the platform.
HP X9300 Network Storage Gateway: supported on the HP P2000 G3 MSA Array System; HP 2000 Modular Smart Array G2; HP P4000 G2 models; HP 3PAR F200, F400, T400, and T800 Storage Systems (OS version (MU3)); and Dell EqualLogic storage arrays (no arrays are provided with the X9300 system).
HP X9720/X9730 Network Storage System: not supported.
The block snapshot feature uses the copy-on-write method to preserve the snapshot regardless of changes to the origin file system. Initially, the snapshot points to all blocks that the origin file system is using (B in the following diagram). When a block in the origin file system is overwritten with additions, edits, or deletions, the original block (prior to changes) is copied to the snapshot store, and the snapshot points to the copied block (C in the following diagram). The snapshot continues to point to the origin file system contents from the point in time at which the snapshot was taken.
To create a block snapshot, first provision or register the snapshot store. You can then create a snapshot from type-specific storage resources. The snapshot is active from the moment it is created. You can take snapshots via the X9000 Software block snapshot scheduler or manually, whenever necessary. Each snapshot maintains its origin file system contents until it is deleted from the system.
Snapshots can be made visible to users, allowing them to access and restore files (based on permissions) from the available snapshots.
NOTE: By default, snapshots are read only. HP recommends that you do not allow writes to any snapshots.
Setting up snapshots
This section describes how to configure the cluster to take snapshots.
Preparing the snapshot partition
The block snapshot feature does not require any custom settings for the partition. However, HP recommends that you provide sufficient storage capacity to support the snapshot partition.

NOTE: If the snapshot store is too small, the snapshot will eventually exceed the available space (unless you detect this and manually increase storage). If this situation occurs, the array software deletes the snapshot resources and the X9000 Software snapshot feature invalidates the snapshot file system. Although you can monitor the snapshot and manually increase the snapshot store as needed, the safest policy is to initially provision enough space to last for the expected lifetime of the snapshot.
The optimum size of the snapshot store depends on usage patterns in the origin file system and the length of time you expect the snapshot to be active. Typically, a period of trial and error is required to determine the optimum size. See the array documentation for procedures regarding partitioning and allocating storage for file system snapshots.
Registering for snapshots
After setting up the snapshot partition, you can register the partition with the Fusion Manager. You will need to provide a name for the storage location and specify access parameters (IP address, user name, and password).
The following command registers and names the array's snapshot partition on the Fusion Manager. The partition is then recognized as a repository for snapshots.
ibrix_vs -r -n STORAGENAME -t {msa|lefthand|3PAR|eqlogic} -I IP(s) -U USERNAME [-P PASSWORD]
To remove the registration information from the configuration database, use the following command. The partition will then no longer be recognized as a repository for snapshots.
ibrix_vs -d -n STORAGENAME
Discovering LUNs in the array
After the array is registered, use the -a option to map the physical storage elements in the array to the logical representations used by X9000 Software. The software can then manage the movement of data blocks to the appropriate snapshot locations on the array. Use the following command to map the storage information for the specified array:
ibrix_vs -a [-n STORAGENAME]
Reviewing snapshot storage allocation
Use the following command to list all of the array storage that is registered for snapshot use:
ibrix_vs -l
To see detailed information for named snapshot partitions on either a specific array or all arrays, use the following command:
ibrix_vs -i [-n STORAGENAME]
Automated block snapshots
If you plan to take a snapshot of a file system on a regular basis, you can automate the snapshots. To do this, first define an automated snapshot scheme, and then apply the scheme to the file system and create a schedule. A snapshot scheme specifies the number of snapshots to keep and the number of snapshots to mount. You can create a snapshot scheme from either the GUI or the CLI.

171 The type of storage array determines the maximum number of snapshots you can keep and mount per file system:
P2000 G3 MSA System/MSA2000 G2 array: maximum of 32 snapshots to keep per file system; maximum of 7 snapshots to mount per file system.
EqualLogic array: maximum of 8 snapshots to keep per file system; maximum of 7 snapshots to mount per file system.
P4000 G2 storage system: maximum of 32 snapshots to keep per file system; maximum of 7 snapshots to mount per file system.
3PAR storage system: maximum of 32 snapshots to keep per file system; maximum of 7 snapshots to mount per file system.
For the P2000 G3 MSA System/MSA2000, the storage array itself also limits the total number of snapshots that can be stored. Arrays count the number of LUNs involved in each snapshot. For example, if a file system has four LUNs, taking two snapshots of the file system increases the total snapshot LUN count by eight. If a new snapshot will cause the snapshot LUN count limit to be exceeded, an error will be reported, even though the file system limits may not be reached. The snapshot LUN count limit on P2000 G3 MSA System/MSA2000 arrays is 255. The 3PAR storage system allows you to make a maximum of 500 virtual copies of a base volume. Up to 256 virtual copies can be read/write copies. Creating automated snapshots using the GUI Select the file system where the snapshots will be taken, and then select Block Snapshots from the lower Navigator. On the Block Snapshots panel, click New to display the Create Snapshot dialog box. On the General tab, select Recurring as the Snapshot Type. Automated block snapshots 171

172 Under Snapshot Configuration, select New to create a new snapshot scheme. The Create Snapshot Scheme dialog box appears. 172 Creating block snapshots

173 On the General tab, enter a name for the strategy and then specify the number of snapshots to keep and mount on a daily, weekly, and monthly basis. Keep in mind the maximums allowed for your array type. Daily means that one snapshot is kept per day for the specified number of days. For example, if you enter 6 as the daily count, the snapshot feature keeps 1 snapshot per day through the 6th day. On the 7th day, the oldest snapshot is deleted. Similarly, Weekly specifies the number of weeks that snapshots are retained, and Monthly specifies the number of months that snapshots are retained. On the Advanced tab, you can create templates for naming the snapshots and mountpoints. This step is optional. Automated block snapshots 173

174 For either template, enter one or more of the following variables. The variables must be enclosed in braces ({ }) and separated by underscores (_). The template can also include text strings. When a snapshot is created using the templates, the variables are replaced with the following values:
fsname: File system name
shortdate: yyyy_mm_dd
fulldate: yyyy_mm_dd_hhmmz + GMT
When you have completed the scheme, it appears in the list of snapshot schemes on the Create Snapshot dialog box. To create a snapshot schedule using this scheme, select it on the Create Snapshot dialog box and go to the Schedule tab. Click Schedule this task, set the frequency of the snapshots, and schedule when they should occur. You can also set start and end dates for the schedule. When you click OK, the snapshot scheduler will begin taking snapshots according to the specified snapshot strategy and schedule. Creating an automated snapshot scheme from the CLI You can create an automated snapshot scheme with the ibrix_vs_snap_strategy command. However, you will need to use the GUI to create a snapshot schedule. To define a snapshot scheme, execute the ibrix_vs_snap_strategy command with the -c option: 174 Creating block snapshots

175 ibrix_vs_snap_strategy -c -n NAME -k KEEP -m MOUNT [-N NAMESPEC] [-M MOUNTSPEC] The options are:
-n NAME: The name for the snapshot scheme.
-k KEEP: The number of snapshots to keep per file system. For the P2000 G3 MSA System/MSA2000 G2 array, the maximum is 32 snapshots per file system. For P4000 G2 storage systems, the maximum is 32 snapshots per file system. For Dell EqualLogic arrays, the maximum is eight snapshots per file system. Enter the number of days, weeks, and months to retain snapshots. The numbers must be separated by commas, such as -k 2,7,28. NOTE: One snapshot is kept per day for the specified number of days. For example, if you enter 6 as the daily count, the snapshot feature keeps 1 snapshot per day through the 6th day. On the 7th day, the oldest snapshot is deleted. Similarly, the weekly count specifies the number of weeks that snapshots are retained, and the monthly count specifies the number of months that snapshots are retained.
-m MOUNT: The number of snapshots to mount per file system. The maximum number of snapshots is 7 per file system. Enter the number of snapshots to mount per day, week, and month. The numbers must be separated by commas, such as -m 2,2,3. The sum of the numbers must be less than or equal to 7.
-N NAMESPEC: Snapshot name template. The template specifies a scheme for creating unique names for the snapshots. Use the variables listed below for the template.
-M MOUNTSPEC: Snapshot mountpoint template. The template specifies a scheme for creating unique mountpoints for the snapshots. Use the variables listed below for the template.
Variables for snapshot name and mountpoint templates:
fulldate: yyyy_mm_dd_hhmmz + GMT
shortdate: yyyy_mm_dd
fsname: File system name
You can specify one or more of these variables, enclosed in braces ({ }) and separated by underscores (_). The template can also include text strings. Two sample templates follow. When a snapshot is created using one of these templates, the variables will be replaced with the values listed above. {fsname}_snap_{fulldate} snap_{shortdate}_{fsname} Other automated snapshot procedures Use the following procedures to manage automated snapshots. Modifying an automated snapshot scheme A snapshot scheme can be modified only from the CLI. Use the following command: ibrix_vs_snap_strategy -e -n NAME -k KEEP -m MOUNT [-N NAMESPEC] [-M MOUNTSPEC] Viewing automated snapshot schemes On the GUI, you can view snapshot schemes on the Create Snapshot dialog box. Select Recurring as the Snapshot Type, and then select a snapshot scheme. A description of that scheme will be displayed. To view all automated snapshot schemes or all schemes of a specific type using the CLI, execute the following command: ibrix_vs_snap_strategy -l [-T TYPE] Automated block snapshots 175
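As a concrete illustration, the following command defines a scheme named daily_scheme that keeps 6 daily, 4 weekly, and 3 monthly snapshots, mounts 2 daily, 2 weekly, and 3 monthly snapshots, and names each snapshot with the first sample template shown above. The scheme name and counts are arbitrary examples only:
ibrix_vs_snap_strategy -c -n daily_scheme -k 6,4,3 -m 2,2,3 -N "{fsname}_snap_{fulldate}"
With this name template, a snapshot of file system ifs1 would receive a name such as ifs1_snap_2012_06_15_1430Z (a hypothetical timestamp in the fulldate format described above).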

176 To see details about a specific automated snapshot scheme, use the following command: ibrix_vs_snap_strategy -i -n NAME Deleting an automated snapshot scheme A snapshot scheme can be deleted only from the CLI. Use the following command: ibrix_vs_snap_strategy -d -n NAME Managing block snapshots This section describes how to manage individual snapshots. Creating an on-demand snapshot To take an on-demand snapshot from the GUI, select the file system where the snapshot will be taken, and then select Block Snapshots from the lower Navigator. On the Block Snapshots panel, click New to display the Create Snapshot dialog box. On the General tab, select Once as the Snapshot Type and click OK. Use the following command to create a snapshot from the CLI: ibrix_vs_snap -c -n SNAPFSNAME -f ORIGINFSNAME For example, to create a snapshot named ifs1_snap for file system ifs1: ibrix_vs_snap -c -n ifs1_snap -f ifs1 Mounting or unmounting a snapshot On the GUI, select Block Snapshots from the lower Navigator, select the snapshot on the Block Snapshots panel, and click Mount or Unmount. Include the -M option in the create command to automatically mount the snapshot file system after creating it. This makes the snapshot visible to authorized users. HP recommends that you do not allow writes to any snapshot file system. ibrix_vs_snap -c -M -n SNAPFSNAME -f ORIGINFSNAME For example, to create and mount a snapshot named ifs1_snap for file system ifs1: ibrix_vs_snap -c -M -n ifs1_snap -f ifs1 Recovering system resources on snapshot failure If a snapshot encounters insufficient resources when attempting to update its contents due to changes in the origin file system, the snapshot fails and is marked invalid. Data is no longer accessible in the snapshot. To clean up records in the configuration database for an invalid snapshot, use the following command from the CLI: ibrix_vs_snap -r -f SNAPFSLIST For example, to clean up database records for a failed snapshot named ifs1_snap: ibrix_vs_snap -r -f ifs1_snap On the GUI, select the snapshot on the Block Snapshots panel and click Cleanup. Deleting snapshots Delete snapshots to free up resources when the snapshot is no longer needed or to create a new snapshot when you have already created the maximum allowed for your storage system. On the GUI, select the snapshot on the Block Snapshots panel and click Delete. On the CLI, use the following command: ibrix_vs_snap -d -f SNAPFSLIST For example, to delete snapshots ifs0_snap and ifs1_snap: ibrix_vs_snap -d -f ifs0_snap,ifs1_snap 176 Creating block snapshots

177 Viewing snapshot information Use the following commands to view snapshot information from the CLI. Listing snapshot information for all hosts The ibrix_vs_snap -l command displays snapshot information for all hosts. Sample output follows: ibrix_vs_snap -l NAME NUM_SEGS MOUNTED? GEN TYPE CREATETIME snap1 3 No 6 msa Wed Oct 7 15:09:50 EDT 2009 The following table lists the output fields for ibrix_vs_snap -l.
NAME: Snapshot name.
NUM_SEGS: Number of segments in the snapshot.
MOUNTED?: Snapshot mount state.
GEN: Number of times the snapshot configuration has been changed in the configuration database.
TYPE: Snapshot type, based on the underlying storage system.
CREATETIME: Creation timestamp.
Listing detailed information about snapshots Use the ibrix_vs_snap -i command to monitor the status of active snapshots. You can use the command to ensure that the associated snapshot stores are not full. ibrix_vs_snap -i To list information about snapshots of specific file systems, use the following command: ibrix_vs_snap -i [-f SNAPFSLIST] The ibrix_vs_snap -i command lists the same information as ibrix_fs -i, plus information fields specific to snapshots. Include the -f SNAPFSLIST argument to restrict the output to specific snapshot file systems. The following example shows only the snapshot-specific fields. To view an example of the common fields, see Viewing file system information (page 35).
SEGMENT OWNER LV_NAME STATE BLOCK_SIZE CAPACITY(GB) FREE(GB) AVAIL(GB) FILES FFREE USED% BACKUP TYPE TIER LAST_REPORTED
1 ib ilv11_msa_snap9 snap OK, SnapUsed=4% 4, MIXED 7 Hrs 56 Mins 46 Secs ago
2 ib ilv12_msa_snap9 snap OK, SnapUsed=6% 4, MIXED 7 Hrs 56 Mins 46 Secs ago
3 ib ilv13_msa_snap9 snap OK, SnapUsed=6% 4, MIXED 7 Hrs 56 Mins 46 Secs ago
4 ib ilv14_msa_snap9 snap OK, SnapUsed=8% 4, MIXED 7 Hrs 56 Mins 46 Secs ago
5 ib ilv15_msa_snap9 snap OK, SnapUsed=6% 4, MIXED 7 Hrs 56 Mins 46 Secs ago
6 ib ilv16_msa_snap9 snap OK, SnapUsed=5% 4, MIXED 7 Hrs 56 Mins 46 Secs ago
NOTE: For P4000 G2 storage systems, the state is reported as OK, but the SnapUsed field always reports 0%. Managing block snapshots 177

178 The following table lists the output fields for ibrix_vs_snap -i.
SEGMENT: Snapshot segment number.
OWNER: The file serving node that owns the snapshot segment.
LV_NAME: Logical volume.
STATE: State of the snapshot.
BLOCK_SIZE: Block size used for the snapshot.
CAPACITY (GB): Size of this snapshot file system, in GB.
FREE (GB): Free space on this snapshot file system, in GB.
AVAIL (GB): Space available for user files, in GB.
FILES: Number of files that can be created in this snapshot file system.
FFREE: Number of unused file inodes available in this snapshot file system.
USED%: Percentage of total storage occupied by user files.
BACKUP: Backup host name.
TYPE: Segment type. Mixed means the segment can contain both directories and files.
TIER: Tier to which the segment was assigned.
Last Reported: Last time the segment state was reported.
Accessing snapshot file systems By default, snapshot file systems are mounted in two locations on the file serving nodes: /<snapshot_name> /<original_file_system>/.<snapshot_name> For example, if you take a snapshot of the fs1 file system and name the snapshot fs1_snap1, it will be mounted at /fs1_snap1 and at /fs1/.fs1_snap1. X9000 clients must mount the snapshot file system (/<snapshot_name>) to access the contents of the snapshot. NFS and CIFS clients can access the contents of the snapshot through the original file system (such as /fs1/.fs1_snap1) or they can mount the snapshot file system (in this example, /fs1_snap1). The following window shows an NFS client browsing the snapshot file system .fs1_snap2 in the fs1_nfs file system. 178 Creating block snapshots

179 The next window shows a CIFS client accessing the snapshot file system .fs1_snap1. The original file system is mapped to drive X. Accessing snapshot file systems 179
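As an illustration of client access, a Linux NFS client could mount a snapshot file system directly, assuming the snapshot has been exported over NFS (the server name and mountpoint below are hypothetical):
mount -t nfs node1:/fs1_snap1 /mnt/fs1_snap1
Alternatively, a client that already mounts the original file system can simply browse the hidden .fs1_snap1 directory under the original mountpoint.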

180 Troubleshooting block snapshots Snapshot reserve is full and the MSA2000 is deleting snapshot volumes When the snapshot reserve is full, the MSA2000 will delete snapshot volumes on the storage array, leaving the device entries on the file serving nodes. To correct this situation, take the following steps: 1. Stop I/O or any applications that are reading or writing to the snapshot file systems. 2. Log on to the active Fusion Manager. 3. Unmount all snapshot file systems. 4. Delete all snapshot file systems to recover space in the snapshot reserve. CIFS clients receive an error when creating a snapshot CIFS clients might see the following error when attempting to create a snapshot: Make sure you are connected to the network and try again This error is generated when the snapshot creation takes longer than the CIFS timeout, causing the CIFS client to determine that the server has failed or the network is disconnected. To avoid this situation, do not take snapshots during periods of high CIFS activity. Cannot create 32 snapshots on an MSA system MSA systems or arrays support a maximum of 32 snapshots per file system. If snapshot creation is failing before you reach 32 snapshots, check the following: Verify the version of the MSA firmware. If the cluster has been rebuilt, use the MSA GUI or CLI to check for old snapshots that were not deleted before the cluster was rebuilt. The CLI command is show snapshots. Verify the virtual disk and LUN layout. 180 Creating block snapshots
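For reference, checking the array for leftover snapshots from the MSA CLI might look like the following sketch. The controller address and management login are illustrative; see the array documentation for the exact login procedure:
ssh manage@192.168.10.50
# show snapshots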

181 16 Using data tiering A data tier is a logical grouping of file system segments. After creating tiers containing the segments in the file system, you can use the data tiering migration process to move files from the segments in one tier to the segments in another tier. For example, you could create a primary data tier for SAS storage and another tier for SATA storage. You could then migrate specific data from the SAS tier to the lower-cost SATA tier. Other configurations might be based on the type of file being stored, such as storing all streaming files in a tier or moving all files over a certain size to a specific tier. X9000 data tiering is transparent to users and applications and is compatible with X9000 software file system snapshots and other X9000 data services. Migration is a storage- and file-system-intensive process which, in some circumstances, can take days to complete. Migration tasks must be run at a time when clients are not generating significant load. Migration is not suitable for environments where there are no quiet times to run migration tasks. IMPORTANT: Data tiering has a cool-down period of approximately 10 minutes. If a file was last accessed during the cool-down period, the file will not be moved. Configuring data tiers Complete the following steps to configure data tiering: Assign segments to tiers. You can create any number of tiers. A tier cannot be on tape or on a location external to the X9000 file system, such as an NFS share. Define the primary tier. All new files are written to this tier. Create the tiering policy for the file system. The policy consists of rules that specify the data to be migrated. The parameters and directives used in the migration rules include actions based on file access patterns (such as access and modification times), file size, and file type. Rules can be constrained to operate on files owned by specific users and groups and to specific paths. Logical operators can be used to combine directives. Assigning segments to tiers Segments can be assigned to tiers when a file system is created or expanded, or at times when a migration task is not running. Similarly, tier assignments can be changed or removed at any time, provided that no migration tasks are running. On the GUI, select Filesystems from the Navigator and select a file system in the Filesystems panel. In the lower Navigator, select Segments. The Segments panel displays all of the segments in the file system. Configuring data tiers 181

182 In this example, filesystem ifs1 has four segments and no tiering information is currently defined. We will create two tiers, Tier1 and Tier2, and we will assign two segments to each tier. On the Segments panel, select the segments for the tier and click Assign to Tier. On the Assign to Tier dialog box, specify a name for the tier. When you repeat the operation to place other file system segments in a tier, the dialog box allows you to add the segments to an existing tier or to create a new tier. When you create a tier, the tier assignment is added to the Segments panel. In the following example, segments 1 and 2 are in Tier1 and segments 3 and 4 are in Tier2. 182 Using data tiering
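The same assignments can also be made with the ibrix_tier command, which is described later in this chapter (see Configuring tiers and migrating data using the CLI). A sketch using the segment numbers from this example:
ibrix_tier -a -f ifs1 -t Tier1 -S 1,2
ibrix_tier -a -f ifs1 -t Tier2 -S 3,4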

183 Defining the primary tier All new files are written to the primary tier, which is typically the tier built on the fastest storage. Use the following command to define the primary tier: ibrix_fs_tune -f FILESYSTEM -h SERVERS -t TIERNAME The following example specifies Tier1 as the primary tier: ibrix_fs_tune -f ifs1 -h ibrix1a,ibrix1b -t Tier1 This policy takes precedence over any other file allocation policies defined for the filesystem. NOTE: This example assumes users access the files over CIFS, NFS, FTP, or HTTP. If X9000 clients are used, the allocation policy must be applied to the clients. (Use -h to specify the clients.) Creating a tiering policy for a file system A tiering policy specifies migration rules for the file system. One tiering policy can be defined per file system and the policy must have at least one rule. Rules in the policy can migrate files between any two tiers in the filesystem. For example, rule1 could move files between Tier1 and Tier2, rule2 could migrate files from Tier2 to Tier1, and rule3 could migrate files between Tier1 and Tier3. A file is migrated according to the first rule that it matches. You can narrow the scope of rules by combining directives using logical operators. The following example creates a policy that has three simple rules: Migrate all files that have not been modified for 30 minutes from Tier1 to Tier2. (This rule is not valid for production, but is a good rule for testing.) Migrate all files larger than 5 MB from Tier1 to Tier2. Migrate all mpeg4 files from Tier1 to Tier2. On the GUI, select Filesystems from the Navigator and then select a file system in the Filesystems panel. In the lower Navigator, select Active Tasks > Data Tiering > Rules. Creating a tiering policy for a file system 183

184 The Data Tiering Rules panel lists the existing rules for the file system. To create a rule, click Create. On the Create Data Tiering Rule dialog box, select the source and destination tier and then define a rule. The rule can move files between any two tiers. When you click OK, the rule is checked for correct syntax. If the syntax is correct, the rule is saved and appears on the Data Tiering Rules panel. The following example shows the three rules created for the example. You can delete rules if necessary. Select the rule on the Data Tiering Rules panel and click Delete. 184 Using data tiering
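The same three example rules can also be added from the CLI with the ibrix_migrator command described later in this chapter. This is only a sketch and assumes the tier names Tier1 and Tier2 used in this example:
ibrix_migrator -A -f ifs1 -r 'mtime older than 30 minutes' -S Tier1 -D Tier2
ibrix_migrator -A -f ifs1 -r 'size > 5M' -S Tier1 -D Tier2
ibrix_migrator -A -f ifs1 -r 'name = "*.mpeg4"' -S Tier1 -D Tier2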

185 Additional rule examples The following rule migrates all files from Tier2 to Tier1: name="*" The following rule migrates all files in the subtree beneath the path. The path is relative to the mountpoint of the file system. path=testdata2 The next example migrates all mpeg4 files in the subtree. A logical and operator combines the rules: path=testdata4 and name="*mpeg4" The next example narrows the scope of the rule to files owned by users in a specific group. Note the use of parentheses. gname=users and (path=testdata4 and name="*mpeg4") For more examples and detailed information about creating rules, see Writing tiering rules (page 191). Running a migration task Migrating files from one tier to another must be initiated manually. Migration tasks cannot be scheduled to run using X9000 commands. Once started, a migration task passes once through the filesystem then stops. It is possible to use external job control software to run a migration task on a schedule if required. Only one migration task can run on a file system at any time. The task is not restarted on failure, and cannot be paused and later resumed. However, a migration task can be started when a server is in the InFailover state. To start a migration task from the GUI, select the file system from the Filesystems panel and then select Data Tiering in the lower Navigator. Click New on the Task Summary panel. The counters on the panel are updated periodically while the task is running. If necessary, click Stop to stop the data tiering task. There is no pause/resume function. When the task is complete, it appears on the GUI under Inactive Tasks for the file system. You can check the exit status there. Running a migration task 185

186 Click Details to see summary information about the task. Changing the tiering configuration with the GUI The following restrictions apply when changing the configuration: You cannot modify the tiering configuration for a filesystem while an active migration task is running. You cannot move segments between tiers, assign them to new tiers, or unassign them from tiers while an active migration task is running or while any rules exist that apply to the segments. Moving a segment to another tier To move a segment to another tier, select the segment on the Segments panel, click Assign to Tier, and then select a different tier for the segment, or create a new tier. The following example moves a segment from Tier1 to Tier2. 186 Using data tiering

187 Removing a segment from a tier You can remove a segment from a tier, without assigning it to another tier. Select the file system from the Filesystems panel and expand Segments in the lower Navigator to list the tiers in the file system. Select the tier containing the segment. On the Tier Segments panel, select the segment and click Unassign. Configuring tiers and migrating data using the CLI Use the ibrix_tier command to manage tier assignments and to list information about tiers. Use the ibrix_migrator command to create or delete rules defining migration policies, to start or stop migration tasks, and to list information about rules and migrator tasks. Configuring tiers and migrating data using the CLI 187

188 Assigning segments to tiers First determine the segments in the file system and then assign them to tiers. Use the following command to list the segments: ibrix_fs -f FSNAME -i For example (the output is truncated): [root@ibrix01a ~]# ibrix_fs -f ifs1 -i ..
SEGMENT OWNER LV_NAME STATE BLOCK_SIZE CAPACITY(GB)
1 ibrix01b ilv1 OK 4,096 3,
2 ibrix01a ilv2 OK 4,096 3,
3 ibrix01b ilv3 OK 4,096 3,
4 ibrix01a ilv4 OK 4,096 3,
Use the following command to assign segments to a tier. The tier is created if it does not already exist. ibrix_tier -a -f FSNAME -t TIERNAME -S SEGLIST For example, the following command creates Tier1 and assigns segments 1 and 2 to it: [root@ibrix01a ~]# ibrix_tier -a -f ifs1 -t Tier1 -S 1,2 Assigned segment: 1 (ilv1) to tier Tier1 Assigned segment: 2 (ilv2) to tier Tier1 Command succeeded! NOTE: Be sure to spell the name of the tier correctly when you add segments to an existing tier. If you spell the name incorrectly, a new tier is created with the incorrect tier name, and no error is recognized. Use ibrix_fs_tune to designate the primary tier. See Defining the primary tier (page 183). Displaying information about tiers Use the following command to list the tiers in a file system. The -t option displays information for a specific tier. ibrix_tier -l -f FSNAME [-t TIERNAME] For example: [root@ibrix01a ~]# ibrix_tier -i -f ifs1
Tier: Tier1
===========
FS Name Segment Number Tier
ifs1 1 Tier1
ifs1 2 Tier1
Tier: Tier2
===========
FS Name Segment Number Tier
ifs1 3 Tier2
ifs1 4 Tier2
Creating a tiering policy To create a rule for migrating data from a source tier to a destination tier, use the following command: 188 Using data tiering

189 ibrix_migrator -A -f FSNAME -r RULE -S SOURCE_TIER -D DESTINATION_TIER The following rule migrates all files that have not been modified for 30 minutes from Tier1 to Tier2: ~]# ibrix_migrator -A -f ifs1 -r 'mtime older than 30 minutes' -S Tier1 -D Tier2 Rule: mtime<now :30:0 Command succeeded! Listing tiering rules To list all of the rules in the tiering policy, use the following command: ibrix_migrator -l [-f FSNAME] -r The output lists the file system name, the rule ID (IDs are assigned in the order in which rules are added to the configuration database), the rule definition, and the source and destination tiers. For example: [root@ibrix01a ~]# ibrix_migrator -l -f ifs1 -r HsmRules ======== FS Name Id Rule Source Tier Destination Tier ifs1 9 mtime older than 30 minutes Tier1 Tier2 ifs1 10 name = "*.mpeg4" Tier1 Tier2 ifs1 11 size > 4M Tier1 Tier2 Running a migration task To start a migration task, use the following command: ibrix_migrator -s -f FSNAME For example: [root@ibrix01a ~]# ibrix_migrator -s -f ifs1 Submitted Migrator operation to background. ID of submitted task: Migrator_163 Command succeeded! NOTE: The ibrix_migrator command cannot be run at the same time as ibrix_rebalance. To list the active migration task for a file system, use the ibrix_migrator -i option. For example: [root@ibrix01a ~]# ibrix_migrator -i -f ifs1 Operation: Migrator_163 ======================= Task Summary ============ Task Id : Migrator_163 Type : Migrator File System : ifs1 Submitted From : root from Local Host Run State : STARTING Active? : Yes EXIT STATUS : Started At : Jan 17, :32:55 Coordinator Server : ibrix01b Errors/Warnings : Dentries scanned : 0 Number of Inodes moved : 0 Number of Inodes skipped : 0 Avg size (kb) : 0 Avg Mb Per Sec : 0 Number of errors : 0 To view summary information after the task has completed, run the ibrix_migrator -i command again and include the -n option, which specifies the task ID. (The task ID appears in the output from ibrix migrator -i.) Configuring tiers and migrating data using the CLI 189

190 testdata1]# ibrix_task -i -n Migrator_163 Operation: Migrator_163 ======================= Task Summary ============ Task Id : Migrator_163 Type : Migrator File System : ifs1 Submitted From : root from Local Host Run State : STOPPED Active? : No EXIT STATUS : OK Started At : Jan 17, :32:55 Coordinator Server : ibrix01b Errors/Warnings : Dentries scanned : 1025 Number of Inodes moved : 1002 Number of Inodes skipped : 1 Avg size (kb) : 525 Avg Mb Per Sec : 16 Number of errors : 0 Stopping a migration task To stop a migration task, use the following command: ibrix_migrator -k -t TASKID [-F] Changing the tiering configuration with the CLI The following restrictions apply when changing the configuration: You cannot modify the tiering configuration for a filesystem while an active migration task is running. You cannot move segments between tiers, assign them to new tiers, or unassign them from tiers while an active migration task is running or while any rules exist that apply to the segments. Moving a segment to another tier Use the following command to assign a segment to another tier: ibrix_tier -a -f FSNAME -t TIERNAME -S SEGLIST Removing a segment from a tier The following command removes segments from a tier. If you do not specify a segment list, all segments in the file system are unassigned. ibrix_tier -u -f FSNAME [-S SEGLIST] The following example removes segments 3 and 4 from their current tier assignment: [root@ibrix01a ~]# ibrix_tier -u -f ifs1 -S 3,4 Deleting a tier Before deleting a tier, take these steps: Delete all policy rules defined for the tier. Allow any active tiering jobs to complete. To unassign all segments and delete the tier, use the following command: ibrix_tier -d -f FSNAME -t TIERNAME 190 Using data tiering
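For example, to unassign all segments from Tier2 and delete the tier (continuing the ifs1 example used in this chapter):
[root@ibrix01a ~]# ibrix_tier -d -f ifs1 -t Tier2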

191 Deleting a tiering rule Before deleting a rule, run the ibrix_migrator -l [-f FSNAME] -r command and note the ID assigned to the rule. Then use the following command to delete the rule: ibrix_migrator -d -f FSNAME -r RULE_ID The -r option specifies the rule ID. For example: [root@ibrix01a ~]# ibrix_migrator -d -f ifs2 -r 2 Writing tiering rules Rule attributes A tiering policy consists of one or more rules that specify how data is migrated from one tier to another. You can write rules using the GUI, or you can write them directly to the configuration database using the ibrix_migrator -A command. Each rule identifies file attributes to be matched. It also specifies the source tier to scan and the destination tier where files that meet the rule's criteria will be moved and stored. Note the following: Tiering rules are based on individual file attributes. All rules are executed when the tiering policy is applied during execution of the ibrix_migrator command. It is important that different rules do not target the same files, especially if different destination tiers are specified. If tiering rules are ambiguous, the final destination for a file is not predictable. See Ambiguous rules (page 193) for more information. The following are examples of attributes that can be specified in rules. All attributes are listed in Rule keywords (page 192). You can use AND and OR operators to create combinations.
Access time: file was last accessed x or more days ago, or file was accessed in the last y days
Modification time: file was last modified x or more days ago
File size: greater than n K
File name or file type: jpg, wav, exe (include or exclude)
File ownership: owned by user(s) (include or exclude)
Use of the tiering assignments or policy on any file system is optional. Tiering is not assigned by default; there is no default tier. Operators and date/time qualifiers Valid rule operators are <, <=, =, !=, >, >=, and the boolean operators and and or. Use the following qualifiers for fixed times and dates: Time: Enter as three pairs of colon-separated integers using a 24-hour clock. The format is hh:mm:ss (for example, 15:30:00). Date: Enter as yyyy-mm-dd [hh:mm:ss], where time of day is optional (for example, 2012-01-16 or 2012-01-16 15:30:00). Note the space separating the date and time. When specifying an absolute date and/or time, the rule must use a compare type operator (<, <=, =, !=, >, >=). For example: ibrix_migrator -A -f ifs2 -r "atime > ' ' " -S TIER1 -D TIER2 Writing tiering rules 191

192 Use the following qualifiers for relative times and dates: Relative time: Enter in rules as year or years, month or months, week or weeks, day or days, hour or hours. Relative date: Use older than or younger than. The rules engine uses the time the ibrix_migrator command starts execution as the start time for the rule. It then computes the required time for the rule based on this start time. For example, ctime older than 4 weeks refers to that time period more than 4 weeks before the start time. The following example uses a relative date: ibrix_migrator -A -f ifs2 -r "atime older than 2 days " -S TIER1 -D TIER2 Rule keywords The following keywords can be used in rules.
atime: Access time, used in a rule as a fixed or relative time.
ctime: Change time, used in a rule as a fixed or relative time.
mtime: Modification time, used in a rule as a fixed or relative time.
gid: An integer corresponding to a group ID.
gname: A string corresponding to a group name. Enclose the name string in double quotes.
uid: An integer corresponding to a user ID.
uname: A string corresponding to a user name, where the user is the owner of the file. Enclose the name string in double quotes.
type: File system entity the rule operates on. Currently, only the file entity is supported.
size: In size-based rules, the threshold value for determining migration. Value is an integer specified in K (KB), M (MB), G (GB), and T (TB). Do not separate the value from its unit (for example, 24K).
name: Regular expression. A typical use of a regular expression is to match file names. Enclose a regular expression in double quotes. The * wildcard is valid (for example, name = "*.mpg"). A name cannot contain a / character. You cannot specify a path; only a filename is allowed.
path: Path name that allows these wild cards: *, ?, /. For example, if the mountpoint for the file system is /mnt, path=ibfs1/mydir/* matches the entire directory subtree under /mnt/ibfs1/mydir. (A path cannot start with a /.)
strict_path: Path name that rigidly conforms to UNIX shell file name expansion behavior. For example, strict_path=/mnt/ibfs1/mydir/* matches only the files that are explicitly in the mydir directory, but does not match any files in subdirectories of mydir.
Migration rule examples When you write a rule, identify the following components: File system (-f) Source tier (-S) Destination tier (-D) Use the following command to write a rule. The rule portion of the command must be enclosed in single quotes. ibrix_migrator -A -f FSNAME -r 'RULE' -S SOURCE_TIER -D DEST_TIER Examples: 192 Using data tiering

193 The rule in the following example is based on the file's last modification time, using a relative time period. All files whose last modification date is more than one month in the past are moved. # ibrix_migrator -A -f ifs2 -r 'mtime older than 1 month' -S T1 -D T2 In the next example, the rule is modified to limit the files being migrated to two types of graphic files. The or expression is enclosed in parentheses, and the * wildcard is used to match filename patterns. # ibrix_migrator -A -f ifs2 -r 'mtime older than 1 month and ( name = "*.jpg" or name = "*.gif" )' -S T1 -D T2 In the next example, three conditions are imposed on the migration. Note that there is no space between the integer and unit that define the size threshold (10M): # ibrix_migrator -A -f ifs2 -r 'ctime older than 1 month and type = file and size >= 10M' -S T1 -D T2 The following example uses the path keyword. It moves files greater than or equal to 5M that are under the directory /ifs2/tiering_test from TIER1 to TIER2: ibrix_migrator -A -f ifs2 -r "path = tiering_test and size >= 5M" -S TIER1 -D TIER2 Rules can be group- or user-based as well as time- or data-based. In the following example, files associated with two users are migrated to T2 with no consideration of time. The names are quoted strings. # ibrix_migrator -A -f ifs2 -r 'type = file and ( uname = "ibrixuser" or uname = "nobody" )' -S T1 -D T2 Conditions can be combined with and and or to create very precise tiering rules, as shown in the following example. # ibrix_migrator -A -f ifs2 -r ' (ctime older than 3 weeks and ctime younger than 4 weeks) and type = file and ( name = "*.jpg" or name = "*.gif" ) and (size >= 10M and size <= 25M)' -S T1 -D T2 Ambiguous rules It is possible to write a set of ambiguous rules, where different rules could be used to move a file to conflicting destinations. For example, if a file can be matched by two separate rules, there is no guarantee which rule will be applied in a tiering job. Ambiguous rules can cause a file to be moved to a specific tier and then potentially moved back. Examples of two such situations follow. Example 1: In the following example, if a .jpg file older than one month exists in tier 1, then the first rule moves it to tier 2. However, once it is in tier 2, it is matched by the second rule, which then moves the file back to tier 1. # ibrix_migrator -A -f ifs2 -r ' mtime older than 1 month ' -S T1 -D T2 # ibrix_migrator -A -f ifs2 -r ' name = "*.jpg" ' -S T2 -D T1 There is no guarantee as to the order in which the two rules will be executed; therefore, the final destination is ambiguous because multiple rules can apply to the same file. Example 2: Rules can cause data movement in both directions, which can lead to issues. In the following example, the rules specify that all .doc files in tier 1 be moved to tier 2 and all .jpg files in tier 2 be moved to tier 1. However, this might not succeed, depending on how full the tiers are. # ibrix_migrator -A -f ifs2 -r ' name = "*.doc" ' -S T1 -D T2 # ibrix_migrator -A -f ifs2 -r ' name = "*.jpg" ' -S T2 -D T1 For example, if tier 1 is filled with .doc files to 70% capacity and tier 2 is filled with .jpg files to 80% capacity, then tiering might terminate before it is able to fully "swap" the contents of tier 1 and tier 2. The files are processed in no particular order; therefore, it is possible that more .doc

194 files will be encountered at the beginning of the job, causing space on tier 2 to be consumed faster than on tier 1. Once a destination tier is full, obviously no further movement in that direction is possible. The rules in these two examples are ambiguous because they give rise to possible conflicting file movement. It is the user's responsibility to write unambiguous rules for the data tiering policy for their file systems. 194 Using data tiering
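One way to avoid the ambiguity in Example 1 is to narrow the rules so that they can never match the same file. The following sketch, using the same hypothetical tiers T1 and T2, restricts the first rule to .doc files so that .jpg files are matched only by the second rule:
# ibrix_migrator -A -f ifs2 -r ' mtime older than 1 month and name = "*.doc" ' -S T1 -D T2
# ibrix_migrator -A -f ifs2 -r ' name = "*.jpg" ' -S T2 -D T1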

195 17 Using file allocation Overview This chapter describes how to configure and manage file allocation. X9000 Software allocates new files and directories to segments according to the allocation policy and segment preferences that are in effect for a client. An allocation policy is an algorithm that determines the segments that are selected when clients write to a file system. File allocation policies File allocation policies are set per file system on each file serving node and on X9000 clients. The policies define the following: Preferred segments. The segments where a file serving node or X9000 client creates all new files and directories. Allocation policy. The policy that a file serving node or X9000 client uses to choose segments from its pool of preferred segments to create new files and directories. The segment preferences and allocation policy are set locally for X9000 clients. For NFS, CIFS, HTTP, and FTP clients (collectively referred to as NAS clients), the allocation policy and segment preferences must be set on the file serving nodes from which the NAS clients access shares. Segment preferences and allocation policies can be set and changed at any time, including when the target file system is mounted and in use. IMPORTANT: It is possible to set separate allocation policies for files and directories. However, this feature is deprecated and should not be used unless you are directed to do so by HP support. NOTE: X9000 clients access segments directly through the owning file serving node and do not honor the file allocation policy set on file serving nodes. IMPORTANT: Changing segment preferences and allocation policy will alter file system storage behavior. The following tables list standard and deprecated preference settings and allocation policies. Overview 195

196 Standard segment preferences and allocation policies
ALL: Prefer all of the segments available in the file system for new files and directories. This is the default segment preference. It is suitable for most use cases.
LOCAL: Prefer the file serving node's local segments for new files and directories. No writes are routed between the file serving nodes in the cluster. This preference is beneficial for performance in some configurations and for some workloads, but can cause some segments to be overutilized.
RANDOM: Allocate files to a randomly chosen segment among preferred segments. This is the default allocation policy. It generally spreads new files and directories evenly (by number of files, not by capacity) across all of the preferred segments; however, that is not guaranteed.
ROUNDROBIN: Allocate files to preferred segments in segment order, returning to the first segment (or the designated starting segment) when a file or directory has been allocated to the last segment. This policy guarantees that new files and folders are spread evenly across the preferred segments (by number of files, not by capacity).
Deprecated segment preferences and allocation policies
IMPORTANT: HP recommends that you do not use these options. They are currently supported but will be removed in a future release.
AUTOMATIC: Lets the X9000 Software select the allocation policy. Should be used only on the advice of HP support.
DIRECTORY: Allocates files to the segment where its parent directory is located. Should be used only on the advice of HP support.
STICKY: Allocates files to one segment until the segment's storage limit is reached, and then moves to the next segment as determined by the AUTOMATIC file allocation policy. Should be used only on the advice of HP support.
HOST_ROUNDROBIN_NB: For clusters with more than 16 file serving nodes, takes a subset of the servers to be used for file creation and rotates this subset on a regular, periodic basis. Should be used only on the advice of HP support.
NONE: Sets directory allocation policy only. Causes the directory allocation policy to revert to its default, which is the policy set for file allocation. Use NONE only to set file and directory allocation to the same policy.
How file allocation settings are evaluated By default, ALL segments are preferred and file systems use the RANDOM allocation policy. These defaults are adequate for most X9000 environments; but in some cases, it may be necessary to change the defaults to optimize file storage for your system. 196 Using file allocation

197 An X9000 client or X9000 file serving node (referred to as the host ) uses the following precedence rules to evaluate the file allocation settings that are in effect: The host uses the default allocation policies and segment preferences: The RANDOM policy is applied, and a segment is chosen from among ALL the available segments. The host uses a non-default allocation policy (such as ROUNDROBIN) and the default segment preference: Only the file or directory allocation policy is applied, and a segment is chosen from among ALL available segments. The host uses a non-default segment preference and a non-default allocation policy (such as LOCAL/ROUNDROBIN): A segment is chosen according to the following rules: From the pool of preferred segments, select a segment according to the allocation policy set for the host, and store the file in that segment if there is room. If all segments in the pool are full, proceed to the next rule. Use the AUTOMATIC allocation policy to choose a segment with enough storage room from among the available segments, and store the file. When file allocation settings take effect on X9000 clients Although file allocation settings are executed immediately on file serving nodes, for X9000 clients, a file allocation intention is stored in the Fusion Manager. When X9000 Software services start on a client, the client queries the Fusion Manager for the file allocation settings that it should use and then implements them. If the services are already running on a client, you can force the client to query the Fusion Manager by executing ibrix_client or ibrix_lwhost --a on the client, or by rebooting the client. Using CLI commands for file allocation Follow these guidelines when using CLI commands to perform any file allocation configuration tasks: To perform a task for NAS clients (NFS, CIFS, FTP, HTTP), specify file serving nodes for the -h HOSTLIST argument. To perform a task for X9000 clients, specify individual clients for -h HOSTLIST or specify a hostgroup for -g GROUPLIST. Hostgroups are a convenient way to configure file allocation settings for a set of X9000 clients. To configure file allocation settings for all X9000 clients, specify the clients hostgroup. Setting file and directory allocation policies You can set a nondefault file or directory allocation policy for file serving nodes and X9000 clients. You can also specify the first segment where the policy should be applied, but in practice this is useful only for the ROUNDROBIN policy. IMPORTANT: Certain allocation policies are deprecated. See File allocation policies (page 195) for a list of standard allocation policies. On the GUI, open the Modify Filesystem Properties dialog box and select the Host Allocation tab. Setting file and directory allocation policies 197

198 Setting file and directory allocation policies from the CLI Allocation policy names are case sensitive and must be entered as uppercase letters (for example, RANDOM). Set a file allocation policy: ibrix_fs_tune -f FSNAME {-h HOSTLIST | -g GROUPLIST} -s LVNAMELIST -p POLICY [-S STARTSEGNUM] The following example sets the ROUNDROBIN policy for files only on file system ifs1 on file serving node s1.hp.com, starting at segment ilv1: ibrix_fs_tune -f ifs1 -h s1.hp.com -p ROUNDROBIN -s ilv1 Set a directory allocation policy: Include the -R option to specify that the command is for a directory. ibrix_fs_tune -f FSNAME {-h HOSTLIST | -g GROUPLIST} -p POLICY [-S STARTSEGNUM] [-R] The following example sets the ROUNDROBIN directory allocation policy on file system ifs1 for file serving node s1.hp.com, starting at segment ilv1: ibrix_fs_tune -f ifs1 -h s1.hp.com -p ROUNDROBIN -R Setting segment preferences There are two ways to prefer segments for file serving nodes, X9000 clients, or hostgroups: Prefer a pool of segments for the hosts to use. Prefer a single segment for files created by a specific user or group on the clients. Both methods can be in effect at the same time. For example, you can prefer a segment for a user and then prefer a pool of segments for the clients on which the user will be working. On the GUI, open the Modify Filesystem Properties dialog box and select the Segment Preferences tab. 198 Using file allocation

199 Creating a pool of preferred segments from the CLI A segment pool can consist of individually selected segments, all segments local to a file serving node, or all segments. Clients will apply the allocation policy that is in effect for them to choose a segment from the segment pool. NOTE: Segments are always created in the preferred condition. If you want to have some segments preferred and others unpreferred, first select a single segment and prefer it. This action unprefers all other segments. You can then work with the segments one at a time, preferring and unpreferring as required. By design, the system cannot have zero preferred segments. If only one segment is preferred and you unprefer it, all segments become preferred. When preferring multiple pools of segments (for example, one for X9000 clients and one for file serving nodes), make sure that no segment appears in both pools. Use the following command to specify the pool by logical volume name (LVNAMELIST): ibrix_fs_tune -f FSNAME {-h HOSTLIST | -g GROUPLIST} -s LVNAMELIST Use the following command and the LOCAL keyword to create a pool of all segments on file serving nodes. Use the ALL keyword to restore the default segment preferences. ibrix_fs_tune -f FSNAME {-h HOSTLIST | -g GROUPLIST} -S {SEGNUMLIST | ALL | LOCAL} Restoring the default segment preference The default is for all file system segments to be preferred. Use the following command to restore the default value: ibrix_fs_tune -f FSNAME {-h HOSTLIST | -g GROUPLIST} -S ALL Setting segment preferences 199
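For example, the following commands first prefer only the local segments of two file serving nodes and later restore the default preference of all segments. The file system and host names are illustrative:
ibrix_fs_tune -f ifs1 -h ibrix1a,ibrix1b -S LOCAL
ibrix_fs_tune -f ifs1 -h ibrix1a,ibrix1b -S ALL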

HP StorageWorks X9000 File Serving Software File System User Guide

HP StorageWorks X9000 File Serving Software File System User Guide HP StorageWorks X9000 File Serving Software File System User Guide Abstract This guide describes how to configure and manage X9000 Software file systems and how to use NFS, CIFS, FTP, and HTTP to access

More information

Guidelines for using Internet Information Server with HP StorageWorks Storage Mirroring

Guidelines for using Internet Information Server with HP StorageWorks Storage Mirroring HP StorageWorks Guidelines for using Internet Information Server with HP StorageWorks Storage Mirroring Application Note doc-number Part number: T2558-96338 First edition: June 2009 Legal and notice information

More information

Guest Management Software V2.0.2 Release Notes

Guest Management Software V2.0.2 Release Notes Guest Management Software V2.0.2 Release Notes Abstract These release notes provide important release-related information for GMS (Guest Management Software) Version 2.0.2. GMS V2.0.2 is MSM software version

More information

Configuring EMC Isilon

Configuring EMC Isilon This chapter contains the following sections: System, page 1 Configuring SMB Shares, page 3 Creating an NFS Export, page 5 Configuring Quotas, page 6 Creating a Group for the Isilon Cluster, page 8 Creating

More information

HP P4000 Remote Copy User Guide

HP P4000 Remote Copy User Guide HP P4000 Remote Copy User Guide Abstract This guide provides information about configuring and using asynchronous replication of storage volumes and snapshots across geographic distances. For the latest

More information

Infinite Volumes Management Guide

Infinite Volumes Management Guide ONTAP 9 Infinite Volumes Management Guide September 2016 215-11160_B0 doccomments@netapp.com Visit the new ONTAP 9 Documentation Center: docs.netapp.com/ontap-9/index.jsp Table of Contents 3 Contents

More information

HP Storage Provisioning Manager (SPM) Version 1.3 User Guide

HP Storage Provisioning Manager (SPM) Version 1.3 User Guide HP Storage Provisioning Manager (SPM) Version 1.3 User Guide Abstract This guide provides information to successfully install, configure, and manage the HP Storage Provisioning Manager (SPM). It is intended

More information

HP StorageWorks. EVA Virtualization Adapter administrator guide

HP StorageWorks. EVA Virtualization Adapter administrator guide HP StorageWorks EVA Virtualization Adapter administrator guide Part number: 5697-0177 Third edition: September 2009 Legal and notice information Copyright 2008-2009 Hewlett-Packard Development Company,

More information

HP Database and Middleware Automation

HP Database and Middleware Automation HP Database and Middleware Automation For Windows Software Version: 10.10 SQL Server Database Refresh User Guide Document Release Date: June 2013 Software Release Date: June 2013 Legal Notices Warranty

More information

HP XP P9000 Remote Web Console Messages

HP XP P9000 Remote Web Console Messages HP XP P9000 Remote eb Console Messages Abstract This document lists the error codes and error messages for HP XP P9000 Remote eb Console for HP XP P9000 disk arrays, and provides recommended action for

More information

HPE 3PAR StoreServ Management Console 3.0 User Guide

HPE 3PAR StoreServ Management Console 3.0 User Guide HPE 3PAR StoreServ Management Console 3.0 User Guide Abstract This user guide provides information on the use of an installed instance of HPE 3PAR StoreServ Management Console software. For information

More information

OMi Management Pack for Microsoft SQL Server. Software Version: For the Operations Manager i for Linux and Windows operating systems.

OMi Management Pack for Microsoft SQL Server. Software Version: For the Operations Manager i for Linux and Windows operating systems. OMi Management Pack for Microsoft Software Version: 1.01 For the Operations Manager i for Linux and Windows operating systems User Guide Document Release Date: April 2017 Software Release Date: December

More information

HP 3PAR Recovery Manager 2.0 Software for Microsoft Hyper-V

HP 3PAR Recovery Manager 2.0 Software for Microsoft Hyper-V HP 3PAR Recovery Manager 2.0 Software for Microsoft Hyper-V User Guide Abstract This document provides information about using HP 3PAR Recovery Manager for Microsoft Hyper-V for experienced Microsoft Windows

More information

HP StoreVirtual Storage Multi-Site Configuration Guide

HP StoreVirtual Storage Multi-Site Configuration Guide HP StoreVirtual Storage Multi-Site Configuration Guide Abstract This guide contains detailed instructions for designing and implementing the Multi-Site SAN features of the LeftHand OS. The Multi-Site SAN

More information

EMC Isilon. Cisco UCS Director Support for EMC Isilon

EMC Isilon. Cisco UCS Director Support for EMC Isilon Cisco UCS Director Support for, page 1 Adding an Account, page 2 Storage Pool Tiers, page 3 Storage Node Pools, page 4 SMB Shares, page 5 Creating an NFS Export, page 7 Quotas, page 9 Configuring a space

More information

HP integrated Citrix XenServer Online Help

HP integrated Citrix XenServer Online Help HP integrated Citrix XenServer Online Help Part Number 486855-002 September 2008 (Second Edition) Copyright 2008 Hewlett-Packard Development Company, L.P. The information contained herein is subject to

More information
