Configuration and Administration Guide 4.2


PRIMECLUSTER Global File Services
Configuration and Administration Guide 4.2
Oracle Solaris 10
J2S ENZ0(D) March 2013

Preface

The Global File Services File System and its optional products are generically called "GFS product" or "GFS" in this document. This manual explains the functions, settings, and operations of the entire GFS product. This manual covers the GFS product operating in the following environments:

Platform (Server)    Version of Oracle Solaris
PRIMEPOWER           Solaris 10
Sun Platform         Solaris 10
Sun Platform         Solaris 9

Target Readers

This manual is intended for all users operating products with the GFS Shared File System (e.g. PRIMECLUSTER). To read this manual, readers will require a general knowledge of UNIX and Oracle Solaris. (From here, it will be abbreviated as "Solaris".) Because the GFS Shared File System uses the functions of PRIMECLUSTER, readers will also need knowledge of cluster control as used in a PRIMECLUSTER system. Because the GFS Shared File System uses the functions of PRIMECLUSTER Global Disk Services, readers will also need knowledge of shared volumes as used in PRIMECLUSTER Global Disk Services.

Organization

This manual is organized as follows:

Chapter 1 File System Functions
This chapter describes the functions and features of the GFS Shared File System.

Chapter 2 File System Structure
This chapter describes the structure of the GFS Shared File System.

Chapter 3 Failure Recovery
This chapter describes the failure recovery functions of the GFS Shared File System.

Chapter 4 File System Design
This chapter describes the main features of the operational design of the GFS Shared File System.

Chapter 5 Management Partition
This chapter describes the management partition.

Chapter 6 Starting and Exiting the Management View
This chapter describes how to start and exit the GFS Management View.

Chapter 7 Operation Management View Screen Elements
This chapter describes the screen elements of the GFS Management View.

Chapter 8 Management Partition Operations (GUI)
This chapter describes how to operate the management partition with the GFS Management View.

Chapter 9 Management Partition Operations (Command)
This chapter describes how to operate the management partition with commands.

Chapter 10 File System Operations (GUI)
This chapter describes how to operate the GFS Shared File System with the GFS Management View.

Chapter 11 File System Operations (Command)
This chapter describes how to operate the GFS Shared File System with commands.

Chapter 12 File System Management
This chapter describes the procedures for managing the GFS Shared File System using basic commands.

Chapter 13 File System Backing-up and Restoring
This chapter describes how to back up and restore data in the GFS Shared File System.

Chapter 14 Tuning
This chapter describes how to use a variety of utilities to optimize and make effective use of the GFS Shared File System.

Chapter 15 Migration to the GFS Shared File System
This chapter describes how to migrate from existing file systems to the GFS Shared File System.

Appendix A List of Messages
This appendix describes GFS Shared File System messages.

Appendix B Reference Manual
This appendix is intended for use as a reference manual for the GFS Shared File System.

Appendix C Troubleshooting
This appendix describes messages for which emergency action is required.

Appendix D Incompatibility from Each Version
This appendix describes the incompatibilities from each version of the GFS Shared File System.

Glossary
The glossary defines the terms related to the GFS Shared File System.

Related documentation

The documents listed in this section contain information relevant to the GFS Shared File System. Before beginning to work with the GFS Shared File System, read the following documents:

- PRIMECLUSTER Concepts Guide
- PRIMECLUSTER Installation and Administration Guide
- PRIMECLUSTER Web-Based Admin View Operation Guide
- PRIMECLUSTER Cluster Foundation (CF) Configuration and Administration Guide
- PRIMECLUSTER Reliant Monitor Services (RMS) with Wizard Tools Configuration and Administration Guide
- PRIMECLUSTER Reliant Monitor Services (RMS) Troubleshooting Guide
- PRIMECLUSTER Global Disk Services Configuration and Administration Guide
- PRIMECLUSTER Global Link Services Configuration and Administration Guide (Redundant Line Control Function)
- PRIMECLUSTER Global Link Services Configuration and Administration Guide (Multipath Function)
- RC2000 User's Guide

Note

Related documents of PRIMECLUSTER include the following document besides the manuals above:

- PRIMECLUSTER Installation Guide
Installation instructions on the paper appended to each product of PRIMECLUSTER.

The data is stored on "CD3" of the product. For information about the file name, see "Product introduction", which is enclosed in the product.

If "Solaris X" is indicated in the reference manual name of Oracle Solaris, replace "Solaris X" with "Oracle Solaris 10 (Solaris 10)".

Manual Printing

Use the PDF file to print this manual.

Note

Adobe Reader Version 5.0 or higher is required to read and print the PDF file.

Online Manuals

To use the online manual, register the user name in user group wvroot, clroot, cladmin, or clmon on the cluster management server.

See

For the user groups and their meanings, see the "PRIMECLUSTER Web-Based Admin View Operation Guide".

Notational Conventions

Notation

Prompts

Command line examples that require system administrator (or root) rights to execute are preceded by the system administrator prompt, the hash sign (#). Command line examples that do not require system administrator rights are preceded by a dollar sign ($).

Manual page section numbers

Section numbers of manual pages appear in brackets after the commands of the UNIX operating system and PRIMECLUSTER. Example: cp(1)

The keyboard

Keystrokes that represent nonprintable characters are displayed as key icons such as [Enter] or [F1]. For example, [Enter] means press the key labeled Enter; [Cntl]+[B] means hold down the key labeled Cntl or Control and then press the [B] key.

Symbol

Material of particular interest is preceded by the following symbols in this manual:

Point
Describes the contents of an important point.

Note
Describes the points the users should pay close attention to.

Information
Provides useful information related to the topic.

See
Provides manuals for users' reference.

Abbreviated name

- Oracle Solaris may be abbreviated as Solaris Operating System or Solaris OS.
- Oracle Solaris 9 is abbreviated as Solaris 9.
- Oracle Solaris 10 is abbreviated as Solaris 10.

Date of publication and edition

December 2006, First edition
April 2010, 1.1 edition
November 2012, 1.2 edition
March 2013, 1.3 edition

Trademarks

- UNIX is a registered trademark of The Open Group in the United States and other countries.
- Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
- PRIMECLUSTER is a trademark of Fujitsu Limited.
- Other product names are product names, trademarks, or registered trademarks of these companies.

Copyright (c) Portions may be derived from the Berkeley BSD system, licensed from the University of California.

Acknowledgement
This product includes software developed by the University of California, Berkeley and its contributors.

Requests

- No part of this document may be reproduced or copied without permission of FUJITSU LIMITED.
- The contents of this document may be revised without prior notice.

All Rights Reserved, Copyright (C) FUJITSU LIMITED

Revision history

Manual code J2S ENZ0(B), J2S ENZ2(B):

- Changed descriptions for guaranteeing the atomicity. (Location: Functions not provided by the GFS Shared File System)
- Added "C.3.4 Corrective action when the mount processing fails." (Location: C.3 Common corrective actions against failures)
- Added "C.3.5 Corrective action when write fails by ENOSPC." (Location: C.3 Common corrective actions against failures)

Manual code J2S ENZ0(C), J2S ENZ2(C):

- Added "Maximum partitions size" in the table of upper limits of the file system. (Location: 1.7 Upper Limits of the File System)
- Described the operation of GFS if I/O for the management partition fails. (Location: 5.3 Notes on Management)

Manual code J2S ENZ0(D), J2S ENZ2(D):

- Added "Enabling and disabling the Nagle algorithm in the communications between MDS and AC." (Location: 14.1 Tuning Parameters)
- Changed the directory for saving crash dump files. (Location: C.2 Collecting information for troubleshooting, C.2.1 Collecting a crash dump file)

Contents

Chapter 1 File System Functions
  File System Overview
    Basic hardware configuration for use of file system
    File system sharing with PRIMEPOWER 800/900/1000/1500/2000/2500
  Simultaneous Shared Access Maintaining Coherency
  High Availability
    Uninterrupted operation in case of node failure
    High-speed file system recovery
    Area reassignment in case of disk block failure
  Performance
    Data access performance
    Contiguous block assignment
    Multi-partition configuration
    Individualized meta-cache management
  Extendibility
    File system extension
  Operability
    GUI
  Upper Limits of the File System
  Notes
    Relationships with Other Components of PRIMECLUSTER
    Unavailable functions
      Functions not provided by the GFS Shared File System
      Functional difference with GFS Shared File System on Solaris 9 of PRIMEPOWER
    Service ports used by the GFS Shared File System
    Notes on use of logical volumes of GDS
    Notes on system time change
    Notes on use as loopback virtual file system (lofs)
    Notes on stopping a node
    Effect of file lock on system performance
    GFS operational notices for Solaris 10
    Notes on opening a file on the GFS Shared File System
    Notes on writing file data on the GFS Shared File System

Chapter 2 File System Structure
  Disk Structure of File System
    Super block
    Partition configuration data
    Meta-data area
      Area management data
      i-node area
      V-data area
    Update log area
    File data area
    Partition configuration
  Component configuration
    MDS (Meta Data Server)
    AC (Access Client)
    Monitoring daemon
  File Access and Token Management

Chapter 3 Failure Recovery
  MDS Failure Recovery
    Automatic recovery for primary MDS failures
    Automatic recovery for secondary MDS failures
    Automatic recovery in case primary MDS goes down during primary MDS operation only
  AC degeneration

Chapter 4 File System Design
  Mount and unmount opportunity of the file system at the time of system starting and ending
  System Design
    Effects of file system operation and system load
    Node layout
    Disk layout
    LAN selection
    For future expansion
  Backup Design

Chapter 5 Management Partition
  Management Partition
  Resources Required for the Management Partition
  Notes on Management

Chapter 6 Starting and Exiting the Management View
  Starting Web-Based Admin View
  Web-Based Admin View Top Menu
    Web-Based Admin View Operation Menu Functions
    Web-Based Admin View Tool Menu Functions
  Starting GFS Management View
  Exiting GFS Management View
  Exiting Web-Based Admin View

Chapter 7 Operation Management View Screen Elements
  Screen Configuration
  Menu Configurations and Functions
    General Operation
    View
    Help
  Icon Types and Object Status

Chapter 8 Management Partition Operations (GUI)
  Flow of Operations
    Creating the management partition
    Adding node configuration information to the management partition
  Creating the management partition
    Setting shared disks
    Creating the management partition, registering node configuration information and starting sfcfrmd daemon
  Adding node configuration information to the management partition
    Setting shared disks
    Stopping sfcfrmd daemon
    Adding node configuration information to the management partition
    Starting sfcfrmd daemon

Chapter 9 Management Partition Operations (Command)
  Flow of Operations
    Creating the management partition
    Adding node configuration information to the management partition
    Deleting node configuration information from the management partition
    Changing the sfcfrmd daemon's startup mode registered in the management partition
    Backup of the management partition information
    Restoring of the management partition information
  Creating of the management partition
    Setting Shared disks
    Initializing of the management partition
    Registering node configuration information to the management partition
    Starting sfcfrmd daemon
  Adding node configuration information to the management partition
    Setting Shared disks
    Stopping sfcfrmd daemon
    Adding node configuration information to the management partition
    Starting sfcfrmd daemon
  Deleting node configuration information from the management partition
    Stopping sfcfrmd daemon
    Deleting node configuration information from the management partition
    Starting sfcfrmd daemon
  Changing the sfcfrmd daemon's startup mode registered in the management partition
    Choosing the sfcfrmd daemon's startup mode
    Stopping sfcfrmd daemon
    Changing the sfcfrmd daemon's startup mode
    Starting sfcfrmd daemon
  Backup of the management partition information
    Backup of the management partition information
  Restoring of the management partition information
    Re-initializing the management partition
    Re-registering node configuration information to the management partition
    Re-configuring the sfcfrmd daemon's startup mode in the management partition
    Starting sfcfrmd daemon
    Restoring of the management partition information

Chapter 10 File System Operations (GUI)
  Flow of Operations
    Creation
    Change (in file system attributes)
    Change (partition addition)
    Change (shared node information)
    Deletion
  Creation
    Creating a file system
  Change
    Changing the file system attributes
    Changing the file system configuration (partition addition)
    Changing the shared node information
  Deletion
    Deleting the file system

Chapter 11 File System Operations (Command)
  Flow of Operations
    Creation
    Change (in file system attributes)
    Change (partition addition)
    Change (shared node information)
    Change (re-creating a file system)
    Change (MDS operational information)
    Deletion
  Creation
    Setting shared disks
    Creating a file system
      Defaults of parameters used by mkfs_sfcfs(1m)
      Examples of creating a representative file system
      Customizing a GFS Shared File System
      Setting MDS operational information
    Setting vfstab
    Mount
      Mount of all nodes
      Mount
    Checking file system status
    Notes applying when the partitions of a created file system are used
      The GFS Shared File System
      File systems other than GFS Shared File System
  Change (file system attributes)
    Unmount
      Unmount of all nodes
      Unmount
    Change the file system attributes
      Changing the mount information
      Tuning file system
    Mount
  Change (partition addition)
    Unmount
    Setting shared disks
    Partition addition
    Mount
  Change (shared node information)
    Unmount
    Setting shared disks (When adding a node)
    Changing shared node information
      Adding node information
      Deleting node information
      Setting vfstab
    Mount
  Change (re-creating a file system)
    Unmount
    Re-creating the file system
    Mount
  Change (MDS operational information)
    Unmount
    Changing the MDS operational information
    Mount
  Deleting
    Unmount
    Removing the entry in /etc/vfstab
    Deleting the file system

Chapter 12 File System Management
  File System Management Commands
  Checking a File System for Consistency and Repairing It
    Action to be taken when fsck terminates abnormally
      Memory allocation request error
      File system configuration information acquisition failure
      File system partition configuration data error
      Node on which fsck_sfcfs(1m) was executed is not sharing in the file system
      Irreparable file system destruction
      Operation error
      Repairing of file system is non-completion
      Executed in non-global zone
      Collection of zone information fails
      Other messages
  Extending a File System
  Displaying File System Information
    Acquiring file system information
    Displaying partition/node information
    Displaying file system management state
  How to set GFS Shared File System applications as cluster applications
    To set GFS Shared File System applications as cluster application
    Notes on cluster application settings
    Procedure flow chart for setting GFS Shared File System applications as cluster applications
    Procedure for setting GFS Shared File System applications as cluster applications
    Setup flow chart of adding file data partitions to GFS Shared File Systems of cluster applications
    Setup procedure of adding file data partitions to GFS Shared File Systems of cluster applications
  How to start up CF from GUI when a GFS Shared File System is used

Chapter 13 File System Backing-up and Restoring
  Type of Backing-up and Restoring
  Backing-up by Standard Solaris OS commands
    Backing-up file by file
    Backing-up entire file system
  Restoring by Standard Solaris OS commands
    File-by-file restoring
    Entire-file-system restoring
  Set up after Restoration
    Resetting the partition information
    Repairing the file system

Chapter 14 Tuning
  Tuning Parameters
    Amount of Cache
    Communication timeout value
    Enabling and disabling the Nagle algorithm in the communications between MDS and AC

Chapter 15 Migration to the GFS Shared File System
  Moving the existing files

Appendix A List of Messages
  A.1 AC Messages of the GFS Shared File System
    A.1.1 Panic messages
    A.1.2 Warning messages
    A.1.3 Information messages
  A.2 The GFS Shared File System Daemon messages
    A.2.1 Panic messages (MDS (sfcfsmg daemon))
    A.2.2 Panic messages (sfcprmd daemon)
    A.2.3 Error messages (sfcfrmd daemon)
    A.2.4 Error messages (MDS (sfcfsmg daemon))
    A.2.5 Error messages (sfcfsd daemon)
    A.2.6 Error messages (sfcfs_mount command)
    A.2.7 Error messages (sfcpncd daemon)
    A.2.8 Error messages (sfcprmd daemon)
    A.2.9 Warning messages (MDS (sfcfsmg daemon))
    A.2.10 Warning messages (sfcfsd daemon)
    A.2.11 Warning messages (sfchnsd daemon)
    A.2.12 Information messages (sfcfrmd daemon)
    A.2.13 Information messages (MDS (sfcfsmg daemon))
    A.2.14 Information messages (sfcprmd daemon)
    A.2.15 Information messages (sfchnsd daemon)
  A.3 The GFS Shared File System Script Messages
    A.3.1 sfcfsrm script
  A.4 Command messages for File System Common Management
    A.4.1 df_sfcfs command
    A.4.2 fsck_sfcfs command
    A.4.3 fstyp_sfcfs command
    A.4.4 mkfs_sfcfs command
    A.4.5 mount_sfcfs command
    A.4.6 umount_sfcfs command
  A.5 The GFS Shared File System Specific Management Commands' Messages
    A.5.1 sfcadd command
    A.5.2 sfcadm and sfcnode commands
    A.5.3 sfcfrmstart command
    A.5.4 sfcfrmstop command
    A.5.5 sfcgetconf command
    A.5.6 sfcinfo command
    A.5.7 sfcmntgl command
    A.5.8 sfcsetup command
    A.5.9 sfcstat command
    A.5.10 sfcumntgl command
    A.5.11 sfcrscinfo command
  A.6 The GFS Shared File System Management View Messages
    A.6.1 Error messages
    A.6.2 Warning messages
    A.6.3 Information messages
  A.7 Web-Based Admin View Messages
  A.8 Messages Of Functions For GFS To Use
  A.9 Installation Error Messages

Appendix B Reference Manual
  B.1 Commands for Common File System Management
    B.1.1 df_sfcfs(1m) Display of usage condition and composition information
    B.1.2 fsck_sfcfs(1m) Check GFS Shared File System for consistency
    B.1.3 fstyp_sfcfs(1m) Determine the type of file system
    B.1.4 mkfs_sfcfs(1m) Build GFS Shared File System
    B.1.5 mount_sfcfs(1m) Mount the GFS Shared File System on the local node
    B.1.6 umount_sfcfs(1m) Unmount the GFS Shared File System from the local node
  B.2 The GFS Shared File System Specific Management Commands
    B.2.1 sfcadd(1m) Adding a data partition
    B.2.2 sfcadm(1m) Change partition information setting
    B.2.3 sfcfrmstart(1m) Start sfcfrmd daemon on the local node
    B.2.4 sfcfrmstop(1m) Stop sfcfrmd daemon on the local node
    B.2.5 sfcgetconf(1m) Make a backup of the management partition
    B.2.6 sfcinfo(1m) Display partition information
    B.2.7 sfcmntgl(1m) Mount the GFS Shared File System on all nodes
    B.2.8 sfcnode(1m) Add, delete, and alter node configuration information
    B.2.9 sfcrscinfo(1m) Display file system information
    B.2.10 sfcsetup(1m) Perform the following functions: (1) Initialization of management partition. (2) Addition, deletion and display of node information. (3) Display path of management partition. (4) Registration and display of the startup mode of the sfcfrmd daemon
    B.2.11 sfcstat(1m) Report statistics on GFS Shared File Systems
    B.2.12 sfcumntgl(1m) Unmount the GFS Shared File System on all nodes

Appendix C Troubleshooting
  C.1 Corrective actions of messages
  C.2 Collecting information for troubleshooting
    C.2.1 Collecting a crash dump file
    C.2.2 Collecting a core image of the daemon
  C.3 Common corrective actions against failures
    C.3.1 Action for I/O errors
    C.3.2 Corrective action in the event of data inconsistency
    C.3.3 Corrective action when the sfcfrmd daemon is not started
    C.3.4 Corrective action when the mount processing fails
    C.3.5 Corrective action when write fails by ENOSPC

Appendix D Incompatibility from Each Version
  D.1 Incompatibilities from 4.1A

Glossary

Index

Chapter 1 File System Functions

This chapter describes the functions and features of the GFS Shared File System.

1.1 File System Overview

The GFS Shared File System is a shared file system that allows simultaneous access from multiple Solaris systems to which a shared disk device is connected. (A file system of this type is referred to as a shared file system. A file system such as UFS that is used only within one node is referred to as a local file system.)

The GFS Shared File System is an optimal shared file system for business use: it is API-compatible with UFS and the GFS Local File System, and provides high reliability and high performance.

The GFS Shared File System maintains consistency of data even when it is updated from multiple nodes, enabling data transfer by a distributed application with a conventional API when the application is executed on multiple nodes. Also, continuous file operation on the other nodes is assured even if one node fails, making the GFS Shared File System especially suitable for environments that require high availability of the file system.

GFS Shared File Systems can be used on the following systems:

- Solaris 10 for 64 bit
- Solaris 9 for 64 bit

The GFS Shared File System has the following functions:

- Simultaneous shared access from multiple nodes to files or file systems
- Maintaining consistency for file data reference and updating from multiple nodes
- File access using a file cache on each node
- Continuous file operation on other nodes if one node fails, while maintaining file system consistency
- High-speed file system recovery function
- High-speed I/O processing by contiguous block assignment to areas in a file
- Support of multi-partition to implement I/O processing load distribution
- Support of multi-partition to implement the extension of file system size without rebuilding the file system
- GUI-based file system operation using a Web browser

Like the UFS file system, the following application interface is commonly available:

- 64-bit file system interface

See

For the functions that are unavailable with the GFS Shared File System, see "1.8.2 Unavailable functions".

1.1.1 Basic hardware configuration for use of file system

The following basic configuration is required to build the GFS Shared File System:

- Shared disk device, which provides simultaneous access between nodes sharing the file system
- One or more NICs for the cluster interconnect to monitor internode communication failures
- Remote Console Connecting Unit
- GUI display personal computer or Solaris OS computer with bitmap display
- NIC for public LAN

Figure 1.1 Basic hardware configuration

1.1.2 File system sharing with PRIMEPOWER 800/900/1000/1500/2000/2500

The PRIMEPOWER 800/900/1000/1500/2000/2500 can be used for multiple nodes by splitting one frame into multiple partitions. Using the GFS Shared File System in this configuration allows a file system to be shared between nodes.

Note

Partitions as used here refer to nodes split into multiple nodes through physical partitioning of the system.

Figure 1.2 Sharing file system with PRIMEPOWER 800/900/1000/1500/2000/2500

1.2 Simultaneous Shared Access Maintaining Coherency

The GFS Shared File System enables simultaneous access and updating from multiple nodes to a file system on a shared disk device. The GFS Shared File System maintains consistency when files or file systems are updated from multiple nodes. The file lock function spanning nodes can be used with a conventional API. In this way, applications on multiple nodes can perform exclusive updating of mutual file data and read the latest data. A conventional UNIX file system API, such as file lock, can do these operations.

Figure 1.3 Distributed execution of file related application

1.3 High Availability

The GFS Shared File System allows continuous access to file systems if a node failure or disk block failure occurs.

1.3.1 Uninterrupted operation in case of node failure

If one node fails when the GFS Shared File System is being used from multiple nodes, file access from the other nodes can be continued. The consistency of the file system data retained by the failed node is automatically recovered within the GFS Shared File System from the other nodes. In other words, the processing of the application programs operating on the other nodes can be continued without causing a file system operation error.

See

For more information on the uninterrupted operation function, see "Chapter 3 Failure Recovery".

Figure 1.4 Uninterrupted operation in case of node failure

1.3.2 High-speed file system recovery

In case of a node failure, fsck(1m) must be executed for an ordinary file system to recover consistency. In most file systems, it is necessary to inspect all of the meta-data of the system in order to recover consistency. If a node failure occurs, considerable time may be required before the file system actually becomes available.

The GFS Shared File System records operations that have changed the file system structure in an area called the update log. These operations include file creation and deletion. Using the data in the update log area allows the file system to recover from a system failure in less than a minute.

When recovering from a system failure, the GFS Shared File System retrieves the update log in the recovery process. The file system then decides whether to invalidate or complete each file system operation that was in progress, and reflects the result. The file system structure can then be mounted and used without fully checking it.

As mentioned in "1.3.1 Uninterrupted operation in case of node failure", the GFS Shared File System operating on multiple nodes does not require execution of fsck_sfcfs(1m) because consistency recovery processing is performed automatically in case of a node failure.

Note

The fsck_sfcfs(1m) full check mode is also provided. To recover the file system from a hardware failure on the disk, execution of fsck_sfcfs(1m) in the full check mode may be required.

1.3.3 Area reassignment in case of disk block failure

The GFS Shared File System automatically assigns another disk block to a new meta-data area if a disk block hardware failure occurs. Assignment allows continuous file system processing if a disk failure occurs only on a specific block.

Note

This function only suppresses use of a block where an I/O error has occurred. If a request to use the same block is issued, however, an I/O error may occur again. In the case of an I/O error, the error must be corrected because the response time for that request increases. If block reassignment due to such an I/O error occurs, back up the file system first. Then correct the hardware failure by replacing the failed disk with a new one, and restore the backed-up data to recover the file system.
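As a concrete illustration of the full check mode mentioned in "1.3.2 High-speed file system recovery", a run might look like the following. This is a sketch only: the -o nolog option (disregard the update log and perform a full check) is described in "B.1.2 fsck_sfcfs(1m)", and the GDS volume path is a hypothetical example.

# fsck -F sfcfs -o nolog /dev/sfdsk/gfs01/rdsk/volume01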

1.4 Performance

The GFS Shared File System has the following functions for implementing file system access at high speed:

1.4.1 Data access performance

The GFS Shared File System allows access to the file system on the shared disk device from multiple nodes. Conventional distributed file systems transfer file system data from management servers to the clients that issued an access request, via a LAN. However, the GFS Shared File System accesses the disk directly from the request node, reducing network load and speeding up response time for read and write requests in comparison with NFS.

1.4.2 Contiguous block assignment

The GFS Shared File System assigns contiguous blocks to file data to take better advantage of batch I/O processing and improve file system performance. The GFS Shared File System manages areas on an extent basis for assigning contiguous blocks. For a range of blocks that can be acquired contiguously, the file system manages area assignment using a file offset, a start block number, and the number of blocks used. The file system also takes the possibility of contiguous block assignment into consideration when extending a file.

The following picture shows an example block assignment for a file with three extents located at file offsets 0, 64 megabytes, and 96 megabytes, with extent lengths of 64 megabytes, 32 megabytes, and 4 megabytes, respectively.

Figure 1.5 Contiguous block assignment

Empty file data areas are also managed on an extent basis to implement high-speed assignment of optimum empty areas.

1.4.3 Multi-partition configuration

The GFS Shared File System provides a function that unifies partitions into one file system. With the GFS Shared File System, you can easily solve the problem of insufficient area by adding partitions. In a multi-partition configuration, a round-robin allocation system enables the use of the file data areas in different partitions, thus improving file system performance by sharing the I/O load between different disks.

1.4.4 Individualized meta-cache management

The GFS Shared File System constructs its cache management of meta-data in an individualized manner. Many conventional file systems manage meta-data in a uniform manner. However, the GFS Shared File System provides individualized management of i-node, directory block, and indirect block areas on disks, taking the characteristics of the access into consideration. As a result, it improves the cache-hit ratio and reduces the resources used.

1.5 Extendibility

The GFS Shared File System is capable of easily extending its data partitions by specifying free disk partitions, providing a quick solution if the file system has too few free areas.

1.5.1 File system extension

The GFS Shared File System adds partitions to an existing file system, thus solving an area shortfall without taking additional time for backup and/or re-creation of the file system.

Figure 1.6 File system extension

1.6 Operability

1.6.1 GUI

The GFS Shared File System uses a GUI that allows a Web browser to be used to create, delete, operate, and modify its data and monitor its status.

1.7 Upper Limits of the File System

The following table lists the upper limits of the GFS Shared File System on a file system basis. The maximum number of GFS Shared File Systems in a cluster system is 10.

Table 1.1 Upper limits of file system

Item                                                        Upper limit
Maximum file system capacity                                1 terabyte - 1 kilobyte
Maximum partitions size                                     1 terabyte
Maximum file size                                           1 terabyte - 8 kilobyte
Maximum number of sharable nodes                            2 nodes
Maximum directory size                                      2 gigabytes - 1 kilobyte
Maximum number of partitions consisting of a file system    32 partitions
Maximum number of i-nodes per file system                   16 mega
Maximum number of directories per file system               1 mega

Item                                                        Upper limit
Maximum number of concurrently open files                   5000 files (*1)
Maximum number of file locks setting                        file locks

*1 Maximum number of files that can be concurrently open on one node.

Note

We recommend using as few GFS Shared File Systems in one cluster system as possible. When you use many GFS Shared File Systems, provide the nodes with a sufficient number of CPUs, and carry out system verification in advance.

1.8 Notes

Below are notes on using the GFS Shared File System.

1.8.1 Relationships with Other Components of PRIMECLUSTER

CIP must have been set up, as it is used for the sfcfrmd daemon of the GFS Shared File System.

See

For details on setting up CIP, see the "PRIMECLUSTER Cluster Foundation (CF) Configuration and Administration Guide."

GDS is used to create logical volumes for use by the GFS Shared File System.

See

For notes, see "1.8.4 Notes on use of logical volumes of GDS". For details on setting up GDS, see the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide."

The GFS Shared File System must work with the other nodes in the same cluster and must recognize that each node works correctly. When a node goes down, it enters the state called LEFTCLUSTER. If there is a node in this state, it is impossible to change the management state of the GFS Shared File System. To keep the GFS Shared File System operational whenever a node goes down, set up the Shutdown Facility so that the LEFTCLUSTER state will turn into the DOWN state automatically.

See

For details on the Shutdown Facility and the Monitoring Agents, refer to the following manuals:

- "Configuring the Shutdown Facility" of the "PRIMECLUSTER Installation and Administration Guide."
- "PRIMECLUSTER SF" of the "PRIMECLUSTER Concepts Guide"
- "Shutdown Facility (SF)" of the "PRIMECLUSTER Cluster Foundation (CF) Configuration and Administration Guide."

1.8.2 Unavailable functions

Functions not provided by the GFS Shared File System

The GFS Shared File System explained in this guide does not provide the following functions:

- Use as a root file system, /usr, /var, /opt

- Use as a mount point
- Use of GFS Shared File Systems in a non-global zone (Solaris 10)
- quota function
- ACL function
- Asynchronous I/O function
- File sharing with other nodes by NFS
- Use of IPv6
- Setting of hard links for a directory
- Direct I/O function
- lockfs(1m)
- Setup of an extension file attribute that is added on Solaris 9
- Mounting with the non-blocking mandatory lock added on Solaris 9
- Execution of fssnap(1m) that is added on Solaris 9
- Guaranteeing the atomicity of writes for the following operations from other nodes, when data written by write(2), writev(2), or pwrite(2) is stored in several blocks:
  - write(2), writev(2), pwrite(2)
  - truncate(3c), ftruncate(3c)
  - creat(2), open(2) with O_TRUNC
- Execution of open(2) from another node to a file where mmap(2) is being executed with MAP_SHARED and PROT_WRITE specified
- Execution of mmap(2) with MAP_SHARED and PROT_WRITE specified from another node to a file where open(2) is being executed
- Execution of open(2) from another node to a writable file where mmap(2) is being executed
- Execution of mmap(2) from another node to a writable file where open(2) is being executed
- Advisory lock setup on a file where the mandatory lock is set, and changes to the mandatory lock setting of a file where mmap(2) is being executed
  - If mmap(2) is being executed on a file where the mandatory lock is set, EAGAIN is returned to the F_UNLCK request with fcntl(2). If the file is being recovered, the command terminates normally.
- Creation and use of files of types other than the following:
  - Normal
  - Directory
  - Symbolic link
- Execution of read(2) on a directory
- Specification of F_SHARE and F_UNSHARE in the ONC shared lock (fcntl(2))
- Analysis of the "sfcfs" module through the DTrace dynamic trace function that is added in Solaris 10

Functional difference with GFS Shared File System on Solaris 9 of PRIMEPOWER

The GFS Shared File System explained in this guide does not provide the following functions which were provided by the GFS Shared File System on Solaris 9 of PRIMEPOWER:

- Two or more communication paths set up from AC to MDS (multiple LAN setup)

- Creation of a file system with separate representative partition and update log area
- Uninterrupted operation when the MDS process goes down
- MDS failback
- Addition of a meta-data partition to the file system
- Addition of a file data partition to a mounted file system
- Addition of a shared node to a mounted file system
- High-speed backup and restoration utilizing PRIMECLUSTER Global Disk Services Snapshot
- File Extension Attribute Information

See

For the GFS Shared File System which operates on Solaris 9 of PRIMEPOWER, see the "PRIMECLUSTER Global File Services Configuration and Administration Guide."

1.8.3 Service ports used by the GFS Shared File System

In the GFS Shared File System, TCP service ports within the range of 9100 to 9163, and 9200, are reserved. If these port numbers conflict with those of other applications, change the port numbers of the entries beginning with sfcfs- and sfcfsrm in the /etc/services file. If you need to change the port numbers, be aware that the port numbers beginning with sfcfs- and sfcfsrm must be the same on all the nodes.
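As an illustration of the entries described in "1.8.3 Service ports used by the GFS Shared File System", the /etc/services lines typically take the following form. This is a sketch, not a verbatim listing; check the exact entry names in the /etc/services file on your system:

sfcfs-1     9100/tcp
sfcfs-2     9101/tcp
sfcfsrm     9200/tcp

If a port number conflicts with another application, edit the number in the second column, keeping it identical on all the nodes.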

1.8.4 Notes on use of logical volumes of GDS

GDS is volume management software designed to improve the availability and operation manageability of information stored on disk units of the Storage Area Network (SAN). Various kinds of access controls can be performed on the logical volumes of GDS to protect data against damage due to unauthorized access.

To use a logical volume of GDS in the GFS Shared File System, make the following settings:

- The type of disk class to which the logical volume belongs is shared.
- All nodes sharing the GFS Shared File System are specified in the scope of the disk class to which the logical volume belongs.
- When the attribute lock mode of the logical volume is lock=off, the volume is started automatically.
- The attribute access mode of the logical volume is set as read-write.

To use the GFS Shared File System, logical volumes of GDS must be ACTIVE. If they are STOP, all access to the shared file system will be prohibited.

Note

If you create the GFS Shared File System on GDS-striped logical volumes, set the width of the stripe group to 256 blocks or more.

Note

Do not use online volume expansion on a logical volume of GDS that contains a GFS Shared File System. If the size of a GDS logical volume is changed by online volume expansion, the copies of the super block and of the partition configuration information of the GFS Shared File System can no longer be read, and the GFS Shared File System becomes unusable. If you want to expand the GFS Shared File System, add an unoccupied logical volume partition, or create a new logical volume and add it.

See

For the operations of disk classes to which logical volumes of GDS belong, refer to the description of class operations under "Operation using Global Disk Services Management View" in the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide". For the operations of logical volumes of GDS, refer to the description of volume operations in the same chapter of that guide.

1.8.5 Notes on system time change

Do not perform the following operation while a GFS Shared File System is working:

- Changing the system time with the date(1) command, the rdate(1m) command, etc.

If the system time is changed by the date(1) command, the file system will be blockaded. The monitoring mechanism of a GFS Shared File System might not operate normally due to a sudden change of the system time. To resume system operation, mount the file system again after unmounting it.

1.8.6 Notes on use as loopback virtual file system (lofs)

When using a GFS Shared File System as a loopback virtual file system (lofs), the file system may not be stopped while any related lofs is mounted. The node will panic if the file system is stopped while a related lofs is mounted. When using a GFS Shared File System as a loopback virtual file system, unmount all related lofs before stopping the file system.

1.8.7 Notes on stopping a node

Use the shutdown(1m) command to stop a node. If reboot(1m), halt(1m), poweroff(1m), or uadmin(1m) is used to stop a node, GFS may not work correctly.

1.8.8 Effect of file lock on system performance

If many file locks are set on a GFS Shared File System, system performance can be affected as follows:

- Load on memory will increase with the amount of memory used by the MDS.
- Load on the CPU will intensify due to file lock processing.
- If there are many processes waiting for a file lock, and one of the following operations is executed, network load might temporarily increase:
  1. The file lock is released, or
  2. The node where the file lock setup process exists goes down.
- If the recovery procedure is executed when many file locks are set, it will take time and may fail.

1.8.9 GFS operational notices for Solaris 10

In Solaris 10, only a global zone allows GFS installation, command execution, and GFS Shared File System operation. A non-global zone cannot be used to operate GFS. If a GFS command installed in the /usr/sbin directory of a global zone is executed in a non-global zone, the following error will be detected:

cannot be executed in non-global zone

Note

If a GFS command is executed in a non-global zone, an error message from ld.so.1(1) might be output, depending on how the zone was created.

1.8.10 Notes on opening a file on the GFS Shared File System

The number of files that can be concurrently open on one node at one time is limited in the GFS Shared File System. The maximum is 5000 files per file system. When open(2) is attempted for a file on the GFS Shared File System beyond this maximum, open(2) will fail and ENFILE will be returned as the error number.

1.8.11 Notes on writing file data on the GFS Shared File System

The GFS Shared File System uses extent-based area management. If a file has many extents, some of the extents are stored as indirect blocks in the V-data area. In this case, an allocation of new blocks could induce rebuilding of indirect blocks, which would consume a large amount of the V-data area. To prevent V-data starvation, GFS prevents allocation of new blocks when V-data area usage exceeds 80%. In this case, a system call (e.g. write(2)) which attempts to allocate a new block fails with ENOSPC. Deleting unused files or directories, or moving them into other file systems, helps you reduce V-data area usage.

See

For information on how to check usage of the V-data area, see df_sfcfs(1m).
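The check below is a sketch of how V-data usage might be inspected with df_sfcfs(1m) before ENOSPC occurs. The -o v option and the GDS volume path are assumptions for illustration only; verify the actual options in "B.1.1 df_sfcfs(1m)".

# df -F sfcfs -o v /dev/sfdsk/gfs01/dsk/volume01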

Chapter 2 File System Structure

This chapter describes the structure of the GFS Shared File System.

2.1 Disk Structure of File System

This section describes the disk structure of the GFS Shared File System. As illustrated below, the disk structure of the GFS Shared File System consists of the following elements:

- Super block
- Partition configuration data
- Meta-data area (area management data, i-node area, V-data area)
- Update log area
- File data area

Figure 2.1 Disk structure of the GFS Shared File System

The GFS Shared File System manages the V-data area in units of 1,024 bytes and the file data area in units of 8,192 bytes.

Conventional file systems have decentralized meta-data areas on their disks. However, the GFS Shared File System centralizes meta-data to improve meta-data access performance. Besides, localizing the portions to be updated upon recovery of the file system allows the update log area to shorten the recovery time.

2.1.1 Super block

The super block contains the file system type, creation and update dates, label data, the size and layout of the file system, and a data history of empty areas. The copy of the super block at the end of the file system exists to allow continuous processing if a disk device block failure occurs in the area where the super block is stored.

2.1.2 Partition configuration data

The partition configuration information contains the following information:

- Which partitions are used to configure the file system. A GFS Shared File System can consist of multiple partitions.
- Area allocation in the partition

The partition areas are changed in order to expand the file system configuration or change the shared device information.

Similar to the super block, the partition configuration data is important for operation of the file system. It is designed to have a copy at the end of the file system and to be resistant to disk device block failures.

2.1.3 Meta-data area

This section describes the meta-data management area. The meta-data area is an area for storing meta-data, and it exists only in a representative partition.

2.1.3.1 Area management data

Area management data includes the allocation information of i-nodes, V-data, and file data. File data allocation information is managed on an extent basis, combining extent information with a list managed by degree of continuity. When a file system is created with mkfs_sfcfs(1m), an area of fixed size is allocated for management information, i-nodes, and V-data.

2.1.3.2 i-node area

The i-node area is a data structure describing files. Each i-node stores the pointer to extent information including the file data, as well as the file type, length, owner and group IDs, and access rights. There is one i-node for each file.

2.1.3.3 V-data area

The V-data area includes directory blocks, symbolic link path names, and indirect blocks. Areas are allocated from this area as necessary. The V-data area is managed in units of 1,024 bytes for efficient use of directory blocks.

2.1.4 Update log area

The update log area stores a history of file system structure changes for high-speed file system recovery. This area is maintained as a cyclic log. The update log area contains data about the following processes:

- Acquiring or releasing an i-node
- Updating i-node data
- Allocating or releasing an extent, which is a group of contiguous file system data blocks handled as a single unit
- Acquiring or releasing V-data
- Updating V-data

The GFS Shared File System assures that data is written to the V-data area before the file system structure is updated. If a system failure occurs, file system consistency is restored by using fsck_sfcfs(1m) to either invalidate or re-execute unprocessed changes on the file system. Also, system failure recovery processing is conducted. Only changes to the file system structure are recorded in the update log; file data is not recorded.

2.1.5 File data area

The file data area is an area for storing file data. In this area, 8,192-byte areas are defined as the minimum blocks, and blocks are managed for contiguous allocation in a file area.

2.1.6 Partition configuration

A GFS Shared File System can consist of a single partition or multiple partitions. In the multi-partition configuration, multiple partitions are allocated to a single file system.

The meta-data area and the update log area are gathered in one partition. The partition holding the meta-data area is called the representative partition.

In a single partition configuration, a meta-data area, an update log area, and a file data area are allocated to one partition.

Figure 2.2 Single partition configuration

In the multi-partition configuration, the file data area can span multiple partitions. Also, the file system can be created with the file data area detached from the meta-data area. Typical examples of multi-partition configurations are shown below.

1. Adding a file data area to another partition (file data area addition).

Figure 2.3 File data area addition

2. Partition configuration in which the file data area is separated from the representative partition.

Figure 2.4 File data area separation

In a multi-partition configuration, the super block and partition configuration data are maintained in all partitions.

2.2 Component configuration

The component configuration of the GFS Shared File System is illustrated below. The GFS Shared File System consists mainly of three components:

1. MDS (Meta-data server)
The file system server function component (sfcfsmg) of the GFS Shared File System.

2. AC (Access client)
The file system client function component (kernel component AC) of the GFS Shared File System.

3. Monitoring daemons (sfcfrmd, sfcprmd, sfcfsd, sfchnsd, sfcpncd)
The components that control MDS and AC and constitute the GFS Shared File System.

Figure 2.5 Component composition

2.2.1 MDS (Meta Data Server)

The MDS is a server daemon that manages the meta-data of the GFS Shared File System and operates as a user process. There are two MDS processes, the primary MDS and the secondary MDS, for each file system. Two of the nodes that share a file system are predefined as MDS nodes, and the MDS operates on one of these nodes. There are only two MDS nodes, the primary and the secondary one.

The secondary MDS is used for standby and runs on a different node from the primary MDS. However, when only one of the MDS nodes is active, the secondary MDS does not run. The MDS uses multithreading in order to execute processing requests from ACs in parallel.

The MDS has the following major functions:

- Token management
- Meta-data area management
- Update log area management
- File data area management
- File system free area management

- AC node management

2.2.2 AC (Access Client)

This client processes requests from applications that access the GFS Shared File System. It operates within the kernel. Access clients exist on all the nodes where the GFS Shared File System is mounted. The AC provides cache management of meta-data and file data within the client.

2.2.3 Monitoring daemon

- sfcfrmd
Receives requests from sfcfsd or the commands, and provides the communication paths between commands and sfcfsd, between commands and sfcprmd, and between nodes. Configuration database management and the management partition I/O functions are also provided.

- sfcprmd
Used to monitor the startup, stop, or failure of the processes that constitute the GFS Shared File System. When the configuration process starts or executes another daemon or command, a request is transmitted to sfcprmd, and then sfcprmd executes a daemon or a command as required. The process from which a request is transmitted is referred to as the client process, while the process to be started from sfcprmd as required is referred to as the target process. The target process is monitored by sfcprmd and, when the process terminates normally or abnormally, the status is posted to the client process.

- sfcfsd
Provides MDS/AC control functions such as starting and stopping operation of the GFS Shared File System and failover of the MDS.

- sfchnsd
Used to provide functions such as node down event reception, domain-wide lock, membership information acquisition, and node state acquisition.

- sfcpncd
Performs live monitoring of sfcprmd and sfcfrmd. When a monitored process ends abnormally (process down), this daemon makes the system go down in order to maintain the consistency of file systems.

2.3 File Access and Token Management

Each AC caches file data to improve file system performance. Since discarding of the cache must be managed to ensure consistency in each file system, the GFS Shared File System uses file-exclusive tokens (referred to hereafter as tokens) for this management. A token guarantees that an AC can access files. The token contains information that indicates how it accesses a particular file. An AC must have a token for a file when the AC accesses the file. Two types of tokens are provided: one that ensures meta-data access and another that ensures file data access.

- Meta-data access and token management
The meta-data token is required to access i-nodes and V-data.

- File data access and token management
Tokens are used to access file data in ordinary files. The two types of available tokens (write authority and read authority) are managed in units of logical blocks. Operations such as reading from the same block, or reading from and writing to different blocks in the same file, can be performed simultaneously by multiple shared nodes without degrading performance. The file data token is required to access ordinary files.
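A quick way to confirm that the components described above are running on a node is to look for their processes with the standard ps(1) command. The process names used in the pattern are those listed in this chapter; the exact invocation is only a sketch:

# ps -ef | egrep 'sfcfrmd|sfcprmd|sfcfsd|sfchnsd|sfcpncd|sfcfsmg'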

Chapter 3 Failure Recovery

This chapter describes the following failure recovery functions of the GFS Shared File System:

- MDS Failure Recovery
- AC degeneration

3.1 MDS Failure Recovery

If an MDS fails because its node goes down, the GFS Shared File System automatically performs failure recovery to continue operation of the file system. The following provides an overview of the operations performed if either the primary MDS or the secondary MDS fails.

3.1.1 Automatic recovery for primary MDS failures

There is one case in which the standby secondary MDS becomes the primary MDS to continue operation: when both the primary and secondary MDSs are set up as operable and the primary MDS fails because its node goes down. In this case, the following process is performed:

1. Switching the secondary MDS to the primary MDS
The secondary MDS operates as the primary MDS in place of the primary MDS that has failed.

2. Replaying the update log
The new primary MDS replays the update log to ensure consistency of the file system.

3. Resuming processing after MDS switching
The ACs send processing requests to the new primary MDS. Until such a request is accepted, all access to the file system is blocked.

4. Restarting the failed MDS
Usually, the failed MDS is restarted as the secondary MDS.

3.1.2 Automatic recovery for secondary MDS failures

If the secondary MDS fails because of a node failure, the primary MDS is not affected. It continues operation without the secondary MDS. In this case, the following process is performed:

1. Restarting the failed MDS
The MDS is restarted as a secondary MDS.

2. Making the restarted secondary MDS effective
The AC performs the processing necessary for recognizing the restarted secondary MDS.

3.1.3 Automatic recovery in case primary MDS goes down during primary MDS operation only

If the primary MDS goes down in a state in which the MDS nodes are degenerated to primary MDS operation only, the following processing is performed automatically.

1. Restarting the failed MDS
The MDS is started as the primary MDS again.

2. Replaying the update log
The new primary MDS replays the update log to ensure the consistency of the file system.

3. Resuming processing after MDS switching
The AC makes processing requests to the new primary MDS.

3.2 AC degeneration

While the GFS Shared File System is mounted, the MDS holds information on the ACs. When a node on which an AC operates goes down, the MDS discards the information on that AC automatically, and management of the file system continues automatically on the remaining normal nodes.
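To see which node is currently running the primary or secondary MDS when tracing such a recovery, sfcrscinfo(1m) can display MDS and AC information per file system. The options shown here are a plausible sketch only; confirm them in "B.2.9 sfcrscinfo(1m)".

# sfcrscinfo -m -a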

Chapter 4 File System Design

This chapter describes the main features of the operational design of the GFS Shared File System.

4.1 Mount and unmount opportunity of the file system at the time of system starting and ending

In the GFS Shared File System, the setup for mounting and unmounting automatically at system startup and shutdown differs from the ufs settings.

It is required to set no for the "mount at boot" parameter in /etc/vfstab for all GFS Shared File Systems. Whether a file system is automatically mounted at system startup is specified by describing or omitting noauto in the mount options field of /etc/vfstab.

When noauto is not described in the mount options field of /etc/vfstab, automatic mounting is performed on startup. Mounting takes place when S81sfcfsrm start, the startup script in run level 2, is executed. When noauto is described in the mount options field of /etc/vfstab, automatic mounting is not performed on system startup.

GFS Shared File Systems are automatically unmounted when the system is shut down. They are unmounted through the stop script K41sfcfsrm.

Note

It is necessary to set the management partition and the GDS volumes, on which partitions of the GFS Shared File Systems to be automatically mounted are allocated, so that they automatically become ACTIVE during system startup.

See

For more information about GDS, see "PRIMECLUSTER Global Disk Services Configuration and Administration Guide."
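As an example of the setup described in 4.1, an /etc/vfstab entry for a GFS Shared File System on a hypothetical GDS volume looks like the following. The "mount at boot" field must be no; the last field holds the mount options, where writing noauto instead of "-" suppresses automatic mounting at startup (the device paths and mount point are illustrative assumptions):

/dev/sfdsk/gfs01/dsk/volume01 /dev/sfdsk/gfs01/rdsk/volume01 /mnt/fs1 sfcfs - no -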

- Confliction between nodes, and directory access
  - Frequently referencing from other nodes a file that is frequently updated from one or more nodes.
  - Frequently updating the same block in the same file from two or more nodes.
  - Frequently creating or deleting files or directories in the same directory from multiple nodes, and repeatedly monitoring directory updates with readdir(3c) or stat(2).
  - Issuing an ls(1) command with the -l option together with a command requiring the attributes of files in the directory, such as cp(1), du(1m), or tar(1), in a directory containing a very large number of files.

If one or more of the above situations occur, you can improve file system performance by changing the monitoring or access frequency, or by dividing files into separate directories.

CPU load or I/O load may be concentrated on the node where the MDS that manages the file system meta-data operates. A heavy load here indicates that operations requiring updates of the file system meta-data, such as file creation, deletion, or extension, are being performed frequently. In such cases, file system throughput may improve by optimizing the layout of the meta-data area and update log area. An alternative is to increase the CPU throughput of the node where the MDS operates.

4.2.2 Node layout

In the GFS Shared File System, up to 2 nodes can share 1 file system simultaneously, and file system processing can continue even if the node on which the MDS manages the file system meta-data fails. For such operation, you must select two nodes on which the MDS can operate and set these nodes as MDS nodes. From the MDS nodes, select the primary MDS node (on which the primary MDS usually operates) and the secondary MDS node (on which the secondary MDS usually operates).

See
For details on the effects of MDS node settings, see "3.1 MDS Failure Recovery".

As previously described in "4.2.1 Effects of file system operation and system load", the following should be taken into consideration:

- CPU and I/O loads accompanying updates of the file system meta-data that occur on the node where the MDS operates
- CPU throughput, I/O throughput, memory capacity, and swap area

Note
The MDS receives requests from all nodes sharing a file system and updates the meta-data. Therefore, install at least two CPUs on nodes on which the MDS may operate.

Note
While the secondary MDS is being activated, activation of the primary MDS may take some time.

4.2.3 Disk layout

In the GFS Shared File System, the disk area that makes up the file system consists of the following areas:

- meta-data area
- update log area
- data area

The primary MDS references or updates two areas: the meta-data area and the update log area. The AC references or updates the data area.

A GFS Shared File System can consist of a single partition or multiple partitions. A file system with multiple partitions can improve I/O performance.

For example, in an environment where bottlenecks caused by intensive file data access reduce the I/O performance of the file system, configuring the file data area with multiple partitions enables load balancing of access. Also, the file data area can be detached from the representative partition, and the meta-data area and update log area can be detached from the file data area. This increases the performance and throughput of file data I/O.

See
For more information about available partition configurations, see "2.1.6 Partition configuration".

4.2.4 LAN selection

In the GFS Shared File System, the MDS communications needed to obtain meta-data and to maintain consistency are processed through the LAN. Set up the LAN path while keeping the following in mind:

- LAN traffic volume and LAN load
- Redundant configuration definition for a LAN fault

The GFS Shared File System is designed not to increase the LAN load, but the load can become high under certain operating conditions. When other processing imposes a higher load on the LAN, more response time may be consumed in accessing the GFS Shared File System. The LAN load status can be checked with netstat(1m).

The use of the following is recommended when the load on a file system is large or when response time is more important than other factors:

- High-speed LAN
- Private LAN

In the GFS Shared File System, specific nodes block the file systems if an error occurs in the communications between the MDS and the AC of a node, because the AC determines that file system processing cannot be continued. The following is a suggestion for avoiding problems in the communication paths:

- Establish a multiplexed communication path by integrating several LAN paths into one logical path using the fast switching mode of GLS (Redundant Line Control Function).

See
For more information about GLS, see "PRIMECLUSTER Global Link Services Configuration and Administration Guide (Redundant Line Control Function)."

4.2.5 For future expansion

When a GFS Shared File System is created with mkfs_sfcfs(1m), the required minimum area is allocated according to the default settings. At file system creation, check that at least the parameter values listed in the table below are sufficient for future expansion. For example, the partition addition function cannot add partitions beyond the -o maxvol parameter of mkfs_sfcfs(1m). In that case, to expand the file system, the current data must be backed up and deleted, a new file system must be created, and the data must be restored. In preparation for future capacity expansion, create the file system with the estimated maximum size specified. (An illustrative invocation is shown at the end of this chapter.)

Table 4.1 Parameters that should be confirmed for future expansion

mkfs_sfcfs(1m) parameter   Default                                Meaning
maxdsz                     File data area size at specification   Maximum file data area size
maxnode                    16                                     Maximum number of sharing nodes

maxvol                     16                                     Maximum number of partitions

4.3 Backup Design

This section describes the hardware required to restore backed-up file systems and the methods used to perform backups.

- Required hardware
A tape unit or hard disk unit is used to back up the GFS Shared File System.

- Backup restoration procedure
The GFS Shared File System restores backed-up file systems with the following methods:
- Restoring the entire file system: dd(1m)
- Restoring the file system on a file basis: cpio(1) or tar(1)

If the GFS Shared File System consists of multiple partitions, backups should be restored on a partition basis. If tape devices and hard disk devices are configured with load distribution of the assigned I/O transactions in mind, backup and restore tasks can be distributed across all the nodes, which reduces the time required to back up and restore each partition.

See
For more information about restoring backups in the GFS Shared File System, see "Chapter 13 File System Backing-up and Restoring".
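Returning to "4.2.5 For future expansion": the following is only an illustrative sketch of creating a file system with headroom for later growth. The device name, host names, and sizes are assumptions, and the exact option syntax should be verified against mkfs_sfcfs(1m).

# mkfs -F sfcfs -o maxdsz=204800,maxvol=10,node=host1,host2 /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

Specifying maxdsz (in megabytes) and maxvol larger than the initial configuration leaves room to add file data partitions later with sfcadd(1m) without re-creating the file system.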

Chapter 5 Management Partition

This chapter describes the purposes and functions of the management partition. For information on how to operate the management partition, see the following:

- "Chapter 8 Management Partition Operations (GUI)"
- "Chapter 9 Management Partition Operations (Command)"

5.1 Management Partition

The management partition provides information that is essential for GFS Shared File System operation. It is also used by the sfcfrmd daemon to maintain data integrity. The management partition contains the following information:

1. Information on each node constituting the GFS Shared File System.
2. Information linking each GFS Shared File System to its shared nodes and shared device.
3. MDS and AC allocation information for each GFS Shared File System.

The node information in item 1 must be set up using sfcsetup(1m). The information in items 2 and 3 can be set up, modified, and added with mkfs_sfcfs(1m), sfcadm(1m), sfcnode(1m), and sfcadd(1m). The sfcgetconf(1m) command is available for backing up the information within the management partition.

Figure 5.1 Configuration of the connection between nodes and the management partition
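A minimal sfcgetconf(1m) invocation, where the output file name _backup_file_ is arbitrary, looks like the following; for details, see "9.6 Backup of the management partition information".

# sfcgetconf _backup_file_ <Enter>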

5.2 Resources Required for the Management Partition

In addition to the volumes that constitute the file system, the GFS Shared File System requires one GDS shared volume that is used exclusively as the management partition for each cluster system. Even when two or more file systems are used, only one management partition is required. Specify a volume size of at least 40 megabytes for the management partition.

See
For details on how to set up the GDS shared disk system, see the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide."

Note
The type of the disk class to which the volume used for the management partition belongs must be set to "shared". Also, all the nodes that share the GFS Shared File System must be specified in the scope of that disk class. Moreover, the settings must be such that the volume is automatically activated when the node is started.

Set [Exclusive use] of the disk class that includes the volume for the management partition to "no". If a disk class that includes the management partition volume is mistakenly set in a GDS resource of a cluster application, the volume will not become ACTIVE automatically during node startup, and the GFS Shared File System will not work.

5.3 Notes on Management

If a failure occurs in the management partition, the GFS Shared File System stops within the cluster system. If I/O to the management partition fails, GFS may panic the node with an I/O error in order to maintain the consistency of the file system. To protect against this, it is recommended to mirror the volume used for the management partition.

Also, before changing the configuration of the file system, make a backup of the management partition information with sfcgetconf(1m).

See
For details on backup of the management partition, see "9.6 Backup of the management partition information" and "9.7 Restoring of the management partition information".
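You can check the conditions described in 5.2 beforehand with the GDS sdxinfo command. For example, assuming an illustrative class name gfs:

# sdxinfo -c gfs <Enter>

Confirm in the output that the class type is shared, that all the sharing nodes appear in the scope, and that the volume status is ACTIVE.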

Chapter 6 Starting and Exiting the Management View

GFS Management View operates in a Web environment. As long as you have a Web environment, you have access via a browser from any location.

This chapter describes how to start and exit the GFS Management View. For information about the following items, all of which are necessary for using the GFS Management View, see the "PRIMECLUSTER Web-Based Admin View Operation Guide":

- Topology
- Preparation
- Operating Screen

Note that, because the GFS Shared File System depends on the following functions, they must be installed in advance and be usable before attempting file system operation:

- PRIMECLUSTER Global Disk Services
- PRIMECLUSTER Cluster Foundation

For details on each function, see the following:

- "PRIMECLUSTER Global Disk Services Configuration and Administration Guide"
- "PRIMECLUSTER Cluster Foundation (CF) Configuration and Administration Guide"

Once the GFS Management View can be started, you can operate the management partition with the GUI. For details, see the following:

- "Chapter 8 Management Partition Operations (GUI)"

If the management partition has already been created, the file system can be operated with the GUI. For details, see the following:

- "Chapter 10 File System Operations (GUI)"

If the management partition or the file system is to be operated from the command line, you do not have to make any Management View settings.

6.1 Starting Web-Based Admin View

If all preparations are complete, start Web-Based Admin View using the following procedure.

1. Start the browser.

See
For information about a client that starts the browser, see the "PRIMECLUSTER Web-Based Admin View Operation Guide".

2. Specify the URL in the following format.

host-name: Specify the "IP address or host name (httpip) for clients" of the primary or secondary management server. The default httpip value is an IP address that is allocated to the node name output by "uname -n".

port-number: Specify "8081". If the port number has been changed, specify the changed port number.
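For example, assuming the standard Web-Based Admin View entry point Plugin.cgi (this path and the host name below are assumptions based on a typical Web-Based Admin View setup):

http://myhost:8081/Plugin.cgi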

See
For information on how to change the http port number, see "Change http port number" in the "PRIMECLUSTER Web-Based Admin View Operation Guide".

Note
If Web-Based Admin View does not start even though the host name of the management server is specified for "host-name", specify an IP address of the public LAN directly.

3. When Web-Based Admin View has started, the user login screen appears as follows.

Figure 6.1 Login screen

Enter the user name and password for the management server, then click the <OK> button.

4. When the user authentication processing is completed, the top menu of Web-Based Admin View appears.

6.2 Web-Based Admin View Top Menu

After Web-Based Admin View has started, the [Web-Based Admin View operation Screen] appears. Web-Based Admin View allows you to operate and manage GFS and set its environment from the WWW screen. The top menu refers to the Web-Based Admin View operation menu.

6.2.1 Web-Based Admin View Operation Menu Functions

The Web-Based Admin View operation menu supports the following function related to the GFS Shared File System.

Table 6.1 Web-Based Admin View Operation Menu Description

Menu                   Description
Global File Services   Sets up and operates the GFS Shared File System.

Note
- The contents of the Web-Based Admin View operation menu vary depending on the installed products.
- When a dialog is displayed because of an error in Web-Based Admin View, the picture on the right side of the Web-Based Admin View top menu also turns red. If other screens hide the dialog, clicking the red picture brings the dialog box to the front. To be sure to notice errors when they occur, keep the picture on the right side of the Web-Based Admin View top menu displayed at all times.

6.2.2 Web-Based Admin View Tool Menu Functions

For information about the Web-Based Admin View Tool Menu, see the "PRIMECLUSTER Web-Based Admin View Operation Guide".

Note
In the "PRIMECLUSTER Web-Based Admin View Operation Guide", replace the term "Cluster Management view" with "GFS Management View".

6.3 Starting GFS Management View

Select the [Global File Services] menu on the Web-Based Admin View operation menu, and the GFS Management screen (hereafter referred to as the main screen) opens.

From the main screen, you can create the management partition, create a file system, change a file system's configuration, and delete a file system.

See
For more information, see "Chapter 8 Management Partition Operations (GUI)" and "Chapter 10 File System Operations (GUI)".

Figure 6.2 GFS Management Main Screen

If there is more than one node, the [Select Node] screen appears on top of the main screen. Select the desired node. The selected node can be changed from the main screen.

Figure 6.3 Select Node Screen

6.4 Exiting GFS Management View

To exit the GFS Management View, click the <Exit> button on the [General] menu. The following message appears.

Figure 6.4 Exit Screen

If you press the <Yes> button, the Web-Based Admin View screen (top menu) is displayed. If you press the <No> button, the main screen is displayed again.

6.5 Exiting Web-Based Admin View

Exit Web-Based Admin View as follows:

1. Select the <Logout> button on the top menu.
2. When the login screen appears, exit the browser or use the <Back> button of the browser to exit Web-Based Admin View.

Note
If the login screen continues to be displayed: the login screen may remain displayed briefly after the browser exits. No additional action is required to close it.

Chapter 7 Operation Management View Screen Elements

This chapter describes the screen elements of the GFS Management View.

7.1 Screen Configuration

Main screen
Select Global File Services from Web-Based Admin View, and the screen below appears. From this main screen, you can create a file system, change a file system's configuration, and delete a file system. The screen configuration of the main screen is shown below.

Figure 7.1 GFS Management Screen (main screen)

Mount Tree field
File systems accessible from the node selected with [Select Node] in the [General] menu on the main screen are displayed in a tree structure. The tree displays only file systems listed in /etc/vfstab. When the GFS Management screen appears, the [Select Node] screen appears first so that the target node can be selected.

File System Information field
Displays the file systems for the node or directory selected in the Mount Tree field. If the file system is in unmounted state, 0 is displayed as its size. Each object has an icon representing the object type. If the file system type is sfcfs, icons are color-coded so that the status of the objects can be determined at a glance.

See
For an explanation of icon types and object status, see "7.3 Icon Types and Object Status".

Detailed Information field
When the type of the file system selected in the Mount Tree field or the File System Information field is sfcfs, detailed information about that file system, such as the placement of the MDS and the state of quota, is displayed. If the file system is in unmounted state, 0 is displayed as the size in the left column. The size of the data area is displayed in the size column for each partition on the right. Therefore, the size column is not displayed for partitions without a data area.

Log Information field
Displays messages concerning the GFS Shared File System daemon program. The name of the node where the message was output is prefixed to each message.

Title Bar
Displays the screen title [Global File Services].

Menu Bar
Displays the menu buttons.

Menu Button
Allows you to control the objects selected on screen. There are <General>, <Operation>, <View> and <Help>.

Drop-down Menu
When a menu button on the Menu Bar is selected, a drop-down menu appears.

See
For details on the drop-down menu, see "7.2 Menu Configurations and Functions".

Pilot Lamp
Shows the status of monitored objects. The lamp can indicate the following statuses.

Table 7.1 Types of Pilot Lamp

Pilot Lamp           Status     Meaning
(Gray, lit up)       Normal     -
(Red, blinking)      Abnormal   The file system is abnormal. (Unavailable)
(Red, lit up)        Abnormal   Displayed when the red blinking warning lamp is single-clicked.
(Yellow, blinking)   Alarm      The utilization rate of the file system exceeds the threshold, or the file system is abnormal at another node.
(Yellow, lit up)     Alarm      Displayed when the yellow blinking warning lamp is single-clicked.

When a field cannot be fully displayed, move the mouse cursor to the part that is not displayed clearly; a pop-up display appears. In a pop-up list of the scope, information on the nodes sharing the file system is displayed in node (host) name format. When several host names are specified, they are displayed in parentheses.

7.2 Menu Configurations and Functions

Each menu button has a drop-down menu, from which you can operate the selected object on screen. This section explains the menu configuration and functions.

7.2.1 General

Figure 7.2 General menu

Select Node
Select the node you want to operate and press the <OK> button. Only one node can be selected at a time. If you do not need to select a node, press the <Cancel> button.

Figure 7.3 [General]: [Select Node] Screen

Exit
Exits Global File Services.

Figure 7.4 [General]: [Exit] Screen

7.2.2 Operation

Figure 7.5 Operation menu

Create
Creates a file system.

See
For details, see "Creating a file system".

Figure 7.6 [Operation]: [Create] Screen

Change Configuration
Changes the partition configuration of a file system.

See
For details, see "Changing the file system configuration (partition addition)".

Figure 7.7 [Operation]: [Change Configuration] Screen

Delete
Deletes a file system.

See
For details, see "Deleting the file system".

Figure 7.8 [Operation]: [Delete] Screen

Change Attributes
Changes the mount information, share information, and detailed information for a file system.

See
For details, see "Changing the file system attributes" and "Changing the shared node information".

Figure 7.9 [Operation]: [Change Attributes] Screen

Operate management partition
You can create the management partition, and register and add node information.

See
For details, see "8.2 Creating the management partition" and "8.3 Adding node configuration information to the management partition".

Figure 7.10 [Operation]:[Operate management partition]:[Create] Screen 1

Figure 7.11 [Operation]:[Operate management partition]:[Create] Screen 2

Figure 7.12 [Operation]:[Operate management partition]:[Add node] Screen

Operate sfcfrmd
The sfcfrmd daemon can be started or stopped.

See
For details on the operation, see "8.3.4 Starting sfcfrmd daemon" and "8.3.2 Stopping sfcfrmd daemon".

Figure 7.13 [Operation]: [Operate sfcfrmd]:[start] Screen

Figure 7.14 [Operation]: [Operate sfcfrmd]:[stop] Screen

7.2.3 View

Figure 7.15 View menu

Abnormal Only
Displays only file systems with abnormalities.

Update Now
Displays the latest file system information.

7.2.4 Help

Figure 7.16 Help menu

Help
Displays help information.

7.3 Icon Types and Object Status

GFS Management View uses icons to show object types and statuses. The statuses and icons are shown below.

1. Node

Icon   Status   Meaning
-      -        -

2. Adapter

Icon   Status   Meaning
-      -        -

3. File system (sfcfs)

Icon           Status          Meaning
(Green)        Normal          The file system is working normally on all nodes.
(Yellow)       Alarm           The usage rate of the file system exceeds the threshold, or an error occurred at another node.

(Red)          Abnormal        An error occurred on the local node.
(Blue)         In transition   The file system is being mounted or unmounted.
(Light brown)  Inactive        The file system is unmounted.

4. File system (ufs)

Icon      Status     Meaning
(Green)   Normal     -
(Red)     Abnormal   -

5. Physical Disk

Icon   Status   Meaning
-      -        -

6. Partition

Icon   Status   Meaning
-      -        -

Note
File system status
The GFS Shared File System (file system type: sfcfs) is displayed as normal until it is accessed and an abnormality is detected. The icon shows the status even if the file system is unmounted. When the file system type is ufs, the status is not shown while the file system is unmounted.

Chapter 8 Management Partition Operations (GUI)

This chapter describes how to operate the management partition by means of the GFS Management View.

Note that, because the GFS Shared File System depends on the following functions, they must be installed in advance and be usable prior to starting file system operation:

- PRIMECLUSTER Global Disk Services
- PRIMECLUSTER Cluster Foundation

See
For details on each function, see the following:

- "PRIMECLUSTER Global Disk Services Configuration and Administration Guide"
- "PRIMECLUSTER Cluster Foundation (CF) Configuration and Administration Guide"

To execute the management partition operations, the GFS Management View settings must have been made in advance. For details, see the following:

- "Chapter 6 Starting and Exiting the Management View"

When you want to create a file system immediately after you create a management partition, see the following:

- "Chapter 10 File System Operations (GUI)"
- "Chapter 11 File System Operations (Command)"

8.1 Flow of Operations

This section describes the flow of management partition operations of the GFS Management View.

8.1.1 Creating the management partition

The following figure describes the flow of operations for creating the management partition. For details on the operations in the figure, see "8.2 Creating the management partition".

Figure 8.1 Operation flow for creating the management partition

8.1.2 Adding node configuration information to the management partition

The following figure describes the flow of operations for adding node configuration information to the management partition. For details on the operations in the figure, see "8.3 Adding node configuration information to the management partition".

Figure 8.2 Operation flow for adding node configuration information to the management partition

8.2 Creating the management partition

This section describes how to create the management partition.

Note
Before creating the management partition, confirm that a cluster partition error has not occurred. If a cluster partition error has occurred, fix the cluster partition error first.

See
For information on how to check whether a cluster partition error has occurred and how to fix a cluster partition error, see "C.3.3 Corrective action when the sfcfrmd daemon is not started".

8.2.1 Setting shared disks

The device used as the management partition of a GFS Shared File System needs to fulfill the following conditions:

- The size is 40 megabytes or more. (At least 40 megabytes are required for the management partition.)
- It is a GDS logical volume and its status is "ACTIVE". (Check with sdxinfo(1).)
- It is shared by all the nodes that use the GFS Shared File System. (Check with sdxinfo(1).)

See
For details on sdxinfo(1), see "PRIMECLUSTER Global Disk Services Configuration and Administration Guide."

The GDS logical volume used as the management partition should be set up as follows:

- Set the type of the disk class to which the GDS logical volume belongs to "shared". (Change GDS class attributes.)

- Set all the nodes that share the GFS Shared File System in the scope of the disk class to which the logical volume belongs. (Change GDS class attributes.)

See
For information on logical volume operations, see the relevant items under "Operating on the management view" in the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide."

8.2.2 Creating the management partition, registering node configuration information and starting sfcfrmd daemon

Creating the management partition and registering node configuration information can be done by selecting [Operation]:[Operate management partition]:[Create] on the GFS Management View.

Figure 8.3 Management partition creation wizard (1)

In the [Management partition creation wizard (1)] screen, select the partition to be used as the management partition. In [Candidate partition], the partitions that can be used as the management partition on the node executing management partition creation are displayed. Selecting the check mark of the partition to be used as the management partition enables the <Next> button.

After selecting the partition to use as the management partition from [Candidate partition], click the <Next> button to go to the [Management partition creation wizard (2)] screen.

To stop the creation of the management partition, click the <Cancel> button. To restore the selection state to the default value, click the <Reset> button.

Note
Before attempting to create the management partition, first stop the sfcfrmd daemon.

Note
Partitions that are used as management partitions cannot be initialized from the GFS Management View. Use sfcsetup(1m) instead. For details, see "9.2.2 Initializing of the management partition".
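For reference, the command-line initialization referred to above looks like the following; the device name is the example used in "9.2.2 Initializing of the management partition".

# sfcsetup -c /dev/sfdsk/gfs/rdsk/control <Enter>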

Figure 8.4 Management partition creation wizard (2)

In the [Management partition creation wizard (2)] screen, select (with the check mark) the nodes to register into the management partition selected in the [Management partition creation wizard (1)] screen.

In [Node], the nodes included in the scope of the disk class to which the GDS logical volume selected as the management partition in the [Management partition creation wizard (1)] screen belongs are displayed. By default, [Select] is checked for all nodes.

Note
The node selected as the connecting node in the [Select Node] screen that appears when the GFS Management View is started must be registered in the management partition. Therefore, its [Select] check mark cannot be deselected.

To stop the creation processing of the management partition, click the <Cancel> button. To restore the selection state to the default value after changing the [Select] values, click the <Reset> button. Click the <Back> button to return to the screen for selecting the partition to be created as the management partition.

When [Select] is checked for the nodes to be registered in the management partition and the <Create> button is clicked, the following messages are displayed.

Processing is interrupted if you click the <No> button of the displayed message. Creation of the management partition starts if you click the <Yes> button. It cannot be interrupted once creation of the management partition has started.

The sfcfrmd daemon is started on the nodes registered in the management partition.

The following message appears, and management partition creation is completed.

8.3 Adding node configuration information to the management partition

This section describes how to add node configuration information to the management partition.

Note
You cannot add a configuration node while the sfcfrmd daemon is running.

Note
Before adding node configuration information to the management partition, confirm that a cluster partition error has not occurred. If a cluster partition error has occurred, fix the cluster partition error first.

See
For information on how to check whether a cluster partition error has occurred and how to fix a cluster partition error, see "C.3.3 Corrective action when the sfcfrmd daemon is not started".

8.3.1 Setting shared disks

The node added as a configuration node needs to be included in the scope of the disk class to which the GDS logical volume shared by all nodes registered in the management partition belongs.

See
For details, see "8.2.1 Setting shared disks".

8.3.2 Stopping sfcfrmd daemon

You can stop the sfcfrmd daemon by selecting [Operation]:[Operate sfcfrmd]:[Stop] on the GFS Management View. You must stop the sfcfrmd daemon on all nodes when node configuration information is added to the management partition.

Note
Unmount all GFS Shared File Systems before stopping the sfcfrmd daemon.

Information
If node configuration information is added to the management partition without stopping the sfcfrmd daemon, any running sfcfrmd daemon is stopped automatically during the processing of adding node configuration information to the management partition.

Figure 8.5 sfcfrmd daemon stop node selecting dialog box

In the [sfcfrmd daemon stop node selecting dialog box], select the check mark of the nodes on which to stop the sfcfrmd daemon. In [Node], all nodes registered in the management partition are displayed. Selecting [Select] of a node on which to stop the sfcfrmd daemon enables the <Commit> button.

If you click the <Select all nodes> button, all the nodes are checked. To restore the selection state to the default value, click the <Reset> button. To cancel stopping the sfcfrmd daemon, click the <Cancel> button.

When you click the <Commit> button, the following messages are displayed.

Processing is interrupted if you click the <No> button of the displayed message. Stopping the sfcfrmd daemon starts if you click the <Yes> button. It cannot be interrupted once stopping the sfcfrmd daemon has started.

The following message appears, and stopping the sfcfrmd daemon is completed.

8.3.3 Adding node configuration information to the management partition

Node configuration information can be added to the management partition by selecting [Operation]:[Operate management partition]:[Add node] on the GFS Management View.

If the node newly added to the management partition is selected as the connecting node in the [Select Node] screen that appears when the GFS Management View is started, the following message appears when the [Operation]:[Operate management partition]:[Add node] menu is selected.

Processing is interrupted if you click the <No> button of the displayed message. Clicking the <Yes> button brings up the screen for selecting the node to be added to the management partition.

If the node already registered in the management partition is selected as the connecting node in the [Select Node] screen that appears when the GFS Management View is started, the message does not appear, and the screen for selecting the node to be added to the management partition appears directly.

Figure 8.6 Management partition - Node selection dialog box

In the [Management partition - Node selection dialog box], select the node whose configuration information is to be added to the management partition. In [Node], all nodes included in the scope of the disk class to which the GDS logical volume used as the management partition belongs are displayed. By default, the check marks are set as follows:

- When the node already registered in the management partition is selected as the connecting node in the [Select Node] screen that appears when the GFS Management View is started:
[Select] of the nodes registered in the management partition is checked.

- When the node newly added to the management partition is selected as the connecting node in the [Select Node] screen that appears when the GFS Management View is started:
[Select] of the nodes registered in the management partition and of the node connected in the [Select Node] screen is checked.

Selecting [Select] of the node to be added to the management partition enables the <Commit> button.

To stop the node addition processing for the management partition, click the <Cancel> button. To restore the selection state to the default value, click the <Reset> button.

After selecting (with the check mark) [Select] of the node to be added, click the <Commit> button; the following messages are displayed.

Processing is interrupted if you click the <No> button of the displayed message. Adding the node to the management partition starts if you click the <Yes> button. It cannot be interrupted once adding the node to the management partition has started.

The sfcfrmd daemon of the node newly added to the management partition is started.

The following message appears, and addition of the node to the management partition is completed.

Note
Since the GFS Management View does not support the deletion of node configuration information from the management partition, you cannot clear the check mark corresponding to a node that is already registered.

8.3.4 Starting sfcfrmd daemon

You can start the sfcfrmd daemon by selecting [Operation]:[Operate sfcfrmd]:[Start] on the GFS Management View. You must start the sfcfrmd daemon on all nodes after node configuration information has been added to the management partition.

Note
To start the sfcfrmd daemon, the node configuration information must have been registered in the management partition in advance. Start the sfcfrmd daemon this way only when you stopped it manually before adding the node configuration information to the management partition.

In the [sfcfrmd daemon start node selection dialog box], select the nodes on which to start the sfcfrmd daemon.

Figure 8.7 sfcfrmd daemon start node selection dialog box

In [Node], all nodes registered in the management partition are displayed. Marking [Select] of a node on which to start the sfcfrmd daemon enables the <Commit> button.

If you click the <Select all nodes> button, all the nodes are checked. To cancel starting the sfcfrmd daemon, click the <Cancel> button.

To restore the selection state to the default value, click the <Reset> button.

When you click the <Commit> button, the following messages are displayed.

Processing is interrupted if you click the <No> button of the displayed message. Starting the sfcfrmd daemon begins if you click the <Yes> button. It cannot be interrupted once starting the sfcfrmd daemon has begun.

The following message appears, and starting the sfcfrmd daemon is completed.
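For reference, the stop and start operations in this chapter correspond to the following commands, executed on each node (see "Chapter 9 Management Partition Operations (Command)" for details):

# sfcfrmstop <Enter>
# sfcfrmstart <Enter>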

Chapter 9 Management Partition Operations (Command)

This chapter describes how to operate the management partition by commands.

Note that, because the GFS Shared File System depends on the following functions, they must be installed in advance and be usable before attempting to operate the file system:

- PRIMECLUSTER Global Disk Services
- PRIMECLUSTER Cluster Foundation

See
For details on each function, see the following:

- "PRIMECLUSTER Global Disk Services Configuration and Administration Guide"
- "PRIMECLUSTER Cluster Foundation (CF) Configuration and Administration Guide"

If you want to create a file system immediately after creating a management partition, see either of the following:

- "Chapter 11 File System Operations (Command)"
- "Chapter 10 File System Operations (GUI)"

9.1 Flow of Operations

This section describes the flow of operations on the management partition.

9.1.1 Creating the management partition

The following figure describes the flow of creating the management partition. For details on the operations in the figure, see "9.2 Creating of the management partition".

Figure 9.1 Operation flow for creating the management partition

9.1.2 Adding node configuration information to the management partition

The following figure describes the flow of operations for adding node configuration information to the management partition. For details on the operations in the figure, see "9.3 Adding node configuration information to the management partition".

Figure 9.2 Operation flow for addition of node configuration information to the management partition

9.1.3 Deleting node configuration information from the management partition

The following figure describes the flow of operations for deleting node configuration information from the management partition. For details on the operations in the figure, see "9.4 Deleting node configuration information from the management partition".

Figure 9.3 Operation flow for deleting node configuration information from the management partition

9.1.4 Changing the sfcfrmd daemon's startup mode registered in the management partition

The following figure describes the flow of operations for changing the sfcfrmd daemon's startup mode registered in the management partition. For details on the operations in the figure, see "9.5 Changing the sfcfrmd daemon's startup mode registered in the management partition".

Figure 9.4 Operation flow for changing the sfcfrmd daemon's startup mode registered in the management partition

9.1.5 Backup of the management partition information

The following figure describes the flow of operations for backing up the management partition information. For details on the operations in the figure, see "9.6 Backup of the management partition information".

Figure 9.5 Operation flow for backup of the management partition information

9.1.6 Restoring of the management partition information

The following figure describes the flow of operations for restoring the management partition information. For details on the operations in the figure, see "9.7 Restoring of the management partition information".

Figure 9.6 Operation flow for restoring of the management partition information

9.2 Creating of the management partition

This section describes how to create the management partition.

Note
Before creating the management partition, confirm that a cluster partition error has not occurred. If a cluster partition error has occurred, fix the cluster partition error first.

See
For information on how to check whether a cluster partition error has occurred and how to fix a cluster partition error, see "C.3.3 Corrective action when the sfcfrmd daemon is not started".

9.2.1 Setting Shared disks

The management partition of the GFS Shared File System is created on a GDS logical volume of the shared device.

See
For information about the shared device, see "8.2.1 Setting shared disks".

9.2.2 Initializing of the management partition

Use the sfcsetup(1m) command with the -c option to initialize the management partition.

See
For details on sfcsetup(1m), see sfcsetup(1m).

Note
To initialize the management partition, the sfcfrmd daemon must already be stopped on the cluster system.

An example of initializing /dev/sfdsk/gfs/rdsk/control as the management partition is shown below.

# sfcsetup -c /dev/sfdsk/gfs/rdsk/control <Enter>

Note
Confirm that the following requirements are met before re-initializing the management partition:

- All the GFS Shared File Systems have been deleted.
- No node information is registered in the management partition.

When a partition that has already been initialized as the management partition is to be initialized again, execute the command with the -f option specified, as shown below:

# sfcsetup -c -f /dev/sfdsk/gfs/rdsk/control <Enter>

Note
Initialization of the management partition sets the sfcfrmd daemon's startup mode to wait. Use the sfcsetup(1m) command with the -m option to change the sfcfrmd daemon's startup mode.

See
For details on the sfcfrmd daemon's startup mode, see "9.5.1 Choosing the sfcfrmd daemon's startup mode". For details on changing the sfcfrmd daemon's startup mode, see "9.5.3 Changing the sfcfrmd daemon's startup mode".

9.2.3 Registering node configuration information to the management partition

The sfcsetup(1m) command with the -a option registers node configuration information in the management partition. An example of registering node configuration information in the management partition is shown below.

Note
Register node configuration information on all the nodes that will share GFS Shared File Systems.

1. Register a node in the management partition.

# sfcsetup -a /dev/sfdsk/gfs/rdsk/control <Enter>

2. The path name of the management partition that has been set up can be confirmed by executing the sfcsetup(1m) command with the -p option specified.

# sfcsetup -p <Enter>
/dev/sfdsk/gfs/rdsk/control

3. Registered node configuration information can be confirmed by executing sfcsetup(1m) with no options specified.

# sfcsetup <Enter>
HOSTID      CIPNAME     MP_PATH
8038xxxx    sunnyrms    yes
8038yyyy    moonyrms    yes

9.2.4 Starting sfcfrmd daemon

The sfcfrmd daemon must be started on all nodes so that operation can begin. Use the sfcfrmstart(1m) command to start the sfcfrmd daemon.

See
For details on sfcfrmstart(1m), see sfcfrmstart(1m).

Note
To start the sfcfrmd daemon, node configuration information must have been registered in the management partition.

Execute the command as follows on each node on which the sfcfrmd daemon is to be started.

# sfcfrmstart <Enter>

9.3 Adding node configuration information to the management partition

This section describes how to add node configuration information to the management partition.

Note
Before adding node configuration information to the management partition, confirm that a cluster partition error has not occurred. If a cluster partition error has occurred, fix the cluster partition error first.

See
For information on how to check whether a cluster partition error has occurred and how to fix a cluster partition error, see "C.3.3 Corrective action when the sfcfrmd daemon is not started".

9.3.1 Setting Shared disks

The nodes to be added as configuration nodes must be set in the scope of the disk class to which the GDS logical volume belongs. This GDS logical volume is shared among the nodes that are registered in the management partition.

See
For information about the shared device, see "8.2.1 Setting shared disks".

9.3.2 Stopping sfcfrmd daemon

The sfcfrmd daemon should be stopped on all nodes in the cluster system before node configuration information is added to the management partition. Use sfcfrmstop(1m) to stop the sfcfrmd daemon.

See
For details on sfcfrmstop(1m), see sfcfrmstop(1m).

Note
Unmount all GFS Shared File Systems before stopping the sfcfrmd daemon.

Execute the command as follows on all the nodes in the cluster system.

# sfcfrmstop <Enter>

9.3.3 Adding node configuration information to the management partition

The sfcsetup(1m) command with the -a option adds node configuration information to the management partition. An example of adding node configuration information to the management partition is shown below. Perform the following operations on the node being added.

1. Add node configuration information to the management partition.

# sfcsetup -a /dev/sfdsk/gfs/rdsk/control <Enter>

2. The path name of the management partition that has been set up can be confirmed by executing the sfcsetup(1m) command with the -p option specified.

# sfcsetup -p <Enter>
/dev/sfdsk/gfs/rdsk/control

3. Added node configuration information can be confirmed by executing sfcsetup(1m) with no options specified.

# sfcsetup <Enter>
HOSTID      CIPNAME     MP_PATH
8038xxxx    sunnyrms    yes
8038yyyy    moonyrms    yes

9.3.4 Starting sfcfrmd daemon

The sfcfrmd daemon must be started on all nodes so that operation can begin.

See
For details on starting the sfcfrmd daemon, see "9.2.4 Starting sfcfrmd daemon".

9.4 Deleting node configuration information from the management partition

This section describes how to delete node configuration information from the management partition.

9.4.1 Stopping sfcfrmd daemon

The sfcfrmd daemon should be stopped on all nodes in the cluster system before node information is deleted from the management partition.

See
For details on stopping the sfcfrmd daemon, see "9.3.2 Stopping sfcfrmd daemon".

9.4.2 Deleting node configuration information from the management partition

To delete node configuration information from the management partition, run the sfcsetup(1m) command with the -d option. Running the command deletes the information for the node on which the sfcsetup(1m) command is executed. It also deletes the data on that node that is related to the management partition.

Note
Before deleting node configuration information from the management partition, confirm that the following requirement is met:

- No GFS Shared File System includes the node to be deleted in its shared node scope.

On the node that is to be deleted, execute the command as shown below:

# sfcsetup -d <Enter>

9.4.3 Starting sfcfrmd daemon

The sfcfrmd daemon must be started on all nodes so that operation can begin.

See
For details on starting the sfcfrmd daemon, see "9.2.4 Starting sfcfrmd daemon".

9.5 Changing the sfcfrmd daemon's startup mode registered in the management partition

This section describes how to change the sfcfrmd daemon's startup mode.

9.5.1 Choosing the sfcfrmd daemon's startup mode

In the GFS Shared File System, monitoring daemons such as sfcfrmd, sfcprmd, sfcfsd, sfchnsd, and sfcpncd monitor the cluster state. The monitoring daemons also ensure that only a single primary MDS is running on a given cluster at one time, so that file system access is consistent on all nodes.

If the state of some of the nodes in the cluster cannot be confirmed due to a cluster partition error, the node that should operate the primary MDS cannot be determined because a quorum does not exist. In this case, the GFS Shared File System services suspend startup of the sfcfrmd daemon in order to ensure data consistency.

The following two sfcfrmd daemon startup modes are available:

- wait
When a node is started up, startup of the sfcfrmd daemon is suspended until a cluster quorum exists, and startup of the node is suspended at the same time. Also, when CF is started up from the GUI, startup of the sfcfrmd daemon is abandoned if a quorum does not exist.

- wait_bg
When a node is started up or CF is started up from the GUI, startup of the sfcfrmd daemon is suspended in the background until it can be confirmed that a cluster quorum exists. Node startup or CF startup continues.

When the value is omitted (when the management partition is initialized), the mode defaults to wait.

If the GFS Shared File System is used for cluster applications, choose wait. If the GFS Shared File System is not used for cluster applications, you can choose wait_bg. In this case, the cluster applications will be started without waiting for the GFS Shared File System to become usable, and the time required for system startup can be reduced.

Note
- If you start some of the nodes after stopping all the nodes in the cluster, the state of the nodes that are not operating in the cluster system cannot be confirmed. Therefore, startup of the sfcfrmd daemon is suspended until all nodes are operating in the cluster system.
- If wait is set and you want to start up CF from the GUI after stopping CF on all the nodes, see "12.6 How to start up CF from GUI when a GFS Shared File System is used".

Information
While a node's startup is suspended, you can log in to the system using the network.

9.5.2 Stopping sfcfrmd daemon

To change the sfcfrmd daemon's startup mode registered in the management partition, the sfcfrmd daemon must be stopped on all the cluster nodes.

See
For details on stopping the sfcfrmd daemon, see "9.3.2 Stopping sfcfrmd daemon".

9.5.3 Changing the sfcfrmd daemon's startup mode

To change the sfcfrmd daemon's startup mode, use the -m option of sfcsetup(1m). How to change the sfcfrmd daemon's startup mode to wait_bg is explained below.

1. Confirm the current sfcfrmd daemon's startup mode.

# sfcsetup -m <Enter>
wait

2. Change the sfcfrmd daemon's startup mode.

# sfcsetup -m wait_bg <Enter>

3. Confirm the sfcfrmd daemon's startup mode after the change.

# sfcsetup -m <Enter>
wait_bg

9.5.4 Starting sfcfrmd daemon

Start the sfcfrmd daemon on all the nodes.

See
For details on starting the sfcfrmd daemon, see "9.2.4 Starting sfcfrmd daemon".

9.6 Backup of the management partition information

This section describes how to back up the management partition information.

9.6.1 Backup of the management partition information

Use the sfcgetconf(1m) command to make a backup of the management partition information, as shown below.

# sfcgetconf _backup_file_ <Enter>

See
For details on sfcgetconf(1m), see sfcgetconf(1m).

Note
To back up management partition information with sfcgetconf(1m), the sfcfrmd daemon must be running on the node where the command is executed. If the sfcfrmd daemon is not running, start it by using sfcfrmstart(1m). For details, see "9.2.4 Starting sfcfrmd daemon".

Make a backup of the management partition information after changing the configuration of the file system with mkfs_sfcfs(1m), sfcadd(1m), sfcadm(1m), or sfcnode(1m).

sfcgetconf(1m) generates a shell script named _backup_file_. The contents of the shell script are shown below.

# cat _backup_file_ <Enter>
#!/bin/sh
# This file is made by:
#   sfcgetconf _backup_file_
#   Tue Jun 18 09:08:
#
#---- fsid :  ----
# MDS primary (port)   : host1 (sfcfs-1)
# MDS secondory (port) : host2 (sfcfs-1)
# MDS other            :
# AC                   : host1,host2
# options              :
# device               : /dev/sfdsk/gfs01/rdsk/volume01
sfcadm -m host1,host2 -g host1,host2 -p sfcfs-1,sfcfs-1 /dev/sfdsk/gfs01/rdsk/volume01

9.7 Restoring of the management partition information

This section describes how to restore the management partition information.

9.7.1 Re-initializing the management partition

If a disk failure occurs in the GDS volume that was being used as the management partition, replace the disk unit, and then initialize the management partition in the GDS volume in which the disk unit was replaced.

See
For details on initializing the management partition, see "9.2.2 Initializing of the management partition".

9.7.2 Re-registering node configuration information to the management partition

Re-register node configuration information in the re-initialized management partition.

See
For details on registering node configuration information in the management partition, see "9.2.3 Registering node configuration information to the management partition".

9.7.3 Re-configuring the sfcfrmd daemon's startup mode in the management partition

Re-configure the sfcfrmd daemon's startup mode in the management partition.

Note
The sfcfrmd daemon's startup mode registered when the management partition is initialized is wait. This operation is required only if another sfcfrmd daemon startup mode is to be selected.

See
For details on changing the sfcfrmd daemon's startup mode, see "9.5.3 Changing the sfcfrmd daemon's startup mode".

9.7.4 Starting sfcfrmd daemon

The sfcfrmd daemon must be started on all nodes so that operation can begin.

See
For details on starting the sfcfrmd daemon, see "9.2.4 Starting sfcfrmd daemon".

9.7.5 Restoring of the management partition information

To restore the management partition information, execute the shell script generated by sfcgetconf(1m) as described in "9.6.1 Backup of the management partition information". The method of restoring the management partition information is shown below.

# sh _backup_file_ <Enter>
get other node information start ... end
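After the script completes, the restored configuration can be confirmed with sfcsetup(1m); for example, the registered management partition path can be displayed again (the output shown is illustrative):

# sfcsetup -p <Enter>
/dev/sfdsk/gfs/rdsk/control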

Chapter 10 File System Operations (GUI)

This chapter describes how to operate the GFS Shared File System by means of the GFS Management View.

The structure of this chapter is the same as that of "Chapter 11 File System Operations (Command)". For details about command operation, see "Chapter 11 File System Operations (Command)".

The management partition settings must have been made before you create a file system. For an outline of the management partition, see the following:

- "Chapter 5 Management Partition"

For details on how to set up the management partition, see the following:

- "Chapter 8 Management Partition Operations (GUI)"
- "Chapter 9 Management Partition Operations (Command)"

Start file system operation after you have created a file system. For details on file system operation, see the following:

- "Chapter 12 File System Management"

10.1 Flow of Operations

This section explains the flow of file system operations of the GFS Management View.

10.1.1 Creation

The following figure shows the basic design flow for creating a file system in the GFS Shared File System.

Figure 10.1 Operation flow for creating file system

See
For details about GUI operation in the above figure, see "10.2 Creation". For details about command operation in the above figure, see "11.2 Creation".

10.1.2 Change (in file system attributes)

Using the GFS Management View, the following file system attributes can be changed:

- Mount information
- File system tuning

The following figure shows the basic design flow for changing the file system attributes in the GFS Shared File System.

Figure 10.2 Operation flow for change in file system attributes

See
For details about GUI operation in the above figure, see "Changing the file system attributes". For details about command operation in the above figure, see "11.3 Change (file system attributes)".

10.1.3 Change (partition addition)

Using the GFS Management View, the following configuration change can be made:

- Addition of a file data partition

The following figure shows the basic design flow for changing the file system configuration in the GFS Shared File System.

Figure 10.3 Operation flow for partition addition

See
For details about GUI operation in the above figure, see "Changing the file system configuration (partition addition)". For details about command operation in the above figure, see "11.4 Change (partition addition)".

10.1.4 Change (shared node information)

The following describes the flow of operations to change information about the nodes sharing a file system. Information about nodes sharing a file system can be changed only when the file system is unmounted on all nodes.

Figure 10.4 Operation flow for changing shared node information

See
For details about GUI operation in the above figure, see "Changing the shared node information". For details about command operation in the above figure, see "11.5 Change (shared node information)".

10.1.5 Deletion

The following describes the flow of operations to delete a file system. A file system can be deleted only when the file system is unmounted on all nodes.

Figure 10.5 Operation flow for deleting a file system

See
For details about GUI operation in the above figure, see "10.4 Deletion". For details about command operation in the above figure, see "11.8 Deleting".

10.2 Creation

This section explains how to create a file system. It is easy to create a file system because the file system creation operation uses a wizard. To start the file system creation wizard, click [Create] on the [Operation] drop-down menu.

Note
Notes on operation
- If you click the <Back> button on a wizard screen, change a setting that affects subsequent screens, and then click the <Next> button, the information previously entered on the subsequent screens becomes invalid and is replaced by the default values corresponding to the new setting.
- The information for the Management View must be updated before the changes are reflected. To update the information immediately, click [Update Now] on the [View] drop-down menu of the main screen.

10.2.1 Creating a file system

The following describes the procedure for creating a file system.

(1) Setting a shared disk device

The GFS Shared File System uses logical volumes of GDS as shared devices. The status of the logical volumes of GDS must be ACTIVE on each node.

Note
The GDS logical volume should be set up as follows:

- Set the type of the disk class to which the GDS logical volume belongs to "shared". (Change GDS class attributes.)
- Set all the nodes that share the GFS Shared File System in the scope of the disk class to which the logical volume belongs. (Change GDS class attributes.)

See
For GDS disk class operations, see the applicable items under "Operation using Global Disk Services Management View" in the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide". For GDS logical volume operations, see the applicable items under "Operation using Global Disk Services Management View" in the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide".

(2) Setting share information

Click [Create] on the [Operation] menu, and the [Create File System Wizard] screen appears, as shown in "Figure 10.6 Create File System Wizard (1)". In the [Create File System Wizard (1)] screen, set the node information and enter the mount point.

Figure 10.6 Create File System Wizard (1)

Selecting node names
In [Node name], select the names of the sharing nodes. Be sure to select two nodes.

It is not possible to deselect the local node.

Selecting host names
To select host names other than those displayed, click the <Select> button, then select the LAN host that will be used for each node in the [Host Name Selection Wizard] screen shown below. If you do not click the <Select> button, the displayed host name will be used.

Figure 10.7 Host Name Selection Wizard

Choose the host name of the LAN to be used from [Candidate host names] on the left-hand side, then click the <Add> button to add it to [Selection host names] on the right-hand side. Note that only one host name can be selected at a time in [Selection host names]. To change [Selection host names], first delete the host name currently selected there.

Selecting the primary MDS and secondary MDS
Specify the nodes on which the [Primary MDS] and [Secondary MDS] will be started. A single, unique node must be selected for each MDS.

Setting mount point and creating a directory
Specify the full path name of a mount point in the [Mount point] field. If <yes> is selected for [Make directory], a directory is created with the following attributes:

Owner: root
Group: sys
Access rights: 775

After setting the share information items, click the <Next> button to proceed to the next screen, shown in "Figure 10.8 Create File System Wizard (2)". To restore the settings to their default values, click the <Reset> button. To cancel the file system creation operation, click the <Cancel> button.

(3) Selecting a partition

After completing the MDS configuration, the register partition screen is displayed, as shown in "Figure 10.8 Create File System Wizard (2)".

Figure 10.8 Create File System Wizard (2)

Select a partition from the [Candidate partitions] field and click the <Add> button. More than one partition can be selected at a time. However, a partition that is part of an existing file system, or that is a management partition, cannot be selected. After selecting a partition, click the <Next> button to continue. The next screen is shown in "Figure 10.9 Create File System Wizard (3)". To cancel all selections of partitions currently displayed in the [Component partitions] field, click the <Reset> button. To return to the [Create File System Wizard (1)] screen, click the <Back> button. To cancel the file system creation operation, click the <Cancel> button.

Note
If a GDS logical volume is not ACTIVE, it cannot be selected as a configuration partition. If it needs to be selected as a configuration partition, bring the logical volume into ACTIVE first.

(4) Setting partition information

In the [Create File System Wizard (3)] screen, select the area (META/LOG/DATA) to be assigned to each partition selected in the [Create File System Wizard (2)]. The partition to which META is assigned becomes the representative partition.

See
For an explanation of the combinations of areas (META/LOG/DATA) assigned to partitions, see "2.1.6 Partition configuration".

Figure 10.9 Create File System Wizard (3)

After setting the area information items, click the <Next> button to proceed to the next screen, shown in "Figure 10.10 Create File System Wizard (4)". To restore the settings to their default values, click the <Reset> button. To return to the [Create File System Wizard (2)] screen, click the <Back> button. To cancel the file system creation operation, click the <Cancel> button. If you do not need to change the extended information, detailed information, or mount information, click the <Create> button to create the file system.

(5) Setting extended information

In the [Create File System Wizard (4)] screen, set the [extended information] values so that they allow for future expansion of the file system and configuration changes.

Figure 10.10 Create File System Wizard (4)

Note
Set the maximum number of partitions and the maximum size of the data area only after adequately considering future data area expansion for the file system to be created. For details, see "4.2.5 For future expansion".

See
The above parameters for expansion correspond to the following mkfs_sfcfs(1m) options of the GFS Shared File System:
- [Maximum number of partitions]: maxvol=n
- [Maximum size of data area]: maxdsz=n
For details about the parameters, see mkfs_sfcfs(1m).

When the extended information items are correct, click the <Next> button to proceed. The next screen is shown in "Figure 10.11 Create File System Wizard (5)". To restore the settings to their default values, click the <Reset> button. To return to the [Create File System Wizard (3)] screen, click the <Back> button. To cancel the file system creation operation, click the <Cancel> button. If you do not need to change the detailed information and mount information, click the <Create> button to create the file system.

(6) Setting detailed information

In the [Create File System Wizard (5)] screen, set the detailed information.

Figure 10.11 Create File System Wizard (5)

See
The above parameters correspond to the following mkfs_sfcfs(1m) options of the GFS Shared File System:
- [File System threshold]: free=n
- [V-data threshold]: mfree=n
- [Byte per i-node]: nbpi=n
- [Block per 1-extent]: nblkpext=n
- [Size of update log area]: logsz=n
For details about the parameters, see mkfs_sfcfs(1m).

Note
The [File System threshold] is the value obtained by subtracting from 100 the minimum percentage of free area in the file system specified by the -o free option of mkfs_sfcfs(1m). The [V-data threshold] is the value obtained by subtracting from 100 the minimum percentage of free area in the V-data area specified by the -o mfree option of mkfs_sfcfs(1m).

When the detailed information items are correct, click the <Next> button to continue. The next screen is shown in "Figure 10.12 Create File System Wizard (6)". To restore the settings to their default values, click the <Reset> button. To return to the [Create File System Wizard (4)] screen, click the <Back> button. To cancel the file system creation operation, click the <Cancel> button. If you do not need to change the mount information, click the <Create> button to create the file system.

(7) Setting mount information

In the [Create File System Wizard (6)] screen, set the mount information.

Figure 10.12 Create File System Wizard (6)

See
Each of the above mount options corresponds to a mount option of mount_sfcfs(1m) as follows:
- [setuid execution disabled]: nosuid
- [RO mount]: ro
- [noatime]: noatime
- [auto mount disabled]: noauto
For details about the parameters, see mount_sfcfs(1m).

After setting the mount information, click the <Create> button to create the file system. To restore the settings to their default values, click the <Reset> button. To return to the [Create File System Wizard (5)] screen, click the <Back> button. To cancel the file system creation operation, click the <Cancel> button.

Note
If you create a file system by clicking the <Create> button, the GFS Management View will automatically add an entry to /etc/vfstab.
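Although the entry is added automatically, it can be worth confirming it on each sharing node before mounting. This is a minimal sketch, not part of the original procedure, assuming standard Solaris grep(1):

# grep sfcfs /etc/vfstab <Enter>

The meaning of each field in the listed entry is described in "11.2.3 Setting vfstab".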

(8) Mounting the file system

Mount the file system.

See
For information on how to mount the file system using a command, see "11.2.4 Mount".

10.3 Change

The GFS Management View provides three change operations:
- Changing the file system attributes
- Changing the file system configuration (partition addition)
- Changing the shared node information

Note
Note of Operation
- The information for the Management View must be updated before the changes are reflected in the display of the Main Screen. To immediately update the information, click [Update Now] on the [View] drop-down menu of the Main Screen.
- If the sfcfs information is not available, the file system cannot be changed. Take action according to the output error message.

10.3.1 Changing the file system attributes

The following describes the procedure for changing file system attributes using the GFS Management View. Changing the file system attributes includes changing the mount information.

(1) Unmounting a file system

Before changing the file system attributes, unmount the file system on all the nodes.

See
For information on how to unmount the file system using a command, see "11.3.1 Unmount".

(2) Selecting a file system

From the [Mount tree] or [File system information] field in the main screen, select the file system you want to change. Only file systems of the sfcfs type can be selected. Click [Change Attributes] on the [Operation] menu, and the [File System Attributes] screen appears. From the [File System Attributes] screen, the [Mount information], [Share information], and [Detailed information] screens can be selected using the screen switching tabs.

(3) Changing the mount information

In the [Mount information] screen, the [Mount information] can be changed. To change the settings, change the corresponding parameters. To restore the settings to their previous values, click the <Reset> button. To execute the mount information change operation, click the <Apply> button.

Specify the full path name of a mount point in the [Mount point] field. If [yes] is selected for [Make directory], a directory is created with the following attributes:

Owner: root
Group: sys
Access permissions: 775

Figure 10.13 Mount information for file system attributes

See
Each of the above mount options corresponds to a mount option of mount_sfcfs(1m) as follows:
- [setuid execution disabled]: nosuid
- [RO mount]: ro
- [noatime]: noatime
- [auto mount disabled]: noauto
For details about the parameters, see mount_sfcfs(1m).

Note
If you change the mount information by clicking the <Apply> button, the GFS Management View will automatically update the information in /etc/vfstab.

(4) Changing detailed information

In the [Detailed information] screen, tune the file system. To change the settings, change the corresponding parameters.

To restore the settings to their previous values, click the <Reset> button. To execute the detailed information change operation, click the <Apply> button.

Figure 10.14 Detailed information for file system attributes

See
The variable parameter of the detailed information corresponds to the following tuning parameter of the GFS Shared File System:
- [Communication timeout]: CON_TIMEOUT
For details about the parameter, see "Communication timeout value" in "Chapter 14 Tuning".

(5) Mounting a file system

After the file system attribute change operation is complete, mount the file system.

See
For information on how to mount the file system using a command, see "11.2.4 Mount".

10.3.2 Changing the file system configuration (partition addition)

This section explains the procedure for adding file data partitions. Changing a file system's configuration is easy because the configuration change operation uses a wizard. To start the Change File System Configuration Wizard, click [Change Configuration] on the [Operation] drop-down menu.

Note
Note of Operation
If you click the <Back> button on a wizard screen, change a setting that affects the following screens, and then click the <Next> button, the information already entered on the following screens becomes invalid and is reset to the default values derived from the new setting.

(1) Unmounting a file system

Before changing the file system configuration, unmount the file system on all the nodes.

See
For details about the command, see "11.3.1 Unmount".

(2) Setting shared disk device

A partition that will be added to the GFS Shared File System must be a logical volume of a GDS shared class. Also, the GDS logical volume must be ACTIVE.

Note
The GDS logical volume should be set up as follows.
- Set the type of the disk class to which the GDS logical volume belongs to "shared". (Change the GDS class attributes.)
- Include all the nodes that share the GFS Shared File System in the scope of the disk class to which the logical volume belongs. (Change the GDS class attributes.)

See
For GDS disk class operations and GDS logical volume operations, see the applicable items under "Operation using Global Disk Services Management View" in the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide".

(3) Selecting a file system

From the [Mount tree] or [File system information] field in the main screen, select the file system to be changed. Only file systems of the sfcfs type can be selected. Click [Change Configuration] on the [Operation] menu, and the [Change File System Configuration Wizard (1)] screen appears.

(4) Selecting a partition

From the [Candidate partitions] field, select the partition to be added and click the <Add> button. More than one partition can be selected at a time. Note that partitions that are already in use by file systems cannot be selected. To cancel all selections of partitions currently displayed in the [Component partitions] field, click the <Reset> button.

Figure 10.15 Change File System Configuration Wizard (1)

After selecting the partitions, click the <Next> button to continue. The next screen is shown in "Figure 10.16 Change File System Configuration Wizard (2)". To cancel the file system configuration change operation, click the <Cancel> button.

(5) Checking partition information

In the [Change File System Configuration Wizard (2)] screen, confirm the file system configuration including the added partition.

See
For an explanation of the combinations of areas (META/DATA) assigned to partitions, see "2.1.6 Partition configuration".

Note
The added partition is automatically allocated to the file data area.

Figure 10.16 Change File System Configuration Wizard (2)

After confirming the information items, click the <Modify> button to execute the file system configuration change operation. To return to the [Change File System Configuration Wizard (1)] screen, click the <Back> button. To cancel the file system configuration change operation, click the <Cancel> button.

(6) Mounting a file system

After the file system configuration change operation is complete, mount the file system if it is unmounted.

See
For details about the command, see "11.2.4 Mount".

10.3.3 Changing the shared node information

The following describes the procedure for changing shared node information using the GFS Management View.

(1) Unmounting a file system

Before changing the shared node information, unmount the file system on all the nodes.

See
For details about the command, see "11.3.1 Unmount".

(2) Selecting a file system

From the [Mount tree] or [File system information] field in the main screen, select the file system you want to change. Only file systems of the sfcfs type can be selected. Click [Change Attributes] on the [Operation] menu and go to the [Share information] screen using the screen-switching tab.

(3) Changing share information

In the [Share information] screen, the information on the nodes sharing the file system can be changed.

To change the settings, change the corresponding parameters. To restore the settings to their previous values, click the <Reset> button.

Figure 10.17 Share information for file system attributes

Selecting node names
In the [Node name] column, select the names of the sharing nodes. Be sure to select two nodes. It is not possible to deselect the local node, the primary MDS, or the secondary MDS.

Selecting a host name
To select host names other than those displayed, click the <Select> button, then select the LAN host that will be used for each node in the [Host Name Selection Wizard] screen shown below. If you do not click the <Select> button, the displayed host name will be used. The host name already selected for a node cannot be changed directly: first delete the currently selected host name, then select the LAN host from the candidate hosts and add it to the hosts for selection.

Figure 10.18 Host Name Selection Wizard

Note
Primary MDS and secondary MDS
The [Primary MDS] and [Secondary MDS] cannot be changed. If a change is necessary, the file system must be deleted and re-created.

See
For an explanation of each parameter, see sfcadm(1m).

(4) Confirming the shared node information change operation

To execute the shared node information change operation, click the <Apply> button.

Note
If you change shared node information by clicking the <Apply> button, the GFS Management View will automatically add and change the entries in /etc/vfstab.

(5) Mounting a file system

After the shared node information change operation is complete, mount the file system.

See
For details about the command, see "11.2.4 Mount".

10.4 Deletion

This section explains how to delete a file system.

Note
Note of Operation
- The information for the Management View must be updated before the changes are reflected in the display of the Main Screen. To immediately update the information, click [Update Now] on the [View] drop-down menu of the Main Screen.
- If the sfcfs information is not available, the file system cannot be changed. Take action according to the output error message.

10.4.1 Deleting the file system

The following describes the procedure for deleting a file system using the GFS Management View.

(1) Unmount the file system

Unmount the file system on all nodes before deletion.

See
For details on the unmount procedure, see "11.3.1 Unmount".

(2) Selecting the file system

From the [Mount tree] or [File system information] field in the main screen, select the file system to be deleted. Only file systems of the sfcfs type can be selected.

(3) Deleting the file system

Click [Delete] on the [Operation] menu and the following warning message appears. To delete the file system, click the <Yes> button. To cancel the deletion of the file system, click the <No> button.

Figure 10.19 File system deletion warning message

Note
If you delete the file system by clicking the <Yes> button, the GFS Management View will automatically delete the entry from /etc/vfstab.

Chapter 11 File System Operations (Command)

This chapter describes how to operate the GFS Shared File System by commands. The structure of this chapter is the same as that of "Chapter 10 File System Operations (GUI)". For details about GUI operation, see "Chapter 10 File System Operations (GUI)".

The management partition settings must have been made before you create a file system. For an outline of the management partition, see the following:
- "Chapter 5 Management Partition"
For details on how to set up the management partition, see the following:
- "Chapter 9 Management Partition Operations (Command)"
- "Chapter 8 Management Partition Operations (GUI)"
Start file system operation after you have created a file system. For details on file system operation, see the following:
- "Chapter 12 File System Management"

11.1 Flow of Operations

This section describes the flow of operations in the GFS Shared File System.

11.1.1 Creation

The following figure shows the basic design flow for creating a file system in the GFS Shared File System.

Figure 11.1 Operation flow for creating file system

See
For details about command operation in the above figure, see "11.2 Creation". For details about GUI operation in the above figure, see "10.2 Creation".

11.1.2 Change (in file system attributes)

In the GFS Shared File System, the following file system attributes can be changed:

- Mount information
- File system tuning

The following figure shows the basic design flow for changing the file system attributes in the GFS Shared File System.

Figure 11.2 Operation flow for change in file system attributes

See
For details about command operation in the above figure, see "11.3 Change (file system attributes)". For details about GUI operation in the above figure, see "10.3.1 Changing the file system attributes".

11.1.3 Change (partition addition)

In the GFS Shared File System, the following configuration change can be made:
- Addition of file data partitions

The following figure shows the basic design flow for changing the file system configuration in the GFS Shared File System.

Figure 11.3 Operation flow for partition addition

See
For details about command operation in the above figure, see "11.4 Change (partition addition)". For details about GUI operation in the above figure, see "10.3.2 Changing the file system configuration (partition addition)".

11.1.4 Change (shared node information)

In the GFS Shared File System, the following changes to shared node information can be made:
- Adding shared node information
- Deleting shared node information
- Updating shared node information

The following figure shows the basic design flow for changing the file system shared node information in the GFS Shared File System.

Figure 11.4 Operation flow for changing the shared node information

See
For details about command operation in the above figure, see "11.5 Change (shared node information)". For details about GUI operation in the above figure, see "10.3.3 Changing the shared node information".

11.1.5 Change (re-creating a file system)

The following figure shows the basic design flow for re-creating the file system in the GFS Shared File System.

Figure 11.5 Operation flow for re-creating a file system

See
For details about command operation in the above figure, see "11.6 Change (re-creating a file system)".

11.1.6 Change (MDS operational information)

The following figure shows the basic design flow for changing the MDS operational information in the GFS Shared File System.

Figure 11.6 Operation flow for changing the MDS operational information

See
For details about command operation in the above figure, see "11.7 Change (MDS operational information)".

11.1.7 Deletion

The following figure shows the basic design flow for deleting the file system in the GFS Shared File System.

Figure 11.7 Operation flow for deleting a file system

See
For details about command operation in the above figure, see "11.8 Deleting". For details about GUI operation in the above figure, see "10.4 Deletion".

11.2 Creation

This section describes the operations from GFS Shared File System creation to operation.
1. Setting shared disks
2. Creating a file system
3. Setting vfstab
4. Mount
5. Checking file system status
Notes are also provided for the case where the GFS Shared File System is created using a partition that has already been used by a file system.

11.2.1 Setting shared disks

The GFS Shared File System uses logical volumes of GDS as shared devices. The status of the logical volumes of GDS must be ACTIVE on each node.

Note
The GDS logical volume should be set up as follows.
- Set the type of the disk class to which the GDS logical volume belongs to "shared". (Change the GDS class attributes.)
- Include all the nodes that share the GFS Shared File System in the scope of the disk class to which the logical volume belongs. (Change the GDS class attributes.)

See
For GDS disk class operations, see the applicable items under "Operation using Global Disk Services Management View" in the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide".

For GDS logical volume operations, see the applicable items under "Operation using Global Disk Services Management View" in the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide".

11.2.2 Creating a file system

To create a GFS Shared File System, use mkfs_sfcfs(1m). Create the GFS Shared File System on any one of the file system shared nodes.

See
For details on mkfs_sfcfs(1m), see mkfs_sfcfs(1m). For details on mkfs(1m), see "Solaris X Reference Manual Collection".

Defaults of parameters used by mkfs_sfcfs(1m)
Specify parameter values suited to the intended use of the file system, decided beforehand as described in "Chapter 4 File System Design". The defaults of the parameters used by mkfs_sfcfs(1m) are as follows:

Table 11.1 Defaults of parameters used by mkfs_sfcfs(1m)
- Data block size: Always 8192 bytes
- Minimum free space (-o free=n): 10% of the file data area
- Number of i-nodes (-o nbpi=n): One per 8192 bytes of disk area, up to a maximum of 16 mega i-nodes
- Update log area size (-o logsz=n): 1% of the file system size (the available range is from 5 megabytes to 50 megabytes)
- Meta-data area size (-o metasz=n): About 10% of the file system size

Note
When the meta-data area size is not specified, the ratio of the meta-data area to the file system decreases as the file system size grows.

Examples of creating a representative file system
Examples of creating a typical GFS Shared File System are given below.

Single partition configuration
To create a GFS Shared File System with a single partition, specify the partition configuring the file system. In this case, areas for all types of data (meta-data area, update log area, and file data area) are created in the representative partition. The following is an example of using mkfs_sfcfs(1m) to create a file system with a single partition.

# mkfs -F sfcfs -o node=mikan,karin /dev/sfdsk/gfs01/rdsk/volume01 <Enter>
Representative partition (meta-data, log, and data): /dev/sfdsk/gfs01/rdsk/volume01
Shared hosts : mikan, karin
Primary MDS : mikan
Secondary MDS : karin

Figure 11.8 Single partition configuration

Multiple partitions configuration (multiple file data partitions)
To specify multiple partitions for the data area, specify a representative partition and file data partitions. In this case, a meta-data area, update log area, and data area are created in the representative partition. The following is an example of using mkfs_sfcfs(1m) to create a file system with multiple file data partitions.

# mkfs -F sfcfs -o data=/dev/sfdsk/gfs01/rdsk/volume02,data=/dev/sfdsk/gfs01/rdsk/volume03,node=mikan,karin /dev/sfdsk/gfs01/rdsk/volume01 <Enter>
Representative partition (meta-data, log, and data): /dev/sfdsk/gfs01/rdsk/volume01
Data partition: /dev/sfdsk/gfs01/rdsk/volume02, /dev/sfdsk/gfs01/rdsk/volume03
Shared hosts: mikan, karin
Primary MDS: mikan
Secondary MDS: karin

Figure 11.9 Multiple data partitions configuration

Multiple partitions configuration (separating the data area)
If multiple file data partitions are specified together with the "-o dataopt=y" option, the representative partition will not contain the file data area. The following example shows how to use mkfs_sfcfs(1m) to create a file system in which the file data area is placed only in the file data partitions and not in the representative partition.

# mkfs -F sfcfs -o dataopt=y,data=/dev/sfdsk/gfs01/rdsk/volume02,data=/dev/sfdsk/gfs01/rdsk/volume03,node=mikan,karin /dev/sfdsk/gfs01/rdsk/volume01 <Enter>
Representative partition (meta-data and log): /dev/sfdsk/gfs01/rdsk/volume01
Data partition: /dev/sfdsk/gfs01/rdsk/volume02, /dev/sfdsk/gfs01/rdsk/volume03
Shared hosts: mikan, karin
Primary MDS: mikan
Secondary MDS: karin

Figure 11.10 Separating the data area
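Whichever layout you choose, the resulting partition configuration can be verified with sfcinfo(1m), described in "12.1 File System Management Commands". This is a minimal sketch using the representative partition from the examples above; each component partition should be listed with its assigned area type (META, LOG, DATA):

# sfcinfo /dev/sfdsk/gfs01/dsk/volume01 <Enter>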

Customizing a GFS Shared File System
Parameters for customizing a file system are provided by mkfs_sfcfs(1m), which is used to create GFS Shared File Systems. Typical parameters are described below, and a combined example is given at the end of this section.

Update log area size (-o logsz=n)
Specifies the size of the update log area with a value from 5 to 100 megabytes. The default size of the update log area is 1% of the file system size. If, however, 1% of the file system size is less than 5 megabytes, 5 megabytes is set. Also, if the value is greater than 50 megabytes, 50 megabytes is set.

Meta-data area size (-o metasz=n)
Specifies the size of the meta-data area. The default is about 10% of the file system size. However, when the file system size is larger, the ratio of the meta-data area becomes smaller. The minimum value that can be specified is the same as the default. The maximum value is the minimum size required for managing 1-mega V-data. (The total number of V-data entries in the file system can be confirmed by executing df_sfcfs(1m).) However, a value larger than the size of the representative partition cannot be specified.

Maximum data area size (-o maxdsz=n)
Specifies the maximum total size of the data area, anticipating the later addition of file data partitions to the GFS Shared File System. If a file data partition is added beyond the maximum data area size, the area management information might be insufficient; if this occurs, part of the free file data area might not be usable. You can add a file data partition using sfcadd(1m). The maximum size that can be specified is less than 1,048,576 megabytes (1 terabyte).

Maximum number of partitions (-o maxvol=n)
Specifies the maximum number of partitions that may ever comprise this GFS Shared File System. The types of partitions configuring the GFS Shared File System are the representative partition, update log partition, and file data partition; the value specified here is the total across all types. The default value is 16. Specifiable values range from 1 up to the upper limit of partitions for a file system (see "1.7 Upper Limits of the File System").

Setting MDS operational information
The following shows an example of creating a typical GFS Shared File System with MDS operational information specified to improve availability.

Priority settings for MDS placement
Specify the priority of placing the primary MDS and secondary MDS using the shared host names in the -o node option of the mkfs_sfcfs(1m) command. The primary MDS and secondary MDS candidates are selected in the order the shared host names are specified. The primary MDS is ultimately determined based on the operational status of the actual shared nodes at the time the file system starts operating. The following shows an example of creating a file system using mkfs_sfcfs(1m), specifying mikan as the primary MDS candidate and karin as the secondary MDS candidate.

# mkfs -F sfcfs -o node=mikan,karin /dev/sfdsk/gfs01/rdsk/volume01 <Enter>
Representative partition (meta-data, log and data): /dev/sfdsk/gfs01/rdsk/volume01
Shared hosts: mikan, karin
Primary MDS: mikan
Secondary MDS: karin
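The customization parameters described above can be combined in a single mkfs invocation. The following is an illustrative sketch only, not an example from the original manual: the option values (a 512-gigabyte maximum data area, at most 24 partitions, and a 50-megabyte update log) are arbitrary and should be replaced with values from your own file system design as discussed in "Chapter 4 File System Design":

# mkfs -F sfcfs -o maxdsz=524288,maxvol=24,logsz=50,node=mikan,karin /dev/sfdsk/gfs01/rdsk/volume01 <Enter>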

11.2.3 Setting vfstab

In the GFS Shared File System, an entry for the created file system must be added to /etc/vfstab on all the nodes that share the file system. Entries for the same file system must be identical on all nodes sharing that file system. Specify a directory in the "mount point" field of /etc/vfstab. This is the default mount point when the mount_sfcfs(1m) command is entered without an argument for the mount point. Specify a directory that exists on all nodes sharing the file system. In the "mount options" field, specify the options to be used for mounting.

Note
Make sure to specify "-" in the "fsck pass" field, and "no" in the "mount at boot" field.

Information
You can prevent the file system from mistakenly being used as another one by also adding an entry for the created file system to /etc/vfstab on the nodes that do not share the file system.

Note
On nodes that do not share the file system, set the "mount options" field to "noauto".

Examples of /etc/vfstab entries are shown below:

Table 11.2 When mounting in rw mode at startup of the node
- device to mount: /dev/sfdsk/gfs01/dsk/volume01
- device to fsck: /dev/sfdsk/gfs01/rdsk/volume01
- mount point: /mnt/fs1
- FS type: sfcfs
- fsck pass: -
- mount at boot: no
- mount options: rw

Table 11.3 When mounting is not performed at startup of the node
- device to mount: /dev/sfdsk/gfs01/dsk/volume01
- device to fsck: /dev/sfdsk/gfs01/rdsk/volume01
- mount point: /mnt/fs1
- FS type: sfcfs
- fsck pass: -
- mount at boot: no
- mount options: rw,noauto

Table 11.4 Mount options
- closesync: All non-updated data in the relevant file is reflected when the file is finally closed.
- noatime: Does not update the file access time.
- nosuid: By default, a file system is mounted with "setuid" execution permitted. When "nosuid" is specified, the file system is mounted with "setuid" execution disallowed.
- rw, ro: Mounts in read/write mode (rw) or read-only mode (ro). By default, rw is used.
- noauto: Does not mount when a node is started. By default, mounting is performed when a node is started.
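Written out as an actual /etc/vfstab line, the entry of Table 11.2 corresponds to the following single line (fields in the standard vfstab order; the device and mount point are the ones used throughout this chapter's examples):

/dev/sfdsk/gfs01/dsk/volume01 /dev/sfdsk/gfs01/rdsk/volume01 /mnt/fs1 sfcfs - no rw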

11.2.4 Mount

If a GFS Shared File System is used on all the nodes, mount it on all the nodes. In such a case, execute sfcmntgl(1m) on any one of the nodes where the GFS Shared File System will be used. If it is used on only one node, mount the file system using mount_sfcfs(1m), as for a ufs file system. If sfcmntgl(1m) is executed, the sfcfsd daemon and the MDS are started on the nodes where the MDS is configured. Then, the file system is mounted on all the nodes.

Note
If the file system is not added to /etc/vfstab, mounting of the file system will fail. Also, mount options cannot be specified with sfcmntgl(1m) or mount_sfcfs(1m); when either is executed, the options specified in the "mount options" field of /etc/vfstab are used.

Mount of all nodes
If the file system is to be mounted on all the nodes, execute sfcmntgl(1m) on any one of the nodes.

See
For details on sfcmntgl(1m), see sfcmntgl(1m).

You can mount the file system on all the nodes as follows:
- When the mount point is specified:
# sfcmntgl /mnt/fs1 <Enter>
Mount point: /mnt/fs1
- When a representative partition is specified:
# sfcmntgl /dev/sfdsk/gfs01/dsk/volume01 <Enter>
Representative partition: /dev/sfdsk/gfs01/dsk/volume01
- When both the mount point and a representative partition are specified:
# sfcmntgl /dev/sfdsk/gfs01/dsk/volume01 /mnt/fs1 <Enter>
Representative partition: /dev/sfdsk/gfs01/dsk/volume01
Mount point: /mnt/fs1
If mounting fails for a node, the name of the node on which the failure occurred is reported.

Mount
To mount the file system on a specific node, use mount_sfcfs(1m) on that node. The file system can be mounted only on nodes that share the file system.

See
For details on mount_sfcfs(1m), see mount_sfcfs(1m). For details on mount(1m), see "Solaris X Reference Manual Collection".

Mount the GFS Shared File System on the node as shown below:
- When the mount point is specified:
# mount /mnt/fs1 <Enter>
Mount point: /mnt/fs1
- When a representative partition is specified:
# mount /dev/sfdsk/gfs01/dsk/volume01 <Enter>
Representative partition: /dev/sfdsk/gfs01/dsk/volume01
- When both the mount point and a representative partition are specified:
# mount /dev/sfdsk/gfs01/dsk/volume01 /mnt/fs1 <Enter>
Representative partition: /dev/sfdsk/gfs01/dsk/volume01
Mount point: /mnt/fs1

11.2.5 Checking file system status

sfcrscinfo(1m) can be used to check whether a GFS Shared File System can be mounted. Execute the command on any one of the nodes where the file system is shared.

See
For details on sfcrscinfo(1m), see sfcrscinfo(1m).

The following example shows how to check the mount status of the GFS Shared File System whose representative partition is /dev/sfdsk/gfs01/rdsk/volume01 with sfcrscinfo(1m).

# sfcrscinfo -m /dev/sfdsk/gfs01/rdsk/volume01 <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID MDS/AC STATE S-STATE RID-1 RID-2 RID-N hostname
2 MDS(P) run mikan
2 AC run mikan
2 MDS(S) wait karin
2 AC run karin

Check that either MDS(P) or MDS(S) is "run -" and that the access client (AC) of each node sharing the file system is also "run -". If both are "run -", the file system is running properly. If the status indicates "stop -", they are stopped properly.

11.2.6 Notes applying when the partitions of a created file system are used

When creating a GFS Shared File System using a partition of an existing file system, be aware of the following important information.

The GFS Shared File System
To use a partition that is already in use by an existing GFS Shared File System for a new GFS Shared File System, delete the existing file system and then create the new file system.

See
For information about deleting a file system, see "11.8 Deleting".

File systems other than the GFS Shared File System
To use a partition that is in use by a file system other than the GFS Shared File System, unmount that file system on all nodes, delete it in accordance with the procedure for that file system, and delete its definition in /etc/vfstab. Then create the GFS Shared File System.

11.3 Change (file system attributes)

To change mount information (the GFS Shared File System attribute change) or to tune the file system, perform the following tasks:
1. Unmount
2. Change the file system attributes
3. Mount

11.3.1 Unmount

To change the GFS Shared File System, unmount it on all the nodes. First, stop using the file system by stopping applications. Then, unmount the file system. Check that the file system is unmounted properly on all the nodes using sfcrscinfo(1m).

See
To identify processes that are using the file system, execute fuser(1m). For details about the command, see the "Solaris X Reference Manual Collection". For information on how to check the file system status using sfcrscinfo(1m), see "11.2.5 Checking file system status".

Unmount of all nodes
To unmount the file system on all the nodes, use sfcumntgl(1m). Execute the command on any one of the nodes where the file system is shared.

See
For details on sfcumntgl(1m), see sfcumntgl(1m).

- When the mount point is specified:
# sfcumntgl /mnt/fs1 <Enter>
Mount point: /mnt/fs1
- When a representative partition is specified:
# sfcumntgl /dev/sfdsk/gfs01/dsk/volume01 <Enter>
Representative partition: /dev/sfdsk/gfs01/dsk/volume01
If unmounting fails on a node, the name of the node on which the failure occurred is reported.
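If unmounting fails because the file system is still busy, the fuser(1m) command mentioned above can identify the processes that are using it. A minimal sketch, assuming the mount point /mnt/fs1 used in the examples (the -c option reports processes using the mounted file system):

# fuser -c /mnt/fs1 <Enter>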

Unmount
To unmount the file system on specific nodes, use umount_sfcfs(1m). Execute the command on the node where the file system is to be unmounted; the node must be one of the nodes sharing the file system.

See
For details on umount_sfcfs(1m), see umount_sfcfs(1m). For details on umount(1m), see "Solaris X Reference Manual Collection".

Unmount the GFS Shared File System on the node as shown below:
- When the mount point is specified:
# umount /mnt/fs1 <Enter>
Mount point: /mnt/fs1
- When a representative partition is specified:
# umount /dev/sfdsk/gfs01/dsk/volume01 <Enter>
Representative partition: /dev/sfdsk/gfs01/dsk/volume01

11.3.2 Change the file system attributes

Changing the mount information
To change the mount information of the GFS Shared File System, change the information in /etc/vfstab. The mount point and mount options parameters can be changed.

See
For details of setting /etc/vfstab, see "11.2.3 Setting vfstab".

Tuning the file system
The GFS Shared File System allows the communication timeout value to be changed. Execute sfcadm(1m) on any one of the nodes where the file system is shared to change the timeout value.

See
For details of sfcadm(1m), see sfcadm(1m).

The following example shows how to set the timeout value to 180 seconds for an existing file system (/dev/sfdsk/gfs01/rdsk/volume01) using sfcadm(1m).

# sfcadm -o CON_TIMEOUT=180 /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

11.3.3 Mount

Mount the file system after completing the GFS Shared File System changes.

See
For details on how to mount the GFS Shared File System, see "11.2.4 Mount".

11.4 Change (partition addition)

To add a file data partition (a GFS Shared File System configuration change), perform the following tasks:
1. Unmount
2. Setting shared disks
3. Partition addition
4. Mount

11.4.1 Unmount

Unmount the GFS Shared File System on all the nodes.

See
For details on how to unmount the GFS Shared File System, see "11.3.1 Unmount".

11.4.2 Setting shared disks

The partition that will be added to the GFS Shared File System must be a GDS logical volume, and the volume must be ACTIVE on each node.

Note
The GDS logical volume should be set up as follows.
- Set the type of the disk class to which the GDS logical volume belongs to "shared". (Change the GDS class attributes.)
- Include all the nodes that share the GFS Shared File System in the scope of the disk class to which the logical volume belongs. (Change the GDS class attributes.)

See
For GDS disk class operations and GDS logical volume operations, see the applicable items under "Operation using Global Disk Services Management View" in the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide".

11.4.3 Partition addition

To add a partition as file data area, use sfcadd(1m). Execute the command on any one of the nodes where the file system is shared.

See
For details on sfcadd(1m), see sfcadd(1m).

Note
If you add a partition to a file system that was not stopped properly, recover the file system by executing fsck_sfcfs(1m) in advance.

How to add a file data partition (/dev/sfdsk/gfs01/rdsk/volume02) to an existing file system (/dev/sfdsk/gfs01/rdsk/volume01) with sfcadd(1m) is shown below.

1. Check the configuration of the current file system.

# sfcinfo /dev/sfdsk/gfs01/dsk/volume01 <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID special size Type mount
1 /dev/sfdsk/gfs01/dsk/volume01( ) META
1 /dev/sfdsk/gfs01/dsk/volume01( ) 5116 LOG
1 /dev/sfdsk/gfs01/dsk/volume01( ) DATA

2. Add a file data partition (/dev/sfdsk/gfs01/dsk/volume02) as file data area.

# sfcadd -D /dev/sfdsk/gfs01/rdsk/volume02 /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

3. Confirm that the file data partition has been added.

# sfcinfo /dev/sfdsk/gfs01/dsk/volume01 <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID special size Type mount
1 /dev/sfdsk/gfs01/dsk/volume01( ) META
1 /dev/sfdsk/gfs01/dsk/volume01( ) 5116 LOG
1 /dev/sfdsk/gfs01/dsk/volume01( ) DATA
1 /dev/sfdsk/gfs01/dsk/volume02( ) DATA

Note
To add a file data partition, be aware of the following:
- A file data partition cannot be added if doing so would exceed the maximum number of partitions (-o maxvol=n) specified for the file system by mkfs_sfcfs(1m).
- When the file data area size after adding the partition exceeds the maximum file data area size specified by mkfs_sfcfs(1m) (-o maxdsz=n), part of the free area in the file data area might become unusable because of a shortage of the management area.

11.4.4 Mount

After the GFS Shared File System change is completed, mount the file system.

See
For details on how to mount the GFS Shared File System, see "11.2.4 Mount".

11.5 Change (shared node information)

To add or delete a node from the nodes sharing the GFS Shared File System, perform the following tasks:
1. Unmount
2. Setting shared disks (when adding a node)

3. Changing shared node information
4. Setting vfstab
5. Mount

Note
Addition of a node is not allowed if it would exceed the upper limits of the GFS Shared File System; see "1.7 Upper Limits of the File System". A node where the primary MDS and secondary MDS are configured cannot be deleted from the group of shared nodes.

11.5.1 Unmount

To change shared node information, unmount the GFS Shared File System on all the nodes.

See
For details on how to unmount the GFS Shared File System, see "11.3.1 Unmount".

11.5.2 Setting shared disks (when adding a node)

When adding a node to a group of nodes that share the file system, be aware of the following important information.

Note
The node must be cluster-configured. The node must be set up to use the GFS Shared File System that is registered in the management partition. It is necessary to include the node being added in the scope of the disk classes of the GDS logical volumes that are used as the management partition and as the GFS Shared File System.

See
For information on how to add a node to a cluster system, see "Expanding the Operation Configuration" in the "PRIMECLUSTER Installation and Administration Guide". For information on how to operate a disk class to which a GDS logical volume belongs, see "Operation using GDS Management View" in the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide". For information on how to add a node to a group of nodes that provide the GFS Shared File System, see "8.1.2 Adding node configuration information to the management partition" or "9.1.2 Adding node configuration information to the management partition".

11.5.3 Changing shared node information

The GFS Shared File System stores information about the nodes sharing the file system in each partition, to restrict access from nodes other than the sharing nodes. The node information mainly contains the following:
- Host ID
- Host name
To add and delete node information for the file system, use sfcnode(1m).

See
For details on the sfcnode(1m) options, see sfcnode(1m).

Adding node information
Described below is the procedure for using sfcnode(1m) to add node information (moony) to an existing file system (/dev/sfdsk/gfs01/rdsk/volume01). Execute the command on any one of the nodes where the file system is shared.

1. Check that the target file system is unmounted on all nodes sharing the file system. You can do this by checking that "STOP" is displayed in all status fields of the sfcinfo(1m) output.

# sfcinfo -n /dev/sfdsk/gfs01/dsk/volume01 <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID hostid status hostname
1 80a4f75b STOP sunny

2. Add the node information.

# sfcnode -a moony /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

3. Confirm that the node information has been added. You can do this by checking that a moony field is newly displayed.

# sfcinfo -n /dev/sfdsk/gfs01/dsk/volume01 <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID hostid status hostname
1 80a4f75b STOP sunny
1 STOP moony

4. Check the MDS allocation status.

# sfcrscinfo -m /dev/sfdsk/gfs01/rdsk/volume01 <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID MDS/AC STATE S-STATE RID-1 RID-2 RID-N hostname
1 MDS(P) stop sunny
1 AC stop sunny
1 AC stop moony

5. If necessary, set the added node as an MDS node.

# sfcadm -g,moony /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

6. Check whether the node has been added as an MDS node. If moony is displayed as MDS(S), the node has been added.

# sfcrscinfo -m /dev/sfdsk/gfs01/rdsk/volume01 <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID MDS/AC STATE S-STATE RID-1 RID-2 RID-N hostname
1 MDS(P) stop sunny
1 AC stop sunny
1 MDS(S) stop moony
1 AC stop moony

Deleting node information
Described below is the procedure for using sfcnode(1m) to delete node information (moony) from an existing file system (/dev/sfdsk/gfs01/rdsk/volume01). If you are deleting node information while the file system is unmounted, execute sfcnode(1m) on a node that shares the target file system.

1. Check that the file system is unmounted on all nodes configuring the current file system and all nodes sharing the file system. You can do this by checking that "STOP" is displayed in all status fields of the sfcinfo(1m) output.

# sfcinfo -n /dev/sfdsk/gfs01/dsk/volume01 <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID hostid status hostname
1 80a4f75b STOP sunny
1 STOP moony

2. Check whether the node whose information is to be deleted is an MDS node. If moony is displayed as MDS(S), it is an MDS node.

# sfcrscinfo -m /dev/sfdsk/gfs01/rdsk/volume01 <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID MDS/AC STATE S-STATE RID-1 RID-2 RID-N hostname
1 MDS(P) stop sunny
1 AC stop sunny
1 MDS(S) stop moony
1 AC stop moony

3. If the node whose information is to be deleted is an MDS node, deallocate it as an MDS node.

# sfcadm -g sunny /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

4. Check that the node is no longer configured as an MDS node. If moony is no longer displayed as MDS(S), its allocation as an MDS node has been removed.

# sfcrscinfo -m /dev/sfdsk/gfs01/rdsk/volume01 <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID MDS/AC STATE S-STATE RID-1 RID-2 RID-N hostname
1 MDS(P) stop sunny
1 AC stop sunny
1 AC stop moony

5. Delete the node information.

# sfcnode -d -h moony /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

6. Confirm that the node information has been deleted. You can do this by checking that the moony field is no longer displayed in the sfcinfo(1m) output.

# sfcinfo -n /dev/sfdsk/gfs01/dsk/volume01 <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID hostid status hostname
1 80a4f75b STOP sunny

11.5.4 Setting vfstab

In the GFS Shared File System, it is necessary to add a mount entry to /etc/vfstab on all of the nodes that share the file system. In addition, adding a mount entry to /etc/vfstab on the nodes that do not share the file system prevents the special file from being mistakenly used as another file system. Add an entry for the GFS Shared File System to /etc/vfstab on each node that has been added.

If nodes are deleted, change the /etc/vfstab settings on them so that the GFS Shared File System will not be mounted.

See
For details on setting /etc/vfstab, see "11.2.3 Setting vfstab".

11.5.5 Mount

After the change of the shared node information is completed, mount the file system.

See
For details on how to mount the GFS Shared File System, see "11.2.4 Mount".

11.6 Change (re-creating a file system)

To rebuild the GFS Shared File System, or to restore all files from a backup in order to clean up fragmentation, re-create the file system as follows:
1. Unmount
2. Re-creating the file system
3. Mount

11.6.1 Unmount

To re-create the GFS Shared File System, unmount the file system.

See
For details on how to unmount the GFS Shared File System, see "11.3.1 Unmount".

11.6.2 Re-creating the file system

To re-create the file system, use mkfs_sfcfs(1m). You can re-create the file system in the same configuration as before, without having to delete it, by using mkfs_sfcfs(1m) with the "-o force" option. Execute the command on any one of the nodes where the file system is shared.

See
For details on the mkfs_sfcfs(1m) options, see mkfs_sfcfs(1m).

The following is an example of using mkfs_sfcfs(1m) to re-create a file system with a single partition.

# mkfs -F sfcfs -o force,node=mikan,karin /dev/sfdsk/gfs01/rdsk/volume01 <Enter>
Representative partition (meta-data, log, and data): /dev/sfdsk/gfs01/rdsk/volume01
Shared hosts: mikan, karin
Primary MDS: mikan
Secondary MDS: karin

Note
When the file system is re-created, the data of the previous file system is deleted. Back up your data beforehand if necessary.

11.6.3 Mount

After re-creating of the GFS Shared File System is completed, mount the file system.

See
For details on how to mount the GFS Shared File System, see "11.2.4 Mount".

11.7 Change (MDS operational information)

To change the MDS operational information of the GFS Shared File System, perform the following tasks:
1. Unmount
2. Changing the MDS operational information
3. Mount

11.7.1 Unmount

To change the MDS operational information of the GFS Shared File System, unmount the file system.

See
For details on how to unmount the GFS Shared File System, see "11.3.1 Unmount".

11.7.2 Changing the MDS operational information

In the GFS Shared File System, the following information about all nodes that operate the MDS is retained:
- Priority of nodes on which to place the MDS as primary or secondary MDS
Use sfcadm(1m) to change information about the nodes on which the MDS is placed.

See
For details on the sfcadm(1m) options, see sfcadm(1m).

The following example shows how to change the priority of the nodes where an MDS is configured to karin and mikan, using sfcadm(1m).

1. Check the current file system configuration.

# sfcrscinfo -m /dev/sfdsk/gfs01/rdsk/volume01 <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID MDS/AC STATE S-STATE RID-1 RID-2 RID-N hostname
1 MDS(P) stop mikan
1 AC stop mikan
1 MDS(S) stop karin
1 AC stop karin

From the result of executing sfcrscinfo -m, you can check that the priority order of the MDS nodes is mikan (primary) and karin (secondary).

2. Change the priority of the nodes where an MDS is configured.

# sfcadm -g karin,mikan /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

3. Check that the priority of the nodes where an MDS is configured has been changed.

# sfcrscinfo -m /dev/sfdsk/gfs01/rdsk/volume01 <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID MDS/AC STATE S-STATE RID-1 RID-2 RID-N hostname
1 MDS(P) stop karin
1 AC stop karin
1 MDS(S) stop mikan
1 AC stop mikan

11.7.3 Mount

After the change of the MDS operational information is completed, mount the file system.

See
For details on how to mount the GFS Shared File System, see "11.2.4 Mount".

11.8 Deleting

To delete the GFS Shared File System, perform the following tasks:
1. Unmount
2. Removing the entry in /etc/vfstab
3. Deleting the file system

11.8.1 Unmount

To delete the GFS Shared File System, unmount the file system. If there are any applications that use the file system, stop them as well.

11.8.2 Removing the entry in /etc/vfstab

To delete the GFS Shared File System, delete its entry from /etc/vfstab on all the nodes.

See
For details on setting /etc/vfstab, see "11.2.3 Setting vfstab".

11.8.3 Deleting the file system

To delete a file system that is not being used in the GFS Shared File System, use sfcadm(1m) with the -D option specified. Execute the command on any one of the nodes where the file system is shared.

See
For details on the sfcadm(1m) options, see sfcadm(1m).

How to delete a file system using sfcadm(1m) is shown below.

1. Check the current file system information.

# sfcinfo -a <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID special size Type mount
1 /dev/sfdsk/gfs01/dsk/volume01( ) META
1 /dev/sfdsk/gfs01/dsk/volume01( ) 5116 LOG
1 /dev/sfdsk/gfs01/dsk/volume01( ) DATA
/dev/sfdsk/gfs02/dsk/volume01:
FSID special size Type mount
2 /dev/sfdsk/gfs02/dsk/volume01( ) META
2 /dev/sfdsk/gfs02/dsk/volume01( ) 5116 LOG
2 /dev/sfdsk/gfs02/dsk/volume01( ) DATA

2. Delete the file system whose representative partition is /dev/sfdsk/gfs02/rdsk/volume01.

# sfcadm -D /dev/sfdsk/gfs02/rdsk/volume01 <Enter>

3. Confirm that the file system has been deleted.

# sfcinfo -a <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID special size Type mount
1 /dev/sfdsk/gfs01/dsk/volume01( ) META
1 /dev/sfdsk/gfs01/dsk/volume01( ) 5116 LOG
1 /dev/sfdsk/gfs01/dsk/volume01( ) DATA

Chapter 12 File System Management

This chapter describes the procedures for managing the GFS Shared File System using basic commands. To execute the procedures explained in this chapter, a file system must have been created in advance. For details on how to create a file system, see the following:
- "Chapter 10 File System Operations (GUI)"
- "Chapter 11 File System Operations (Command)"

12.1 File System Management Commands

For each file system, there are two different types of management commands: the general purpose management commands and the system specific management commands. The general purpose management commands provided by Solaris OS are used for basic functions, and the system specific commands are called by specifying options or file system types. The GFS Shared File System provides exclusive management commands that act as the general purpose management commands and as the system specific management commands for the GFS Shared File System's original functions. Specify "sfcfs" for the -F option as the file system type when using a general purpose management command on a GFS Shared File System. If you omit the -F option, a search is made for the entry matching "special" or "mount_point" in /etc/vfstab, after which the registered file system type is automatically selected.

Table 12.1 General purpose file system management commands
- df(1m): Displays the number of free disk blocks, number of files, etc.
- fsck(1m): Checks a file system for consistency and repairs it.
- fstyp(1m): Determines the file system type.
- mkfs(1m): Creates a file system.
- mount(1m): Mounts a file system.
- umount(1m): Unmounts a file system.

See
For details of the general purpose management commands, see each command's page of the "Solaris X Reference Manual".

Table 12.2 Management commands specific to the GFS Shared File System
- sfcadd: Adds file data partitions.
- sfcadm: Changes partition information settings.
- sfcfrmstart: Starts the sfcfrmd daemon on a local node.
- sfcfrmstop: Stops the sfcfrmd daemon on a local node.
- sfcgetconf: Makes a backup of the management partition.
- sfcinfo: Displays partition information.
- sfcmntgl: Mounts the GFS Shared File System on all nodes.
- sfcnode: Adds, deletes, and changes node configuration information.
- sfcrscinfo: Displays file system information.

- sfcsetup: Initializes the management partition; adds, deletes, and displays node information; displays the path of the management partition; and registers and displays the startup mode of the sfcfrmd daemon.
- sfcstat: Reports statistical information of a GFS Shared File System.
- sfcumntgl: Unmounts the GFS Shared File System from all nodes.

Commands of the GFS Shared File System are classified as follows:
- Used on a mounted file system: umount, sfcstat
- Used on an unmounted file system: fsck, mkfs (except -m), mount, sfcadd, sfcadm, sfcfrmstart, sfcfrmstop, sfcnode, sfcsetup
- Used on both mounted and unmounted file systems: df, mkfs -m, sfcgetconf, sfcinfo, sfcmntgl (*1), sfcrscinfo, sfcumntgl
(*1: An error occurs if the file system is already mounted on all shared nodes.)

12.2 Checking a File System for Consistency and Repairing It

If a file system is damaged and its consistency is lost, for example due to an automatic recovery error after a primary MDS failure, the file system must be checked and restored using fsck_sfcfs(1m). The GFS Shared File System provides the update log function to ensure high-speed recovery if an error such as a system failure occurs. If the update log function has been enabled, the file system can be repaired quickly regardless of the size of the file system. This is possible because of update log replay, which updates the un-updated portion of the meta-data located in the update log area.

By default, or when "-o log" is specified, fsck_sfcfs(1m) repairs the file system by replaying the update log. If the update log data has been physically damaged, fsck_sfcfs(1m) does not execute update log replay but automatically performs a full check on the file system. To meet the need to resume system operation immediately, the "-o elog" option performs update log replay without full file system restoration. If this option is specified and the update log data has been physically damaged, fsck_sfcfs(1m) terminates immediately without performing check and recovery; in this event, the file system cannot be mounted unless check and recovery are performed with fsck_sfcfs(1m). Mounting of such a file system should be attempted only after it is restored through a full check without update log replay, using the "-o nolog" option.

The following example repairs a file system with log replay.

# fsck -F sfcfs /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

The following example performs a full check on the file system and repairs it without log replay.

# fsck -F sfcfs -o nolog /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

See
For details about the fsck_sfcfs(1m) options, see fsck_sfcfs(1m).
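As an illustration of the "-o elog" option described above (same command form as the examples, with only the option changed), the following attempts update log replay only and terminates immediately, without check and recovery, if the update log is physically damaged:

# fsck -F sfcfs -o elog /dev/sfdsk/gfs01/rdsk/volume01 <Enter>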

Action to be taken when fsck terminates abnormally

The following describes available solutions for fsck_sfcfs(1m) abnormal termination.

Memory allocation request error

The following message is displayed if fsck_sfcfs(1m) fails in a memory allocation request for its internal table.

Can't allocate memory

Add a swap area.

File system configuration information acquisition failure

The following message is output if acquisition of the file system configuration information fails.

Can not connect to FsRM
other node information get error

Start the file system monitoring facility on all the nodes that share the GFS Shared File System. See "9.2.4 Starting sfcfrmd daemon".

File system partition configuration data error

This message indicates that the command terminated abnormally because a mismatch was found in the partition configuration data for the GFS Shared File System partitions.

Can't open <device-name> <errno>
setup: Volume information error
setup: fsck quit due to unrecoverable error!

Use sfcadm(1m) to recover the partition information. The following example shows how to restore the partition information on the "/dev/sfdsk/gfs01/rdsk/volume01" device.

For a single partition, perform the following operation:

# sfcadm -r -a /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

For a multi-partition configuration, use sfcinfo(1m) to check the partitions and then use sfcadm(1m) to perform recovery using the representative partition.

# sfcinfo /dev/sfdsk/gfs01/dsk/volume01 <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID special                             size  Type mount
   1 /dev/sfdsk/gfs01/dsk/volume01( )          META
   1 /dev/sfdsk/gfs01/dsk/volume01( )    5120  LOG
   1 /dev/sfdsk/gfs01/dsk/volume01( )          DATA
   1 /dev/sfdsk/gfs01/dsk/volume02( )          DATA
   1 /dev/sfdsk/gfs01/dsk/volume03( )          DATA
# sfcadm -r /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

See
In the output of sfcinfo(1m), the partition displayed first is the representative partition of the file system. See sfcinfo(1m).

Node on which fsck_sfcfs(1m) was executed is not a sharing node of the file system

The following message is displayed if the node on which fsck_sfcfs(1m) was executed is not a sharing node of the file system.

No node volume entry for localhost, file system access forbidden!

Execute fsck_sfcfs(1m) on a sharing node of the file system.

Irreparable file system destruction

The following messages indicate that the file system is irreparable because the GFS Shared File System data has been destroyed.

BAD SUPER BLOCK
BAD VOLUME INFORMATION
No Meta volume information available!
No Log volume information available!
Only found <num> data volume, total <num> in superblock!

If a backup of the file system is available, recover the file system from the backup.

Operation error

The following messages are displayed if there are problems with the parameters or the execution environment of fsck_sfcfs(1m).

not super user
duplicate option
filesystem lock failed. errno(<error-number>)
<device-name> is a mounted file system
<device-name> is not for sfcfs
Can't check <device-name>
Can't open /etc/vfstab
setmntent failed. errno(<error-number>)
fs_clean is not FSLOG. cannot log replay!

Confirm that the parameters and the execution user of the command are correct and that /etc/vfstab and /etc/mnttab exist.

File system repair is incomplete

The following message is displayed if the file system was not repaired completely.

***** PLEASE RERUN FSCK *****
<device-name>: UNEXPECTED INCONSISTENCY: RUN fsck MANUALLY.
Log Replay is failed.

Execute a full file system check using fsck_sfcfs(1m). The following example shows how to check the file system on the "/dev/sfdsk/gfs01/rdsk/volume01" device.

# fsck -F sfcfs -o nolog /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

Executed in non-global zone

The following message is output when the fsck_sfcfs(1m) command is executed in a non-global zone.

cannot be executed in non-global zone

The command cannot be used in non-global zones. For Solaris 10, only global zones can be used to operate GFS.

Collection of zone information fails

The following message is output when collection of zone setup information fails.

cannot get current zone information

When this message is output, check the system and wait until the other process is done, or expand the swap area or memory. Then execute the command again.

Other messages

When a message indicating another abnormality is output and fsck_sfcfs(1m) terminates abnormally, collect the diagnostic data with fjsnap(1m) and report the output message of fsck_sfcfs(1m) to your local Customer Support.

12.3 Extending a File System

The GFS Shared File System can increase the size of an existing file system without saving or restoring data. Use the sfcadd(1m) command to enlarge the file system. To expand the file system by adding file data partitions, be sure that the file system is unmounted.

See
For the execution procedure using Operation Management View, see "Changing the file system configuration (partition addition)". For the execution procedure using line commands, see "11.4 Change (partition addition)".

Note
If you plan to expand the file system, specify the maximum file data area size (the -o maxdsz=n argument of mkfs_sfcfs(1m)) when creating the file system, as illustrated in the sketch below. If the maximum data area size is not specified during file system creation, only the resources able to manage the total file data area size of the partitions specified at creation are reserved. Consequently, when the file system is expanded, the management resources might be insufficient. If the free file data area is not fragmented, the resources will not be insufficient; if the free area is heavily fragmented, part of it may become unusable.
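The following is a minimal sketch of reserving the maximum file data area size at creation time, combining -o suboptions in the style of the mkfs example in "Procedure 3" of the cluster application setup later in this chapter. The shared nodes sunny and moony and the value 102400 (intended as megabytes, about 100 gigabytes) are assumptions for illustration; confirm the unit and syntax against mkfs_sfcfs(1m).

# mkfs -F sfcfs -o maxdsz=102400,node=sunny,moony /dev/sfdsk/gfs01/rdsk/volume01 <Enter>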

Information
If managed resources are insufficient, the following message will be output to the console:

NOTICE: sfcfs_mds:3004: fsid: too much fragment (extent)

See
For corrective action for the above message, see "A.2.13 Information messages (MDS (sfcfsmg daemon))".

12.4 Displaying File System Information

Acquiring file system information

Information about a file system, including the usage status of disk blocks, the state of free disk blocks, and the number of files, can be acquired using df_sfcfs(1m).

- The following information is displayed for a meta-data area:
  - Use of i-nodes (number of i-nodes, number of free i-nodes)
  - Free extent status (maximum and unused leaves)
  - V-data usage status (maximum and unused V-data blocks)
- The following information is displayed for a file data area:
  - Number of all blocks and free blocks
- Only the number of assigned blocks is displayed for an update log area.

The following example shows how to display the status of the file system using df_sfcfs(1m).

# df -k /mnt <Enter>
Filesystem                     kbytes  used  avail  capacity  Mounted on
/dev/sfdsk/gfs01/dsk/volume01                          %      /mnt

# df -F sfcfs -o v /dev/sfdsk/gfs01/dsk/volume01 <Enter>
Filesystem:/mnt
                               inodes  free  vdata  free  leaf  free  Type
/dev/sfdsk/gfs01/dsk/volume01                                         META
                               Kbytes
/dev/sfdsk/gfs01/dsk/volume01                                         LOG
                               kbytes  used  free  %used
/dev/sfdsk/gfs01/dsk/volume01                     %                   DATA

See
For details about df_sfcfs(1m) options, see df_sfcfs(1m).

Displaying partition/node information

The partition information for the partition set configuring the GFS Shared File System can be displayed using sfcinfo(1m). Node information can also be displayed. The following example shows how to display the partition and node information for a file system using sfcinfo(1m).

# sfcinfo /dev/sfdsk/gfs01/dsk/volume01 <Enter>
FSID special                             size  Type mount
   1 /dev/sfdsk/gfs01/dsk/volume01( )          META
   1 /dev/sfdsk/gfs01/dsk/volume01( )    5116  LOG
   1 /dev/sfdsk/gfs01/dsk/volume01( )          DATA

# sfcinfo -n /dev/sfdsk/gfs01/dsk/volume01 <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID hostid    status  hostname
   1           RUN     mikan
   1 807e7937  RUN     karin

See
For details about sfcinfo(1m) options, see sfcinfo(1m).

Displaying file system management state

The management state of the GFS Shared File System can be checked using sfcrscinfo(1m). Check the information displayed by this command before mounting file systems, unmounting file systems, or changing the file system configuration. The following example shows how to check the management state using sfcrscinfo(1m).

# sfcrscinfo -am <Enter>
/dev/sfdsk/gfs01/dsk/volume01:
FSID  MDS/AC  STATE  S-STATE  RID-1  RID-2  RID-N  hostname
   2  MDS(P)  run    -                             shoga
   2  AC      run    -                             shoga
   2  MDS(S)  wait   -                             ichou
   2  AC      run    -                             ichou

The meaning of the sfcrscinfo(1m) output is as follows:

- MDS/AC
  - MDS(P): The primary MDS defined for the file system
  - MDS(S): The secondary MDS defined for the file system
  - AC: A node that shares the file system
- STATE/S-STATE (MDS)
  - run -: The primary MDS is running
  - wait -: The secondary MDS is waiting for failure recovery of the primary MDS
  - stop -: MDS stopped
  - none -: No relation with MDS
  - If S-STATE is other than "-", the MDS is in state transition.

- STATE/S-STATE (AC)
  - run -: The file system is mounted
  - stop -: The file system is unmounted
  - If S-STATE is other than "-", the AC is in state transition.

See
For details about sfcrscinfo(1m) options, see sfcrscinfo(1m).

12.5 How to set GFS Shared File System applications as cluster applications

When GFS Shared File System applications are set as cluster applications, only the applications are switched over; the GFS Shared File System itself remains mounted on all running nodes.

To set GFS Shared File System applications as cluster applications

To set GFS Shared File System applications as cluster applications, it is necessary to set up an RMS environment first.

- Registering an RMS host name in /etc/hosts on each node

Define each node where RMS is configured in /etc/hosts so that RMS can recognize the RMS host name, for example:

<IP address of sunny> sunny sunnyrms
<IP address of moony> moony moonyrms

- Setting the cluster.config file or .rhosts file on each node

In order to distribute the RMS configuration definition information to each node, the RMS wizard uses CF remote services or rcp(1). For this reason, the RMS host name must be set up in the cluster.config file, .rhosts, or hosts.equiv.

See
For details, see "General configuration procedure" of the "PRIMECLUSTER Reliant Monitor Services (RMS) with Wizard Tools Configuration and Administration Guide".

Notes on cluster application settings

The GFS Shared File System cannot be defined as an Fsystem resource of a cluster application. Moreover, the disk class to which the GDS volume used as the management partition or as the GFS Shared File System belongs cannot be defined as a Gds resource of a cluster application. If you want to use applications that use the GFS Shared File System as cluster applications, perform the procedure described in "Procedure flow chart for setting GFS Shared File System applications as cluster applications".

Mount information of the GFS Shared File System must be registered in /etc/vfstab, and "no" must be specified in the "mount at boot" field. Moreover, "noauto" must not be specified in the "mount options" field of the file system registered in /etc/vfstab, so that the file system is mounted automatically when the GFS Shared File System is activated.

Procedure flow chart for setting GFS Shared File System applications as cluster applications

You can set GFS Shared File System applications as cluster applications using the following setup flow chart:

Table 12.3 The flow chart for setting GFS Shared File System applications as cluster applications

Shared operation (to be executed on node A):
- Execution of automatic resource registration (See "Procedure 1")

Node A (active node):
- GDS volume creation (See "Procedure 2")
- Creating the GFS Shared File System management partition (See "Procedure 3")
- Node registration to the GFS Shared File System (See "Procedure 3")
- Creating the GFS Shared File System (See "Procedure 3")
- Adding the relevant GFS Shared File System to /etc/vfstab (See "Procedure 4")
- Setting cluster applications (See "Procedure 5"; see also the "PRIMECLUSTER Installation and Administration Guide" and the "PRIMECLUSTER Reliant Monitor Services (RMS) with Wizard Tools Configuration and Administration Guide")
- Start up RMS and check settings (See "Procedure 6")

Node B (standby node):
- Node registration to the GFS Shared File System (See "Procedure 3")
- Adding the relevant GFS Shared File System to /etc/vfstab (See "Procedure 4")
- Start up RMS and check settings (See "Procedure 6")

Procedure for setting GFS Shared File System applications as cluster applications

This section describes how to set GFS Shared File System applications as cluster applications after CIP settings are completed.

See
For details on the settings, see the "PRIMECLUSTER Installation and Administration Guide", the "PRIMECLUSTER Reliant Monitor Services (RMS) with Wizard Tools Configuration and Administration Guide", and the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide".

Procedure 1. Execution of automatic resource registration (common operation)

If automatic resource registration of disk devices has not been carried out yet, execute the following command on one node to register the disk devices as cluster resources.

# /etc/opt/FJSVcluster/bin/clautoconfig -r -n <Enter>

Procedure 2. GDS volume creation (node A (active node), node B (standby node))

Create the GDS volume using the following method.

GFS Shared File Systems require GDS volumes for management partitions. The disk class to which the GDS volume used as the management partition belongs must be different from that of the GDS volume used as a Gds resource of a cluster application.

The following explanation assumes the node A (active node) host name to be sunny and the node B (standby node) host name to be moony.

[When a mirror volume is to be created]

- Register the disks in the disk classes on sunny by using the sdxdisk command. In a cluster system, specify the node names as the scope of the -a option and then specify shared as the class type.

sunny# sdxdisk -M -c gfs -d c0t1d0=DISK1,c0t2d0=DISK2 -a scope=sunny:moony,type=shared <Enter>
sunny# sdxdisk -M -c gfs01 -d c0t3d0=DISK3,c0t4d0=DISK4 -a scope=sunny:moony,type=shared <Enter>

- On sunny, connect the disks to the mirror disk groups with the sdxdisk command.

sunny# sdxdisk -C -c gfs -g grp0001 -d DISK1,DISK2 <Enter>
sunny# sdxdisk -C -c gfs01 -g grp0002 -d DISK3,DISK4 <Enter>

- On sunny, create the GDS volumes with the sdxvolume command. Note that a volume of at least 40 megabytes is necessary for the management partition of the GFS Shared File System.

sunny# sdxvolume -M -c gfs -g grp0001 -v control -s <volume-size> <Enter>
sunny# sdxvolume -M -c gfs01 -g grp0002 -v volume01 -s <volume-size> <Enter>

- On moony, activate the GDS volumes with the sdxvolume command.

moony# sdxvolume -N -c gfs -v control <Enter>
moony# sdxvolume -N -c gfs01 -v volume01 <Enter>

[When a single volume is to be created]

- On sunny, register the disks in the disk classes with the sdxdisk command. In a cluster system, specify the node names as the scope of the -a option and then specify shared as the class type.

sunny# sdxdisk -M -c gfs -d c0t1d0=DISK1:single -a scope=sunny:moony,type=shared <Enter>
sunny# sdxdisk -M -c gfs01 -d c0t2d0=DISK2:single -a scope=sunny:moony,type=shared <Enter>

- On sunny, create the GDS volumes with the sdxvolume command. Note that a volume of at least 40 megabytes is necessary for the management partition of the GFS Shared File System.

sunny# sdxvolume -M -c gfs -d DISK1 -v control -s <volume-size> <Enter>
sunny# sdxvolume -M -c gfs01 -d DISK2 -v volume01 -s <volume-size> <Enter>

- On moony, activate the GDS volumes with the sdxvolume command.

moony# sdxvolume -N -c gfs -v control <Enter>
moony# sdxvolume -N -c gfs01 -v volume01 <Enter>

Procedure 3. GFS Shared File System creation (node A (active node), node B (standby node))

Create the GFS Shared File System on either one of the nodes. Use mkfs_sfcfs(1m) to create the GFS Shared File System. When the GFS Shared File System is created for the first time after installation, activation of the GFS Shared File System is necessary before file system creation.

- Create a management partition for the GFS Shared File System on either one of the nodes.

# sfcsetup -c /dev/sfdsk/gfs/rdsk/control <Enter>

- Register node information in the management partition of the GFS Shared File System on each node.

sunny# sfcsetup -a /dev/sfdsk/gfs/rdsk/control <Enter>
moony# sfcsetup -a /dev/sfdsk/gfs/rdsk/control <Enter>

- Activate the GFS Shared File System on each node.

sunny# sfcfrmstart <Enter>
moony# sfcfrmstart <Enter>

- Create the GFS Shared File System on either one of the nodes.

# mkfs -F sfcfs -o node=sunny,moony /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

Procedure 4. Adding the relevant GFS Shared File System to /etc/vfstab (node A (active node), node B (standby node))

Add the mount information for the relevant GFS Shared File System to /etc/vfstab on each node. Specify "no" in the "mount at boot" field, and do not specify "noauto" in the "mount options" field (see "Notes on cluster application settings" above).

/dev/sfdsk/gfs01/dsk/volume01 /dev/sfdsk/gfs01/rdsk/volume01 /sfcfs sfcfs - no -

Procedure 5. Setting the definition of switchover of the RMS application in the RMS configuration (node A (active node), node B (standby node))

Set up the RMS configuration by using the userApplication Configuration Wizard.

See
For details on the setup, refer to the "PRIMECLUSTER Configuration and Administration Guide" and the "PRIMECLUSTER RMS Configuration and Administration Guide."

Note
For the GFS Shared File System, the following measures are necessary so that the GDS volume is always used in the active state.

- For cluster application settings, a GDS resource "Gds:Global-Disk-Services" must not be registered for the GDS volume of the GFS Shared File System.
- For cluster application settings, an Fsystem resource "LocalFileSystems" must not be registered for the GFS Shared File System.
- Do not execute /usr/opt/reliant/bin/hvgdsetup for the GDS volume.

Procedure 6. Confirming settings (node A (active node), node B (standby node))

After mounting the GFS Shared File System on each node, start RMS and then confirm whether the settings of the /etc/vfstab file and the RMS application are proper, paying attention to the following points:

- Whether the GFS Shared File System can be mounted on both nodes according to the /etc/vfstab file descriptions.
- Whether the RMS application switches over to the standby node at failover of the active node.

Note
If mounting of the GFS Shared File System fails, the following may be the causes (apply the indicated solutions):

- The type of the file system specified in the /etc/vfstab file is incorrect.
  Solution: Correct the /etc/vfstab file.
- The GDS volume is not ACTIVE.
  Solution: Activate the GDS volume.

Setup flow chart of adding file data partitions to GFS Shared File Systems of cluster applications

The following setup flow chart shows how to add file data partitions to GFS Shared File Systems of cluster applications.

Table 12.4 The flow chart of adding file data partitions to GFS Shared File Systems of cluster applications

Node A (active node):
- Stop RMS. (See "Procedure 1")
- Add a shared disk device as the GDS volume. (See "Procedure 2")
- Add a file data partition to the GFS Shared File System. (See "Procedure 3")
- Start RMS. (See "Procedure 4")

Node B (standby node):
- Stop RMS. (See "Procedure 1")
- Start RMS. (See "Procedure 4")

Setup procedure of adding file data partitions to GFS Shared File Systems of cluster applications

The following describes the setup procedure for adding file data partitions to GFS Shared File Systems of cluster applications.

See
For details on the settings, see the "PRIMECLUSTER Installation and Administration Guide", the "PRIMECLUSTER Reliant Monitor Services (RMS) with Wizard Tools Configuration and Administration Guide", and the "PRIMECLUSTER Global Disk Services Configuration and Administration Guide".

Procedure 1. Stopping RMS (node A (active node), node B (standby node))

If RMS is activated, stop RMS on each node.

Procedure 2. GDS volume creation (node A (active node), node B (standby node))

See "Procedure 2. GDS volume creation (node A (active node), node B (standby node))" above.

Procedure 3. Adding a file data partition to a GFS Shared File System (node A (active node))

Add a file data partition to the relevant GFS Shared File System from the active node with the sfcadd(1m) command.

sunny# sfcadd -D /dev/sfdsk/gfs01/rdsk/volume02 /dev/sfdsk/gfs01/rdsk/volume01 <Enter>
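After sfcadd(1m) completes, the partition addition can be verified with sfcinfo(1m), as in "Displaying partition/node information" earlier in this chapter. This is a minimal sketch; the output should now list /dev/sfdsk/gfs01/dsk/volume02 as an additional DATA partition.

sunny# sfcinfo /dev/sfdsk/gfs01/dsk/volume01 <Enter>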

Procedure 4. Starting up RMS (node A (active node), node B (standby node))

Activate RMS on each node.

12.6 How to start up CF from GUI when a GFS Shared File System is used

This section describes how to start up CF from the GUI after stopping it on all the nodes where a GFS Shared File System is used.

The two start modes of the sfcfrmd daemon of the GFS Shared File System are wait (default) and wait_bg. If wait is selected, starting the GFS Shared File System will fail because a quorum does not exist until CF is activated on all the nodes. If you need to start CF from the GUI while CF is stopped on all the nodes, follow the steps below.

See
For information on how to start up CF, see "Starting CF" of the "PRIMECLUSTER Cluster Foundation (CF) Configuration and Administration Guide".

1. Select the <Cluster Admin> button menu on the Web-Based Admin View operation menu. Select the node where CF will be started, and click the <OK> button.

2. Click the <Load driver> button to activate the CF driver.

3. The [Start CF] window will pop up. Click the <OK> button to start up CF.

4. Since CF is not activated on all the nodes, GFS startup fails and the [Error] window will pop up. Click the <No> button to cancel the GFS startup processing.

5. When the GFS startup is cancelled, the [Error] window will pop up again. Click the <No> button to cancel the script processing.

6. Check that all the services before GFS have been activated, then click the <OK> button.

7. Repeat steps 1 to 6 on each cluster node to activate all the services before GFS.

Note
On the node where CF is activated last, GFS startup will succeed because CF has then been activated on all the nodes.

8. On each node that cancelled the GFS startup processing, stop CF and then restart CF.

See
For information on how to stop CF, see "Stopping CF" of the "PRIMECLUSTER Cluster Foundation (CF) Configuration and Administration Guide".

9. After executing step 8, activate all the services after GFS on all the nodes where the GFS startup was cancelled.
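The repeated cancellations above are a consequence of the wait start mode described at the beginning of this section. If CF must regularly be started from the GUI with all nodes stopped, registering the wait_bg start mode can avoid the failures. The following is a minimal sketch; sfcsetup(1m) is the command this guide lists for registering and displaying the sfcfrmd startup mode, but the -m option shown here is an assumption, so verify the syntax against sfcsetup(1m).

# sfcsetup -m <Enter>
wait
# sfcsetup -m wait_bg <Enter>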

Chapter 13 File System Backing-up and Restoring

This chapter describes how to back up and restore data in the GFS Shared File System.

13.1 Type of Backing-up and Restoring

The following methods are available for backing up and restoring the GFS Shared File System:

- Backing-up and restoring file by file
  - General-purpose backing-up tools not dependent on the file system structure (NetWorker, ArcServe/OPEN, etc.)
  - Standard Solaris OS commands cpio(1) and tar(1)
- Backing-up and restoring the entire file system
  - Standard Solaris OS command dd(1m)

Note
Commands that depend on the file system structure, such as ufsdump(1m) for UFS and vxdump(1m) for VxFS, cannot be used to back up a GFS Shared File System.

This chapter describes backing-up and restoring data of the GFS Shared File System using the standard Solaris OS commands cpio(1), tar(1), and dd(1m).

13.2 Backing-up by Standard Solaris OS commands

13.2.1 Backing-up file by file

To back up a single file or all files under a directory while the file system is mounted, use cpio(1) or tar(1).

See
For details about cpio(1) and tar(1), see the "Solaris X Reference Manual Collection".

The following are examples of backing-up to tape using cpio(1) and tar(1).

- Backing-up method with cpio(1)

# cd /userdata <Enter>
# find . -depth -print | cpio -oc > /dev/rmt/0 <Enter>

- Backing-up method with tar(1)

# cd /userdata <Enter>
# tar cvf /dev/rmt/0 . <Enter>

Note
Tapes used for data backup should be labeled to identify the backing-up method. cron(1m) can be used to schedule backing-up startup.
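As a hedged illustration of the cron(1m) note above, the following crontab(1) entry runs the cpio(1) backup shown earlier every day at 2:00 a.m. The schedule and the reuse of /dev/rmt/0 are assumptions for illustration only; a real deployment also needs tape handling around the job.

# crontab -e <Enter>
0 2 * * * cd /userdata && find . -depth -print | cpio -oc > /dev/rmt/0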

13.2.2 Backing-up entire file system

To back up the GFS Shared File System in units of file systems, use dd(1m). The following describes the backing-up procedure using dd(1m).

See
For details about dd(1m), see the "Solaris X Reference Manual Collection".

Entire-file-system backing-up is faster than file-by-file backing-up. However, a file system backed up with this method cannot be restored on a file-by-file basis.

Before executing the backup, use the following procedure to check the partition information of the desired GFS Shared File System. Estimate the media capacity and the number of media required for the backup, and then unmount the GFS Shared File System.

For a single partition configuration

Use sfcinfo(1m) to display the partition information and check that the file system has only a single partition (the special file names are all the same) and that the capacity of the tape is sufficient. The size of the partition can be estimated by adding 1 megabyte to the sum of the values displayed in the size field (in kilobytes); for example, if the size fields total 133056 kilobytes, allow at least 134080 kilobytes of media capacity.

Example: In the following example, a file system with a single partition configuration is backed up.

# sfcinfo /dev/sfdsk/gfs01/dsk/volume01 <Enter>
FSID special                             size  Type mount
   1 /dev/sfdsk/gfs01/dsk/volume01( )          META
   1 /dev/sfdsk/gfs01/dsk/volume01( )    5116  LOG
   1 /dev/sfdsk/gfs01/dsk/volume01( )          DATA
# dd if=/dev/sfdsk/gfs01/rdsk/volume01 of=/dev/rmt/0 bs=1024k <Enter>

For a multi-partition configuration

dd(1m) can handle only one partition at a time. For a multi-partition configuration, therefore, the partitions must be backed up one at a time. Use sfcinfo(1m) to check the partition configuration, then back up each partition with the same procedure as used for a single partition configuration.

Example: In the following example, a file system with two partitions (with a file data area added) is backed up.

# sfcinfo /dev/sfdsk/gfs01/dsk/volume01 <Enter>
FSID special                             size  Type mount
   1 /dev/sfdsk/gfs01/dsk/volume01( )          META
   1 /dev/sfdsk/gfs01/dsk/volume01( )    5116  LOG
   1 /dev/sfdsk/gfs01/dsk/volume01( )          DATA
   1 /dev/sfdsk/gfs01/dsk/volume02( )          DATA
# dd if=/dev/sfdsk/gfs01/rdsk/volume01 of=/dev/rmt/0 bs=1024k <Enter> -> Tape 1
Tape change
# dd if=/dev/sfdsk/gfs01/rdsk/volume02 of=/dev/rmt/0 bs=1024k <Enter> -> Tape 2

Note
- All the partitions making up the desired GFS Shared File System must be backed up. Avoid backing up and storing only some of the partitions.
- The tapes used for backups must be labeled so that the backed-up partitions can be identified. They must then be stored as a set.

Note
dd(1m) does not support multi-tape volumes. If the file system is too large to fit on a single tape, the data needs to be divided and backed up. In such cases, set bs (block length) and count (number of blocks), and increase iseek (input-side offset, in blocks) by the value of count for each tape.

Example: In the following example, a partition is backed up in 1 gigabyte units.

# dd if=/dev/sfdsk/gfs01/rdsk/volume01 of=/dev/rmt/0 bs=1024k count=1024 <Enter> -> Tape 1
Tape change
# dd if=/dev/sfdsk/gfs01/rdsk/volume01 of=/dev/rmt/0 bs=1024k count=1024 iseek=1024 <Enter> -> Tape 2
Tape change
# dd if=/dev/sfdsk/gfs01/rdsk/volume01 of=/dev/rmt/0 bs=1024k count=1024 iseek=2048 <Enter> -> Tape 3
Tape change
# dd if=/dev/sfdsk/gfs01/rdsk/volume01 of=/dev/rmt/0 bs=1024k count=1024 iseek=3072 <Enter> -> Tape 4

13.3 Restoring by Standard Solaris OS commands

The following describes the procedure for restoring a file system from the backup tapes prepared as described earlier.

Two restoring methods are available:

- Restoring file by file using the appropriate standard Solaris OS commands
- Restoring by file system using dd(1m)

13.3.1 File-by-file restoring

Use cpio(1) or tar(1) to restore files from the backup tape to the disk.

See
For details about cpio(1) and tar(1), see the "Solaris X Reference Manual Collection".

The following are examples of restoring from a tape.

- Method for restoring using cpio(1)

# cd /userdata <Enter>
# cpio -icdm < /dev/rmt/0 <Enter>

- Method for restoring using tar(1)

# cd /userdata <Enter>
# tar xvf /dev/rmt/0 <Enter>

Restoring from a backup tape must be performed using the same command that was used for backing-up.

13.3.2 Entire-file-system restoring

Restore each partition saved on a backup tape onto the disk using dd(1m).

Note
The size of the partition at the restoration destination should be equal to the size of the partition of the backup source. If it is not equal, the execution of dd(1m) succeeds, but the partition at the restoration destination cannot be used as the GFS Shared File System.

Before executing the restoration, unmount the desired GFS Shared File System on all the nodes that use the file system.

For a single partition configuration

Example: In the following example, a file system with a single partition configuration is restored.

# dd if=/dev/rmt/0 of=/dev/sfdsk/gfs01/rdsk/volume01 bs=1024k <Enter>

For a multi-partition configuration

Example: In the following example, a file system with two partitions (with a file data area added) is restored.

# dd if=/dev/rmt/0 of=/dev/sfdsk/gfs01/rdsk/volume01 bs=1024k <Enter> <- Tape 1
Tape change
# dd if=/dev/rmt/0 of=/dev/sfdsk/gfs01/rdsk/volume02 bs=1024k <Enter> <- Tape 2

dd(1m) does not support multi-tape volumes. To restore a file system backed up as separate blocks, specify the same values for the bs and count parameters as were specified for the backup. For oseek, specify the same value that was specified for iseek.

Example: In the following example, a file system that was backed up in 1 gigabyte units is restored.

# dd if=/dev/rmt/0 of=/dev/sfdsk/gfs01/rdsk/volume01 bs=1024k count=1024 <Enter> <- Tape 1
Tape change
# dd if=/dev/rmt/0 of=/dev/sfdsk/gfs01/rdsk/volume01 bs=1024k count=1024 oseek=1024 <Enter> <- Tape 2
Tape change
# dd if=/dev/rmt/0 of=/dev/sfdsk/gfs01/rdsk/volume01 bs=1024k count=1024 oseek=2048 <Enter> <- Tape 3
Tape change
# dd if=/dev/rmt/0 of=/dev/sfdsk/gfs01/rdsk/volume01 bs=1024k count=1024 oseek=3072 <Enter> <- Tape 4

13.4 Set up after Restoration

13.4.1 Resetting the partition information

Management information needs to be changed only when the file system is restored to a partition different from the backup source, because the partition information recorded in the medium then no longer matches the actual partition configuration. No action is required when the file system is restored to exactly the same partition as the backup source.

If the GFS Shared File System is restored to a partition different from the backup source with the dd(1m) command, use sfcadm(1m) to reset the partition information. The procedure differs depending on whether the partition at the restoration destination already contains a GFS Shared File System. Each procedure is shown below.

- When restoring the GFS Shared File System to an unused partition

The following shows how to reset the partition information with sfcadm(1m) after restoring a three-partition GFS Shared File System to partitions (/dev/sfdsk/gfs99/rdsk/volume01, /dev/sfdsk/gfs99/rdsk/volume02, /dev/sfdsk/gfs99/rdsk/volume03) different from the creation source. Note that the representative partition is /dev/sfdsk/gfs99/rdsk/volume01, the port name is sfcfs-1, and the shared nodes are host01 and host02.

# sfcadm -m host01,host02 -g host01,host02 -p sfcfs-1,sfcfs-1 /dev/sfdsk/gfs99/rdsk/volume01,/dev/sfdsk/gfs99/rdsk/volume02,/dev/sfdsk/gfs99/rdsk/volume03 <Enter>

- When restoring the GFS Shared File System to a partition in use

You can set the partition information by executing sfcadm(1m) after restoring the file system that consists of three partitions to the partitions /dev/sfdsk/gfs99/rdsk/volume01, /dev/sfdsk/gfs99/rdsk/volume02, and /dev/sfdsk/gfs99/rdsk/volume03, in which a file system of the same configuration already exists. The representative partition is /dev/sfdsk/gfs99/rdsk/volume01, and the port name and shared nodes are unchanged.

# sfcadm /dev/sfdsk/gfs99/rdsk/volume01,/dev/sfdsk/gfs99/rdsk/volume02,/dev/sfdsk/gfs99/rdsk/volume03 <Enter>

Note that the file system creation state of the restoration destination can be confirmed with sfcinfo(1m).

See
For details on sfcinfo(1m), see sfcinfo(1m). For details on sfcadm(1m), see sfcadm(1m).

Note
If the nodes sharing the file system differ before and after the backup or restoration, change the sharing node settings.

13.4.2 Repairing the file system

If a mounted file system is backed up in units of file systems, file access may occur while the backup is being performed, so that an inconsistency arises in the backup content. To repair this state, the administrator must execute a file system consistency check and repair with fsck_sfcfs(1m). If there is no inconsistency, fsck_sfcfs(1m) completes in several seconds to one minute.

An example of file system repair is shown below. The representative partition is /dev/sfdsk/gfs99/rdsk/volume01. Before executing the command, be sure to check that the file system is not mounted on any node.

# fsck -F sfcfs -y /dev/sfdsk/gfs99/rdsk/volume01 <Enter>

See
For details on fsck_sfcfs(1m), see fsck_sfcfs(1m).
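After the repair completes, the file system can be mounted again on all the nodes. The following is a minimal sketch using sfcmntgl(1m), the command this guide lists for mounting the GFS Shared File System on all nodes; it assumes the mount information is already registered in /etc/vfstab, and the operand form should be verified against sfcmntgl(1m).

# sfcmntgl /dev/sfdsk/gfs99/rdsk/volume01 <Enter>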

Chapter 14 Tuning

This chapter describes how to use a variety of utilities to optimize and make effective use of the GFS Shared File System.

14.1 Tuning Parameters

This section describes the tuning parameters of the MDS (Meta Data Server) of the GFS Shared File System.

14.1.1 Amount of Cache

It is possible to tune the following items:

- Amount of extent-based management information to be cached in memory (SFCFS_NCACHE_EXTENT)
- Amount of directory information to be cached in memory (SFCFS_NCACHE_DIRECTORY)
- Amount of i-nodes on the disk to be cached in memory (SFCFS_NCACHE_INODE)

Areas of the following sizes are reserved in the process space of the MDS for each file system:

- Cache of extent-based management information: specified value of SFCFS_NCACHE_EXTENT x 1.4 kilobytes
- Cache of directory information: specified value of SFCFS_NCACHE_DIRECTORY x 1.4 kilobytes
- Cache of i-nodes: specified value of SFCFS_NCACHE_INODE x 4.4 kilobytes

The following shows the default value and the amount of reserved memory for each tuning parameter.

Table 14.1 Default value and amount of reserved memory for tuning parameters

Tuning parameter name     Default value   Amount of reserved memory (megabytes)
SFCFS_NCACHE_EXTENT
SFCFS_NCACHE_DIRECTORY
SFCFS_NCACHE_INODE

Specify the tuning parameters with the -o option of sfcadm(1m). The following shows a setting example.

(Example)

# sfcadm -o SFCFS_NCACHE_EXTENT=4096 /dev/sfdsk/gfs01/rdsk/volume01 <Enter>
# sfcadm -o SFCFS_NCACHE_DIRECTORY=20480 /dev/sfdsk/gfs01/rdsk/volume01 <Enter>
# sfcadm -o SFCFS_NCACHE_INODE=5120 /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

Check the specified values with the -e option of sfcinfo(1m). The following shows an example of checking the values.

(Example)

# sfcinfo -e /dev/sfdsk/gfs01/dsk/volume01 | grep SFCFS_NCACHE <Enter>
SFCFS_NCACHE_EXTENT=4096
SFCFS_NCACHE_DIRECTORY=20480
SFCFS_NCACHE_INODE=5120

If the size of a directory exceeds the directory cache size, the performance of file creation or reference in that directory will fall. When a large directory is created, we strongly recommend specifying a larger value for SFCFS_NCACHE_DIRECTORY.

Also, when tuning, execute sfcstat(1m), which reports statistics on the GFS Shared File System, with the -m option, and check the cache-hit ratio of the meta-cache that corresponds to the tuning parameter (a sample invocation is sketched after 14.1.2). The cache-hit ratio is calculated from the number of accesses and the number of cache hits. If the cache-hit ratio is low, consider specifying a larger value for the tuning parameter.

The following shows the correspondence between each tuning parameter and the meta-cache type displayed by the -m option of sfcstat(1m).

Table 14.2 Correspondence of tuning parameter and meta-cache type displayed by -m option of sfcstat(1m)

Tuning parameter name     Meta-cache type
SFCFS_NCACHE_EXTENT       NODE, LEAF
SFCFS_NCACHE_DIRECTORY    DIR
SFCFS_NCACHE_INODE        DINODE

14.1.2 Communication timeout value

It is possible to tune the following item:

- Communication timeout value of the GFS Shared File System (CON_TIMEOUT)

CON_TIMEOUT specifies the time in seconds from when a communication response is lost until the partner node is judged to be in an abnormal state during node monitoring by communication. This value can be set for every file system. The default communication timeout value is 180 seconds. It is not usually necessary to change the communication timeout value.

Specify the tuning parameter with the -o option of sfcadm(1m). The following shows a setting example.

(Example)

# sfcadm -o CON_TIMEOUT=180 /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

Check the specified value with the -e option of sfcinfo(1m). The following shows an example of checking the value.

(Example)

# sfcinfo -e /dev/sfdsk/gfs01/dsk/volume01 | grep CON_TIMEOUT <Enter>
CON_TIMEOUT=240

If high system load is expected, set a larger value for CON_TIMEOUT. Set a smaller value for CON_TIMEOUT when only a few file systems are used and you want to improve the response time of the error returned from a system call at the time of blockade caused by a communication abnormality or the like. When the number of file systems in use is 10 or less and a small CON_TIMEOUT value is set, the value should be 60 seconds or more.
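The following is the sample invocation referred to in 14.1.1. It is a hedged sketch only: the mount point /mnt is an assumption, and the exact operands and output format should be checked against sfcstat(1m). In the -m output, look for the hit ratios of the NODE, LEAF, DIR, and DINODE meta-caches listed in Table 14.2.

# sfcstat -m /mnt <Enter>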

14.1.3 Enabling and disabling the Nagle algorithm in the communications between MDS and AC

Note
When using the function to enable and disable the Nagle algorithm in the communications between MDS and AC, apply the following urgent fix:

<urgent-fix-ID> or later

It is possible to tune the following item:

- Enabling and disabling the Nagle algorithm (GFS_TCP_NAGLE)

For GFS_TCP_NAGLE, specify whether to use (enable) or not use (disable) the Nagle algorithm in the communications between MDS and AC. Set the value to 1 to enable it, and set the value to 0 to disable it. This value can be set for every file system. The Nagle algorithm is enabled by default.

Specify the tuning parameter with the -o option of sfcadm(1m). The following is an example of disabling the Nagle algorithm.

(Example)

# sfcadm -o GFS_TCP_NAGLE=0 /dev/sfdsk/gfs01/rdsk/volume01 <Enter>

Check the specified value with the -e option of sfcinfo(1m). The following shows an example of checking the value.

(Example)

# sfcinfo -e /dev/sfdsk/gfs01/dsk/volume01 | grep GFS_TCP_NAGLE <Enter>
GFS_TCP_NAGLE=0

By disabling the Nagle algorithm, you can reduce the response time when accessing the GFS Shared File System from a node where the primary MDS is not running. However, the network load for the communications between MDS and AC may increase. The network load status can be checked with netstat(1m).
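As a hedged sketch of that last check, netstat(1m) can report per-interface packet counters at a fixed interval; the 5-second interval below is an arbitrary choice for illustration.

# netstat -i 5 <Enter>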

Chapter 15 Migration to the GFS Shared File System

This chapter describes how to move existing files created on a file system such as UFS to the GFS Shared File System.

15.1 Moving the existing files

When moving existing files that were created on a file system such as UFS to the GFS Shared File System, be aware of the following important information.

Note
- The limit values of the quota function of the original file system cannot be moved to the GFS Shared File System.
- The ACL settings of the original file system cannot be moved to the GFS Shared File System. For an archive created with tar(1) or cpio(1) that includes ACL settings, ACL-related errors will be displayed, but the file data can be restored.
- When files with holes are transferred to the GFS Shared File System, the corresponding area is reserved on the storage device, and such files no longer have holes.

File system migration first requires that the files and directories be backed up to another location (for example, on a storage device or another file system). Then a new GFS Shared File System should be configured, and the files and directories restored onto it. Take the following procedure:

1. Check that the file system is not being used.
2. Save all of the files on the UFS file system to a backup device or another file system using a command such as tar(1) or cpio(1).
3. Create the GFS Shared File System, as described in "10.2 Creation" or "11.2 Creation".
4. Restore the saved files onto the created GFS Shared File System.

The following example shows how to move the files of the UFS file system mounted on /mnt to a GFS Shared File System on the same device, after saving them temporarily in /data.

1. Back up the files.

# cd /mnt <Enter>
# tar cvpf - . > /data/backup.tar <Enter>
# cd / <Enter>
# umount /mnt <Enter>

2. Configure the GFS Shared File System.

Create a GFS Shared File System and mount it according to "10.2 Creation" or "11.2 Creation". In this example, the mount point is /sfcfs.

3. Restore the files.

# cd /sfcfs <Enter>
# tar xvf /data/backup.tar <Enter>

Figure 15.1 Migration from UFS


More information

OPERATING SYSTEMS II DPL. ING. CIPRIAN PUNGILĂ, PHD.

OPERATING SYSTEMS II DPL. ING. CIPRIAN PUNGILĂ, PHD. OPERATING SYSTEMS II DPL. ING. CIPRIAN PUNGILĂ, PHD. File System Implementation FILES. DIRECTORIES (FOLDERS). FILE SYSTEM PROTECTION. B I B L I O G R A P H Y 1. S I L B E R S C H AT Z, G A L V I N, A N

More information

CHAPTER 11: IMPLEMENTING FILE SYSTEMS (COMPACT) By I-Chen Lin Textbook: Operating System Concepts 9th Ed.

CHAPTER 11: IMPLEMENTING FILE SYSTEMS (COMPACT) By I-Chen Lin Textbook: Operating System Concepts 9th Ed. CHAPTER 11: IMPLEMENTING FILE SYSTEMS (COMPACT) By I-Chen Lin Textbook: Operating System Concepts 9th Ed. File-System Structure File structure Logical storage unit Collection of related information File

More information

EXPRESSCLUSTER X 3.3 for Windows

EXPRESSCLUSTER X 3.3 for Windows EXPRESSCLUSTER X 3.3 for Windows Installation and Configuration Guide 04/10/2017 5th Edition Revision History Edition Revised Date Description 1st 02/09/2015 New manual 2nd 04/20/2015 Corresponds to the

More information

Solaris OE. Softek AdvancedCopy Manager User's Guide 10.2

Solaris OE. Softek AdvancedCopy Manager User's Guide 10.2 Solaris OE Softek AdvancedCopy Manager User's Guide 10.2 Preface ++ Purpose This manual explains how to operate Web-GUI with Softek AdvancedCopy Manager. ++ Reader This manual is intended for system managers

More information

Chapter 10: File System Implementation

Chapter 10: File System Implementation Chapter 10: File System Implementation Chapter 10: File System Implementation File-System Structure" File-System Implementation " Directory Implementation" Allocation Methods" Free-Space Management " Efficiency

More information

Chapter 12: File System Implementation. Operating System Concepts 9 th Edition

Chapter 12: File System Implementation. Operating System Concepts 9 th Edition Chapter 12: File System Implementation Silberschatz, Galvin and Gagne 2013 Chapter 12: File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods

More information

This course is for those wanting to learn basic to intermediate topics in Solaris 10 system administration.

This course is for those wanting to learn basic to intermediate topics in Solaris 10 system administration. Course Summary Description This course teaches basic to intermediate topics in Solaris 10 system administration. The operating system will be Oracle Solaris 10 (SunOS 5.10 Release 1/13 U11). Objectives

More information

Chapter 12: File System Implementation

Chapter 12: File System Implementation Chapter 12: File System Implementation Silberschatz, Galvin and Gagne 2013 Chapter 12: File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods

More information

Chapter 11: File System Implementation. Objectives

Chapter 11: File System Implementation. Objectives Chapter 11: File System Implementation Objectives To describe the details of implementing local file systems and directory structures To describe the implementation of remote file systems To discuss block

More information

Chapter 10: File System. Operating System Concepts 9 th Edition

Chapter 10: File System. Operating System Concepts 9 th Edition Chapter 10: File System Silberschatz, Galvin and Gagne 2013 Chapter 10: File System File Concept Access Methods Disk and Directory Structure File-System Mounting File Sharing Protection 10.2 Silberschatz,

More information

PRIMECLUSTER. Reliant Monitor Services (RMS) with Wizard Tools (Linux, Solaris ) Configuration and Administration

PRIMECLUSTER. Reliant Monitor Services (RMS) with Wizard Tools (Linux, Solaris ) Configuration and Administration PRIMECLUSTER Reliant Monitor Services (RMS) with Wizard Tools (Linux, Solaris ) Configuration and Administration Edition June 2009 Comments Suggestions Corrections The User Documentation Department would

More information

ExpressCluster X 3.0 for Windows

ExpressCluster X 3.0 for Windows ExpressCluster X 3.0 for Windows Installation and Configuration Guide 10/01/2010 First Edition Revision History Edition Revised Date Description First 10/01/2010 New manual Copyright NEC Corporation 2010.

More information

Thin Provisioning User s Manual

Thin Provisioning User s Manual NEC Storage Software Thin Provisioning User s Manual IS044-16E NEC Corporation 2009-2017 No part of the contents of this book may be reproduced or transmitted in any form without permission of NEC Corporation.

More information

PRIMECLUSTER. Edition April 2003

PRIMECLUSTER. Edition April 2003 PRIMECLUSTER PRIMECLUSTER Reliant Monitor Services (RMS) Configuration and Administration Guide (Solaris) Redakteur Fujitsu Siemens Computers GmbH Paderborn 33094 Paderborn e-mail: email: manuals@fujitsu-siemens.com

More information

ETERNUS SF AdvancedCopy Manager V13.2 Operator's Guide (Linux)

ETERNUS SF AdvancedCopy Manager V13.2 Operator's Guide (Linux) J2UZ-8170-03ENZ0(A) ETERNUS SF AdvancedCopy Manager V13.2 Operator's Guide (Linux) ii Preface ++ Purpose This manual describes the operations available on ETERNUS SF AdvancedCopy Manager. ++ Intended Readers

More information

Sun Certified System Administrator for the Solaris 10 OS Bootcamp

Sun Certified System Administrator for the Solaris 10 OS Bootcamp Sun Certified System Administrator for the Solaris 10 OS Bootcamp Student Guide - Volume 3 SA-997 Rev A (SA-202-S10-C.2) D63735GC10 Edition 1.0 D64505 Copyright 2008, 2010, Oracle and/or its affiliates.

More information

V. File System. SGG9: chapter 11. Files, directories, sharing FS layers, partitions, allocations, free space. TDIU11: Operating Systems

V. File System. SGG9: chapter 11. Files, directories, sharing FS layers, partitions, allocations, free space. TDIU11: Operating Systems V. File System SGG9: chapter 11 Files, directories, sharing FS layers, partitions, allocations, free space TDIU11: Operating Systems Ahmed Rezine, Linköping University Copyright Notice: The lecture notes

More information

Microsoft Windows NT Microsoft Windows SystemWalker/StorageMGR. User's Guide V10.0L10

Microsoft Windows NT Microsoft Windows SystemWalker/StorageMGR. User's Guide V10.0L10 Microsoft Windows NT Microsoft Windows 2000 SystemWalker/StorageMGR User's Guide V10.0L10 Preface ++ Purpose This manual explains how to operate Web-GUI with SystemWalker/StorageMGR. SystemWalker is a

More information

PRIMECLUSTER PRIMECLUSTER 4.2A20

PRIMECLUSTER PRIMECLUSTER 4.2A20 PRIMECLUSTER PRIMECLUSTER 4.2A20 Scalable Internet Services (SIS) (Solaris, Linux ) Configuration and Administration Fujitsu Technology Solutions GmbH Paderborn 33094 Paderborn e-mail: email: manuals@ts.fujitsu.com

More information

PRIMECLUSTER. Scalable Internet Services (SIS) (Solaris, Linux ) Configuration and Administration Guide. Edition November 2003

PRIMECLUSTER. Scalable Internet Services (SIS) (Solaris, Linux ) Configuration and Administration Guide. Edition November 2003 PRIMECLUSTER PRIMECLUSTER Scalable Internet Services (SIS) (Solaris, Linux ) Configuration and Administration Guide Fujitsu Siemens Computers GmbH Paderborn 33094 Paderborn e-mail: email: manuals@fujitsu-siemens.com

More information

ExpressCluster X SingleServerSafe 3.2 for Windows. Operation Guide. 2/19/2014 1st Edition

ExpressCluster X SingleServerSafe 3.2 for Windows. Operation Guide. 2/19/2014 1st Edition ExpressCluster X SingleServerSafe 3.2 for Windows Operation Guide 2/19/2014 1st Edition Revision History Edition Revised Date Description First 2/19/2014 New manual Copyright NEC Corporation 2014. All

More information

Structure and Overview of Manuals

Structure and Overview of Manuals FUJITSU Software Systemwalker Operation Manager Structure and Overview of Manuals UNIX/Windows(R) J2X1-6900-08ENZ0(00) May 2015 Introduction Purpose of This Document Please ensure that you read this document

More information

Chapter 11: Implementing File

Chapter 11: Implementing File Chapter 11: Implementing File Systems Chapter 11: Implementing File Systems File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management Efficiency

More information

ETERNUS SF AdvancedCopy Manager Glossary

ETERNUS SF AdvancedCopy Manager Glossary ETERNUS SF AdvancedCopy Manager 14.1 Glossary J2X1-7455-02ENZ0(00) January 2010 Preface Purpose This manual describes the terminology used in the ETERNUS SF AdvancedCopy Manager manuals. Intended Readers

More information

PRIMECLUSTER. Cluster Foundation (CF) (Solaris ) Configuration and Administration Guide. Edition November 2003

PRIMECLUSTER. Cluster Foundation (CF) (Solaris ) Configuration and Administration Guide. Edition November 2003 Cluster Foundation PRIMECLUSTER Cluster Foundation (CF) (Solaris ) Configuration and Administration Guide Redakteur Fujitsu Siemens Computers GmbH Paderborn 33094 Paderborn e-mail: email: manuals@fujitsu-siemens.com

More information

Chapter 11: Implementing File Systems. Operating System Concepts 9 9h Edition

Chapter 11: Implementing File Systems. Operating System Concepts 9 9h Edition Chapter 11: Implementing File Systems Operating System Concepts 9 9h Edition Silberschatz, Galvin and Gagne 2013 Chapter 11: Implementing File Systems File-System Structure File-System Implementation Directory

More information

PRIMECLUSTER PRIMECLUSTER

PRIMECLUSTER PRIMECLUSTER PRIMECLUSTER PRIMECLUSTER Reliant Monitor Services (RMS) with Wizard Tools (Solaris, Linux ) Configuration and Administration Guide Redakteur Fujitsu Siemens Computers GmbH Paderborn 33094 Paderborn e-mail:

More information

ExpressCluster X 2.1 for Windows

ExpressCluster X 2.1 for Windows ExpressCluster X 2.1 for Windows Getting Started Guide 09/30/2009 Second Edition Revision History Edition Revised Date Description First 06/15/2009 New manual Second 09/30/2009 This manual has been updated

More information

BEA Liquid Data for. WebLogic. Deploying Liquid Data

BEA Liquid Data for. WebLogic. Deploying Liquid Data BEA Liquid Data for WebLogic Deploying Liquid Data Release: 1.0.1 Document Date: October 2002 Revised: December 2002 Copyright Copyright 2002 BEA Systems, Inc. All Rights Reserved. Restricted Rights Legend

More information

File System: Interface and Implmentation

File System: Interface and Implmentation File System: Interface and Implmentation Two Parts Filesystem Interface Interface the user sees Organization of the files as seen by the user Operations defined on files Properties that can be read/modified

More information

Chapter 11: Implementing File-Systems

Chapter 11: Implementing File-Systems Chapter 11: Implementing File-Systems Chapter 11 File-System Implementation 11.1 File-System Structure 11.2 File-System Implementation 11.3 Directory Implementation 11.4 Allocation Methods 11.5 Free-Space

More information

FUJITSU Software Systemwalker Operation Manager. Upgrade Guide. UNIX/Windows(R)

FUJITSU Software Systemwalker Operation Manager. Upgrade Guide. UNIX/Windows(R) FUJITSU Software Systemwalker Operation Manager Upgrade Guide UNIX/Windows(R) J2X1-3150-16ENZ0(00) May 2015 Preface Purpose of This Document This document describes the migration method, and notes when

More information

An Introduction to GPFS

An Introduction to GPFS IBM High Performance Computing July 2006 An Introduction to GPFS gpfsintro072506.doc Page 2 Contents Overview 2 What is GPFS? 3 The file system 3 Application interfaces 4 Performance and scalability 4

More information

ExpressCluster X 1.0 for Windows

ExpressCluster X 1.0 for Windows ExpressCluster X 1.0 for Windows Getting Started Guide 6/22/2007 Third Edition Revision History Edition Revised Date Description First 09/08/2006 New manual Second 12/28/2006 Reflected the logo change

More information

Interstage Shunsaku Data Manager Operator s Guide

Interstage Shunsaku Data Manager Operator s Guide Interstage Shunsaku Data Manager Operator s Guide Operator s Guide Trademarks Trademarks of other companies are used in this manual only to identify particular products or systems. Product Microsoft, Visual

More information

Chapter 11: File System Implementation

Chapter 11: File System Implementation Chapter 11: File System Implementation Chapter 11: File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management Efficiency

More information

File Systems: Interface and Implementation

File Systems: Interface and Implementation File Systems: Interface and Implementation CSCI 315 Operating Systems Design Department of Computer Science Notice: The slides for this lecture have been largely based on those from an earlier edition

More information

ExpressCluster X 3.1 for Linux

ExpressCluster X 3.1 for Linux ExpressCluster X 3.1 for Linux Installation and Configuration Guide 10/11/2011 First Edition Revision History Edition Revised Date Description First 10/11/2011 New manual Copyright NEC Corporation 2011.

More information

Chapter 11: File System Implementation

Chapter 11: File System Implementation Chapter 11: File System Implementation Chapter 11: File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management Efficiency

More information

EXPRESSCLUSTER X SingleServerSafe 3.3 for Windows. Operation Guide. 10/03/2016 4th Edition

EXPRESSCLUSTER X SingleServerSafe 3.3 for Windows. Operation Guide. 10/03/2016 4th Edition EXPRESSCLUSTER X SingleServerSafe 3.3 for Windows Operation Guide 10/03/2016 4th Edition Revision History Edition Revised Date Description 1st 02/09/2015 New manual 2nd 04/20/2015 Corresponds to the internal

More information

Microsoft Windows NT Microsoft Windows SystemWalker/StorageMGR. Installation Guide V10.0L10

Microsoft Windows NT Microsoft Windows SystemWalker/StorageMGR. Installation Guide V10.0L10 Microsoft Windows NT Microsoft Windows 2000 SystemWalker/StorageMGR Installation Guide V10.0L10 Preface ++Purpose This manual explains the installation and customization of the SystemWalker/StorageMGR.

More information

Chapter 11: Implementing File Systems

Chapter 11: Implementing File Systems Chapter 11: Implementing File-Systems, Silberschatz, Galvin and Gagne 2009 Chapter 11: Implementing File Systems File-System Structure File-System Implementation ti Directory Implementation Allocation

More information

ExpressCluster X 3.1 for Solaris

ExpressCluster X 3.1 for Solaris ExpressCluster X 3.1 for Solaris Getting Started Guide 10/11/2011 First Edition Revision History Edition Revised Date Description First 10/11/2011 New manual Copyright NEC Corporation 2011. All rights

More information