Managing NFS and KRPC Kernel Configurations in HP-UX 11i v3


HP Part Number: 762807-003
Published: September 2015
Edition: 2

Copyright © 2009, 2015 Hewlett-Packard Development Company, L.P.

Legal Notices

Confidential computer software. Valid license required from Hewlett-Packard for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

The information contained herein is subject to change without notice. The only warranties for Hewlett-Packard products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett-Packard shall not be liable for technical or editorial errors or omissions contained herein.

Links to third-party websites take you outside the HP website. HP has no control over and is not responsible for information outside HP.com.

Oracle is a registered US trademark of Oracle Corporation, Redwood City, California. UNIX is a registered trademark of The Open Group.

Contents

HP secure development lifecycle
1 Introduction
2 Managing Kernel Tunables using Kctune
   2.1 NFS Client Tunables
   2.2 NFS Server Tunables
   2.3 KRPC Client Tunables
   2.4 KRPC Server Tunables
   2.5 KLM Tunable
3 Documentation feedback
Appendix A. Obsolete tunables

HP secure development lifecycle

Starting with the HP-UX 11i v3 March 2013 update release, the HP secure development lifecycle provides the ability to authenticate HP-UX software. Software delivered through this release has been digitally signed using HP's private key. You can now verify the authenticity of the software before installing the products delivered through this release.

To verify the software signatures in a signed depot, the following products must be installed on your system:

- B.11.31.1303 or later version of SD (Software Distributor)
- A.01.01.07 or later version of HP-UX Whitelisting (WhiteListInf)

To verify the signatures, run: /usr/sbin/swsign -v -s <depot_path>. For more information, see the Software Distributor documentation at http://www.hp.com/go/sd-docs.

NOTE: Ignite-UX software delivered with the HP-UX 11i v3 March 2014 release or later supports verification of the software signatures in a signed depot or media during cold installation. For more information, see the Ignite-UX documentation at http://www.hp.com/go/ignite-ux-docs.
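For example, the verification and installation steps might be combined as sketched below. The depot path and product name are placeholders only; swsign(1M) and swinstall(1M) remain the authoritative references for the exact options on your system.

    # Verify the digital signatures of a signed depot (path is a placeholder):
    /usr/sbin/swsign -v -s /var/tmp/ONCplus_depot

    # If verification succeeds, install from the depot as usual, for example:
    swinstall -s /var/tmp/ONCplus_depot ONCplus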

1 Introduction

NFS is a network-based application that offers transparent file access across a network. The behavior and performance of NFS depend on numerous kernel tunables. Tunables are variables that control the behavior of the HP-UX kernel. To achieve optimal performance, the system administrator can modify the values of these tunables.

This white paper discusses the configurable Network File System (NFS), Kernel Remote Procedure Call (KRPC), and Kernel Lock Manager (KLM) tunables in HP-UX 11i v3 and describes how to manage them. Appendix A lists the tunables that were previously provided on HP-UX 11i v2 and are obsolete on HP-UX 11i v3.

There are two kinds of tunables: dynamic and static. A dynamic tunable does not require a reboot to activate changes, whereas a static tunable does. This white paper specifies which of the listed tunables are dynamic and also specifies when the tunables may require modification.

Disclaimer: HP makes every effort to deliver your systems with a standard configuration that works well in most environments. However, some applications can benefit from selective and carefully planned changes to the default settings.
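As a quick orientation, tunables are examined and modified with the kctune(1M) command. The sketch below shows only the general pattern; the tunable names are taken from this paper, the values are illustrative, and kctune(1M) is the authoritative reference for output formats and options.

    # Display the current value of a tunable:
    kctune nfs_async_timeout

    # Change a dynamic tunable; the new value takes effect immediately:
    kctune nfs_async_timeout=12000

    # Change a static tunable (for example, nfs_nrnode); the new value
    # takes effect only after the system is rebooted:
    kctune nfs_nrnode=20000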

2 Managing Kernel Tunables using Kctune

2.1 NFS Client Tunables

Table 2.1-1 lists the NFS client tunables. The last column specifies which ONCplus version first introduced the tunable.

Table 2.1-1 NFS Client Tunables

Kctune Tunable Name               | Range                                  | Default value | Units              | ONCplus Version
nfs_async_timeout                 | 0 to MAXINT                            | 6000          | Milliseconds       | B.11.31_LR
nfs_disable_rddir_cache           | 0 or 1                                 | 0             | Boolean            | B.11.31_LR
nfs_enable_write_behind           | 0 or 1                                 | 0             | Boolean            | B.11.31_LR
nfs_nacache                       | 0 to MAXINT                            | 0             | Hash queues        | B.11.31_LR
nfs_nrnode                        | 0 to MAXINT                            | 0             | Entries            | B.11.31_LR
nfs_write_error_interval          | 0 to MAXINT                            | 5             | Seconds            | B.11.31_LR
nfs_write_error_to_cons_only      | 0 or 1                                 | 0             | Boolean            | B.11.31_LR
nfs2_async_clusters               | 1 to MAXINT                            | 1             | Requests           | B.11.31_LR
nfs2_bsize                        | 8192 to MAXINT (must be a power of 2)  | 8192          | Bytes              | B.11.31_LR
nfs2_cots_timeo                   | 0 to MAXINT                            | 600           | Tenths of a second | B.11.31_LR
nfs2_do_symlink_cache             | 0 or 1                                 | 1             | Boolean            | B.11.31_LR
nfs2_dynamic                      | 0 or 1                                 | 1             | Boolean            | B.11.31_LR
nfs2_lookup_neg_cache             | 0 or 1                                 | 1             | Boolean            | B.11.31_LR
nfs2_max_threads                  | 0 to nkthread/5                        | 8             | Threads            | B.11.31_LR
nfs2_nra                          | 0 to MAXINT                            | 4             | Requests           | B.11.31_LR
nfs2_shrinkreaddir                | 0 or 1                                 | 0             | Boolean            | B.11.31_LR
nfs3_async_clusters               | 0 to MAXINT                            | 1             | Requests           | B.11.31_LR
nfs3_bsize                        | 4096 to MAXINT (must be a power of 2)  | 32768         | Bytes              | B.11.31_LR
nfs3_cots_timeo                   | 10 to MAXINT                           | 600           | Tenths of a second | B.11.31_LR
nfs3_do_symlink_cache             | 0 or 1                                 | 1             | Boolean            | B.11.31_LR
nfs3_dynamic                      | 0 or 1                                 | 0             | Boolean            | B.11.31_LR
nfs3_enable_async_directio_read   | 0 or 1                                 | 0             | Boolean            | B.11.31_08
nfs3_enable_async_directio_write  | 0 or 1                                 | 0             | Boolean            | B.11.31_08
nfs3_jukebox_delay                | 100 to MAXINT                          | 1000          | Seconds            | B.11.31_LR
nfs3_lookup_neg_cache             | 0 or 1                                 | 1             | Boolean            | B.11.31_LR
nfs3_max_async_directio_requests  | 8 to 64                                | 8             | Requests           | B.11.31_08
nfs3_max_threads                  | 0 to nkthread/5                        | 8             | Threads            | B.11.31_LR
nfs3_max_transfer_size            | 4096 to MAXINT                         | 1048576       | Bytes              | B.11.31_LR
nfs3_max_transfer_size_clts       | 4096 to MAXINT                         | 32768         | Bytes              | B.11.31_LR
nfs3_max_transfer_size_cots       | 4096 to MAXINT                         | 1048576       | Bytes              | B.11.31_LR
nfs3_nra                          | 0 to MAXINT                            | 4             | Requests           | B.11.31_LR
nfs3_pathconf_disable_cache       | 0 or 1                                 | 0             | Boolean            | B.11.31_LR
nfs4_async_clusters               | 0 to MAXINT                            | 1             | Requests           | B.11.31_LR
nfs4_bsize                        | 4096 to MAXINT (must be a power of 2)  | 32768         | Bytes              | B.11.31_LR
nfs4_cots_timeo                   | 10 to MAXINT                           | 600           | Tenths of a second | B.11.31_LR
nfs4_do_symlink_cache             | 0 or 1                                 | 1             | Boolean            | B.11.31_LR
nfs4_lookup_neg_cache             | 0 or 1                                 | 1             | Boolean            | B.11.31_LR
nfs4_max_threads                  | 0 to nkthread/5                        | 8             | Threads            | B.11.31_LR
nfs4_max_transfer_size            | 4096 to MAXINT                         | 1048576       | Bytes              | B.11.31_LR
nfs4_max_transfer_size_cots       | 4096 to MAXINT                         | 1048576       | Bytes              | B.11.31_LR
nfs4_nra                          | 0 to MAXINT                            | 4             | Requests           | B.11.31_LR
nfs4_pathconf_disable_cache       | 0 or 1                                 | 0             | Boolean            | B.11.31_LR

2.1.1 nfs_async_timeout

The nfs_async_timeout tunable controls the duration of time that threads executing asynchronous I/O requests sleep before exiting. If no new requests arrive before the timer expires, the thread wakes up and exits. If a request arrives, the thread wakes up, executes the request, and then goes back to sleep.

Default: 6000 Milliseconds (6 Seconds)
Min: 0
Max: 360000 Milliseconds (6 Minutes)

Note: If the tunable is set to a value greater than 360000, an informational warning is issued at runtime. Any value greater than 360000 is outside the tested limits.

The nfs_async_timeout tunable is dynamic. System reboot is not required to activate changes made to this tunable. Changes made to the nfs_async_timeout tunable apply to all NFS mounted filesystems.

Modify this tunable only if you can accurately predict the rate of asynchronous I/O. To avoid the overhead of creating and deleting threads, increase the value of this tunable. To free up resources for other subsystems, decrease the value of this tunable. Setting the value of nfs_async_timeout to 0 causes threads to exit immediately when there are no requests to process. HP recommends that you do not set the value of this tunable to 0.

2.1.2 nfs_disable_rddir_cache

The nfs_disable_rddir_cache tunable controls the cache that holds responses from NFSv2 READDIR, NFSv3 READDIR, NFSv3 READDIRPLUS, and NFSv4 READDIR requests. When retrieving directory information, this cache avoids over-the-wire calls to the server.

Default: 0 (NFS directory cache enabled)
Min: 0
Max: 1 (NFS directory cache disabled)

The nfs_disable_rddir_cache tunable is dynamic. System reboot is not required to activate changes made to this tunable. Changes made to the value of this tunable apply to all NFS mounted filesystems. Directory caching cannot be enabled or disabled on a per-filesystem basis.

Modify the value of this tunable only if interoperability problems develop. These problems are caused when a server does not update the modification time on a directory when a file or directory is created in or removed from it. For example, you might add a new directory and find that the modification time has not been updated on the directory listing. Or, you might delete a directory and find that the name of the removed directory still appears.

To enable caching for filesystems mounted with any NFS version, set the value of this tunable to 0. To disable caching for all three versions of NFS mounted filesystems, set the value of this tunable to 1. Disabling caching can result in additional over-the-wire requests from the NFS client.

If you disable readdir caching, you should also consider disabling the following tunables (see the sketch after this list):

- nfs2_lookup_neg_cache
- nfs3_lookup_neg_cache
- nfs4_lookup_neg_cache
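As a hedged illustration of that combination, the settings might look like the following. All four tunables are dynamic, so no reboot is needed; the values shown simply apply the recommendation above.

    # Disable the NFS readdir cache for all NFS versions:
    kctune nfs_disable_rddir_cache=1

    # Also disable the negative name caches, as recommended above:
    kctune nfs2_lookup_neg_cache=0
    kctune nfs3_lookup_neg_cache=0
    kctune nfs4_lookup_neg_cache=0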

2.1.3 nfs_enable_write_behind

The nfs_enable_write_behind tunable controls the write-behind feature used when writing to files over NFS. When the write-behind feature is enabled, over-the-wire NFS writes are scheduled by the writer/application thread. This can result in NFS write data being sent to the server more frequently; whether that helps or hurts depends on how the server handles the frequent arrival of writes.

Default: 0 (NFS write-behind is disabled)
Min: 0
Max: 1 (NFS write-behind is enabled)

The nfs_enable_write_behind tunable is dynamic. System reboot is not required to activate a change made to this tunable.

Enable this tunable to turn on write-behind behavior, where over-the-wire NFS writes are scheduled by the writer/application thread. With some NFS servers, enabling the write-behind feature results in improved write performance. However, the performance of many NFS server implementations depends heavily on data packets arriving in order, so write performance could suffer when the write-behind feature is enabled. HP recommends changing this tunable only if you can verify that your NFS server is not adversely affected by the frequent arrival of writes. Some NFS servers benefit from write-behind, but it could degrade the performance of others.

2.1.4 nfs_nacache

The nfs_nacache tunable controls the size of the hash structures that manage the file access cache on the NFS client. The file access cache stores file access rights for users. By default, the algorithm assumes a single access cache entry per active file.

Default: 0
Min: 0
Max: 40000

Note: If the value of nfs_nacache is set to the default, the value displayed is 0. However, the actual value is that of the nfs_nrnode tunable. If the tunable is set to a value greater than 40000, an informational warning is issued at boot time. Any value greater than 40000 is outside the tested limit.

The nfs_nacache tunable is static. System reboot is required to activate changes made to this tunable.

Increase the value of this tunable only in extreme cases where a large number of users are accessing the same NFS file or directory simultaneously. Decreasing the value of this tunable to a value less than nfs_nrnode can result in long hash queues and slower performance. HP does not recommend decreasing the value of this tunable below the value of nfs_nrnode or ncsize.

2.1.5 nfs_nrnode

The nfs_nrnode tunable specifies the size of the rnode cache for NFS filesystems. The NFS client uses the rnode cache to store information about files on the client. Each cache entry contains a file handle that uniquely identifies the file on the NFS server. To avoid network traffic, each rnode also contains pointers to various caches used by the NFS client. Each rnode has a one-to-one association with a vnode, which caches the file data.

Default: 0
Min: 0
Max: 40000

Note: If the nfs_nrnode tunable is set to the default, the value displayed is 0. However, the actual value is that of the ncsize tunable. If the tunable is set to a value greater than 40000, an informational warning is issued at boot time. Any value greater than 40000 is outside the tested limit.

The nfs_nrnode tunable is static. System reboot is required to activate changes made to this tunable.

In most cases, modifying the nfs_nrnode tunable directly is not recommended. Instead, HP recommends tuning the ncsize tunable and allowing nfs_nrnode to default to the same size. If you can accurately predict the number of files your NFS client will access and you want to control the amount of system memory dedicated to the NFS rnode cache, you can increase or decrease the value of the nfs_nrnode tunable. For example, if your NFS client accesses only a few large files and you want to reclaim system memory resources used by the NFS rnode cache, you can specify an nfs_nrnode size smaller than ncsize. For more information about the ncsize parameter, see the ncsize(5) manpage.

2.1.6 nfs_write_error_interval

The nfs_write_error_interval tunable controls the time, in seconds, between logging ENOSPC (no disk space) and EDQUOT (over disk quota) write errors seen by the NFS client.

Default: 5 seconds
Min: 0
Max: 360000 seconds (100 hours)

Note: If the tunable is set to a value greater than 360000 seconds, an informational warning is issued. Any value greater than 360000 seconds is outside the tested limit.

The nfs_write_error_interval tunable is dynamic. System reboot is not required to activate changes made to this tunable.

Modify the value of this tunable in response to the volume of disk space and quota error messages being logged by the client. To see the error messages less frequently, increase the value of this tunable. To see the error messages more frequently, decrease the value of this tunable.

2.1.7 nfs_write_error_to_cons_only

The nfs_write_error_to_cons_only tunable controls whether NFS write errors are logged to both the system console and syslog, or to the system console exclusively.

Default: 0 (NFS error messages are logged to both syslog and the system console)
Min: 0
Max: 1 (NFS error messages are logged to the system console only)

The nfs_write_error_to_cons_only tunable is dynamic. System reboot is not required to activate changes made to this tunable.

If you find the /var filesystem filled with error messages logged by the syslog daemon on behalf of NFS, set the value of this tunable to 1.

2.1.8 nfs2_async_clusters

The nfs2_async_clusters tunable controls the mix of asynchronous requests generated by the NFSv2 client. There are four types of asynchronous requests:

- read-ahead
- putpage
- pageio
- readdir-ahead

The client attempts to service these different requests without favoring one type of operation over another. However, some NFSv2 servers can take advantage of clustered requests from NFSv2 clients. For instance, write gathering is a server function that depends on the NFSv2 client sending out multiple WRITE requests in a short time span. If requests are taken out of the queue individually, the client defeats this server functionality, which is designed to enhance performance. The nfs2_async_clusters tunable controls the number of outgoing requests for each type before changing types.

The nfs3_async_clusters tunable controls the mix of asynchronous requests generated by NFSv3 clients. The nfs4_async_clusters tunable controls the mix of asynchronous requests generated by NFSv4 clients. For more information on these tunables, see:

- nfs3_async_clusters
- nfs4_async_clusters

Default: 1
Min: 1
Max: 10

Note: If the tunable is set to a value greater than 10 asynchronous requests, an informational warning is issued at runtime. Any value greater than 10 is outside the tested limits.

The nfs2_async_clusters tunable is dynamic. System reboot is not required to activate changes made to this tunable. Any change made to the value of the tunable is effective immediately. However, the cluster setting is set per filesystem at mount time. The system administrator must unmount and re-mount each filesystem after changing this tunable. Only NFSv2 mount points are affected by changing the value of the nfs2_async_clusters tunable.

If server functionality depends upon clusters of operations coming from the client, increase the value of this tunable. However, this can impact the operations in other queues because they have to wait until the current queue is empty or the cluster limit is reached.

Note: Setting the value of nfs2_async_clusters to 0 causes all queued requests of a particular type to be processed before moving to the next type.

2.1.9 nfs2_bsize

The nfs2_bsize tunable controls the logical block size used by NFSv2 clients. Block size represents the amount of data the client reads from or writes to the server.

The nfs3_bsize tunable controls the logical block size used by NFSv3 clients. The nfs4_bsize tunable controls the logical block size used by NFSv4 clients. For more information on these tunables, see:

- nfs3_bsize
- nfs4_bsize

Default: 8192
Min: 8192
Max: 65536

Note: If the tunable is set to a value greater than 65536 bytes, an informational warning is issued at runtime. Any value greater than 65536 is outside the tested limits. The value of the tunable must be a power of 2.

The nfs2_bsize tunable is dynamic. System reboot is not required to activate changes made to this tunable. Any change made to the value of the tunable is effective immediately. However, the logical block size is set per filesystem at mount time. The system administrator must unmount and re-mount each filesystem after changing this tunable. Only NFSv2 mount points are affected by changing the value of the nfs2_bsize tunable.

The transfer size for NFSv2 is limited to 8192 bytes. Changing this value beyond 8192 bytes does not have any benefit. This tunable is a system-wide global tunable and thus affects every NFSv2 filesystem. To control the transfer sizes of specific NFSv2 filesystems, use the rsize and wsize mount options. Refer to the mount_nfs(1M) manpage for more information.

2.1.10 nfs2_cots_timeo

The nfs2_cots_timeo tunable controls the default RPC timeout for NFSv2 mounted filesystems using a connection-oriented transport such as TCP. The nfs3_cots_timeo tunable controls the default RPC timeout for NFSv3 mounted filesystems. The nfs4_cots_timeo tunable controls the default RPC timeout for NFSv4 mounted filesystems. For more information on these tunables, see:

- nfs3_cots_timeo
- nfs4_cots_timeo

Default: 600 tenths of a second (1 minute)
Min: 10 tenths of a second (1 second)
Max: 36000 tenths of a second (1 hour)

Note: If the tunable is set to a value less than 10 tenths of a second or greater than 36000 tenths of a second, an informational warning is issued at runtime. These values are outside the tested limits.

The nfs2_cots_timeo tunable is dynamic. System reboot is not required to activate changes made to this tunable. Any change made to the value of the tunable is effective immediately. However, the timeout duration is set per filesystem at mount time. The system administrator must unmount and remount each filesystem after changing this tunable. Only NFSv2 mount points are affected by changing the value of the nfs2_cots_timeo tunable.

If you are experiencing a large number of timeouts on connection-oriented NFSv2 filesystems, increase the value of this tunable. However, a large number of connection-oriented timeouts can be an indication of networking hardware or software problems.

2.1.11 nfs2_do_symlink_cache

The nfs2_do_symlink_cache tunable controls caching of the contents of symbolic links on NFSv2 mounted filesystems. If the server changes the contents of a symbolic link, and if either the time stamps are not updated or the granularity of the time stamp is too large, the changes become visible to the client only after a long time interval.

The nfs3_do_symlink_cache tunable caches the contents of symbolic links in NFSv3 mounted filesystems. The nfs4_do_symlink_cache tunable caches the contents of symbolic links in NFSv4 mounted filesystems. For more information on these tunables, see:

- nfs3_do_symlink_cache
- nfs4_do_symlink_cache

Default: 1 (Symbolic link cache is enabled)
Min: 0 (Symbolic link cache is disabled)
Max: 1

The nfs2_do_symlink_cache tunable is dynamic. System reboot is not required to activate changes made to this tunable. Only NFSv2 mount points are affected by changing the value of the nfs2_do_symlink_cache tunable.

Enable this tunable to cache the contents of symbolic links. Because the client uses the cached version, changes made to the contents of the symbolic link file are not immediately visible to applications running on the client. To make changes to the symbolic link file immediately visible to applications on the client, disable the tunable. Disabling the tunable can result in more over-the-wire requests from the client if filesystems are mounted with NFSv2 and contain symbolic links.
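Several of the per-mount settings above take effect only at mount time. The following hedged sketch shows the pattern for raising the NFSv2 RPC timeout and remounting; the server name and mount point are placeholders, and mount_nfs(1M) is the authoritative reference for the exact option spellings.

    # Raise the NFSv2 RPC timeout for connection-oriented transports to 2 minutes:
    kctune nfs2_cots_timeo=1200

    # The timeout is applied at mount time, so remount the NFSv2 filesystem:
    umount /nfs/data
    mount -F nfs -o vers=2,proto=tcp nfsserver:/export/data /nfs/data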

2.1.12 nfs2_dynamic

The nfs2_dynamic tunable controls the dynamic retransmission feature for NFSv2 mounted filesystems. The dynamic retransmission feature is designed to reduce NFS retransmissions by monitoring server response time and adjusting read and write transfer sizes on NFSv2 mounted filesystems that use connectionless transports such as UDP. The nfs3_dynamic tunable controls the dynamic retransmission feature for NFSv3 mounted filesystems. For more information, see nfs3_dynamic.

Default: 1 (Dynamic retransmission is enabled)
Min: 0 (Dynamic retransmission is disabled)
Max: 1

The nfs2_dynamic tunable is dynamic. System reboot is not required to activate changes made to this tunable. However, the dynamic retransmission feature is set per filesystem at mount time. The system administrator must unmount and re-mount each filesystem after changing this tunable. Only NFSv2 mount points are affected when you change the value of this tunable.

In congested networks, sending smaller NFS data packets can help if the network is dropping larger data packets. Enabling this tunable allows the client to adjust the read and write transfer sizes for successful NFS I/O. If packets are not being dropped in the network, disabling this functionality results in increased throughput. However, if the server response is delayed or the network is overloaded, the number of timeouts can increase. HP recommends leaving this tunable enabled because it helps the system minimize NFS packet loss on congested networks.

2.1.13 nfs2_lookup_neg_cache

The nfs2_lookup_neg_cache tunable controls whether a negative name cache is used for NFSv2 mounted filesystems. The negative name cache records file names that were looked up but not found. This cache helps avoid over-the-wire lookups for files that are already known to be non-existent.

The nfs3_lookup_neg_cache tunable controls whether a negative name cache is used for NFSv3 mounted filesystems. The nfs4_lookup_neg_cache tunable controls whether a negative name cache is used for NFSv4 mounted filesystems. For more information on these tunables, see:

- nfs3_lookup_neg_cache
- nfs4_lookup_neg_cache

Default: 1 (Negative name cache will be used)
Min: 0 (Negative name cache will not be used)
Max: 1

The nfs2_lookup_neg_cache tunable is dynamic. System reboot is not required to activate changes made to this tunable. Only NFSv2 mount points are affected by changing the value of this tunable.

If filesystems are mounted read-only on the client, and applications running on the client need to immediately see any filesystem changes made on the server, disable this tunable. If you disable this tunable, also consider disabling the nfs_disable_rddir_cache tunable. For more information, see nfs_disable_rddir_cache.

2.1.14 nfs2_max_threads

The nfs2_max_threads tunable controls the number of kernel threads that perform asynchronous I/O for NFSv2 filesystems. The operations executed asynchronously are read, readdir, and write.

The nfs3_max_threads tunable controls the number of kernel threads that perform asynchronous I/O for NFSv3 filesystems. The nfs4_max_threads tunable controls the number of kernel threads that perform asynchronous I/O for NFSv4 filesystems. For more information on these tunables, see:

- nfs3_max_threads
- nfs4_max_threads

Default: 8
Min: 0
Max: 256

Note: If the tunable is set to a value greater than 256 threads, an informational warning is issued at runtime. Any value greater than 256 is outside the tested limits.

The nfs2_max_threads tunable is dynamic. System reboot is not required to activate changes made to this tunable. However, the number of threads is set per filesystem at mount time. The system administrator must unmount and re-mount each filesystem after changing this tunable. Only NFSv2 mount points are affected by changing the value of this tunable.

Before modifying the value of this tunable, examine the available network bandwidth. If the network has high available bandwidth and the client and server have sufficient CPU and memory resources, increase the value of this tunable. This increase enables you to effectively utilize the available network bandwidth as well as the client and server resources. However, the total number of asynchronous threads for NFSv2 cannot exceed 20% of the available kernel threads (nkthread). NFS mounts fail if the mount command cannot guarantee the ability to create the maximum number of threads for that mount point.

If the network has low available bandwidth, decrease the value of this tunable. This decrease ensures that the NFS client does not overload the network. Decreasing the value can impact NFS performance because it limits the number of asynchronous threads that can be spawned, and thus limits the number of simultaneous asynchronous I/O requests.

2.1.15 nfs2_nra

The nfs2_nra tunable controls the number of read-ahead operations queued by NFSv2 clients when sequential access to a file is discovered. Read-ahead operations increase concurrency and read throughput.

The nfs3_nra tunable controls the number of read-ahead operations queued by NFSv3 clients. The nfs4_nra tunable controls the number of read-ahead operations queued by NFSv4 clients. For more information on these tunables, see:

- nfs3_nra
- nfs4_nra

Default: 4
Min: 0
Max: 16

Note: If the tunable is set to a value greater than 16, an informational warning is issued at runtime. Any value greater than 16 is outside the tested limits.

The nfs2_nra tunable is dynamic. System reboot is not required to activate changes made to this tunable. Only NFSv2 mount points are affected by changing the value of this tunable.

If the network has high available bandwidth and the client and server have sufficient CPU and memory resources, increase the value of this tunable. This increase enables you to effectively utilize the available network bandwidth as well as the client and server resources. If the network has low available bandwidth, decrease the value of this tunable. This decrease ensures that the NFS client does not overload the network.

2.1.16 nfs2_shrinkreaddir

The nfs2_shrinkreaddir tunable is a workaround for a defect that causes older NFS servers to incorrectly handle NFSv2 READDIR requests containing more than 1024 bytes of directory information.

Default: 0 (Tunable is disabled and the 1024-byte limit is not enforced)
Min: 0
Max: 1 (Tunable is enabled and the 1024-byte limit is enforced)

The nfs2_shrinkreaddir tunable is dynamic. System reboot is not required to activate changes made to this tunable. Only NFSv2 mount points are affected by changing the value of this tunable.

Modify this tunable only if you know or suspect that you are dealing with an older NFSv2 server that cannot handle READDIR requests larger than 1 KB.

Enable this tunable to ensure that the client does not generate a READDIR request for more than 1024 bytes of directory information. Disable the tunable to allow the client to issue READDIR requests containing up to 8192 bytes of data.

2.1.17 nfs3_async_clusters

The nfs3_async_clusters tunable controls the mix of asynchronous requests that are generated by the NFSv3 client. There are four types of asynchronous requests:

- read-ahead
- putpage
- pageio
- readdir-ahead

The client attempts to service these different requests without favoring one type of operation over another. However, some NFSv3 servers can take advantage of clustered requests from NFSv3 clients. For instance, write gathering is a server function that depends on the NFSv3 client sending out multiple WRITE requests in a short time span. If requests are taken out of the queue individually, the client defeats this server functionality, which is designed to enhance performance. The nfs3_async_clusters tunable controls the number of outgoing requests for each type before changing types.

The nfs2_async_clusters tunable controls the mix of asynchronous requests generated by NFSv2 clients. The nfs4_async_clusters tunable controls the mix of asynchronous requests generated by NFSv4 clients. For more information on these tunables, see:

- nfs2_async_clusters
- nfs4_async_clusters

Default: 1
Min: 0
Max: 10

Note: If the tunable is set to a value greater than 10 asynchronous requests, an informational warning is issued at runtime. Any value greater than 10 is outside the tested limits.

The nfs3_async_clusters tunable is dynamic. System reboot is not required to activate changes made to this tunable. Any change made to the value of the tunable is effective immediately. However, the cluster setting is set per filesystem at mount time. The system administrator must unmount and re-mount each filesystem after changing this tunable. Only NFSv3 mount points are affected by changing the value of the nfs3_async_clusters tunable.

If server functionality depends upon clusters of operations coming from the client, increase the value of this tunable. However, this can impact the operations in other queues if they have to wait until the current queue is empty or the cluster limit is reached.

Note: Setting the value of nfs3_async_clusters to 0 causes all of the queued requests of a particular type to be processed before moving to the next type.

2.1.18 nfs3_bsize

The nfs3_bsize tunable controls the logical block size used by NFSv3 clients. Block size represents the amount of data the client reads from or writes to the server. The nfs3_bsize tunable works in conjunction with the nfs3_max_transfer_size, nfs3_max_transfer_size_cots, and nfs3_max_transfer_size_clts tunables when determining the maximum size of these I/O requests. For NFSv3 TCP traffic, the transfer size corresponds to the smallest value of nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_cots. For NFSv3 UDP traffic, the transfer size corresponds to the smallest value of nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_clts.

The nfs2_bsize tunable controls the logical block size used by NFSv2 clients. The nfs4_bsize tunable controls the logical block size used by NFSv4 clients. For more information on these tunables, see:

- nfs2_bsize
- nfs4_bsize

Default: 32768
Min: 4096
Max: 1048576

Note: If the tunable is set to a value greater than 1048576 bytes, an informational warning is issued at runtime. Any value greater than 1048576 is outside the tested limits. The value of the tunable must be a power of 2.

The nfs3_bsize tunable is dynamic. System reboot is not required to activate changes made to this tunable. Any change made to the value of the tunable is effective immediately. However, the logical block size is set per filesystem at mount time. The system administrator must unmount and re-mount each filesystem after changing this tunable. Only NFSv3 mount points are affected by changing the value of the nfs3_bsize tunable.

For NFS/TCP Filesystems: To increase the transfer size of NFSv3 TCP requests, set the nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_cots tunables to the same value. Otherwise, the transfer size will default to the smallest value of these three tunables. For example, if 1 MB transfers are desired, all three tunables must be set to at least 1 MB. If two of the tunables are set to 1 MB and the third is set to 32 KB, the transfer size will be 32 KB since that is the smallest value of the three tunables. To decrease the size of NFSv3 TCP requests, decrease the value of the nfs3_max_transfer_size_cots tunable. For example, to decrease the size of I/O requests on all NFSv3 TCP filesystems to 8 KB, set the value of nfs3_max_transfer_size_cots to 8192.

For NFS/UDP Filesystems: To increase the size of NFSv3 UDP requests, set the nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_clts tunables to the same value. Otherwise, the transfer size will default to the smallest value of these three tunables. To decrease the size of NFSv3 UDP requests, decrease the value of the nfs3_max_transfer_size_clts tunable. For example, to decrease the size of I/O requests on all NFSv3 UDP filesystems to 8 KB, set the value of nfs3_max_transfer_size_clts to 8192.

Caution: HP strongly discourages increasing nfs3_max_transfer_size_clts above the default value of 32768 because this can cause NFS/UDP requests to fail.

Also, if the NFS client is experiencing NFS READ failures and the system is reporting "NFS read failed for server <servername>: RPC: Can't decode result" errors, this is an indication that the nfs3_bsize, nfs3_max_transfer_size, nfs3_max_transfer_size_clts, or nfs3_max_transfer_size_cots tunable value was changed while NFS filesystems were mounted. The system administrator must unmount and remount the NFS filesystem to use the new value. (A worked example follows.)
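To tie these pieces together, the following hedged sketch raises NFSv3 TCP transfers to 1 MB. The server name and mount point are placeholders; all three tunables are dynamic, so only the remount is required for the mount point to pick up the new size.

    # Set all three tunables to 1 MB so the effective TCP transfer size is 1 MB:
    kctune nfs3_bsize=1048576
    kctune nfs3_max_transfer_size=1048576
    kctune nfs3_max_transfer_size_cots=1048576

    # The transfer size is fixed at mount time, so remount the NFSv3 filesystem:
    umount /nfs/data
    mount -F nfs -o vers=3,proto=tcp nfsserver:/export/data /nfs/data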

Note: The nfs3_bsize tunable affects every NFSv3 filesystem. To control the transfer sizes of specific NFS filesystems, use the rsize and wsize mount options. Refer to the mount_nfs(1M) manpage for more information.

2.1.19 nfs3_cots_timeo

The nfs3_cots_timeo tunable controls the default RPC timeout for NFSv3 mounted filesystems using a connection-oriented transport such as TCP. The nfs2_cots_timeo tunable controls the default RPC timeout for NFSv2 mounted filesystems. The nfs4_cots_timeo tunable controls the default RPC timeout for NFSv4 mounted filesystems. For more information on these tunables, see:

- nfs2_cots_timeo
- nfs4_cots_timeo

Default: 600 tenths of a second (1 minute)
Min: 10 tenths of a second (1 second)
Max: 36000 tenths of a second (1 hour)

Note: If the tunable is set to a value less than 10 tenths of a second or greater than 36000 tenths of a second, an informational warning is issued at runtime. These values are outside the tested limits.

The nfs3_cots_timeo tunable is dynamic. System reboot is not required to activate changes made to this tunable. Any change made to the value of the tunable is effective immediately. However, the timeout duration is set per filesystem at mount time. The system administrator must unmount and remount the filesystem after changing this tunable. Only NFSv3 mount points are affected by changing the value of the nfs3_cots_timeo tunable.

If you are experiencing a large number of timeouts on connection-oriented NFSv3 filesystems, increase the value of this tunable. However, a large number of connection-oriented timeouts can be an indication of networking hardware or software problems.

2.1.20 nfs3_do_symlink_cache

The nfs3_do_symlink_cache tunable caches the contents of symbolic links in NFSv3 mounted filesystems. If the server changes the contents of a symbolic link, and if either the time stamps are not updated or the granularity of the time stamp is too large, the changes become visible to the client only after a long time interval.

The nfs2_do_symlink_cache tunable caches the contents of symbolic links in NFSv2 mounted filesystems. The nfs4_do_symlink_cache tunable caches the contents of symbolic links in NFSv4 mounted filesystems. For more information on these tunables, see:

- nfs2_do_symlink_cache
- nfs4_do_symlink_cache

Default: 1 (Symbolic link cache is enabled)
Min: 0 (Symbolic link cache is disabled)
Max: 1

The nfs3_do_symlink_cache tunable is dynamic. System reboot is not required to activate changes made to this tunable. Only NFSv3 mount points are affected by changing the value of the nfs3_do_symlink_cache tunable.

Enable this tunable to cache the contents of symbolic links. Because the client uses the cached version, changes made to the contents of the symbolic link file are not immediately visible to applications running on the client. To make changes to the symbolic link file immediately visible to applications on the client, disable this tunable. Disabling the tunable can result in more over-the-wire requests from the client if filesystems are mounted with NFSv3 and contain symbolic links.

2.1.21 nfs3_dynamic

The nfs3_dynamic tunable controls the dynamic retransmission feature for NFSv3 mounted filesystems. The dynamic retransmission feature is designed to reduce NFS retransmissions by monitoring server response time and adjusting read and write transfer sizes on NFSv3 mounted filesystems that use connectionless transports such as UDP. The nfs2_dynamic tunable controls the dynamic retransmission feature for NFSv2 mounted filesystems. For more information on the tunable, see nfs2_dynamic.

Default: 0 (Dynamic retransmission is disabled)
Min: 0
Max: 1 (Dynamic retransmission is enabled)

The nfs3_dynamic tunable is dynamic. System reboot is not required to activate changes made to this tunable. However, the dynamic retransmission feature is set per filesystem at mount time. The system administrator must unmount and re-mount each filesystem after changing this tunable. Only NFSv3 mount points are affected when you change the value of this tunable.

In congested networks, sending smaller NFS data packets can help if the network is dropping larger data packets. Enabling this tunable allows the client to adjust the read and write transfer sizes for successful NFS I/O. If packets are not being dropped in the network, disabling this functionality results in increased throughput. However, if the server response is delayed or the network is overloaded, the number of timeouts can increase.

HP recommends leaving this tunable enabled because it helps the system minimize NFS packet loss on congested networks.

2.1.22 nfs3_enable_async_directio_read

The nfs3_enable_async_directio_read tunable controls whether NFS clients perform direct I/O read operations synchronously, where only a single read operation is performed at a time, or asynchronously, where the client may issue multiple read operations in parallel. Enabling this feature may improve read performance on NFSv3 filesystems mounted with the forcedirectio option.

forcedirectio is an NFS mount option that typically benefits large sequential data transfers and database workloads. Most database applications, such as Oracle, prefer to manage their own data cache resources and benefit from bypassing any system file cache (such as the Unified File Cache on HP-UX 11i v3). When an NFS client mounts a filesystem with the forcedirectio option, data is transferred directly between the client and server without buffering on the client. By default, the direct I/O data transfers are synchronous: the client sends a single read request to the server and waits for the server to respond with the requested data before initiating a new request. Enabling the nfs3_enable_async_directio_read tunable allows the client to send several I/O requests in parallel before waiting for the server's response. The number of parallel direct I/O requests is configurable via the nfs3_max_async_directio_requests tunable. This can greatly improve read performance for applications that use direct I/O. Currently, this feature is supported only for TCP traffic.

Default: 0 (Tunable is disabled)
Min: 0
Max: 1 (Tunable is enabled)

The nfs3_enable_async_directio_read tunable is dynamic. System reboot is not required to activate a change made to this tunable. Only NFSv3 TCP mount points that are mounted with the forcedirectio option are affected by changing the value of this tunable.

If an application experiences poor read performance on an NFS filesystem mounted with the forcedirectio option, enabling the nfs3_enable_async_directio_read tunable can improve the read performance.

2.1.23 nfs3_enable_async_directio_write

The nfs3_enable_async_directio_write tunable controls whether NFS clients perform direct I/O write operations synchronously, where only a single write operation is performed at a time, or asynchronously, where the client may issue multiple write operations in parallel. Enabling this feature may improve write performance on NFSv3 filesystems mounted with the forcedirectio option.

forcedirectio is an NFS mount option that typically benefits large sequential data transfers and database workloads. Most database applications, such as Oracle, prefer to manage their own data cache resources and benefit from bypassing any system file cache (such as the Unified File Cache on HP-UX 11i v3). When an NFS client mounts a filesystem with the forcedirectio option, data is transferred directly between the client and server without buffering on the client. By default, the direct I/O data transfers are synchronous: the client sends a single write request to the server and waits for the server's response before initiating a new request. Enabling the nfs3_enable_async_directio_write tunable allows the client to send several I/O requests in parallel before waiting for the server's response. The number of parallel direct I/O requests is configurable via the nfs3_max_async_directio_requests tunable. This can greatly improve write performance for applications that use direct I/O. Currently, this feature is supported only for TCP traffic.

Default: 0 (Tunable is disabled)
Min: 0
Max: 1 (Tunable is enabled)

The nfs3_enable_async_directio_write tunable is dynamic. System reboot is not required to activate a change made to this tunable. Only NFSv3 TCP mount points that are mounted with the forcedirectio option are affected by changing the value of this tunable.

If an application experiences poor write performance on an NFS filesystem mounted with the forcedirectio option, enabling the nfs3_enable_async_directio_write tunable can improve the write performance.

2.1.24 nfs3_jukebox_delay

The nfs3_jukebox_delay tunable specifies the time interval the NFS client must wait after receiving the NFS3ERR_JUKEBOX error before retransmitting the request to the server. If an NFS client requests a file on the server, and the file is unavailable because it resides on slow media or has been migrated to an HSM storage device, the server generates the NFS3ERR_JUKEBOX error. This error indicates that the file cannot be accessed for a considerable amount of time. The retransmission of the request depends on the time interval specified by this tunable.

Default: 1000 (10 seconds)
Min: 100 (1 second)
Max: 60000 (600 seconds)

Note: If the tunable is set to a value less than 100 or greater than 60000, an informational warning is issued at runtime. These values are outside the tested limits.

The nfs3_jukebox_delay tunable is dynamic. System reboot is not required to activate changes made to this tunable. Only NFSv3 mount points are affected when you change the value of this tunable.

If it takes a considerable amount of time for files to migrate from your HSM storage devices, increase the value of this tunable. However, increasing the value of the tunable can prevent a file from becoming immediately visible when it becomes available. If files are migrated quickly from your HSM storage devices, decrease the value of this tunable. If you decrease the value of the tunable, you can view the file as soon as it becomes available. However, if you set the tunable too low, your client can send retransmissions before the server is able to retrieve the files from the HSM storage devices.

2.1.25 nfs3_lookup_neg_cache

The nfs3_lookup_neg_cache tunable controls whether a negative name cache is used for NFSv3 mounted filesystems. The negative name cache records file names that were looked up but not found. This cache helps avoid over-the-wire lookups for files that are already known to be non-existent.

The nfs2_lookup_neg_cache tunable controls whether a negative name cache is used for NFSv2 mounted filesystems. The nfs4_lookup_neg_cache tunable controls whether a negative name cache is used for NFSv4 mounted filesystems. For more information on these tunables, see:

- nfs2_lookup_neg_cache
- nfs4_lookup_neg_cache

Default: 1 (Negative name cache will be used)
Min: 0 (Negative name cache will not be used)
Max: 1

The nfs3_lookup_neg_cache tunable is dynamic. System reboot is not required to activate changes made to this tunable. Only NFSv3 mount points are affected by changing the value of this tunable.

If filesystems are mounted read-only on the client, and applications running on the client need to immediately see filesystem changes on the server, disable this tunable. If you disable this tunable, also consider disabling the nfs_disable_rddir_cache tunable. For more information, see nfs_disable_rddir_cache.

2.1.26 nfs3_max_async_directio_requests

The nfs3_max_async_directio_requests tunable specifies the maximum number of parallel read or write requests that the NFSv3 direct I/O code can send on behalf of an application. This tunable is effective only for processes performing I/O operations on forcedirectio NFS mount points.

Default: 8
Min: 4
Max: 64

The nfs3_max_async_directio_requests tunable is dynamic. System reboot is not required to activate changes made to this tunable. Only NFSv3 TCP mount points that are mounted with the forcedirectio option are affected by changing the value of this tunable. This tunable is effective only if nfs3_enable_async_directio_read and/or nfs3_enable_async_directio_write are enabled.

The nfs3_max_async_directio_requests tunable works in conjunction with the nfs3_max_transfer_size tunable and the rsize/wsize mount options when determining how much data a mount point can transfer in parallel. For example, if nfs3_max_async_directio_requests is set to 8 and rsize/wsize is set to 32768, the client can have up to 8 x 32768 = 262144 bytes of direct I/O data outstanding in parallel when nfs3_enable_async_directio_read or nfs3_enable_async_directio_write is enabled.

Note: If, after enabling the nfs3_enable_async_directio_read or nfs3_enable_async_directio_write parameters, the NFS client frequently experiences NFS READ or WRITE failures and the system reports RPC timeout errors, this is an indication that nfs3_max_async_directio_requests might be set too high. Setting it to a lower value may resolve this problem.

2.1.27 nfs3_max_threads

The nfs3_max_threads tunable controls the number of kernel threads that perform asynchronous I/O for NFSv3 filesystems. The operations executed asynchronously are read, readdir, and write.

The nfs2_max_threads tunable controls the number of kernel threads that perform asynchronous I/O for NFSv2 filesystems. The nfs4_max_threads tunable controls the number of kernel threads that perform asynchronous I/O for NFSv4 filesystems. For more information on these tunables, see:

- nfs2_max_threads
- nfs4_max_threads

Default: 8
Min: 0
Max: 256

Note: If the tunable is set to a value greater than 256 threads, an informational warning is issued at runtime. Any value greater than 256 is outside the tested limits.

The nfs3_max_threads tunable is dynamic. System reboot is not required to activate changes made to this tunable. However, the number of threads is set per filesystem at mount time. The system administrator must unmount and re-mount each filesystem after changing this tunable. Only NFSv3 mount points are affected by changing the value of this tunable.

Before modifying the value of this tunable, examine the available network bandwidth. If the network has high available bandwidth and the client and server have sufficient CPU and memory resources, increase the value of this tunable. This increase enables you to effectively utilize the available network bandwidth as well as the client and server resources. However, the total number of asynchronous threads for NFSv3 cannot exceed 20% of the available kernel threads (nkthread). NFS mounts fail if the mount command cannot guarantee the ability to create the maximum number of threads for that mount point.

If the network has low available bandwidth, decrease the value of this tunable. This decrease ensures that the NFS client does not overload the network. Decreasing the value can impact NFS performance because it limits the number of asynchronous threads that can be spawned, and thus limits the number of simultaneous asynchronous I/O requests.

2.1.28 nfs3_max_transfer_size

The nfs3_max_transfer_size tunable specifies the maximum size of the data portion of NFSv3 READ, WRITE, READDIR, and READDIRPLUS requests. This parameter controls both the maximum size of the data that the server returns and the maximum size of the request the client generates.

The nfs3_max_transfer_size tunable works in conjunction with the nfs3_bsize, nfs3_max_transfer_size_cots, and nfs3_max_transfer_size_clts tunables when determining the maximum size of these I/O requests. For NFSv3 TCP traffic, the transfer size corresponds to the smallest value of nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_cots. For UDP traffic, the transfer size corresponds to the smallest value of nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_clts.

The nfs4_max_transfer_size tunable specifies the maximum size of the data portion of NFSv4 requests. For more information on the tunable, see nfs4_max_transfer_size.

Default: 1048576
Min: 4096
Max: 1048576

Note: If the tunable is set to a value greater than 1048576, an informational warning is issued at runtime. Any value greater than 1048576 is outside the tested limits. The value of the tunable must be a power of 2.

The nfs3_max_transfer_size tunable is dynamic. System reboot is not required to activate changes made to this tunable. However, the transfer size for a filesystem is set when the filesystem is mounted. To affect a particular filesystem, the system administrator must unmount and re-mount the filesystem after changing this tunable. Only NFSv3 mount points are affected by changing the value of this tunable.

For NFS/TCP Filesystems: To increase the transfer size of NFSv3 TCP requests, set the nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_cots tunables to the same value. Otherwise, the transfer size will default to the smallest value of these three tunables. For example, if 1 MB transfers are desired, all three tunables must be set to at least 1 MB. If two of the tunables are set to 1 MB and the third is set to 32 KB, the transfer size will be 32 KB since that is the smallest value of the three tunables. To decrease the size of NFSv3 TCP requests, decrease the value of the nfs3_max_transfer_size_cots tunable. For example, to decrease the size of I/O requests on all NFSv3 TCP filesystems to 8 KB, set the value of nfs3_max_transfer_size_cots to 8192.

For NFS/UDP Filesystems: To increase the size of NFSv3 UDP requests, set the nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_clts tunables to the same value. Otherwise, the transfer size will default to the smallest value of these three tunables. To decrease the size of NFSv3 UDP requests, decrease the value of the nfs3_max_transfer_size_clts tunable. For example, to decrease the size of I/O requests on all NFSv3 UDP filesystems to 8 KB, set the value of nfs3_max_transfer_size_clts to 8192.

Caution: HP strongly discourages increasing nfs3_max_transfer_size_clts above the default value of 32768 because this can cause NFS/UDP requests to fail. Also, if the NFS client is experiencing NFS READ failures and the system is reporting "NFS read failed for server <servername>: RPC: Can't decode result" errors, this is an indication that the nfs3_bsize, nfs3_max_transfer_size, nfs3_max_transfer_size_clts, or nfs3_max_transfer_size_cots tunable value was changed while NFS filesystems were mounted. The system administrator must unmount and remount the NFS filesystem to use the new value.
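When only one filesystem needs a different transfer size, the rsize and wsize mount options mentioned above are a narrower alternative to changing the system-wide tunables. The following is a hedged sketch only; the server name and mount point are placeholders, and mount_nfs(1M) is the authoritative reference for the option list.

    # Limit a single NFSv3 TCP mount to 32 KB reads and writes without
    # touching the system-wide tunables:
    mount -F nfs -o vers=3,proto=tcp,rsize=32768,wsize=32768 \
        nfsserver:/export/data /nfs/data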