Managing NFS and KRPC Kernel Configurations in HP-UX 11i v3
Published: September 2015
Edition: 2
Copyright 2009, 2015 Hewlett-Packard Development Company, L.P.

Legal Notices

Confidential computer software. Valid license required from Hewlett-Packard for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice. The only warranties for Hewlett-Packard products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett-Packard shall not be liable for technical or editorial errors or omissions contained herein. Links to third-party websites take you outside the HP website. HP has no control over and is not responsible for information outside HP.com. Oracle is a registered US trademark of Oracle Corporation, Redwood City, California. UNIX is a registered trademark of The Open Group.
Contents

- HP secure development lifecycle
- 1 Introduction
- 2 Managing Kernel Tunables using Kctune
  - 2.1 NFS Client Tunables
  - 2.2 NFS Server Tunables
  - 2.3 KRPC Client Tunables
  - 2.4 KRPC Server Tunables
  - 2.5 KLM Tunable
- Documentation feedback
- Appendix A. Obsolete tunables
HP secure development lifecycle

Starting with the HP-UX 11i v3 March 2013 update release, the HP secure development lifecycle provides the ability to authenticate HP-UX software. Software delivered through this release has been digitally signed using HP's private key. You can now verify the authenticity of the software before installing the products delivered through this release. To verify the software signatures in a signed depot, the following products must be installed on your system:
- the B or later version of SD (Software Distributor)
- the A or later version of HP-UX Whitelisting (WhiteListInf)

To verify the signatures, run: /usr/sbin/swsign -v -s <depot_path>. For more information, see the Software Distributor documentation.

NOTE: Ignite-UX software delivered with the HP-UX 11i v3 March 2014 release or later supports verification of the software signatures in a signed depot or media during cold installation. For more information, see the Ignite-UX documentation.
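As a sketch of the verification workflow above, signature checking can gate installation. The depot path and product name below are placeholders, and the swsign option order follows the usage shown in this section; check swsign(1M) on your system before relying on it.

```shell
# Hypothetical depot path and product name; substitute your own.
DEPOT=/var/tmp/oncplus.depot

# Verify the digital signatures in the signed depot
# (requires the SD and WhiteListInf versions listed above).
if /usr/sbin/swsign -v -s "$DEPOT"; then
    # Install only after verification succeeds.
    /usr/sbin/swinstall -s "$DEPOT" ONCplus
else
    echo "signature verification failed; not installing" >&2
fi
```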
1 Introduction

NFS is a network-based application that offers transparent file access across a network. The behavior and performance of NFS depend on numerous kernel tunables. Tunables are variables that control the behavior of the HP-UX kernel. To achieve optimal performance, the system administrator can modify the values of the tunables. This white paper discusses the configurable Network File System (NFS), Kernel Remote Procedure Call (KRPC), and Kernel Lock Manager (KLM) tunables in HP-UX 11i v3 and describes how to manage them. Appendix A lists the tunables that were previously provided on HP-UX 11i v2 and are obsolete on HP-UX 11i v3.

There are two kinds of tunables: dynamic and static. A dynamic tunable does not require a reboot to activate changes, whereas a static tunable does. This white paper specifies which of the listed tunables are dynamic and also specifies when the tunables may require modification.

Disclaimer: HP makes every effort to deliver your systems with a standard configuration that works well in most environments. However, some applications can benefit from selective and carefully planned changes to the default settings.
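Both kinds of tunables are examined and changed with kctune(1M). The following is a minimal sketch of day-to-day usage; the tunable names come from the tables in this paper, and the values shown are examples only, not recommendations.

```shell
# Display the current value and description of a tunable:
kctune nfs_async_timeout

# Change a dynamic tunable; the new value takes effect immediately,
# with no reboot required (example value only):
kctune nfs_async_timeout=30000

# Change a static tunable; kctune records the change, and it takes
# effect at the next reboot (example value only):
kctune nfs_nrnode=20000
```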
2 Managing Kernel Tunables using Kctune

2.1 NFS Client Tunables

The following table lists the NFS client tunables. The last column specifies the ONCplus version that first introduced each tunable.

| Kctune Tunable Name | Range | Default | Units | ONCplus Version |
|---|---|---|---|---|
| nfs_async_timeout | 0 to MAXINT | 6000 | Milliseconds | B.11.31_LR |
| nfs_disable_rddir_cache | 0 or 1 | 0 | Boolean | B.11.31_LR |
| nfs_enable_write_behind | 0 or 1 | 0 | Boolean | B.11.31_LR |
| nfs_nacache | 0 to MAXINT | 0 | Hash Queues | B.11.31_LR |
| nfs_nrnode | 0 to MAXINT | 0 | Entries | B.11.31_LR |
| nfs_write_error_interval | 0 to MAXINT | 5 | Seconds | B.11.31_LR |
| nfs_write_error_to_cons_only | 0 or 1 | 0 | Boolean | B.11.31_LR |
| nfs2_async_clusters | 1 to MAXINT | 1 | Requests | B.11.31_LR |
| nfs2_bsize | 8192 to MAXINT (must be a power of 2) | 8192 | Bytes | B.11.31_LR |
| nfs2_cots_timeo | 0 to MAXINT | 600 | Tenths of a second | B.11.31_LR |
| nfs2_do_symlink_cache | 0 or 1 | 1 | Boolean | B.11.31_LR |
| nfs2_dynamic | 0 or 1 | 1 | Boolean | B.11.31_LR |
| nfs2_lookup_neg_cache | 0 or 1 | 1 | Boolean | B.11.31_LR |
| nfs2_max_threads | 0 to nkthread/5 | 8 | Threads | B.11.31_LR |
| nfs2_nra | 0 to MAXINT | 4 | Requests | B.11.31_LR |
| nfs2_shrinkreaddir | 0 or 1 | 0 | Boolean | B.11.31_LR |
| nfs3_async_clusters | 0 to MAXINT | 1 | Requests | B.11.31_LR |
| nfs3_bsize | 4096 to MAXINT (must be a power of 2) | | Bytes | B.11.31_LR |
| nfs3_cots_timeo | 10 to MAXINT | 600 | Tenths of a second | B.11.31_LR |
| nfs3_do_symlink_cache | 0 or 1 | 1 | Boolean | B.11.31_LR |
| nfs3_dynamic | 0 or 1 | 0 | Boolean | B.11.31_LR |
| nfs3_enable_async_directio_read | 0 or 1 | 0 | Boolean | B.11.31_08 |
| nfs3_enable_async_directio_write | 0 or 1 | 0 | Boolean | B.11.31_08 |
| nfs3_jukebox_delay | 100 to MAXINT | 1000 | Seconds | B.11.31_LR |
| nfs3_lookup_neg_cache | 0 or 1 | 1 | Boolean | B.11.31_LR |
| nfs3_max_async_directio_requests | 8 to 64 | 8 | Requests | B.11.31_08 |
| nfs3_max_threads | 0 to nkthread/5 | 8 | Threads | B.11.31_LR |
| nfs3_max_transfer_size | 4096 to MAXINT | | Bytes | B.11.31_LR |
| nfs3_max_transfer_size_clts | 4096 to MAXINT | | Bytes | B.11.31_LR |
| nfs3_max_transfer_size_cots | 4096 to MAXINT | | Bytes | B.11.31_LR |
| nfs3_nra | 0 to MAXINT | 4 | Requests | B.11.31_LR |
| nfs3_pathconf_disable_cache | 0 or 1 | 0 | Boolean | B.11.31_LR |
| nfs4_async_clusters | 0 to MAXINT | 1 | Requests | B.11.31_LR |
| nfs4_bsize | 4096 to MAXINT (must be a power of 2) | | Bytes | B.11.31_LR |
| nfs4_cots_timeo | 10 to MAXINT | 600 | Tenths of a second | B.11.31_LR |
| nfs4_do_symlink_cache | 0 or 1 | 1 | Boolean | B.11.31_LR |
| nfs4_lookup_neg_cache | 0 or 1 | 1 | Boolean | B.11.31_LR |
| nfs4_max_threads | 0 to nkthread/5 | 8 | Threads | B.11.31_LR |
| nfs4_max_transfer_size | 4096 to MAXINT | | Bytes | B.11.31_LR |
| nfs4_max_transfer_size_cots | 4096 to MAXINT | | Bytes | B.11.31_LR |
| nfs4_nra | 0 to MAXINT | 4 | Requests | B.11.31_LR |
| nfs4_pathconf_disable_cache | 0 or 1 | 0 | Boolean | B.11.31_LR |

2.1.1 nfs_async_timeout

The nfs_async_timeout tunable controls how long threads executing asynchronous I/O requests sleep before exiting. If no new requests arrive before the timer expires, the thread wakes up and exits. If a request arrives, the thread wakes up to execute requests and then goes back to sleep.

Default: 6000 milliseconds (6 seconds)
Min: 0
Max: 360000 milliseconds (6 minutes)

Note: If the tunable is set to a value greater than 360000, an informational warning is issued at runtime. Any value greater than 360000 is outside the tested limits.
The nfs_async_timeout tunable is dynamic. System reboot is not required to activate changes made to this tunable. Changes made to the nfs_async_timeout tunable apply to all NFS mounted filesystems.

Modify this tunable only if you can accurately predict the rate of asynchronous I/O. To avoid the overhead of creating and deleting threads, increase the value of this tunable. To free up resources for other subsystems, decrease the value of this tunable. Setting the value of nfs_async_timeout to 0 causes threads to exit immediately when there are no requests to process. HP recommends that you do not set the value of this tunable to 0.

2.1.2 nfs_disable_rddir_cache

The nfs_disable_rddir_cache tunable controls the cache that holds responses from NFSv2 READDIR, NFSv3 READDIR, NFSv3 READDIRPLUS, and NFSv4 READDIR requests. When retrieving directory information, this cache avoids over-the-wire calls to the server.

Default: 0 (NFS directory cache enabled)
Min: 0
Max: 1 (NFS directory cache disabled)

The nfs_disable_rddir_cache tunable is dynamic. System reboot is not required to activate changes made to this tunable. Changes made to the value of this tunable apply to all NFS mounted filesystems. Directory caching cannot be enabled or disabled on a per-filesystem basis.

Modify the value of this tunable only if interoperability problems develop. These problems occur when a server does not update the modification time on a directory when a file or directory is created in or removed from it. For example, you might add a new directory and find that its modification time has not been updated in the directory listing. Or, you might delete a directory and find that the name of the removed directory still appears. To enable caching for all NFS mounted filesystems, set the value of this tunable to 0. To disable caching for all three versions of NFS mounted filesystems, set the value of this tunable to 1.
Disabling caching can result in additional over-the-wire requests from the NFS client. If you disable readdir caching, also consider disabling the following tunables:
- nfs2_lookup_neg_cache
- nfs3_lookup_neg_cache
- nfs4_lookup_neg_cache
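Assuming standard kctune(1M) usage, disabling the readdir cache together with the three negative-name caches might look like the following sketch. All four tunables are dynamic, so no reboot is required, and the values shown are the "disabled" settings described above.

```shell
# Disable the NFS directory (readdir) cache for all NFS versions:
kctune nfs_disable_rddir_cache=1

# Also disable the negative name caches, as recommended above:
kctune nfs2_lookup_neg_cache=0
kctune nfs3_lookup_neg_cache=0
kctune nfs4_lookup_neg_cache=0
```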
2.1.3 nfs_enable_write_behind

The nfs_enable_write_behind tunable controls the write-behind feature when writing to files over NFS. When the write-behind feature is enabled, over-the-wire NFS writes are scheduled by the writer/application thread. This can result in NFS write data being sent to the server more frequently, and not every NFS server handles the frequent arrival of writes well.

Default: 0 (NFS write-behind is disabled)
Min: 0
Max: 1 (NFS write-behind is enabled)

The nfs_enable_write_behind tunable is dynamic. System reboot is not required to activate a change made to this tunable.

Enable this tunable to enable write-behind behavior, where over-the-wire NFS writes are scheduled by the writer/application thread. With some NFS servers, enabling the write-behind feature results in improved write performance. However, the performance of many NFS server implementations depends heavily on data packets arriving in order, so write performance could suffer when the write-behind feature is enabled. HP recommends changing this tunable only if you can verify that your NFS server is not affected by the frequent arrival of writes. Some NFS servers benefit from write-behind, but it can degrade the performance of other servers.

2.1.4 nfs_nacache

The nfs_nacache tunable controls the size of the hash structures that manage the file access cache on the NFS client. The file access cache stores file access rights for users. By default, the algorithm assumes a single access cache entry per active file.

Default: 0
Min: 0
Max: MAXINT

Note: If the value of nfs_nacache is set to the default, the value displayed is 0; the actual value used is that of the nfs_nrnode tunable. If the tunable is set to a value greater than 40000, an informational warning is issued at boot time. Any value greater than 40000 is outside the tested limit.
The nfs_nacache tunable is static. System reboot is required to activate changes made to this tunable.

Increase the value of this tunable only in extreme cases where a large number of users access the same NFS file or directory simultaneously. Decreasing the value of this tunable below nfs_nrnode can result in long hash queues and slower performance. HP does not recommend decreasing the value of this tunable below the value of nfs_nrnode or ncsize.

2.1.5 nfs_nrnode

The nfs_nrnode tunable specifies the size of the rnode cache for NFS filesystems. The NFS client uses the rnode cache to store information about files on the client. Each cache entry contains a file handle that uniquely identifies a file on the NFS server, as well as pointers to various caches used by the NFS client to avoid network traffic. Each rnode has a one-to-one association with a vnode, which caches the file data.

Default: 0
Min: 0
Max: MAXINT

Note: If the nfs_nrnode tunable is set to the default, the value displayed is 0; the actual value used is that of the ncsize tunable. If the tunable is set to a value greater than 40000, an informational warning is issued at boot time. Any value greater than 40000 is outside the tested limit.

The nfs_nrnode tunable is static. System reboot is required to activate changes made to this tunable.

In most cases, modifying the nfs_nrnode tunable directly is not recommended. Instead, HP recommends tuning the ncsize tunable and allowing nfs_nrnode to default to the same size. If you can accurately predict the number of files your NFS client will access and you want to control the amount of system memory dedicated to the NFS rnode cache, you can increase or decrease the size of the nfs_nrnode tunable. For example, if your NFS client accesses only a few large files and you want to reclaim system memory resources used by the NFS rnode cache, you can specify an nfs_nrnode size smaller than ncsize.
For more information about the ncsize parameter, see the ncsize(5) manpage.

2.1.6 nfs_write_error_interval

The nfs_write_error_interval tunable controls the time, in seconds, between logging ENOSPC (no disk space) and EDQUOT (over disk quota) write errors seen by the NFS client.
Default: 5 seconds
Min: 0
Max: 360000 seconds (100 hours)

Note: If the tunable is set to a value greater than 360000 seconds, an informational warning is issued. Any value greater than 360000 seconds is outside the tested limit.

The nfs_write_error_interval tunable is dynamic. System reboot is not required to activate changes made to this tunable.

Modify the value of this tunable in response to the volume of disk space and quota error messages being logged by the client. To see the error messages less frequently, increase the value of this tunable. To see the error messages more frequently, decrease the value of this tunable.

2.1.7 nfs_write_error_to_cons_only

The nfs_write_error_to_cons_only tunable controls whether NFS write errors are logged to both the system console and syslog, or to the system console exclusively.

Default: 0 (NFS error messages are logged to both syslog and the system console)
Min: 0
Max: 1 (NFS error messages are logged to the system console only)

The nfs_write_error_to_cons_only tunable is dynamic. System reboot is not required to activate changes made to this tunable. If you find the /var filesystem filled with error messages logged by the syslog daemon on behalf of NFS, set the value of this tunable to 1.

2.1.8 nfs2_async_clusters

The nfs2_async_clusters tunable controls the mix of asynchronous requests generated by the NFSv2 client. There are four types of asynchronous requests:
- read-ahead
- putpage
- pageio
- readdir-ahead
The client attempts to service these different requests without favoring one type of operation over another. However, some NFSv2 servers can take advantage of clustered requests from NFSv2 clients. For instance, write gathering is a server function that depends on the NFSv2 client sending out multiple WRITE requests in a short time span. If requests are taken out of the queue individually, the client defeats this server functionality designed to enhance performance. The nfs2_async_clusters tunable controls the number of outgoing requests of each type before the client switches types.

The nfs3_async_clusters tunable controls the mix of asynchronous requests generated by NFSv3 clients. The nfs4_async_clusters tunable controls the mix of asynchronous requests generated by NFSv4 clients. For more information on these tunables, see:
- nfs3_async_clusters
- nfs4_async_clusters

Default: 1
Min: 1
Max: 10

Note: If the tunable is set to a value greater than 10 asynchronous requests, an informational warning is issued at runtime. Any value greater than 10 is outside the tested limits.

The nfs2_async_clusters tunable is dynamic. System reboot is not required to activate changes made to this tunable. Any change made to the value of the tunable is effective immediately. However, the cluster setting is set per filesystem at mount time. The system administrator must unmount and re-mount each filesystem after changing this tunable. Only NFSv2 mount points are affected by changing the value of the nfs2_async_clusters tunable.

If server functionality depends upon clusters of operations coming from the client, increase the value of this tunable. However, this can impact the operations in other queues, because they must wait until the current queue is empty or the cluster limit is reached.
Note: Setting the value of nfs2_async_clusters to 0 causes all queued requests of a particular type to be processed before moving to the next type.

2.1.9 nfs2_bsize

The nfs2_bsize tunable controls the logical block size used by NFSv2 clients. Block size represents the amount of data the client reads from or writes to the server.
The nfs3_bsize tunable controls the logical block size used by NFSv3 clients. The nfs4_bsize tunable controls the logical block size used by NFSv4 clients. For more information on these tunables, see:
- nfs3_bsize
- nfs4_bsize

Default: 8192
Min: 8192
Max: MAXINT

Note: If the tunable is set to a value greater than the tested limit, an informational warning is issued at runtime. The value of the tunable must be a power of 2.

The nfs2_bsize tunable is dynamic. System reboot is not required to activate changes made to this tunable. Any change made to the value of the tunable is effective immediately. However, the logical block size is set per filesystem at mount time. The system administrator must unmount and re-mount each filesystem after changing this tunable. Only NFSv2 mount points are affected by changing the value of the nfs2_bsize tunable.

The transfer size for NFSv2 is limited to 8192 bytes, so raising this value above 8192 provides no benefit. This tunable is a system-wide global tunable and thus affects every NFSv2 filesystem. To control the transfer sizes of specific NFSv2 filesystems, use the rsize and wsize mount options. Refer to the mount_nfs(1m) manpage for more information.

2.1.10 nfs2_cots_timeo

The nfs2_cots_timeo tunable controls the default RPC timeout for NFSv2 mounted filesystems using a connection-oriented transport such as TCP. The nfs3_cots_timeo tunable controls the default RPC timeout for NFSv3 mounted filesystems. The nfs4_cots_timeo tunable controls the default RPC timeout for NFSv4 mounted filesystems. For more information on these tunables, see:
- nfs3_cots_timeo
- nfs4_cots_timeo

Default: 600 tenths of a second (1 minute)
Min: 10 tenths of a second (1 second)
Max: 36000 tenths of a second (1 hour)
Note: If the tunable is set to a value less than 10 tenths of a second or greater than 36000 tenths of a second, an informational warning is issued at runtime. These values are outside the tested limits.

The nfs2_cots_timeo tunable is dynamic. System reboot is not required to activate changes made to this tunable. Any change made to the value of the tunable is effective immediately. However, the timeout duration is set per filesystem at mount time. The system administrator must unmount and remount each filesystem after changing this tunable. Only NFSv2 mount points are affected by changing the value of the nfs2_cots_timeo tunable.

If you are experiencing a large number of timeouts on connection-oriented NFSv2 filesystems, increase the value of this tunable. However, a large number of connection-oriented timeouts can be an indication of networking hardware or software problems.

2.1.11 nfs2_do_symlink_cache

The nfs2_do_symlink_cache tunable controls caching of the contents of symbolic links in NFSv2 mounted filesystems. If the server changes the contents of a symbolic link, and if either the time stamps are not updated or the granularity of the time stamp is too large, then the changes become visible to the client only after a long interval. The nfs3_do_symlink_cache tunable caches the contents of symbolic links in NFSv3 mounted filesystems. The nfs4_do_symlink_cache tunable caches the contents of symbolic links in NFSv4 mounted filesystems. For more information on these tunables, see:
- nfs3_do_symlink_cache
- nfs4_do_symlink_cache

Default: 1 (symbolic link cache is enabled)
Min: 0 (symbolic link cache is disabled)
Max: 1

The nfs2_do_symlink_cache tunable is dynamic. System reboot is not required to activate changes made to this tunable. Only NFSv2 mount points are affected by changing the value of the nfs2_do_symlink_cache tunable. Enable this tunable to cache the contents of symbolic links.
Because the client uses the cached version, changes made to the contents of a symbolic link file are not immediately visible to applications running on the client. To make such changes immediately visible to applications on the client, disable the tunable. Disabling the tunable can result in more over-the-wire requests from the client if filesystems are mounted with NFSv2 and contain symbolic links.
2.1.12 nfs2_dynamic

The nfs2_dynamic tunable controls the dynamic retransmission feature for NFSv2 mounted filesystems. The dynamic retransmission feature is designed to reduce NFS retransmissions by monitoring server response time and adjusting read and write transfer sizes on NFSv2 mounted filesystems that use connectionless transports such as UDP. The nfs3_dynamic tunable controls the dynamic retransmission feature for NFSv3 mounted filesystems. For more information, see nfs3_dynamic.

Default: 1 (dynamic retransmission is enabled)
Min: 0 (dynamic retransmission is disabled)
Max: 1

The nfs2_dynamic tunable is dynamic. System reboot is not required to activate changes made to this tunable. However, the dynamic retransmission feature is set per filesystem at mount time. The system administrator must unmount and re-mount each filesystem after changing this tunable. Only NFSv2 mount points are affected when you change the value of this tunable.

In congested networks, sending smaller NFS data packets can help if the network is dropping larger data packets. Enabling this tunable lets the client adjust the read and write transfer sizes for successful NFS I/O. If packets are not being dropped in the network, disabling this functionality results in increased throughput. However, if the server response is delayed or the network is overloaded, the number of timeouts can increase. HP recommends leaving this tunable enabled because it helps the system minimize NFS packet loss on congested networks.

2.1.13 nfs2_lookup_neg_cache

The nfs2_lookup_neg_cache tunable controls whether a negative name cache is used for NFSv2 mounted filesystems. The negative name cache records file names that were looked up but not found. This cache helps avoid over-the-wire lookups for files that are already known to be non-existent. The nfs3_lookup_neg_cache tunable controls whether a negative name cache is used for NFSv3 mounted filesystems.
The nfs4_lookup_neg_cache tunable controls whether a negative name cache is used for NFSv4 mounted filesystems. For more information on these tunables, see:
- nfs3_lookup_neg_cache
- nfs4_lookup_neg_cache

Default: 1 (negative name cache is used)
Min: 0 (negative name cache is not used)
Max: 1

The nfs2_lookup_neg_cache tunable is dynamic. System reboot is not required to activate changes made to this tunable. Only NFSv2 mount points are affected by changing the value of this tunable.
If filesystems are mounted read-only on the client, and applications running on the client need to immediately see filesystem changes made on the server, disable this tunable. If you disable this tunable, also consider disabling the nfs_disable_rddir_cache tunable. For more information, see nfs_disable_rddir_cache.

2.1.14 nfs2_max_threads

The nfs2_max_threads tunable controls the number of kernel threads that perform asynchronous I/O for NFSv2 filesystems. The operations executed asynchronously are read, readdir, and write. The nfs3_max_threads tunable controls the number of kernel threads that perform asynchronous I/O for NFSv3 filesystems. The nfs4_max_threads tunable controls the number of kernel threads that perform asynchronous I/O for NFSv4 filesystems. For more information on these tunables, see:
- nfs3_max_threads
- nfs4_max_threads

Default: 8
Min: 0
Max: 256

Note: If the tunable is set to a value greater than 256 threads, an informational warning is issued at runtime. Any value greater than 256 is outside the tested limits.

The nfs2_max_threads tunable is dynamic. System reboot is not required to activate changes made to this tunable. However, the number of threads is set per filesystem at mount time. The system administrator must unmount and re-mount each filesystem after changing this tunable. Only NFSv2 mount points are affected by changing the value of this tunable.

Before modifying the value of this tunable, examine the available network bandwidth. If the network has high available bandwidth and the client and server have sufficient CPU and memory resources, increase the value of this tunable to make full use of the available network bandwidth as well as the client and server resources. However, the total number of asynchronous threads for NFSv2 cannot exceed 20% of nkthread. NFS mounts fail if the mount command cannot guarantee the ability to create the maximum number of threads for that mount point.
If the network has low available bandwidth, decrease the value of this tunable to ensure that the NFS client does not overload the network. Decreasing the value can impact NFS performance, because it limits the number of asynchronous threads that can be spawned, and thus the number of simultaneous asynchronous I/O requests.

2.1.15 nfs2_nra

The nfs2_nra tunable controls the number of read-ahead operations queued by NFSv2 clients when sequential access to a file is discovered. Read-ahead operations increase concurrency and read throughput. The nfs3_nra tunable controls the number of read-ahead operations queued by NFSv3 clients. The nfs4_nra tunable controls the number of read-ahead operations queued by NFSv4 clients. For more information on these tunables, see:
- nfs3_nra
- nfs4_nra

Default: 4
Min: 0
Max: 16

Note: If the tunable is set to a value greater than 16, an informational warning is issued at runtime. Any value greater than 16 is outside the tested limits.

The nfs2_nra tunable is dynamic. System reboot is not required to activate changes made to this tunable. Only NFSv2 mount points are affected by changing the value of this tunable.

If the network has high available bandwidth and the client and server have sufficient CPU and memory resources, increase the value of this tunable to make full use of the available network bandwidth as well as the client and server resources. If the network has low available bandwidth, decrease the value of this tunable to ensure that the NFS client does not overload the network.

2.1.16 nfs2_shrinkreaddir

The nfs2_shrinkreaddir tunable is a workaround for a defect that causes older NFS servers to incorrectly handle NFSv2 READDIR requests with more than 1024 bytes of directory information.

Default: 0 (tunable is disabled and the 1024-byte limit is not enforced)
Min: 0
Max: 1 (tunable is enabled and the 1024-byte limit is enforced)

The nfs2_shrinkreaddir tunable is dynamic. System reboot is not required to activate changes made to this tunable. Only NFSv2 mount points are affected by changing the value of this tunable. Modify this tunable only if you know or suspect that you are dealing with an older NFSv2 server that cannot handle READDIR requests larger than 1 KB.
Enable this tunable to ensure the client does not generate a READDIR request for more than 1024 bytes of directory information. Disable the tunable to allow the client to issue READDIR requests containing up to 8192 bytes of data.

2.1.17 nfs3_async_clusters

The nfs3_async_clusters tunable controls the mix of asynchronous requests generated by the NFSv3 client. There are four types of asynchronous requests:
- read-ahead
- putpage
- pageio
- readdir-ahead

The client attempts to service these different requests without favoring one type of operation over another. However, some NFSv3 servers can take advantage of clustered requests from NFSv3 clients. For instance, write gathering is a server function that depends on the NFSv3 client sending out multiple WRITE requests in a short time span. If requests are taken out of the queue individually, the client defeats this server functionality designed to enhance performance. The nfs3_async_clusters tunable controls the number of outgoing requests of each type before the client switches types.

The nfs2_async_clusters tunable controls the mix of asynchronous requests generated by NFSv2 clients. The nfs4_async_clusters tunable controls the mix of asynchronous requests generated by NFSv4 clients. For more information on these tunables, see:
- nfs2_async_clusters
- nfs4_async_clusters

Default: 1
Min: 0
Max: 10
Note: If the tunable is set to a value greater than 10 asynchronous requests, an informational warning is issued at runtime. Any value greater than 10 is outside the tested limits.

The nfs3_async_clusters tunable is dynamic. System reboot is not required to activate changes made to this tunable. Any change made to the value of the tunable is effective immediately. However, the cluster setting is set per filesystem at mount time. The system administrator must unmount and re-mount each filesystem after changing this tunable. Only NFSv3 mount points are affected by changing the value of the nfs3_async_clusters tunable.

If server functionality depends upon clusters of operations coming from the client, increase the value of this tunable. However, this can impact the operations in other queues if they have to wait until the current queue is empty or the cluster limit is reached.

Note: Setting the value of nfs3_async_clusters to 0 causes all of the queued requests of a particular type to be processed before moving to the next type.

2.1.18 nfs3_bsize

The nfs3_bsize tunable controls the logical block size used by NFSv3 clients. Block size represents the amount of data the client reads from or writes to the server. The nfs3_bsize tunable works in conjunction with the nfs3_max_transfer_size, nfs3_max_transfer_size_cots, and nfs3_max_transfer_size_clts tunables when determining the maximum size of these I/O requests. For NFSv3 TCP traffic, the transfer size corresponds to the smallest value of nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_cots. For NFSv3 UDP traffic, the transfer size corresponds to the smallest value of nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_clts.

The nfs2_bsize tunable controls the logical block size used by NFSv2 clients. The nfs4_bsize tunable controls the logical block size used by NFSv4 clients. For more information on these tunables, see:
- nfs2_bsize
- nfs4_bsize

Default:
Min: 4096
Max:
Note: If the tunable is set to a value greater than the tested limit, an informational warning is issued at runtime. The value of the tunable must be a power of 2.

The nfs3_bsize tunable is dynamic. System reboot is not required to activate changes made to this tunable. Any change made to the value of the tunable is effective immediately. However, the logical block size is set per filesystem at mount time. The system administrator must unmount and re-mount each filesystem after changing this tunable. Only NFSv3 mount points are affected by changing the value of the nfs3_bsize tunable.

For NFS/TCP filesystems: To increase the transfer size of NFSv3 TCP requests, set the nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_cots tunables to the same value. Otherwise, the transfer size defaults to the smallest of the three. For example, if 1 MB transfers are desired, all three tunables must be set to at least 1 MB. If two of the tunables are set to 1 MB and the third is set to 32 KB, the transfer size will be 32 KB, since that is the smallest value of the three. To decrease the size of NFSv3 TCP requests, decrease the value of the nfs3_max_transfer_size_cots tunable. For example, to decrease the size of I/O requests on all NFSv3 TCP filesystems to 8 KB, set the value of nfs3_max_transfer_size_cots to 8192.

For NFS/UDP filesystems: To increase the size of NFSv3 UDP requests, set the nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_clts tunables to the same value. Otherwise, the transfer size defaults to the smallest of the three. To decrease the size of NFSv3 UDP requests, decrease the value of the nfs3_max_transfer_size_clts tunable. For example, to decrease the size of I/O requests on all NFSv3 UDP filesystems to 8 KB, set the value of nfs3_max_transfer_size_clts to 8192.
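The smallest-of-three rule can be illustrated with ordinary shell arithmetic. The values below are hypothetical examples, not HP-UX defaults: two tunables at 1 MB and one at 32 KB yield an effective 32 KB transfer size.

```shell
# Hypothetical tunable values (bytes); not HP-UX defaults.
nfs3_bsize=1048576
nfs3_max_transfer_size=1048576
nfs3_max_transfer_size_cots=32768

# The effective NFSv3/TCP transfer size is the smallest of the three.
min=$nfs3_bsize
[ "$nfs3_max_transfer_size" -lt "$min" ] && min=$nfs3_max_transfer_size
[ "$nfs3_max_transfer_size_cots" -lt "$min" ] && min=$nfs3_max_transfer_size_cots

echo "effective NFSv3/TCP transfer size: $min bytes"
```

With these example values, the script reports 32768 bytes, matching the 32 KB example in the text.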
Caution: HP strongly discourages increasing nfs3_max_transfer_size_clts above its default value, as this can cause NFS/UDP requests to fail. Also, if the NFS client is experiencing NFS READ failures and the system is reporting "NFS read failed for server <servername>: RPC: Can't decode result" errors, this is an indication that the nfs3_bsize, nfs3_max_transfer_size, nfs3_max_transfer_size_clts, or nfs3_max_transfer_size_cots tunable value was changed while NFS filesystems were mounted. The system administrator must unmount and remount the NFS filesystem to use the new value.
Note: The nfs3_bsize tunable affects every NFSv3 filesystem. To control the transfer sizes of specific NFS filesystems, use the rsize and wsize mount options. Refer to the mount_nfs(1M) man page for more information.

nfs3_cots_timeo

The nfs3_cots_timeo tunable controls the default RPC timeout for NFSv3 mounted filesystems using a connection-oriented transport such as TCP. The nfs2_cots_timeo tunable controls the default RPC timeout for NFSv2 mounted filesystems. The nfs4_cots_timeo tunable controls the default RPC timeout for NFSv4 mounted filesystems. For more information on these tunables, see:
nfs2_cots_timeo
nfs4_cots_timeo

Default: 600 tenths of a second (1 minute) Min: 10 tenths of a second (1 second) Max: 36000 tenths of a second (1 hour)

Note: If the tunable is set to a value less than 10 tenths of a second or greater than 36000 tenths of a second, an informational warning is issued at runtime. These values are outside the tested limits.

The nfs3_cots_timeo tunable is dynamic. A system reboot is not required to activate changes made to this tunable; any change to its value is effective immediately. However, the timeout duration is set per filesystem at mount time, so the system administrator must unmount and remount the filesystem after changing this tunable. Only NFSv3 mount points are affected by changing the value of the nfs3_cots_timeo tunable.

If you are experiencing a large number of timeouts on connection-oriented NFSv3 filesystems, increase the value of this tunable. However, a large number of connection-oriented timeouts can indicate networking hardware or software problems.

nfs3_do_symlink_cache

The nfs3_do_symlink_cache tunable controls whether the contents of symbolic links are cached in NFSv3 mounted filesystems. If the server changes the contents of a symbolic link, and either the time stamps are not updated or the granularity of the time stamp is too large, the changes become visible to the client only after a long time interval.
The nfs2_do_symlink_cache tunable controls symbolic link caching in NFSv2 mounted filesystems. The nfs4_do_symlink_cache tunable controls symbolic link caching in NFSv4 mounted filesystems. For more information on these tunables, see:
nfs2_do_symlink_cache
nfs4_do_symlink_cache

Default: 1 (Symbolic link cache is enabled) Min: 0 (Symbolic link cache is disabled) Max: 1

The nfs3_do_symlink_cache tunable is dynamic. A system reboot is not required to activate changes made to this tunable. Only NFSv3 mount points are affected by changing the value of the nfs3_do_symlink_cache tunable.

Enable this tunable to cache the contents of symbolic links. Because the client uses the cached version, changes made to the contents of the symbolic link file are not immediately visible to applications running on the client. To make changes to the symbolic link file immediately visible to applications on the client, disable this tunable. Disabling the tunable can result in more over-the-wire requests from the client if filesystems are mounted with NFSv3 and contain symbolic links.

nfs3_dynamic

The nfs3_dynamic tunable controls the dynamic retransmission feature for NFSv3 mounted filesystems. This feature is designed to reduce NFS retransmissions by monitoring server response time and adjusting read and write transfer sizes on NFSv3 mounted filesystems using connectionless transports such as UDP. The nfs2_dynamic tunable controls the dynamic retransmission feature for NFSv2 mounted filesystems. For more information on this tunable, see nfs2_dynamic.

Default: 0 (Dynamic retransmission is disabled) Min: 0 Max: 1 (Dynamic retransmission is enabled)

The nfs3_dynamic tunable is dynamic. A system reboot is not required to activate changes made to this tunable. However, the dynamic retransmission feature is set per filesystem at mount time, so the system administrator must unmount and re-mount each filesystem after changing this tunable.
Only NFSv3 mount points are affected when you change the value of this tunable. In congested networks, sending smaller NFS data packets can help if the network is dropping larger packets; enabling this tunable lets the client adjust the read and write transfer sizes so NFS I/O can succeed. If packets are not being dropped in the network, disabling this functionality results in increased throughput. However, if the server response is delayed or the network is overloaded, the number of timeouts can increase.
HP recommends leaving this tunable enabled because it helps the system minimize NFS packet loss on congested networks.

nfs3_enable_async_directio_read

The nfs3_enable_async_directio_read tunable controls whether NFS clients perform direct I/O read operations synchronously, where only a single read operation is performed at a time, or asynchronously, where the client can issue multiple read operations in parallel. Enabling this feature can improve read performance on NFSv3 filesystems mounted with the forcedirectio option.

forcedirectio is an NFS mount option that typically benefits large sequential data transfers and database workloads. Most database applications, such as Oracle, prefer to manage their own data cache resources and benefit from bypassing any system file cache (such as the Unified File Cache on HP-UX 11i v3). When an NFS client mounts a filesystem with the forcedirectio option, data is transferred directly between the client and server without buffering on the client.

By default, direct I/O data transfers are synchronous: the client sends a single read request to the server and waits for the server to respond with the requested data before initiating a new request. Enabling the nfs3_enable_async_directio_read tunable allows the client to send several I/O requests in parallel before waiting for the server's response. The number of parallel direct I/O requests is configurable via the nfs3_max_async_directio_requests tunable. This can greatly improve read performance for applications that use direct I/O. Currently, this feature is supported only for TCP traffic.

Default: 0 (Tunable is disabled) Min: 0 Max: 1 (Tunable is enabled)

The nfs3_enable_async_directio_read tunable is dynamic. A system reboot is not required to activate a change made to this tunable. Only NFSv3 TCP mount points that are mounted with the forcedirectio option are affected by changing the value of this tunable.
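A rough back-of-the-envelope model shows why pipelining direct I/O requests helps. This sketch assumes a fixed per-request round-trip time and ignores server-side queuing; the numbers are illustrative only:

```shell
#!/bin/sh
# Illustrative model only: compare synchronous direct I/O (one request
# outstanding at a time) with asynchronous direct I/O (several requests
# outstanding in parallel). Assumes every request costs one fixed round
# trip; real behavior depends on the server and network.
requests=64          # read requests the application needs
rtt_ms=5             # assumed round-trip time per request, in ms
parallel=8           # e.g. nfs3_max_async_directio_requests=8

sync_ms=$(( requests * rtt_ms ))
# Ceiling division: requests proceed in batches of $parallel.
batches=$(( (requests + parallel - 1) / parallel ))
async_ms=$(( batches * rtt_ms ))

echo "synchronous:  ${sync_ms} ms"    # 64 batches of 1: 320 ms
echo "asynchronous: ${async_ms} ms"   # 8 batches of 8:  40 ms
```

Under these assumptions, eight-way pipelining cuts elapsed time roughly eightfold, which is the effect the tunable is designed to exploit.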
If an application experiences poor read performance on an NFS filesystem mounted with the forcedirectio option, enabling the nfs3_enable_async_directio_read tunable can improve read performance.

nfs3_enable_async_directio_write

The nfs3_enable_async_directio_write tunable controls whether NFS clients perform direct I/O write operations synchronously, where only a single write operation is performed at a time, or asynchronously, where the client can issue multiple write operations in parallel. Enabling this feature can improve write performance on NFSv3 filesystems mounted with the forcedirectio option.

forcedirectio is an NFS mount option that typically benefits large sequential data transfers and database workloads. Most database applications, such as Oracle, prefer to manage their own data cache resources and benefit from bypassing any system file cache (such as the Unified File Cache on HP-UX 11i v3). When an NFS client mounts a filesystem with the forcedirectio option, data is transferred directly between the client and server without buffering on the client. By default, direct I/O data transfers are synchronous: the client sends a single write request
to the server and waits for the server's response before initiating a new request. Enabling the nfs3_enable_async_directio_write tunable allows the client to send several I/O requests in parallel before waiting for the server's response. The number of parallel direct I/O requests is configurable via the nfs3_max_async_directio_requests tunable. This can greatly improve write performance for applications that use direct I/O. Currently, this feature is supported only for TCP traffic.

Default: 0 (Tunable is disabled) Min: 0 Max: 1 (Tunable is enabled)

The nfs3_enable_async_directio_write tunable is dynamic. A system reboot is not required to activate a change made to this tunable. Only NFSv3 TCP mount points that are mounted with the forcedirectio option are affected by changing the value of this tunable. If an application experiences poor write performance on an NFS filesystem mounted with the forcedirectio option, enabling the nfs3_enable_async_directio_write tunable can improve write performance.

nfs3_jukebox_delay

The nfs3_jukebox_delay tunable specifies the time interval the NFS client must wait after receiving the NFS3ERR_JUKEBOX error before retransmitting the request to the server. If an NFS client requests a file on the server, and the file is unavailable because it resides on slow media or has been migrated to an HSM storage device, the server generates the NFS3ERR_JUKEBOX error. This error indicates that the file cannot be accessed for a considerable amount of time; the retransmission of the request is delayed by the interval specified by this tunable.

Default: 1000 (10 seconds) Min: 100 (1 second) Max: 60000 (600 seconds)

Note: If the tunable is set to a value less than 100 or greater than 60000, an informational warning is issued at runtime. These values are outside the tested limits.

The nfs3_jukebox_delay tunable is dynamic. A system reboot is not required to activate changes made to this tunable.
Only NFSv3 mount points are affected when you change the value of this tunable.
If it takes a considerable amount of time for files to migrate from your HSM storage devices, increase the value of this tunable. However, increasing the value can prevent a file from becoming visible immediately when it becomes available. If files are migrated quickly from your HSM storage devices, decrease the value of this tunable so that a file can be viewed as soon as it becomes available. However, if you set the tunable too low, the client can send retransmissions before the server is able to retrieve the files from the HSM storage devices.

nfs3_lookup_neg_cache

The nfs3_lookup_neg_cache tunable controls whether a negative name cache is used for NFSv3 mounted filesystems. The negative name cache records file names that were looked up but not found. This cache helps avoid over-the-wire lookups for files that are already known to be non-existent.

The nfs2_lookup_neg_cache tunable controls whether a negative name cache is used for NFSv2 mounted filesystems. The nfs4_lookup_neg_cache tunable controls whether a negative name cache is used for NFSv4 mounted filesystems. For more information on these tunables, see:
nfs2_lookup_neg_cache
nfs4_lookup_neg_cache

Default: 1 (Negative name cache will be used) Min: 0 (Negative name cache will not be used) Max: 1

The nfs3_lookup_neg_cache tunable is dynamic. A system reboot is not required to activate changes made to this tunable. Only NFSv3 mount points are affected by changing the value of this tunable.

If filesystems are mounted read-only on the client, and applications running on the client need to immediately see filesystem changes on the server, disable this tunable. If you disable this tunable, also consider disabling the nfs_disable_rddir_cache tunable.
For more information, see nfs_disable_rddir_cache.

nfs3_max_async_directio_requests

The nfs3_max_async_directio_requests tunable specifies the maximum number of parallel read or write requests that NFSv3 direct I/O can send on behalf of an application. This tunable affects only processes performing I/O operations on forcedirectio NFS mount points.

Default: 8 Min: 4 Max: 64
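The maximum amount of direct I/O data a mount point can have outstanding is simply the parallel request count times the per-request transfer size. A quick sketch, using example values rather than recommendations:

```shell
#!/bin/sh
# Sketch: maximum direct I/O data in flight for one mount point is the
# number of parallel requests (nfs3_max_async_directio_requests) times
# the per-request transfer size (the rsize/wsize mount option).
# The values below are examples only.
nfs3_max_async_directio_requests=8
wsize=32768     # per-request transfer size, in bytes

in_flight=$(( nfs3_max_async_directio_requests * wsize ))
echo "max data in flight: ${in_flight} bytes"   # 8 * 32768 = 262144
```

Doubling either factor doubles the client's potential outstanding data, which also increases the load the server must absorb.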
The nfs3_max_async_directio_requests tunable is dynamic. A system reboot is not required to activate changes made to this tunable. Only NFSv3 TCP mount points that are mounted with the forcedirectio option are affected by changing the value of this tunable. This tunable is effective only if nfs3_enable_async_directio_read, nfs3_enable_async_directio_write, or both are enabled.

The nfs3_max_async_directio_requests tunable works in conjunction with the nfs3_max_transfer_size tunable and the rsize/wsize mount options when determining the amount of data in flight for a mount point. For example, if nfs3_max_async_directio_requests is set to 8 and rsize/wsize is set to 32768, asynchronous direct I/O can send or receive 8*32768 = 262144 bytes in parallel.

Note: If, after enabling the nfs3_enable_async_directio_read or nfs3_enable_async_directio_write parameters, the NFS client frequently experiences NFS READ or WRITE failures and the system reports RPCTIMEOUT errors, this is an indication that nfs3_max_async_directio_requests might be set too high. Setting it to a lower value can resolve the problem.

nfs3_max_threads

The nfs3_max_threads tunable controls the number of kernel threads that perform asynchronous I/O for NFSv3 filesystems. The operations executed asynchronously are read, readdir, and write. The nfs2_max_threads tunable controls the number of kernel threads that perform asynchronous I/O for NFSv2 filesystems. The nfs4_max_threads tunable controls the number of kernel threads that perform asynchronous I/O for NFSv4 filesystems. For more information on these tunables, see:
nfs2_max_threads
nfs4_max_threads

Default: 8 Min: 0 Max: 256
Note: If the tunable is set to a value greater than 256 threads, an informational warning is issued at runtime. Any value greater than 256 is outside the tested limits.

The nfs3_max_threads tunable is dynamic. A system reboot is not required to activate changes made to this tunable. However, the number of threads is set per filesystem at mount time, so the system administrator must unmount and re-mount each filesystem after changing this tunable. Only NFSv3 mount points are affected by changing the value of this tunable.

Before modifying the value of this tunable, examine the available network bandwidth. If the network has high available bandwidth and the client and server have sufficient CPU and memory resources, increase the value of this tunable to make effective use of the network as well as the client and server resources. However, the total number of asynchronous threads for NFSv3 cannot exceed 20% of the available nkthreads; NFS mounts fail if the mount command cannot guarantee the ability to create the maximum number of threads for that mount point. If the network has low available bandwidth, decrease the value of this tunable to ensure that the NFS client does not overload the network. Decreasing the value can impact NFS performance because it limits the number of asynchronous threads that can be spawned, and thus the number of simultaneous asynchronous I/O requests.

nfs3_max_transfer_size

The nfs3_max_transfer_size tunable specifies the maximum size of the data portion of NFSv3 READ, WRITE, READDIR, and READDIRPLUS requests. This parameter controls both the maximum size of the data that the server returns and the maximum size of the request the client generates. The nfs3_max_transfer_size tunable works in conjunction with the nfs3_bsize, nfs3_max_transfer_size_cots, and nfs3_max_transfer_size_clts tunables when determining the maximum size of these I/O requests.
For NFSv3 TCP traffic, the transfer size corresponds to the smallest value of nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_cots. For NFSv3 UDP traffic, the transfer size corresponds to the smallest value of nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_clts.

The nfs4_max_transfer_size tunable specifies the maximum size of the data portion of NFSv4 requests. For more information on this tunable, see nfs4_max_transfer_size.

Default: Min: 4096 Max:
Note: If the tunable is set to a value greater than , an informational warning is issued at runtime. Any value greater than is outside the tested limits. The value of the tunable must be a power of 2.

The nfs3_max_transfer_size tunable is dynamic. A system reboot is not required to activate changes made to this tunable. However, the transfer size for a filesystem is set when the filesystem is mounted, so to affect a particular filesystem, the system administrator must unmount and re-mount the filesystem after changing this tunable. Only NFSv3 mount points are affected by changing the value of this tunable.

For NFS/TCP filesystems: To increase the transfer size of NFSv3 TCP requests, set the nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_cots tunables to the same value. Otherwise, the transfer size defaults to the smallest value of these three tunables. For example, if 1 MB transfers are desired, all three tunables must be set to at least 1 MB. If two of the tunables are set to 1 MB and the third is set to 32 KB, the transfer size will be 32 KB because that is the smallest value of the three. To decrease the size of NFSv3 TCP requests, decrease the value of the nfs3_max_transfer_size_cots tunable. For example, to decrease the size of I/O requests on all NFSv3 TCP filesystems to 8 KB, set the value of nfs3_max_transfer_size_cots to 8192.

For NFS/UDP filesystems: To increase the size of NFSv3 UDP requests, set the nfs3_bsize, nfs3_max_transfer_size, and nfs3_max_transfer_size_clts tunables to the same value. Otherwise, the transfer size defaults to the smallest value of these three tunables. To decrease the size of NFSv3 UDP requests, decrease the value of the nfs3_max_transfer_size_clts tunable.
For example, to decrease the size of I/O requests on all NFSv3 UDP filesystems to 8 KB, set the value of nfs3_max_transfer_size_clts to 8192.

Caution: HP strongly discourages increasing nfs3_max_transfer_size_clts above its default value, as this can cause NFS/UDP requests to fail. Also, if the NFS client is experiencing NFS READ failures and the system is reporting "NFS read failed for server <servername>: RPC: Can't decode result" errors, this is an indication that the nfs3_bsize, nfs3_max_transfer_size, nfs3_max_transfer_size_clts, or nfs3_max_transfer_size_cots tunable value was changed while NFS filesystems were mounted. The system administrator must unmount and remount the NFS filesystem to use the new value.
More informationHP 3PAR OS MU3 Patch 18 Release Notes
HP 3PAR OS 3.2.1 MU3 Patch 18 Release Notes This release notes document is for Patch 18 and intended for HP 3PAR Operating System Software 3.2.1.292 (MU3). HP Part Number: QL226-98326 Published: August
More informationIDE Connector Customizer Readme
IDE Connector Customizer Readme Software version: 1.0 Publication date: November 2010 This file provides information about IDE Connector Customizer 1.0. Prerequisites for IDE Connector Customizer The Installation
More informationHPE Security ArcSight Connectors
HPE Security ArcSight Connectors SmartConnector for Windows Event Log Unified: Microsoft Network Policy Server Supplemental Configuration Guide March 29, 2013 Supplemental Configuration Guide SmartConnector
More informationHPE ALM Client MSI Generator
HPE ALM Client MSI Generator Software Version: 12.55 User Guide Document Release Date: August 2017 Software Release Date: August 2017 HPE ALM Client MSI Generator Legal Notices Warranty The only warranties
More informationHPE 3PAR File Persona on HPE 3PAR StoreServ Storage with Veritas Enterprise Vault
HPE 3PAR File Persona on HPE 3PAR StoreServ Storage with Veritas Enterprise Vault Solution overview and best practices for data preservation with Veritas Enterprise Vault Technical white paper Technical
More informationHP IDOL Site Admin. Software Version: Installation Guide
HP IDOL Site Admin Software Version: 10.9 Installation Guide Document Release Date: March 2015 Software Release Date: March 2015 Legal Notices Warranty The only warranties for HP products and services
More informationHP Network Node Manager i Software Step-by-Step Guide to Custom Poller
HP Network Node Manager i Software Step-by-Step Guide to Custom Poller NNMi 9.1x Patch 2 This document includes two examples. The first example illustrates how to use Custom Poller to monitor disk space.
More informationHP 3PARInfo 1.4 User Guide
HP 3PARInfo 1.4 User Guide Abstract This guide provides information about installing and using HP 3PARInfo. It is intended for system and storage administrators who monitor and direct system configurations
More informationConfiguring RAID with HP Z Turbo Drives
Technical white paper Configuring RAID with HP Z Turbo Drives HP Workstations This document describes how to set up RAID on your HP Z Workstation, and the advantages of using a RAID configuration with
More informationSoftware Package Builder 7.0 User's Guide
Software Package Builder 7.0 User's Guide HP-UX 11i v1, HP-UX 11i v2, and HP-UX 11i v3 HP Part Number: 5992-5179 Published: March 2010 Edition: Edition 7 Copyright 2002-2010 Hewlett-Packard Development
More informationHPE Network Node Manager i Software
HPE Network Node Manager i Software Managing Traps in NNMi NNMi Version 10.30 White Paper Contents Introduction... 3 About SNMP Traps and NNMi... 3 The Flow of SNMP Traps on the NNMi Management Server...
More informationIEther-00 (iether) B Ethernet Driver Release Notes
IEther-00 (iether) B.11.31.1503 Ethernet Driver Release Notes HP-UX 11i v3 Abstract This document contains specific information that is intended for users of this HP product. HP Part Number: 5900-4023
More informationHP Routing Switch Series
HP 12500 Routing Switch Series EVI Configuration Guide Part number: 5998-3419 Software version: 12500-CMW710-R7128 Document version: 6W710-20121130 Legal and notice information Copyright 2012 Hewlett-Packard
More informationHP Auto Port Aggregation (APA) Release Notes
HP Auto Port Aggregation (APA) Release Notes HP-UX 11i v3 HP Part Number: 5900-3026 Published: March 2013 Edition: 2 Copyright 2013 Hewlett-Packard Development Company L.P. Confidential computer software.
More informationHP Operations Orchestration
HP Operations Orchestration For the Linux or Windows operating systems Software Version: 9.02 Document Release Date: October 2011 Software Release Date: October 2011 Legal Notices Warranty The only warranties
More informationHPE WBEM Providers for OpenVMS Integrity servers Release Notes Version 2.2-5
HPE WBEM Providers for OpenVMS Integrity servers Release Notes Version 2.2-5 January 2016 This release note describes the enhancement, known restrictions, and errors found in the WBEM software and documentation,
More informationHP Storage Mirroring Application Manager 4.1 for Exchange white paper
HP Storage Mirroring Application Manager 4.1 for Exchange white paper Introduction... 2 Product description... 2 Features... 2 Server auto-discovery... 2 (NEW) Cluster configuration support... 2 Integrated
More informationHP LeftHand P4500 and P GbE to 10GbE migration instructions
HP LeftHand P4500 and P4300 1GbE to 10GbE migration instructions Part number: AT022-96003 edition: August 2009 Legal and notice information Copyright 2009 Hewlett-Packard Development Company, L.P. Confidential
More informationHP Service Manager. Process Designer Tailoring Best Practices Guide (Codeless Mode)
HP Service Manager Software Version: 9.41 For the supported Windows and UNIX operating systems Process Designer Tailoring Best Practices Guide (Codeless Mode) Document Release Date: September 2015 Software
More informationNetwork Time Protocol (NTP) Release Notes
Network Time Protocol (NTP) Release Notes Version 4.2.8 for HP-UX 11i v3 Abstract This document describes about new features and defect fixes for Network Time Protocol (NTP) version 4.2.8. Part Number:
More informationHP UFT Connection Agent
HP UFT Connection Agent Software Version: For UFT 12.53 User Guide Document Release Date: June 2016 Software Release Date: June 2016 Legal Notices Warranty The only warranties for Hewlett Packard Enterprise
More informationStatus of the Linux NFS client
Status of the Linux NFS client Introduction - aims of the Linux NFS client General description of the current status NFS meets the Linux VFS Peculiarities of the Linux VFS vs. requirements of NFS Linux
More informationHPE 3PAR OS MU3 Patch 23 Release Notes
HPE 3PAR OS 321 MU3 Patch 23 Release tes This release notes document is for Patch 23 and intended for HPE 3PAR Operating System Software 321292 (MU3)+Patch 18 Part Number: QL226-98364 Published: December
More informationIBM MQ Appliance HA and DR Performance Report Version July 2016
IBM MQ Appliance HA and DR Performance Report Version 2. - July 216 Sam Massey IBM MQ Performance IBM UK Laboratories Hursley Park Winchester Hampshire 1 Notices Please take Note! Before using this report,
More informationVeeam Cloud Connect. Version 8.0. Administrator Guide
Veeam Cloud Connect Version 8.0 Administrator Guide June, 2015 2015 Veeam Software. All rights reserved. All trademarks are the property of their respective owners. No part of this publication may be reproduced,
More informationIDOL Site Admin. Software Version: User Guide
IDOL Site Admin Software Version: 11.5 User Guide Document Release Date: October 2017 Software Release Date: October 2017 Legal notices Warranty The only warranties for Hewlett Packard Enterprise Development
More informationOMi Management Pack for Oracle Database. Software Version: Operations Manager i for Linux and Windows operating systems.
OMi Management Pack for Oracle Database Software Version: 1.10 Operations Manager i for Linux and Windows operating systems User Guide Document Release Date: June 2017 Software Release Date: February 2014
More informationHPE VMware ESXi and vsphere 5.x, 6.x and Updates Getting Started Guide
HPE VMware ESXi and vsphere 5.x, 6.x and Updates Getting Started Guide Abstract This guide is intended to provide setup information for HPE VMware ESXi and vsphere. Part Number: 818330-003 Published: April
More informationHP 3PAR OS MU2 Patch 11
HP 3PAR OS 321 MU2 Patch 11 Release Notes This release notes document is for Patch 11 and intended for HP 3PAR Operating System Software 321200 (MU2) Patch 11 (P11) HP Part Number: QL226-98118 Published:
More informationHP VMware ESXi and vsphere 5.x and Updates Getting Started Guide
HP VMware ESXi and vsphere 5.x and Updates Getting Started Guide Abstract This guide is intended to provide setup information for HP VMware ESXi and vsphere. HP Part Number: 616896-409 Published: September
More informationHP Auto Port Aggregation (APA) Release Notes
HP Auto Port Aggregation (APA) Release Notes HP-UX 11i v1, 11i v2, and 11i v3 HP Part Number: 5900-2483 Published: September 2012 Edition: 1 Copyright 2012 Hewlett-Packard Development Company, L.P. Confidential
More informationWIDS Technology White Paper
Technical white paper WIDS Technology White Paper Table of contents Overview... 2 Background... 2 Functions... 2 Rogue detection implementation... 2 Concepts... 2 Operating mechanism... 2 Operating modes...
More informationHP StorageWorks Continuous Access EVA 2.1 release notes update
HP StorageWorks Continuous Access EVA 2.1 release notes update Part number: T3687-96038 Third edition: August 2005 Legal and notice information Copyright 2005 Hewlett-Packard Development Company, L.P.
More informationHP Operations Orchestration
HP Operations Orchestration Software Version: 7.20 HP Network Node Manager (i series) Integration Document Release Date: July 2008 Software Release Date: July 2008 Legal Notices Warranty The only warranties
More informationHP Management Integration Framework 1.7
HP Management Integration Framework 1.7 Administrator Guide Abstract This document describes the use of HP Management Integration Framework interfaces and is intended for administrators involved in the
More informationGuest Management Software V2.0.2 Release Notes
Guest Management Software V2.0.2 Release Notes Abstract These release notes provide important release-related information for GMS (Guest Management Software) Version 2.0.2. GMS V2.0.2 is MSM software version
More informationIntelligent Provisioning 1.64(B) Release Notes
Intelligent Provisioning 1.64(B) Release Notes Part Number: 680065-407 Published: March 2017 Edition: 1 2017 Hewlett Packard Enterprise Development LP Notices The information contained herein is subject
More informationHP integrated Citrix XenServer 5.0 Release Notes
HP integrated Citrix XenServer 5.0 Release Notes Part Number 488554-003 March 2009 (Third Edition) Copyright 2009 Hewlett-Packard Development Company, L.P. The information contained herein is subject to
More informationWhat s New in Oracle Cloud Infrastructure Object Storage Classic. Topics: On Oracle Cloud. Oracle Cloud
Oracle Cloud What's New in Classic E71883-15 February 2018 What s New in Oracle Cloud Infrastructure Object Storage Classic This document describes what's new in Classic on all the infrastructure platforms
More informationHP Database and Middleware Automation
HP Database and Middleware Automation For Windows Software Version: 10.10 SQL Server Database Refresh User Guide Document Release Date: June 2013 Software Release Date: June 2013 Legal Notices Warranty
More informationHPE FlexFabric 7900 Switch Series
HPE FlexFabric 7900 Switch Series VXLAN Configuration Guide Part number: 5998-8254R Software version: Release 213x Document version: 6W101-20151113 Copyright 2015 Hewlett Packard Enterprise Development
More informationHP 5820X & 5800 Switch Series Network Management and Monitoring. Configuration Guide. Abstract
HP 5820X & 5800 Switch Series Network Management and Monitoring Configuration Guide Abstract This document describes the software features for the HP 5820X & 5800 Series products and guides you through
More informationOMi Management Pack for Microsoft Active Directory. Software Version: Operations Manager i for Linux and Windows operating systems.
OMi Management Pack for Microsoft Active Directory Software Version: 1.00 Operations Manager i for Linux and Windows operating systems User Guide Document Release Date: June 2017 Software Release Date:
More informationHP Fortify Scanning Plugin for Xcode
HP Fortify Scanning Plugin for Xcode Software Version: 4.40 User Guide Document Release Date: November 2015 Software Release Date: November 2015 Legal Notices Warranty The only warranties for HP products
More informationHPE Operations Agent. Concepts Guide. Software Version: For the Windows, HP-UX, Linux, Solaris, and AIX operating systems
HPE Operations Agent Software Version: 12.02 For the Windows, HP-UX, Linux, Solaris, and AIX operating systems Concepts Guide Document Release Date: December 2016 Software Release Date: December 2016 Legal
More information