PERFORMANCE TUNING TECHNIQUES FOR VERITAS VOLUME REPLICATOR


Tim Coulter and Sheri Atwood
November 13, 2003
VERITAS ARCHITECT NETWORK

TABLE OF CONTENTS

Introduction
Overview of VERITAS Volume Replicator
  Replicated Volume Group (RVG)
  Storage Replication Log (SRL)
  Replication Links (Rlink)
  Data Change Maps
Designing and Tuning the Replication System
  Designing the SRL
  Determining the Optimum Size
  Monitoring the Application Behavior
  Designing the Network Bandwidth and Rlink
  General Design Considerations
  Dos and Don'ts for Setting Up Replication
Configuring and Tuning VERITAS Volume Replicator Parameters
  Volume Design and Configuration
  Kernel Tuning Parameters
  Network Tuning Parameters
Conclusion

Introduction

The long-term success of an enterprise depends on its ability to prepare for the unexpected. This means maintaining the high availability of mission-critical applications despite inevitable system failures or disasters. To keep information technology (IT) systems up and running, speedy data recovery is essential. While disaster recovery plans include a combination of backup methods such as tape storage and electronic vaulting, one of the fastest ways to recover data is through replication. Data replication is preferred by organizations that need high availability and fear the consequences of lost revenue and lost productivity when systems go down.

Replication is the process of duplicating primary data volumes over a network connection to a storage subsystem at an alternate location. A significant distance usually separates the source volume and target volume of a replication, providing a safeguard against disasters that are centered in a specific geographic location, such as a region-wide power outage. When replication is added to an environment, it increases the overall complexity of the system and can negatively impact system performance, depending on the underlying hardware and software. This paper provides guidelines for designing and tuning a replication solution using VERITAS Volume Replicator.

Overview of VERITAS Volume Replicator

VERITAS Volume Replicator is a data replication tool that helps maintain a consistent copy of local application data at a remote site. VERITAS Volume Replicator reliably, efficiently, and consistently replicates data to remote locations over an IP network to ensure maximum business continuity, eliminating the need for expensive proprietary network and storage hardware at every site. In the event that the primary data center is damaged or destroyed, the application data that was stored there is immediately available at the remote site, allowing the application to be restarted quickly there.
Volume Replicator is a tightly integrated component of VERITAS Volume Manager and can use existing Volume Manager configurations with some restrictions. Volume Replicator enables Volume Manager volumes on one system to be replicated to identically sized volumes on another system. To build the replication environment for Volume Replicator, four components (see Figure 1) must be added to the Volume Manager configuration:

1. Replicated Volume Group (RVG)
2. Storage Replication Log (SRL)
3. Replication Links (RLinks)
4. Data Change Maps (DCM)

These components are explained in more detail in the sections below.

Figure 1. VERITAS Volume Replicator Architecture

Replicated Volume Group (RVG)

The replicated volume group (RVG) is a collection of the volumes that are to be replicated. An RVG is a subset of volumes within a Volume Manager disk group whose data will be replicated to one or more secondary systems. An RVG can contain any number of data volumes from its disk group, but cannot span multiple disk groups. Multiple RVGs can be built inside one disk group.

Storage Replication Log (SRL)

The storage replication log (SRL) is a sequential log that keeps track of the write activity to all of the volumes in an RVG. All data writes destined for volumes configured for replication are first queued in this log. VERITAS Volume Replicator implements the SRL at the primary site to store all changes for transmission to the secondary site(s). The SRL is a Volume Manager volume configured as part of an RVG. The SRL tracks writes to specific volumes in the correct order and guarantees that the data will arrive at the secondary site in that same order, regardless of the mode of replication.

Replication Links (Rlink)

An rlink is a VERITAS Volume Replicator replication link that reads data from the replicator log volume (the SRL) and sends it to the secondary RVG. Each rlink on a primary RVG represents the communication link to a secondary RVG, via an IP connection. Rlinks are configured to communicate between specific host names/IP addresses and can support both TCP and UDP communication protocols between systems.
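As a hedged illustration of how these components fit together, the sketch below uses the vradmin utility with hypothetical names throughout (disk group datadg, data volumes vol01/vol02, SRL volume srl_vol, hosts seattle and london); exact syntax varies by Volume Replicator version, so check the administrator's guide for your release:

```shell
# Create the primary RVG from existing data volumes and the SRL volume
# (all names here are assumptions for illustration):
vradmin -g datadg createpri datarvg vol01,vol02 srl_vol

# Add a secondary RVG on the remote host; this also creates the rlinks
# between the primary and secondary:
vradmin -g datadg addsec datarvg seattle london

# Start replication with automatic initial synchronization (-a):
vradmin -g datadg -a startrep datarvg london
```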

Data Change Maps

Data change maps are volume logs that help with the initial synchronization when it is performed across the network. They mark the sections of primary volumes that change during an extended network outage, minimizing the amount of data that must be synchronized to the secondary site after the outage. If the amount of change is greater than the SRL can hold, the data change maps indicate this, and a full resynchronization is required to recover from the outage.

Designing and Tuning the Replication System

In this section we discuss designing and tuning the replication system before enabling VERITAS Volume Replicator replication. Tuning the system during the design phase is important because it minimizes the need for tuning after the system is in production. Some tuning changes require a reboot to take effect; if the system is set up correctly initially, there is no downtime requirement for adjustments. These guidelines will provide maximum performance in many circumstances, but they should be validated against actual conditions. [1]

Designing the SRL

To a great extent, the SRL determines the performance of the system as a whole. For synchronous configurations, the write time is the sum of the time it takes to write to the SRL plus the round-trip time for the write to travel across the network to the secondary system. For asynchronous configurations, the write time from the application's point of view is the write to the SRL. Therefore, we strongly recommend that the SRL be placed on the fastest possible disks. Some VERITAS customers have had great success using solid state disks for the SRL. Because the SRL is a sequentially written log, the stripe sizes need to be much larger than the defaults. Many JBOD ("Just a Bunch of Disks") configurations use stripe sizes between 100 megabytes (MB) and 2 gigabytes (GB). If replication is asynchronous, stripes are still recommended to enable the system to read the data back from the disk.
The proper configuration is to make vol_max_rdback_sz the same size as the stripe (for details on the tuning variables, see Kernel Tuning Parameters). This variable maps memory to the SRL. If it is the same size as the stripe, the system can write new information and read information from the same stripe to send to the secondary site with no disk-head contention. The reads are done from memory, rather than from the disk.

Before you determine the bandwidth needed for replication, put the system into the configuration that will be used for replication. This includes moving non-replicated data, Oracle's temp.df files for example, off the volumes to be replicated. At one customer site, the amount of replicated data was reduced by 25% when temp.df was moved off the replicated volumes. Since the RVG configuration allows for the selection of which data volumes will be replicated, it is easy to minimize the amount of storage that is required at the secondary site. Data that is not replicated can still be protected using traditional tape backups.

If the SRL is on a disk array, configuring it is more complex. The SRL stripe size should take into account the size, or the likely size, of the cache memory that is allocated for each logical unit (LUN). Writes should be directed to memory, which is fast, rather than disk, which can be hundreds of times slower. Some arrays, including EMC, allocate LUN cache memory statically. Other arrays use an algorithm to allocate the memory dynamically. Generally, a stripe unit size that is half the size of the potential memory that could be allocated on the array per LUN is appropriate.

[1] Performance can also be impacted by the hardware within the configuration. If the host bus adapter on the system is saturated, then performance cannot be improved by any of the tuning described in this document.
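The half-the-LUN-cache rule can be worked through as simple arithmetic; the per-LUN cache figure below is an assumption for illustration, so substitute the real value from your array's documentation:

```shell
#!/bin/sh
# Assumed figure: the array can allocate up to 512 MB of cache per LUN.
CACHE_PER_LUN_MB=512
# Rule of thumb from the text: stripe unit size ~ half the per-LUN cache,
# so that SRL writes land in array memory rather than on disk.
STRIPE_UNIT_MB=`expr $CACHE_PER_LUN_MB / 2`
echo "Suggested SRL stripe unit: ${STRIPE_UNIT_MB} MB"
```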

We suggest that you mirror the SRL across arrays. This is a critical part of the storage, and we have seen people lose even enterprise-class arrays on occasion. Also, reserve enough disk space for mirrors on both sides. This gives you copies of the data to test against, and it enables you to validate that everything is identical at both locations. Checking the performance of the SRL with vxstat, and verifying that it performs as well as possible, is another critical step in the design and tuning process. Use common sense in building the SRL. The best way to maximize performance is to put it on the fastest disks, on the fastest array, and with as many parallel data paths to the device as possible.

Determining the Optimum Size

An important part of designing the system is choosing the optimum size for the SRL. While changes to the SRL size can be made while the application is operational, the best way to limit problems is to limit the need for future changes by implementing replication with an oversized log. A log that is too short will overflow, so an oversized log is preferable. Usually, an SRL that holds one to two days of writes on the system is sufficient. Therefore, if the change rate for a system is 30 GB per day, then a 60 GB SRL is a good target size. Although there is no theoretical limit on how big the SRL can be, three factors dictate its optimal size:

- The amount of data that must be logged during a network outage before going into DCM mode
- The number of writes the secondary can be behind the primary if VERITAS Volume Replicator is operating in asynchronous mode
- The time required to do an initial synchronization, if the initial synchronization is done across the wire

If DCM protection is being used for the SRL, then the main consideration is the length of the network outage that is being protected against.
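The sizing guideline above reduces to simple arithmetic; the change rate here is an assumed example (measure your own with vxstat over a representative period), and the result reproduces the 60 GB figure from the text:

```shell
#!/bin/sh
# Assumed workload: 30 GB of writes per day. Retain two days of writes
# in the SRL, per the one-to-two-day guideline above.
CHANGE_RATE_GB_PER_DAY=30
DAYS_RETAINED=2
SRL_SIZE_GB=`expr $CHANGE_RATE_GB_PER_DAY \* $DAYS_RETAINED`
echo "Suggested SRL size: ${SRL_SIZE_GB} GB"
```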
Monitoring the Application Behavior

Another important aspect of tuning is the monitoring of write statistics, or data change rate, over the longest time period possible. This should include any time period (for example, a month end) where there might be spikes in the amount of writes to the replicated volume set.

Designing the Network Bandwidth and Rlink

Before starting the VERITAS Volume Replicator configuration, you must thoroughly test the network to ensure that it will support replication. Testing includes pinging with large packets, transferring files via FTP, and using Volume Replicator simulation tools. After thorough network testing, it is much easier to configure Volume Replicator and be confident that it is operating correctly. [2]

The network link should have sufficient bandwidth to support initial synchronization over that link. Initial synchronization is defined as getting both the primary and secondary system into a consistent state so that they are exact copies of each other. There are other methods for doing the initial synchronization, but those methods are cumbersome and prone to problems.

[2] VERITAS can provide a program to help test the network. Contact your local sales consultant for more information.
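Whether a link can carry an initial synchronization in an acceptable window can be judged with a back-of-the-envelope calculation; both figures below are assumptions for illustration, and the estimate ignores protocol overhead and competing traffic:

```shell
#!/bin/sh
# Assumptions: 500 GB of replicated volumes over a 45 Mbit/s (T3) link.
DATA_GB=500
LINK_MBIT_PER_SEC=45
# 1 GB = 1024 MB = 8192 Mbit.
TOTAL_MBIT=`expr $DATA_GB \* 8192`
SECONDS_NEEDED=`expr $TOTAL_MBIT / $LINK_MBIT_PER_SEC`
HOURS=`expr $SECONDS_NEEDED / 3600`
echo "Initial synchronization: roughly ${HOURS} hours"
```

If the resulting window is unacceptable, that argues for either a faster link or one of the out-of-band synchronization methods mentioned above.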

Consider these additional factors when selecting the network bandwidth and rlink:

- Dedicate network interface cards (NICs) on each host to the replication. While this is not a requirement for replication, it is recommended to optimize performance.
- Perform the replication over a VPN, or separate the bandwidth so that it can be monitored and measured. Note that you won't need Gigabit NICs for replication if your wide area line speed is slower than an OC-3 (155 Mb/sec) per node. (Note: A VPN does not noticeably affect the performance of the replication.)
- The rlink generally does not directly affect performance if it can handle peak traffic requirements and provide a recovery point objective (RPO) of under two hours (see The Technology of Disaster Recovery article for a definition of RPO). However, it is easier to write the data to the secondary from memory than it is to read the data from the disk and then write it to the secondary. So the wide area bandwidth should be sufficient to keep both sides in sync, with the exception of some peak traffic times.
- The default configuration for VERITAS Volume Replicator is to use UDP rather than TCP, and this is the right choice in almost every case. Volume Replicator does the write ordering and packet numbering within the application, so most of the function that TCP provides is redundant. However, in some instances where the WAN is congested or has a significant loss rate, TCP might be the right choice. There can be a performance drop using TCP, but you can counteract this by setting some parameters in the /etc/system file. These parameters are discussed further in Kernel Tuning Parameters.
- Do not share the network with applications that have inconsistent traffic. The WAN should have consistent, controlled, and measurable network capacity.

General Design Considerations

Plan on a certain amount of downtime during implementation.
VERITAS Volume Replicator can be implemented without introducing application downtime, but doing so is very complex and seldom possible in reality. The amount of downtime needed is usually under 30 minutes.

Design with the idea of making management and maintenance as simple as possible. This includes configuring notification to e-mail alerts if there are outages that affect the ability to replicate. Also, integrate VERITAS Volume Replicator into the network management system.

Analyze the local area failover before implementing a wide area failover. Local failures are much more likely than a regional disaster that affects the entire data center. Protection against local failures will ensure the maximum availability for the environment. Under most circumstances, if there is limited budget, there is great value in spending money to increase the local availability, rather than implementing replication for wide area failover.

Make sure you have the latest patches and product versions on the systems; this helps limit the number of required changes later.

Think beyond the replication of the data, and address the ability to migrate the application and all of its required components to the secondary site. During the implementation, also consider that you must migrate the service back to the primary site in the future. One example of this is the overall bandwidth coming into the DR site: for the application to support customers, there should be enough bandwidth for them to access the environment if it is operating out of the DR site rather than the primary site.

Make sure that everyone involved understands that data is being replicated. For example, if DBAs and network administrators make changes in their configurations, then the performance and operation of the replication may be affected. The group of people that should have a basic understanding of the operation includes the system administrators, DBAs, storage administrators, security administrators, and networking administrators.

Check the system from time to time. Performance characteristics can change, and it is better to catch things before you experience a problem. This can be done by running vxstat and vxmemstat to see how the storage is performing and to see the effect of the VERITAS Volume Replicator kernel tunables.

Some concerns have been expressed about managing the replication from the host, rather than having a single point of management when using array-based replication. However, even with array-based replication, the replication must still be managed on every host, because the replicated data is host specific. With VERITAS Volume Replicator, replication is managed only from primary hosts; array-based replication requires management of the primary and secondary hosts and arrays.

All of the affected components must be tested and should be known commodities before installing VERITAS Volume Replicator. Before replication can begin, all of the secondary RVGs must be initialized so that they are identical to the primary. This process, called initial synchronization, enables you to begin replication with a known duplicate data set at both the primary and secondary sites.

Dos and Don'ts for Setting Up Replication

Refer to the following guidelines when configuring a VERITAS Volume Replicator implementation:

- Don't try to replicate from rootdg, because the replication performance will be poor.
- Don't mount the secondary system's volumes in read-only mode, as the changes that happen on the primary are not reflected in what you can see from the secondary, and it can limit the ability to grow or shrink the volumes using the vradvisor.
- Don't stop the RVG on the primary with the application running. This will halt all I/O for volumes in the application group, and the application will stop abruptly.
- Don't put the SRL on slow disks.
- Be sure to regularly test the configuration for functionality and performance.
- Be sure that all systems that will share replicated data have the same configurations.

Configuring and Tuning VERITAS Volume Replicator Parameters

When adding VERITAS Volume Replicator to a running system, the system's overall complexity increases. It is important, therefore, to consider other factors that can impact the overall performance of the system, because more components will impact the environment. This includes the performance of the WAN. While the specifics of performance tuning vary from system to system, there are some general rules that, if followed, can significantly improve the overall performance of the application.

Volume Design and Configuration

VERITAS Volume Replicator volumes are normal VERITAS Volume Manager volumes, and the normal recommendations for performance optimization apply. These best practices can be found in the VERITAS Volume Manager 3.5 Administrator's Guide. After the data is written to the SRL and before it is transferred to the actual volumes, Volume Replicator blocks all reads to that location. This interval is normally less than 10 ms, and is consistent with how things would work in a normal transaction.

Although there are theoretically two writes (one to the SRL and the other to the volumes where the data actually resides on disk), the extra write does not usually cause performance issues, because the storage I/O paths tend to be among the least used resources within the environment. If the system is I/O constrained with respect to the SAN, then there can be performance problems because of the doubling of the writes needed to store one block of data that is also going to be replicated.

Note that all of the volumes being replicated should have the same names and be the same size on both the primary system and any secondary systems. This is not required, but different names and sizes can cause confusion and other problems in the future.

Shown below are additional guidelines that can help maximize performance of the replicated system when enterprise-class storage arrays are not in use. You can follow this process for configuring Volume Manager as well as Volume Replicator, because they are the same product and use the same local configuration. The VERITAS Volume Manager 3.5 Administrator's Guide further explains these guidelines:

- Stripe the volumes to improve write performance.
- Mirror the volumes to improve read performance.
- Separate the mirrors across controllers for better availability.
- Keep the stripes on the same controllers for better availability.
- Match the stripe size to the application write size, or smaller.
- If performance matters at all, and writes make up more than about 15 percent of your I/O, do not use RAID 5.

If enterprise-class storage arrays are in use, then some of these recommendations change. (This is based on the goal of always writing to the array's memory, rather than writing to disks in the array.) Often, the LUNs will be mirrored at the array level, so the volumes do not have to be mirrored on the system.
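The striping and mirroring guidelines above might be expressed with vxassist; this is a sketch with hypothetical disk group and volume names, and the stripe unit shown is an assumption that should be matched to your application's write size:

```shell
# Hypothetical names: disk group datadg, 10 GB volume vol01.
# layout=mirror-stripe combines striping (write performance) with
# mirroring (read performance); ncol sets the number of stripe columns;
# mirror=ctlr asks Volume Manager to place the mirrors on separate
# controllers for better availability.
vxassist -g datadg make vol01 10g layout=mirror-stripe ncol=4 \
    stripeunit=64k mirror=ctlr
```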
It is also important to note that there are four basic disk I/O operations:

- Random reads
- Random writes
- Sequential reads
- Sequential writes

If you take only the disks into consideration, the quickest I/Os are the sequential reads and writes. Most of the latency involved in getting the data to and from the disk platters is in direct proportion to how far the disk head must move to get to the cylinder that contains the data. Sequential reads and writes mean very little disk head movement.

Kernel Tuning Parameters

For certain changes to the replicated systems, you must take the application offline. This includes changing the SRL stripe size or layout, and changing the kernel tuning parameters if the VERITAS Volume Replicator version is 3.2 or earlier. Therefore, tuning the system before it goes online is important to minimize system disruptions. In addition to the SRL size and rlink configuration parameters mentioned above, you should tune the /etc/system and /kernel/drv/vxio.conf parameters. There are several parameters that you can adjust in the kernel to optimize performance based on the types of writes on the system, the bandwidth, and the hardware. The utility that helps manage and monitor the tuning parameters in VERITAS Volume Replicator 4.0 is vxmemstat. In Volume Replicator 4.0, you can make changes to the kernel parameters for Volume Replicator without rebooting the system, which allows you to try different settings to optimize the configuration. Remember to put the settings in the configuration file so they remain after a reboot. As always, the defaults are fine for a starting point, but you can use the following recommendations to increase the performance of the system.

voliomem_maxpool_sz
Default setting is 128 MB or 5% of RAM, whichever is smaller. This parameter provides the buffer space in memory for reads. If it is too small, the primary system must wait on new writes until it has completed the existing writes, and VERITAS Volume Replicator might free the buffer early, making it necessary to read data back from the disk. [3]

vol_max_rdback_sz
Default setting is 4 MB. This parameter enables the system to read the data from memory rather than from disk. VERITAS Volume Replicator stores all of the data in memory initially, and then flushes it to the disks as more data is processed. The secondary might be a bit behind the primary, but it can still read the data from fast memory rather than from disk if this parameter is sized correctly. Another factor to consider is the stripe unit size. If the secondary is reading data from disk, that disk must be different from the disk the application is writing to; if these two activities happen on the same disk, there will be performance degradation because of disk-head contention. Most configurations will work best if this parameter is at least as large as the stripe size of the SRL, or possibly even a multiple of it. This can be determined with a simple vxprint command.

vol_max_nmpool_sz
Default setting is 4 MB. This is the one parameter that can be increased and may impact the performance of the replication. This is the buffer space on the secondary system.
The goal in configuring this parameter is to make it as large as possible without having any write stay in the buffer on the secondary for more than one minute. A good rule of thumb for the maximum value is W x 50 MB (where W = the write rate in MB/second). Also, keep in mind that this is a maximum amount. If this parameter is too small, it slows the network transfers. Keep in mind that each system can be either a primary or a secondary; as long as the systems have similar resources, make the tuning parameters the same.

vol_min_lowmem_sz
Default setting is 512 KB. The optimum value for this kernel parameter can be calculated by the following formula: 3 x N x I (where N = the number of concurrent writes to the replicated volumes and I = the average I/O size, rounded up to 8 kilobytes). This information can be gathered using vxstat. Remember that you are allocating kernel space memory, so if you are using the 32-bit operating system Solaris 2.6, you could run out of memory by being too liberal with these values. The same is true for any other 32-bit operating system.

[3] This parameter is changing to vol_rvio_maxpool_sz in VERITAS Volume Replicator 4.0.
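The two rules of thumb above can be applied as simple arithmetic; the workload figures below are assumptions for illustration (gather real numbers with vxstat):

```shell
#!/bin/sh
# vol_max_nmpool_sz rule: W x 50 MB, with W = write rate in MB/second.
W_MB_PER_SEC=4                        # assumed sustained write rate
NMPOOL_MB=`expr $W_MB_PER_SEC \* 50`
echo "vol_max_nmpool_sz upper bound: ${NMPOOL_MB} MB"

# vol_min_lowmem_sz rule: 3 x N x I, with I rounded up to 8 KB.
N_CONCURRENT_WRITES=50                # assumed concurrency, from vxstat
IO_SIZE_KB=8                          # average I/O size, rounded up
LOWMEM_KB=`expr 3 \* $N_CONCURRENT_WRITES \* $IO_SIZE_KB`
echo "vol_min_lowmem_sz: ${LOWMEM_KB} KB"
```

Remember the caution above about kernel memory on 32-bit systems before committing to large values.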

Shown below is an example of the entries in the /etc/system file of a Solaris 2.6 system. If the VERITAS Volume Replicator version is 3.2 or below, make these entries in the /etc/system file; if the version is 3.5 or above, make them in the vxio.conf file:

set vxio:vol_max_rdback_sz=67108864
set vxio:voliomem_maxpool_sz=536870912
set vxio:vol_max_nmpool_sz=67108864
set vxio:vol_min_lowmem_sz=50331648

Shown below is an example of the entries in the /kernel/drv/vxio.conf file of a Solaris 2.6 system:

vol_max_rdback_sz=67108864;
voliomem_maxpool_sz=536870912;
vol_max_nmpool_sz=67108864;
vol_min_lowmem_sz=50331648;

Network Tuning Parameters

There can be performance problems using the default settings for GigE cards on Solaris systems. We have found that the following settings improve performance:

udp_max_buf --> 524288
udp_recv_hiwat --> 65535
udp_xmit_hiwat --> 65535
udp_xmit_lowat --> 8192

These recommendations are illustrated on Sun's Blueprints site: http://www.sun.com/blueprints/0203/817-1657.pdf

Conclusion

VERITAS Volume Replicator is a stable, mature product that is used to replicate a customer's critical data between two hosts. When replication is added to an environment, it increases the overall complexity of the system and can impact performance, depending on the underlying hardware and software. This article provides configuration guidelines that can help administrators design systems to proactively avoid performance problems, and to mitigate existing performance issues once the system is implemented. While the specifics of performance tuning will vary from system to system, this article provides general rules that can help to improve performance significantly.

VERITAS Software Corporation
Corporate Headquarters
350 Ellis Street, Mountain View, CA 94043
650-527-8000 or 866-837-4827

For additional information about VERITAS Software, its products, VERITAS Architect Network, or the location of an office near you, please call our corporate headquarters or visit our Web site at www.veritas.com.

Copyright 2003 VERITAS Software Corporation. All rights reserved. VERITAS, the VERITAS Logo, and all other VERITAS product names and slogans are trademarks or registered trademarks of VERITAS Software Corporation. VERITAS and the VERITAS Logo Reg. U.S. Pat. & Tm. Off. Other product names and/or slogans mentioned herein may be trademarks or registered trademarks of their respective companies. Specifications and product offerings subject to change without notice.