VERITAS Storage Replicator for Volume Manager 3.0.2


VERITAS Storage Replicator for Volume Manager
Configuration Notes

Solaris

May 2000

Disclaimer

The information contained in this publication is subject to change without notice. VERITAS Software Corporation makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. VERITAS Software Corporation shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this manual.

Copyright

Copyright 2000 VERITAS Software Corporation. All rights reserved. VERITAS is a registered trademark of VERITAS Software Corporation in the US and other countries. The VERITAS logo and VERITAS Storage Replicator for Volume Manager are trademarks of VERITAS Software Corporation. All other trademarks or registered trademarks are the property of their respective owners. Printed in the USA, May 2000.

VERITAS Software Corporation
1600 Plymouth St.
Mountain View, CA

Contents

Chapter 1. Effects of Configuration on Performance
    Introduction
    Design
    Application Bandwidth

Chapter 2. Configuring for Efficient Operation
    Overview
    RLINKs
        Synchronous versus Asynchronous
        Latency and SRL Protection
    Network
        Effects on Performance
        Choosing an Appropriate Bandwidth
    SRL
        SRL Layout
        SRL Bandwidth
        SRL Sizing
            Peak Usage Constraint
            Initialization Period Constraint
            Secondary Backup Constraint
            Downtime Constraint
            Additional Factors
            Example
    Buffer Space
        Readback Buffer Space
        Write Buffer Space on the Primary
        Buffer Space on the Secondary

Glossary

Chapter 1. Effects of Configuration on Performance

Introduction

This chapter discusses some of the issues involved in configuring a Storage Replicator (SRVM) Replicated Data Set (RDS). To set up an efficient SRVM configuration, it is necessary to understand how the configuration, along with the design of SRVM, can combine to affect SRVM and application performance. The following section describes the flow of control in handling a write request within SRVM. Subsequent sections discuss how these design details can affect performance.

Throughout this document, the term application refers to whichever program writes directly to the raw volume. So in the case of a database using a file system mounted on a volume, the file system is the application; if the database writes directly to a raw volume, then the database is considered the application.

Design

On the Primary side, writes enter SRVM through the normal volume interfaces. However, rather than performing I/O operations directly on a volume, SRVM passes the request up to the Replicated Volume Group (RVG) containing the volume. Figure 1 shows the flow of control for a typical SRVM configuration containing two remote sites, one connected via an asynchronous RLINK, the other via a synchronous one.

When a write operation is passed to the RVG, the data must first be copied into a kernel memory buffer. SRVM then writes the data, and some header information, to the Storage Replicator Log (SRL), and waits for the write to complete. As shown in Figure 1, this completes Phase 1 of the operation, which must be executed for all write requests.

Phase 2 is divided into synchronous and asynchronous components, since an RVG may have one or more associated RLINKs, and each RLINK may operate independently in synchronous or asynchronous mode. This phase is responsible for sending the write request to all RLINKs and writing it to the Primary data volume. When the synchronous component has completed, the write is considered complete and the request may terminate. The asynchronous component might complete at a later time. Until both components are complete, the kernel memory buffer cannot be freed because the data might still need to be sent to remote nodes.

Figure 1. SRVM Flow of Control. Flow of control for a write request on an SRVM RDS containing two remote sites, one connected via an asynchronous RLINK, the other via a synchronous one. Phase 1 writes the request to the log volume; Phase 2 sends it to the synchronous and asynchronous RLINKs and writes it to the Primary data volume.

The synchronous component consists of sending a write request to each RLINK operating in synchronous mode, and then waiting for an acknowledgment that the request was received. The acknowledgment does not indicate that the request has been committed to the Secondary data volume. It only means that the request is in a buffer on the Secondary system and eventually will be committed to disk, barring a system failure. When all RLINKs operating in synchronous mode have acknowledged receiving the request, the synchronous component, and the overall write request, is complete. If all RLINKs are in asynchronous mode, the synchronous component becomes null, which means that the write latency consists solely of the time to write the SRL.

The asynchronous component works similarly to the synchronous component. The difference is that the write request may complete before the asynchronous component does. This component consists of sending a write request to each RLINK operating in asynchronous mode, and then waiting for an acknowledgment that the request was received. Additionally, the asynchronous component is responsible for writing the request to the Primary data volume. This operation is performed asynchronously to avoid adding the penalty of a second full disk write to the overall write latency. Because the log write, but not the data write, is performed synchronously, the SRL becomes the final arbiter as to the correct contents of the data volume in the case of a system failure.

Finally, an RLINK operating in asynchronous mode may be behind for various reasons, such as network outages or a burst of writes that exceeds the available network bandwidth. RLINKs that are behind are not handled by the asynchronous component, but by a separate asynchronous thread. Because the write requests for these RLINKs are no longer guaranteed to be held in memory, the asynchronous thread has the ability to read them back off the SRL. This allows the system to release resources for requests that cannot be satisfied immediately.

Note that there are actually two synchronous modes, FAIL and OVERRIDE. RLINKs with synchronous=override, referred to as soft synchronous RLINKs, change to asynchronous mode during any type of disconnect or pause. RLINKs with synchronous=fail, referred to as hard synchronous RLINKs, fail incoming writes if they cannot be replicated immediately because of a disconnect or pause.

Application Bandwidth

Before attempting to configure an RDS, it is necessary to know the data throughput that must be supported, that is, the rate at which the application can be expected to write data. Only write operations are of concern here; read operations are always satisfied locally with very little SRVM interference. To perform the analyses described in later sections, a profile of application bandwidth is required. For an application with relatively constant bandwidth, the profile could take the form of a few values, such as:

- Average application bandwidth
- Peak application bandwidth
- Length of the peak application bandwidth period

For a more volatile application, a table of measured usage over specified intervals may be needed.

Because matching application bandwidth to disk capacity is not an issue unique to replication, it is not discussed here. It is assumed that an application is already running, and that VERITAS Volume Manager has been used to configure data volumes to support the bandwidth needs of the application. In this case, the application bandwidth characteristics may already have been measured.

If the application characteristics are not known, they can be measured by running the application and using a tool to measure the data written to all the volumes to be replicated. If the application writes to a file system rather than a raw data volume, be careful to include in the measurement all the metadata written by the file system itself, which can add a substantial amount to the total amount of replicated data. For example, if a database is using a file system mounted on a replicated volume, a tool such as vxstat (see vxstat(1M)) will correctly measure the total data written to the volume, while a tool that monitors the database and measures its requests will fail to include the writes made by the underlying file system.

It is also important to consider both the peak and average bandwidth generated by the application. These numbers can be used to determine the type of network connection needed. For synchronous RLINKs, the network must support the peak application bandwidth. For asynchronous RLINKs that are not required to keep pace with the Primary, the network only needs to support the overall average application bandwidth.

Finally, once the measurements are made, the numbers chosen as the peak and average bandwidths should be close to the largest values obtained over the measurement period, not the averages or medians. For example, assume that measurements are made over a 30-day period, yielding 30 daily peaks and 30 daily averages, and that the average of each of these is chosen as the application peak and average, respectively. If the network is then sized based on these values, there will be insufficient network capacity to keep up with the application for roughly half the time. Instead, the numbers chosen should be close to the highest obtained over the period, unless there is reason to doubt that they are valid or typical.
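To make the profile concrete, the following sketch (not part of SRVM; the interval data and interval length are assumptions) shows how per-interval write totals, gathered for example with vxstat against every replicated volume, can be reduced to the average bandwidth, the peak bandwidth, and the length of the peak period discussed above.

    # Sketch: derive an application bandwidth profile from interval samples.
    # The hourly figures below are assumed example data; in practice they come
    # from a measurement tool such as vxstat run against each replicated volume.

    hourly_writes_gb = [0.2, 0.3, 1.0, 1.1, 1.0, 0.8, 0.3, 0.2]   # GB written per hour

    average_bw = sum(hourly_writes_gb) / len(hourly_writes_gb)    # GB/hour
    peak_bw = max(hourly_writes_gb)                               # GB/hour

    # Length of the peak period: the longest run of hours at or near the peak.
    threshold = 0.9 * peak_bw
    runs = "".join("x" if v >= threshold else " " for v in hourly_writes_gb).split()
    peak_hours = max(len(run) for run in runs)

    print(f"average: {average_bw:.2f} GB/hour")
    print(f"peak   : {peak_bw:.2f} GB/hour, sustained for {peak_hours} hour(s)")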


Chapter 2. Configuring for Efficient Operation

Overview

This chapter discusses many of the decisions that must be made when setting up an RDS, with emphasis on how each component can affect performance. The discussion assumes an understanding of the design details described in Chapter 1. Each major component must be configured properly and is discussed in turn. The components include the RLINKs, the network, the SRL, and the Secondary.

In an ideal configuration, replication proceeds at the same pace at which the application generates data, so all Secondary sites remain relatively up to date. For this to occur, each component within the configuration must be able to keep up with the incoming data. This includes the SRL, the local and remote data volumes, and the network connection. A properly configured SRVM setup must also be able to handle temporary bottlenecks, such as an occasional burst of updates or an occasional network problem. If one of the components cannot keep up with the update rate over the long term, however, SRVM will not work.

The type of problem experienced depends on whether or not the lagging component is on the critical path. The problems likely to be caused by each component are discussed in more detail below. In general, the two most likely problems are: (1) application slowdown due to increased write latency, and (2) overflow of the SRL. If a component on the critical path cannot keep up, additional latency may be added to each write, which in turn leads to poor application performance. If the component is not on the critical path, the application writes may proceed at their normal pace, with the excess accumulating in the SRL and possibly causing an overflow. So, it is important to examine each component in turn to ensure that its bandwidth is sufficient to support the expected application bandwidth.

RLINKs

Synchronous versus Asynchronous

The decision whether to use synchronous or asynchronous RLINKs should not be made without a full understanding of the effects of this choice on system performance. The relative merits of the two modes become apparent once the underlying implementation, described in Chapter 1, is understood.

Synchronous RLINKs have the advantage that all writes are guaranteed to reach the Secondary before completing. For some applications, this may simply be a requirement that cannot be circumvented; in this case, performance is not a factor in the decision. For applications where the choice is not so clear, this section discusses some of the performance implications of choosing synchronous operation.

As illustrated in Figure 1, all write requests first result in a write to the SRL, and it is only after this write completes that replication begins. Since synchronous RLINKs require that the data reach the Secondary and be acknowledged before the write completes, the latency for a write becomes:

    SRL latency + network round-trip latency

Thus, synchronous RLINKs can significantly decrease application performance by adding the network round trip to the latency of each write request.

Asynchronous RLINKs avoid increasing the per-write latency by sending the data to the Secondary after the write completes, thus removing the network round-trip latency from the equation. The most obvious disadvantage is that there is no guarantee that a write which appears complete to the application has actually been replicated. A more subtle effect of asynchronous RLINKs is that while application throughput should increase due to decreased write latency, overall replication performance may decrease. If the asynchronous RLINK cannot keep up with incoming data, it must begin freeing memory that is holding unsent requests so the memory can be used by incoming requests. When the RLINK is finally ready to send the old requests, they must first be read back from the SRL. So while synchronous RLINKs always have their data available in memory, asynchronous ones frequently have to read it back off the SRL, and their performance might suffer because of the delay of the added read.

The need to perform readbacks also has a negative impact on SRL performance. For synchronous RLINKs, the SRL is used only for sequential writes and yields excellent performance. For asynchronous RLINKs, the writes may be interspersed with occasional reads from an earlier part of the SRL, so performance suffers due to the increased disk head movement.
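As a rough illustration of this difference, the sketch below plugs assumed figures (not measurements from any particular configuration) into the latency relationship above.

    # Sketch: per-write latency for synchronous versus asynchronous RLINKs.
    # Both timing values are assumptions chosen only to illustrate the formula
    #   synchronous write latency = SRL latency + network round-trip latency.

    srl_write_ms = 5.0       # time to write the request to the SRL (assumed)
    network_rtt_ms = 60.0    # Primary <-> Secondary round trip over a WAN (assumed)

    sync_latency_ms = srl_write_ms + network_rtt_ms
    async_latency_ms = srl_write_ms    # the network trip happens after the write completes

    print(f"synchronous write latency : {sync_latency_ms:.1f} ms")
    print(f"asynchronous write latency: {async_latency_ms:.1f} ms")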

Whether this readback slowdown occurs depends on whether the RLINK is able to keep up with incoming data and, when it cannot, on whether the available memory buffer is large enough to hold the excess. If the RLINK always keeps up, or if it only falls behind for short periods during which the excess is small enough to fit in memory, readback will not be a problem. (See the Buffer Space section for information on tuning the size of the SRVM and VERITAS Volume Manager memory buffers.) If readback is a problem, striping the SRL volume over several disks using mid-sized stripes (for example, 10 times the average write size) should aid performance. Unfortunately, this conflicts with the tactic of striping using small stripes to improve SRL bandwidth, as discussed in SRL Bandwidth.

If synchronous RLINKs are to be used, another factor to consider is that hard synchronous RLINKs throttle incoming write requests while catching up from a checkpoint. This means that if, after the Secondary data volumes have been initialized, the RLINK takes ten hours to catch up, any application waiting for writes to complete will hang for ten hours if the RLINK is in hard synchronous mode. Thus, for all practical purposes, it is necessary to either shut down the application or temporarily set the RLINK to soft synchronous or asynchronous mode until the Secondary has caught up after a checkpoint.

Latency and SRL Protection

RLINKs have two parameters, latencyprot and srlprot, that provide a compromise between synchronous and asynchronous characteristics. These parameters allow the RLINK to fall behind, but limit the extent to which it does so.

When latencyprot is enabled, the RLINK is only allowed to fall behind by a predefined number of requests, the high-water mark. Once this user-defined high-water mark is reached, throttling is triggered: all incoming requests are delayed until the RLINK has caught up to within another predefined number of requests, the low-water mark. Thus, the average write latency seen by the application increases. However, the behavior may appear different than with a synchronous RLINK, depending on the spread between the high-water mark and the low-water mark. A large spread causes occasional long delays in write requests, which may appear to be application hangs, as the SRL drains down to the low-water mark; most other write requests remain unaffected. A smaller spread distributes the delays more evenly over write requests, resulting in smaller but more frequent delays. For most cases, a smaller spread is probably preferable (a small model of this trade-off appears at the end of this section).

The other relevant parameter, srlprot, is used to prevent the SRL from overflowing and has an effect similar to latencyprot. When srlprot is enabled and SRVM detects that a write request would cause the SRL to overflow, the request and all subsequent requests are delayed until the SRL has drained to 95% full. The parameters for this feature are not user-tunable, so the expected behavior is that a large delay results for any writes made while the SRL is draining; all other writes are unaffected.
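The following sketch models the latencyprot behavior described above. The request counts, arrival rate, and drain rate are assumptions, not SRVM defaults; the point is only to show that a wide spread between the marks produces one long stall, while a narrow spread produces shorter, more frequent stalls.

    # Sketch: how the spread between the latencyprot high-water and low-water
    # marks shapes write delays. All numbers are assumed for illustration.

    def stall_profile(in_rate, out_rate, high_water, low_water, intervals):
        """Return (number of stall episodes, longest stall) in intervals."""
        backlog = 0          # requests the RLINK is behind
        throttled = False
        episodes = 0
        longest = 0
        current = 0
        for _ in range(intervals):
            if throttled:
                current += 1
                backlog -= out_rate                  # writes held while the SRL drains
                if backlog <= low_water:
                    throttled = False
                    longest = max(longest, current)
                    current = 0
            else:
                backlog += in_rate - out_rate        # RLINK slowly falls behind
                if backlog >= high_water:
                    throttled = True
                    episodes += 1
        return episodes, max(longest, current)

    for name, low in (("wide spread  ", 1000), ("narrow spread", 9000)):
        episodes, longest = stall_profile(in_rate=120, out_rate=100,
                                          high_water=10000, low_water=low,
                                          intervals=1000)
        print(f"{name}: {episodes} stall episode(s), longest {longest} interval(s)")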

Network

Effects on Performance

All replicated write requests must eventually travel over the network to one or more Secondary nodes. Whether or not this trip is on the critical path depends on the configuration of the RLINKs in the RVG.

Since synchronous RLINKs require that data reach the Secondary node before the write can complete, the network is always part of the critical path for synchronous RLINKs. This means that for any period during which application bandwidth exceeds network capacity, write latency increases.

Asynchronous RLINKs, by contrast, do not impose this requirement, so write requests are not delayed when network capacity is insufficient. Instead, excess requests accumulate in the SRL, as long as the SRL is large enough to hold them. If there is a chronic shortfall in network capacity, the SRL will eventually overflow. Nevertheless, this design does allow the SRL to be used as a buffer to handle temporary shortfalls in network capacity, such as periods of peak usage, provided that these periods are followed by periods during which the RLINK can catch up as the SRL drains. If a configuration is planned with this functionality in mind, be aware that the Secondary sites will frequently be significantly out of date.

Asynchronous RLINKs have several parameters that can change the behavior described above by placing the network round trip on the critical path in certain situations. The latencyprot and srlprot features, when enabled, can both have this effect. These features are discussed fully in Latency and SRL Protection.

Choosing an Appropriate Bandwidth

The available network bandwidth depends on the type of connection between the Primary and Secondary nodes and on how that connection is used. The type of connection determines the maximum bandwidth available between the two locations; for example, a T3 line provides 45 Mbits/second. The other important factor is whether the available connection will be used by any other applications or is exclusively reserved for SRVM. If other applications will be using the same line, it is important to be aware of the bandwidth requirements of these applications and subtract them from the total network bandwidth. If any applications sharing the line have variations in their usage pattern, it is also necessary to consider whether their times of peak usage are likely to coincide with SRVM's peaks. Additionally, overhead added by SRVM and the various underlying protocols reduces the effective bandwidth by a small amount, typically 3 to 5%.

To avoid problems caused by insufficient network bandwidth, apply the following general principles (a worked sketch of the arithmetic follows this list):

- If synchronous RLINKs will be used, the network bandwidth must at least match the application bandwidth during its peak usage period. This leaves excess capacity during non-peak periods, which is useful to allow initialization of new volumes using checkpoints, as described in Peak Usage Constraint.

- If only asynchronous RLINKs will be used, and the Secondary can be allowed to fall behind during peak usage, then the network bandwidth only needs to match the overall average application bandwidth. This might require the application to be shut down during initialization procedures, because there will be no excess network capacity to handle the extra traffic generated by the catch-up from the checkpoint.

- If asynchronous RLINKs will be used with latencyprot enabled to avoid falling too far behind, the requirements depend on how far the RLINK is allowed to fall behind. RLINKs with a small high-water mark should be treated as synchronous RLINKs and therefore need enough network bandwidth to match the application bandwidth during its peak usage period. RLINKs with a relatively large high-water mark (that is, large enough to allow the RLINK to fall behind by several hours, or even a day) may get by with a bandwidth that only matches the average application bandwidth, and thus be allowed to fall far behind during peak usage periods.
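The arithmetic behind these principles is straightforward. The sketch below (all figures, including the shared-traffic estimate, are assumptions) computes the bandwidth effectively available to SRVM on a shared T3 line and compares it with the peak and average application bandwidths from the profile.

    # Sketch: checking whether a network link is adequate. Example values only.

    link_mbit = 45.0           # raw T3 capacity, megabits per second
    other_apps_mbit = 10.0     # traffic from applications sharing the line (assumed)
    protocol_overhead = 0.05   # SRVM plus protocol overhead, roughly 3 to 5%

    effective_mbit = (link_mbit - other_apps_mbit) * (1.0 - protocol_overhead)

    # Application profile (from the Application Bandwidth measurements), GB/hour.
    peak_gb_hr = 1.0
    avg_gb_hr = 0.4

    def gb_per_hour_to_mbit(gb_hr):
        return gb_hr * 1024 * 8 / 3600.0    # GB/hour -> megabits per second

    print(f"effective link bandwidth         : {effective_mbit:.1f} Mbit/s")
    print(f"needed for synchronous (peak)    : {gb_per_hour_to_mbit(peak_gb_hr):.1f} Mbit/s")
    print(f"needed for asynchronous (average): {gb_per_hour_to_mbit(avg_gb_hr):.1f} Mbit/s")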

SRL

SRL Layout

It is critical that there be no overlap between the physical disks comprising the SRL and those comprising the data volumes, because all write requests to SRVM result in a write to both the SRL and the requested data volume. Any such overlap is guaranteed to lead to major performance problems, as the disk head thrashes between the SRL and data sections of the disk; slowdowns of over 100% can be expected. Note that the SRL on the Secondary is not used as frequently, so its placement is not considered important.

It is highly recommended that the SRL be mirrored to improve its reliability. The loss of the SRL immediately puts all RLINKs into the STALE state. The only way to recover from this is to perform a full resynchronization, which is a time-consuming procedure to be avoided whenever possible. Mirroring the SRL minimizes the risk of this failure.

SRL Bandwidth

The SRL is on the critical path for all writes, regardless of the RLINK configuration, because, as illustrated in Figure 2, every write request performs and completes a write to the SRL before any replication occurs. This makes it critical to ensure that the SRL bandwidth is sufficient for the application. Due to the design of SRVM, it may be difficult for a volume functioning as an SRL to keep pace. Figure 2 illustrates two key points that can affect SRL volume performance:

- An RVG can contain multiple data volumes but only a single SRL volume, and all writes to any data volume in the RVG also result in a write to the SRL volume. This means that while writes may be spread across multiple data volumes in the RVG, all of these writes are concentrated on the single SRL volume, which makes it easy for the SRL to become a bottleneck.

- The problem is partially mitigated by the fact that writes to the SRL volume are sequential, while those to the data volumes are more likely to be random, so the SRL volume essentially gets a head start on each write. Also, if a large percentage of the application's accesses are reads, much of the data volumes' capacity will be used in satisfying reads, which do not affect the SRL volume, so the SRL volume should be able to keep up.

Figure 2. Flow of data when multiple write requests to different data volumes are being processed. Note how writes are concentrated on the SRL volume, then distributed among the data volumes. (For clarity, multiple data buffers and data volumes are not shown on the Secondary.)

However, for some applications it is certainly possible that the overall bandwidth of writes to the data volumes may exceed the physical capacity of the disk containing the SRL volume. Since the latency of every write includes the time taken to write to the SRL volume, this situation would cause the SRL volume to become a bottleneck and increase the latency of each write. In this case, it may be necessary to use standard Volume Manager procedures to stripe the SRL volume over several physical disks to increase the available bandwidth. The stripe size should be of the same order of magnitude as a typical write, so that consecutive writes often end up on different physical disks.

If it is determined that the SRL is a bottleneck, but the situation is not alleviated through the use of striping or some other solution, then the application bandwidth measured in Application Bandwidth becomes irrelevant, and the SRL bandwidth can be used in its place when sizing the remaining components.

SRL Sizing

The size of the SRL affects the likelihood that it will overflow. When the SRL overflows for a particular RLINK, that RLINK is marked STALE, and the corresponding remote RVG becomes out of date until a full resynchronization with the Primary is performed. Since this is a time-consuming process, and also renders the Secondary useless until it is completed, SRL overflows are to be avoided whenever possible. The SRL size needs to be large enough to satisfy four constraints:

- It must not overflow for asynchronous RLINKs during periods of peak usage, when replication over the RLINK may fall far behind the application.
- It must not overflow while a Secondary RVG is being initialized.
- It must not overflow while a Secondary RVG is being restored.
- It must not overflow during extended outages (network or Secondary node).

To determine the size needed for the SRL volume, determine the size required to satisfy each of these constraints individually, and then choose a value at least equal to the maximum so that all of them are satisfied. The information needed to perform this analysis, presented below, includes:

- The maximum expected downtime for Secondary nodes
- The maximum expected downtime for the network connection
- The method for initializing Secondary data volumes with data from Primary data volumes. If the application will be shut down to perform the initialization, the SRL will not grow and the method is unimportant. Otherwise, this information could include the time required to copy the data over a network, or the time required to copy it to a tape or disk, send the copy to the Secondary site, and load the data onto the Secondary data volumes.

Note: If the Automatic Synchronization Option is used to initialize the Secondary, the initialization method described above is not a concern.

If Secondary backups will be performed to avoid a full resynchronization in case of Secondary data volume failure, the information needed also includes:

- The frequency of Secondary backups
- The maximum expected delay to detect and repair a failed Secondary data volume
- The expected time to reload backups onto the repaired Secondary data volume

Peak Usage Constraint

For some configurations, it might be common for replication to fall behind the application during some periods and catch up during others. For example, an RLINK might fall behind during business hours and catch up overnight if its peak bandwidth requirements exceed the network bandwidth. For synchronous RLINKs, this does not apply: a shortfall in network capacity causes each application write to be delayed, so the application runs more slowly, but it does not get ahead of replication. For asynchronous RLINKs, the only limit to how far replication can fall behind is the size of the SRL. If it is known that the application's peak bandwidth requirements will exceed the available network bandwidth, it becomes important to consider this factor when sizing the SRL.

Assuming that data is available giving the typical application bandwidth over a series of intervals of equal length, it is simple to calculate the SRL size needed to support this usage pattern:

1. Calculate the network capacity over the given interval (BW_N).
2. For each interval n, calculate the SRL growth (LG_n) as the excess of application bandwidth (BW_AP) over network bandwidth: LG_n = BW_AP(n) - BW_N.
3. For each interval, accumulate all the SRL growth values to find the cumulative SRL size (LS): LS_n = LG_1 + LG_2 + ... + LG_n.

The largest value obtained for any LS_n is the value that should be used for the SRL size, as determined by the peak usage constraint.
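The sketch below carries out these three steps for a series of equal-length intervals, with the small refinement that the backlog is never allowed to go below zero (the SRL cannot drain past empty). The hourly figures are assumptions; in practice they come from the application bandwidth profile, and the result corresponds to the largest LS_n.

    # Sketch: SRL size required by the peak usage constraint.
    # app_gb_per_hour holds the measured application bandwidth for each interval;
    # the values below are assumed example data.

    app_gb_per_hour = [1.0, 1.2, 1.4, 1.2, 1.0, 0.6, 0.3, 0.2]   # BW_AP(n)
    network_gb_per_hour = 0.8                                     # BW_N

    cumulative = 0.0      # LS_n
    required_gb = 0.0     # largest LS_n seen so far
    for bw_ap in app_gb_per_hour:
        growth = bw_ap - network_gb_per_hour          # LG_n = BW_AP(n) - BW_N
        cumulative = max(0.0, cumulative + growth)    # SRL cannot drain below empty
        required_gb = max(required_gb, cumulative)

    print(f"SRL size required by the peak usage constraint: {required_gb:.1f} GB")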

Table 1 shows an example of this calculation. The second column contains the maximum likely application bandwidth per hour, obtained by measuring the application as discussed in Application Bandwidth. Column 4 shows, for each hour, how much excess data the application generates that cannot be sent over the network. Column 5 shows the running sums obtained for each interval. Since the largest sum is 37 GB, the SRL would need to be at least this large for this application.

Note that several factors can reduce the maximum size to which the SRL can grow during the peak usage period. Among these are:

- The latencyprot characteristic can be enabled to restrict the amount by which the RLINK can fall behind.
- The network bandwidth can be increased to handle the full application bandwidth.

Table 1. Example Calculation of SRL Size Required to Support Peak Usage Period

    Hour ending | Application (GB/hour) | Network (GB/hour) | SRL Growth (GB) | Cumulative SRL Size (GB)

Initialization Period Constraint

This section applies only if the Automatic Synchronization Option is not being used.

When a new Secondary RVG is brought online, its data volumes must be initialized to match those on the Primary, unless the Primary is also starting from scratch. If the application on the Primary can be shut down while the data is copied to the Secondary, this operation becomes trivial and the SRL size is irrelevant. However, in most cases it is necessary to copy existing data from the Primary to the Secondary while the application is still running on the Primary. The following procedure, referred to as a Primary checkpoint, is used in this case:

1. Start a checkpoint on the Primary RVG.
2. Copy all Primary data volumes.
3. End the checkpoint.
4. Transmit the data to the Secondary site.
5. Load the Secondary data volumes with the data.
6. Start replicating from the start of the checkpoint.

If the total amount of data is small relative to the network speed, then steps 2, 4, and 5 may be accomplished as one by copying the Primary data volumes over the network to the Secondary data volumes. However, for large databases, it is likely to be faster to copy the Primary data volumes to tape in step 2 and ship the tapes via a courier in step 4. For distant locations, step 4 may take almost a day if an overnight courier is used. For large databases, writing and reading the tapes in steps 2 and 5 may also add significant delays. (Another option is to copy the data directly to disks, ship the disks, and import them on the Secondary.)

During the entire initialization period, between step 1 and step 6, the application is running, so data is accumulating in the SRL. Thus, to ensure that the SRL does not overflow during this period, it must be sized to hold as much data as the application could write during the initialization period. After the initialization period, this data is gradually replicated and the Secondary eventually catches up to the Primary.

Note that until the Secondary catches up, it will be inconsistent and out of date with respect to the Primary. This is an unavoidable consequence of the requirement that the application continue to run during this period.

To perform the initialization period calculation, first obtain an estimate of the expected time to perform steps 1 through 6. Although the first initialization may be scheduled for a slow period such as a night or weekend, a future initialization might be necessary during a busy period, because a Secondary may need to be resynchronized after a failure. If this could be the case, the calculation should use worst-case numbers for application bandwidth during the initialization period. If the site requirements will always allow an initialization to be performed at the most convenient time, best-case values for application bandwidth can be used. In either case, given the application profile obtained in Application Bandwidth, it should be a simple matter to determine the maximum amount of data that could be generated by the application over the time period expected for an initialization. Since all of this data must still be available in the SRL at the end of the initialization in order to bring the Secondary up to date, the SRL must be at least this large.

Secondary Backup Constraint

SRVM provides a mechanism to perform periodic backups of the Secondary data volumes. In case of a problem that would otherwise require a full resynchronization using a Primary checkpoint, as described in Initialization Period Constraint, a Secondary backup, if available, can be used to bring the Secondary back online much more quickly. An example of such a case is the failure of an unmirrored Secondary data volume.

A Secondary backup is made by defining a Secondary checkpoint and then making a copy of all the Secondary data volumes. Should a failure occur, the Secondary data volumes can be restored from this local copy, and replication can then proceed from the original checkpoint, thus replaying all the data from the checkpoint to the present. The constraint introduced by this process is that the SRL must be large enough to hold all the data between the most recent checkpoint and the present. Whether it can do so depends largely on three factors:

- The application data bandwidth
- The SRL size
- The frequency of Secondary backups

Thus, given an application data bandwidth and a frequency of Secondary backups, it is possible to come up with a minimum SRL size. Realistically, an extra margin should be added to an estimate arrived at using these figures to cover other possible delays, including:

- The maximum delay before a data volume failure is detected by a system administrator
- The maximum delay to repair or replace the failed drive
- The delay to reload the disk with data from the backup tape

To arrive at an estimate of the SRL size needed to support this constraint, first determine the total time period the SRL needs to cover by adding the period planned between Secondary backups to the time expected for the three delays mentioned above. Then, use the application bandwidth data to determine, for the worst case, the amount of data the application could generate over this time period.

Downtime Constraint

When the network connection to a Secondary node, or the Secondary node itself, goes down, the RLINK on the Primary node detects the broken connection and responds. For an RLINK in hard synchronous mode, the response is to fail all subsequent write requests until the connection is restored. In this case, the SRL does not grow, so the downtime constraint is irrelevant. For all other types of RLINKs, incoming write requests accumulate in the SRL until the connection is restored. Thus, the SRL must be large enough to hold the maximum output that the application could be expected to generate over the maximum possible downtime.

Maximum downtimes may be difficult to estimate. In some cases, there may be vendor guarantees that failed hardware or network connections will be repaired within some period. Of course, if the repair is not completed within the guaranteed period, the SRL will overflow despite the guarantee, so it is a good idea to add a safety margin to any such estimate.

To arrive at an estimate of the SRL size needed to support this constraint, first obtain estimates for the maximum downtimes that the Secondary node and network connections could reasonably be expected to incur. Then, use the application bandwidth data to determine, for the worst case, the amount of data the application could generate over this time period.
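The initialization period, Secondary backup, and downtime constraints all reduce to the same calculation: multiply a worst-case time window by the worst-case application bandwidth expected during that window. The helper sketched below makes this explicit; all of the window lengths and the bandwidth figure are assumed example values.

    # Sketch: SRL size needed to cover a period during which the SRL only fills
    # (an initialization, the wait for a Secondary restore, or an outage).

    def window_constraint_gb(window_hours, worst_case_gb_per_hour):
        """Data the application could write while replication makes no progress."""
        return window_hours * worst_case_gb_per_hour

    worst_case_bw = 1.0                                          # GB/hour, assumed
    initialization_gb = window_constraint_gb(10, worst_case_bw)  # 10-hour checkpoint copy
    backup_gb = window_constraint_gb(24 + 6, worst_case_bw)      # backup interval + detect/repair/reload
    downtime_gb = window_constraint_gb(24, worst_case_bw)        # longest expected outage

    print(f"initialization constraint  : {initialization_gb:.0f} GB")
    print(f"secondary backup constraint: {backup_gb:.0f} GB")
    print(f"downtime constraint        : {downtime_gb:.0f} GB")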

Additional Factors

Once estimates of the required SRL size have been obtained under each of the constraints described above, several additional factors must be considered.

For the initialization period, downtime, and Secondary backup constraints, it is quite possible that any of these situations could be immediately followed by a period of peak usage. In this case, the Secondary could continue to fall further behind rather than catching up during the peak usage period. As a result, it might be necessary to add the size obtained from the peak usage constraint to the maximum size obtained using the other constraints. Note that this applies even for soft synchronous RLINKs, which are not normally affected by the peak usage constraint, because after a disconnect they act as asynchronous RLINKs until they have caught up.

Of course, it is also possible that other combinations of events could occur that require adding constraints together. For example, an initialization period could be immediately followed by a long network failure, or a network failure could be followed by a Secondary node failure. Whether and to what degree to plan for unlikely occurrences requires weighing the cost of additional storage against the cost of the additional downtime caused by an SRL overflow.

Once an estimate has been computed, one more adjustment must be made to account for the fact that all data written to the SRL also includes some header information. This adjustment must take into account the typical size of write requests. Each request uses at least one additional disk block (512 bytes) for header information, so the adjustment is as follows:

    If Average Write Size is:    Add this Percentage to SRL Size:
    512 bytes                    100
    1 K                          50
    2 K                          25
    5 K or more                  10 or less

Example

This section contains an example of how a particular site might calculate a reasonable SRL size for its configuration. First, all the relevant parameters for the site must be collected. For this site, they are as follows:

    Application peak write bandwidth        1 GB/hour
    Duration of peak                        8 a.m. - 8 p.m.
    Application off-peak write bandwidth    250 MB/hour
    Average write size                      2 KB
    Number of Secondary sites               1
    Type of RLINK                           soft synchronous
    Initialization period:
        application shutdown                no
        copy data to tape                   3 hours
        send tapes to Secondary site        4 hours
        load data                           3 hours
        total                               10 hours
    Maximum downtime for Secondary node     4 hours
    Maximum downtime for network            24 hours
    Secondary backup                        not used

Since synchronous RLINKs will be used, the network must be sized to handle the peak application bandwidth, so that the SRL will not grow during the peak usage period. Thus, the peak usage constraint is not an issue, and the largest constraint is that the network could be out for 24 hours. The data accumulating in the SRL over this period would be:

    1 GB/hour x 12 hours (peak)         = 12 GB
    1/4 GB/hour x 12 hours (off-peak)   =  3 GB
    Total                               = 15 GB

Since the 24-hour downtime is already an extreme case, no additional adjustment is made to handle other constraints. An adjustment of 25% is made to handle header information, because the average write size is 2 KB. The result shows that the SRL should be at least 18.75 GB.
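For reference, the same arithmetic can be written out as a short script. The constants mirror the parameters listed above, and the 25% header adjustment corresponds to the 2 KB average write size in the table under Additional Factors.

    # Sketch: SRL sizing for the example site described above.

    peak_gb_per_hour = 1.0          # 8 a.m. - 8 p.m.
    off_peak_gb_per_hour = 0.25     # remaining 12 hours
    outage_hours = 24               # worst-case network downtime
    header_overhead = 0.25          # 25% adjustment for a 2 KB average write size

    peak_hours_in_outage = 12       # a full-day outage covers the entire peak period
    off_peak_hours_in_outage = outage_hours - peak_hours_in_outage

    raw_gb = (peak_gb_per_hour * peak_hours_in_outage
              + off_peak_gb_per_hour * off_peak_hours_in_outage)   # 12 + 3 = 15 GB
    srl_gb = raw_gb * (1 + header_overhead)                        # 15 * 1.25 = 18.75 GB

    print(f"data accumulated during the outage: {raw_gb:.2f} GB")
    print(f"minimum SRL size                  : {srl_gb:.2f} GB")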

Buffer Space

When a write request is made, an SRVM data buffer is allocated to it. The amount of buffer space available affects SRVM performance, which in turn can affect performance for the underlying Volume Manager volumes. The following tunables can be used to allocate buffer space on the Primary and Secondary according to your requirements:

- voliomem_max_readbackpool_sz
- voliomem_maxpool_sz
- voliomem_max_nmcompool_sz

These tunables can be modified by adding lines to the /etc/system file. For details on changing the SRVM tunables, see Chapter 5, Administering SRVM, in the VERITAS Storage Replicator for Volume Manager Administrator's Guide. The following sections describe each of these tunables.

Readback Buffer Space

When a write request is made, an SRVM data buffer is allocated to it. The data buffer is not released until the data has been written to the Primary and sent to all synchronous Secondary data volumes. When buffer space becomes low, several effects are possible, depending on the configuration. SRVM will begin to free some buffers before sending the data across the asynchronous RLINKs. This frees up more space for incoming write requests so that they are not delayed. The cost is that the freed requests must be read back from the SRL later, when an RLINK is ready to send them. As discussed in Synchronous versus Asynchronous, the need to perform readback may have a slight impact on write latency because it makes the SRL perform more non-sequential I/O.

The amount of buffer space available for these readbacks is defined by the tunable voliomem_max_readbackpool_sz, which defaults to 4 MB. To enable more readbacks at the same time, increase the value of voliomem_max_readbackpool_sz. You may need to increase this value if you have multiple asynchronous RLINKs. If multiple RVGs are present on a node, this value can also be increased according to your requirements.

Write Buffer Space on the Primary

The amount of buffer space that can be allocated within the operating system to handle incoming writes is defined by the tunable voliomem_maxpool_sz, which defaults to 4 MB. If the available buffer space is too small, writes are held up: SRVM must free old buffer space to allow new writes to be processed, and the freed requests are read back from the SRL when an RLINK is ready to send them to the Secondary. If voliomem_maxpool_sz is large enough to hold the incoming writes, this reading back of old buffers from the SRL can be avoided. To increase the number of concurrent writes, or to reduce the number of readbacks from the Storage Replicator Log (SRL), increase the value of voliomem_maxpool_sz.

Buffer Space on the Secondary

Secondary data volumes are not directly on the critical path: any individual write on the Primary can complete before the write to the Secondary data volumes completes, even for synchronous RLINKs. However, a feedback mechanism limits the amount by which the Secondary data volumes can fall behind. This mechanism involves a limit on the amount of memory that is allocated on a Secondary node to handle incoming requests from the Primary node. Once this limit is reached, the Secondary rejects incoming requests until existing requests complete their writes to the Secondary data volumes and free their memory. Since this appears to the Primary as an inability to send requests over the network, the consequences are identical to those of insufficient network bandwidth, and the results depend on whether synchronous or asynchronous RLINKs are in use. For asynchronous RLINKs, there may be no limit to how far the Secondary data volumes can fall behind unless the mechanisms discussed in Latency and SRL Protection are in force.

The amount of buffer space available for requests coming in to the Secondary over the network is determined by the SRVM tunable voliomem_max_nmcompool_sz, which defaults to 4 MB. Since this value is global, and therefore restricts all Secondary RVGs on a node, it may be useful to increase it if multiple Secondary RVGs will be present on the Secondary node. If there is a high volume of requests, increase voliomem_max_nmcompool_sz.
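For reference, the lines below sketch what the corresponding /etc/system entries might look like if all three pools were raised from the 4 MB default to 16 MB. The 16 MB figure is only an example, and the bare tunable names are an assumption: some releases require a module prefix (for example vxio:) in front of each name, so check the Administrator's Guide for the exact syntax for your release. Changes to /etc/system take effect at the next reboot.

    * Example /etc/system entries (sizes in bytes; 0x1000000 = 16 MB).
    * Values and bare tunable names are illustrative assumptions; your release
    * may require a module prefix such as vxio: before each tunable name.
    set voliomem_maxpool_sz=0x1000000
    set voliomem_max_readbackpool_sz=0x1000000
    set voliomem_max_nmcompool_sz=0x1000000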


Glossary

hard synchronous
    A characteristic of an RLINK which, when set, indicates that if the RLINK is disconnected or paused, any incoming write requests will be failed.

high-water mark
    A parameter associated with an RLINK, used only when latencyprot is enabled. When the RLINK falls behind by this number of requests, throttling is triggered, so all incoming write requests are delayed until the number of requests the RLINK is behind drops to the low-water mark.

low-water mark
    A parameter associated with an RLINK, used only when latencyprot is enabled. When throttling is triggered, it remains in effect (no new write requests are processed) until the number of requests the RLINK is behind drops to this number.

RDS
    Replicated Data Set

RVG
    Replicated Volume Group

soft synchronous
    A characteristic of an RLINK which, when set, indicates that if the RLINK is disconnected or paused, the RLINK switches to asynchronous mode.

SRVM
    Storage Replicator for Volume Manager

SRL
    Storage Replicator Log



More information

VERITAS Foundation Suite for HP-UX

VERITAS Foundation Suite for HP-UX VERITAS Foundation Suite for HP-UX Enhancing HP-UX Performance and Availability with VERITAS Foundation Products V E R I T A S W H I T E P A P E R Table of Contents Introduction.................................................................................1

More information

File System Implementation

File System Implementation File System Implementation Last modified: 16.05.2017 1 File-System Structure Virtual File System and FUSE Directory Implementation Allocation Methods Free-Space Management Efficiency and Performance. Buffering

More information

Veritas InfoScale Enterprise for Oracle Real Application Clusters (RAC)

Veritas InfoScale Enterprise for Oracle Real Application Clusters (RAC) Veritas InfoScale Enterprise for Oracle Real Application Clusters (RAC) Manageability and availability for Oracle RAC databases Overview Veritas InfoScale Enterprise for Oracle Real Application Clusters

More information

The Total Network Volume chart shows the total traffic volume for the group of elements in the report.

The Total Network Volume chart shows the total traffic volume for the group of elements in the report. Tjänst: Network Health Total Network Volume and Total Call Volume Charts Public The Total Network Volume chart shows the total traffic volume for the group of elements in the report. Chart Description

More information

vsan 6.6 Performance Improvements First Published On: Last Updated On:

vsan 6.6 Performance Improvements First Published On: Last Updated On: vsan 6.6 Performance Improvements First Published On: 07-24-2017 Last Updated On: 07-28-2017 1 Table of Contents 1. Overview 1.1.Executive Summary 1.2.Introduction 2. vsan Testing Configuration and Conditions

More information

TSM Paper Replicating TSM

TSM Paper Replicating TSM TSM Paper Replicating TSM (Primarily to enable faster time to recoverability using an alternative instance) Deon George, 23/02/2015 Index INDEX 2 PREFACE 3 BACKGROUND 3 OBJECTIVE 4 AVAILABLE COPY DATA

More information

CHAPTER 11: IMPLEMENTING FILE SYSTEMS (COMPACT) By I-Chen Lin Textbook: Operating System Concepts 9th Ed.

CHAPTER 11: IMPLEMENTING FILE SYSTEMS (COMPACT) By I-Chen Lin Textbook: Operating System Concepts 9th Ed. CHAPTER 11: IMPLEMENTING FILE SYSTEMS (COMPACT) By I-Chen Lin Textbook: Operating System Concepts 9th Ed. File-System Structure File structure Logical storage unit Collection of related information File

More information

The Google File System

The Google File System October 13, 2010 Based on: S. Ghemawat, H. Gobioff, and S.-T. Leung: The Google file system, in Proceedings ACM SOSP 2003, Lake George, NY, USA, October 2003. 1 Assumptions Interface Architecture Single

More information

Veritas Storage Foundation for Oracle RAC from Symantec

Veritas Storage Foundation for Oracle RAC from Symantec Veritas Storage Foundation for Oracle RAC from Symantec Manageability, performance and availability for Oracle RAC databases Data Sheet: Storage Management Overviewview offers a proven solution to help

More information

Lecture 21: Reliable, High Performance Storage. CSC 469H1F Fall 2006 Angela Demke Brown

Lecture 21: Reliable, High Performance Storage. CSC 469H1F Fall 2006 Angela Demke Brown Lecture 21: Reliable, High Performance Storage CSC 469H1F Fall 2006 Angela Demke Brown 1 Review We ve looked at fault tolerance via server replication Continue operating with up to f failures Recovery

More information

CLOUD-SCALE FILE SYSTEMS

CLOUD-SCALE FILE SYSTEMS Data Management in the Cloud CLOUD-SCALE FILE SYSTEMS 92 Google File System (GFS) Designing a file system for the Cloud design assumptions design choices Architecture GFS Master GFS Chunkservers GFS Clients

More information

Building a 24x7 Database. By Eyal Aronoff

Building a 24x7 Database. By Eyal Aronoff Building a 24x7 Database By Eyal Aronoff Contents Building a 24 X 7 Database... 3 The Risk of Downtime... 3 Your Definition of 24x7... 3 Performance s Impact on Availability... 4 Redundancy is the Key

More information

A GPFS Primer October 2005

A GPFS Primer October 2005 A Primer October 2005 Overview This paper describes (General Parallel File System) Version 2, Release 3 for AIX 5L and Linux. It provides an overview of key concepts which should be understood by those

More information

Chapter 3. The Data Link Layer. Wesam A. Hatamleh

Chapter 3. The Data Link Layer. Wesam A. Hatamleh Chapter 3 The Data Link Layer The Data Link Layer Data Link Layer Design Issues Error Detection and Correction Elementary Data Link Protocols Sliding Window Protocols Example Data Link Protocols The Data

More information

Chapter 11. SnapProtect Technology

Chapter 11. SnapProtect Technology Chapter 11 SnapProtect Technology Hardware based snapshot technology provides the ability to use optimized hardware and disk appliances to snap data on disk arrays providing quick recovery by reverting

More information

IMPROVING THE PERFORMANCE, INTEGRITY, AND MANAGEABILITY OF PHYSICAL STORAGE IN DB2 DATABASES

IMPROVING THE PERFORMANCE, INTEGRITY, AND MANAGEABILITY OF PHYSICAL STORAGE IN DB2 DATABASES IMPROVING THE PERFORMANCE, INTEGRITY, AND MANAGEABILITY OF PHYSICAL STORAGE IN DB2 DATABASES Ram Narayanan August 22, 2003 VERITAS ARCHITECT NETWORK TABLE OF CONTENTS The Database Administrator s Challenge

More information

White paper ETERNUS CS800 Data Deduplication Background

White paper ETERNUS CS800 Data Deduplication Background White paper ETERNUS CS800 - Data Deduplication Background This paper describes the process of Data Deduplication inside of ETERNUS CS800 in detail. The target group consists of presales, administrators,

More information

Chapter 4 Data Movement Process

Chapter 4 Data Movement Process Chapter 4 Data Movement Process 46 - Data Movement Process Understanding how CommVault software moves data within the production and protected environment is essential to understanding how to configure

More information

! Design constraints. " Component failures are the norm. " Files are huge by traditional standards. ! POSIX-like

! Design constraints.  Component failures are the norm.  Files are huge by traditional standards. ! POSIX-like Cloud background Google File System! Warehouse scale systems " 10K-100K nodes " 50MW (1 MW = 1,000 houses) " Power efficient! Located near cheap power! Passive cooling! Power Usage Effectiveness = Total

More information

Netsweeper Reporter Manual

Netsweeper Reporter Manual Netsweeper Reporter Manual Version 2.6.25 Reporter Manual 1999-2008 Netsweeper Inc. All rights reserved. Netsweeper Inc. 104 Dawson Road, Guelph, Ontario, N1H 1A7, Canada Phone: +1 519-826-5222 Fax: +1

More information

Chapter 12: File System Implementation

Chapter 12: File System Implementation Chapter 12: File System Implementation Chapter 12: File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management Efficiency

More information

Chapter 11: Implementing File

Chapter 11: Implementing File Chapter 11: Implementing File Systems Chapter 11: Implementing File Systems File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management Efficiency

More information

Veritas NetBackup for Microsoft SQL Server Administrator's Guide

Veritas NetBackup for Microsoft SQL Server Administrator's Guide Veritas NetBackup for Microsoft SQL Server Administrator's Guide for Windows Release 8.1.1 Veritas NetBackup for Microsoft SQL Server Administrator's Guide Last updated: 2018-04-10 Document version:netbackup

More information

V. Mass Storage Systems

V. Mass Storage Systems TDIU25: Operating Systems V. Mass Storage Systems SGG9: chapter 12 o Mass storage: Hard disks, structure, scheduling, RAID Copyright Notice: The lecture notes are mainly based on modifications of the slides

More information

VERITAS Volume Manager for Windows 2000

VERITAS Volume Manager for Windows 2000 VERITAS Volume Manager for Windows 2000 Advanced Storage Management Technology for the Windows 2000 Platform In distributed client/server environments, users demand that databases, mission-critical applications

More information

Congestion control in TCP

Congestion control in TCP Congestion control in TCP If the transport entities on many machines send too many packets into the network too quickly, the network will become congested, with performance degraded as packets are delayed

More information

Replication is the process of creating an

Replication is the process of creating an Chapter 13 Local tion tion is the process of creating an exact copy of data. Creating one or more replicas of the production data is one of the ways to provide Business Continuity (BC). These replicas

More information

KEYPAD MODEL USER MANUAL

KEYPAD MODEL USER MANUAL KEYPAD MODEL USER MANUAL Contents SecureDrive Overview 3 Safety Information 3 SecureDrive Features 4 PINs and Procedures 5 User Mode 5 User PINs 5 Unlocking the Drive in User Mode 6 Changing the User PIN

More information

Physical Representation of Files

Physical Representation of Files Physical Representation of Files A disk drive consists of a disk pack containing one or more platters stacked like phonograph records. Information is stored on both sides of the platter. Each platter is

More information

EMC Celerra Virtual Provisioned Storage

EMC Celerra Virtual Provisioned Storage A Detailed Review Abstract This white paper covers the use of virtual storage provisioning within the EMC Celerra storage system. It focuses on virtual provisioning functionality at several levels including

More information

EMC CLARiiON Backup Storage Solutions

EMC CLARiiON Backup Storage Solutions Engineering White Paper Backup-to-Disk Guide with Computer Associates BrightStor ARCserve Backup Abstract This white paper describes how to configure EMC CLARiiON CX series storage systems with Computer

More information

Chapter 11: Implementing File Systems. Operating System Concepts 9 9h Edition

Chapter 11: Implementing File Systems. Operating System Concepts 9 9h Edition Chapter 11: Implementing File Systems Operating System Concepts 9 9h Edition Silberschatz, Galvin and Gagne 2013 Chapter 11: Implementing File Systems File-System Structure File-System Implementation Directory

More information

Cisco Prime Network Registrar IPAM MySQL Database Replication Guide

Cisco Prime Network Registrar IPAM MySQL Database Replication Guide Cisco Prime Network Registrar IPAM 8.1.3 MySQL Database Replication Guide Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000

More information

Chapter 10: File System Implementation

Chapter 10: File System Implementation Chapter 10: File System Implementation Chapter 10: File System Implementation File-System Structure" File-System Implementation " Directory Implementation" Allocation Methods" Free-Space Management " Efficiency

More information

RAID SEMINAR REPORT /09/2004 Asha.P.M NO: 612 S7 ECE

RAID SEMINAR REPORT /09/2004 Asha.P.M NO: 612 S7 ECE RAID SEMINAR REPORT 2004 Submitted on: Submitted by: 24/09/2004 Asha.P.M NO: 612 S7 ECE CONTENTS 1. Introduction 1 2. The array and RAID controller concept 2 2.1. Mirroring 3 2.2. Parity 5 2.3. Error correcting

More information

WHITE PAPER. Recovery of a Single Microsoft Exchange 2000 Database USING VERITAS EDITION FOR MICROSOFT EXCHANGE 2000

WHITE PAPER. Recovery of a Single Microsoft Exchange 2000 Database USING VERITAS EDITION FOR MICROSOFT EXCHANGE 2000 WHITE PAPER Recovery of a Single Microsoft Exchange 2000 Database USING VERITAS EDITION FOR MICROSOFT EXCHANGE 2000 June, 2003 1 TABLE OF CONTENTS Overview...3 Background...3 Traditional Backup Processes...4

More information

Testing and Restoring the Nasuni Filer in a Disaster Recovery Scenario

Testing and Restoring the Nasuni Filer in a Disaster Recovery Scenario Testing and Restoring the Nasuni Filer in a Disaster Recovery Scenario Version 7.8 April 2017 Last modified: July 17, 2017 2017 Nasuni Corporation All Rights Reserved Document Information Testing Disaster

More information

WHITEPAPER. Disk Configuration Tips for Ingres by Chip nickolett, Ingres Corporation

WHITEPAPER. Disk Configuration Tips for Ingres by Chip nickolett, Ingres Corporation WHITEPAPER Disk Configuration Tips for Ingres by Chip nickolett, Ingres Corporation table of contents: 3 Preface 3 Overview 4 How Many Disks Do I Need? 5 Should I Use RAID? 6 Ingres Configuration Recommendations

More information

VERITAS Storage Foundation 4.1 for Windows

VERITAS Storage Foundation 4.1 for Windows VERITAS Storage Foundation 4.1 for Windows VERITAS Volume Replicator Option Administrator s Guide Windows 2000, Windows Server 2003 N117888 May 2004 Disclaimer The information contained in this publication

More information

PROMISE ARRAY MANAGEMENT ( PAM) USER MANUAL

PROMISE ARRAY MANAGEMENT ( PAM) USER MANUAL PROMISE ARRAY MANAGEMENT ( PAM) USER MANUAL Copyright 2002, Promise Technology, Inc. Copyright by Promise Technology, Inc. (Promise Technology). No part of this manual may be reproduced or transmitted

More information

OPERATING SYSTEMS II DPL. ING. CIPRIAN PUNGILĂ, PHD.

OPERATING SYSTEMS II DPL. ING. CIPRIAN PUNGILĂ, PHD. OPERATING SYSTEMS II DPL. ING. CIPRIAN PUNGILĂ, PHD. File System Implementation FILES. DIRECTORIES (FOLDERS). FILE SYSTEM PROTECTION. B I B L I O G R A P H Y 1. S I L B E R S C H AT Z, G A L V I N, A N

More information

Data Loss and Component Failover

Data Loss and Component Failover This chapter provides information about data loss and component failover. Unified CCE uses sophisticated techniques in gathering and storing data. Due to the complexity of the system, the amount of data

More information

Veritas Storage Foundation Volume Replicator Administrator's Guide

Veritas Storage Foundation Volume Replicator Administrator's Guide Veritas Storage Foundation Volume Replicator Administrator's Guide Windows Server 2012 (x64) 6.0.2 January 2013 Veritas Storage Foundation Volume Replicator Administrator's Guide The software described

More information

VERITAS Database Edition for Sybase. Technical White Paper

VERITAS Database Edition for Sybase. Technical White Paper VERITAS Database Edition for Sybase Technical White Paper M A R C H 2 0 0 0 Introduction Data availability is a concern now more than ever, especially when it comes to having access to mission-critical

More information

Data Protection Using Premium Features

Data Protection Using Premium Features Data Protection Using Premium Features A Dell Technical White Paper PowerVault MD3200 and MD3200i Series Storage Arrays www.dell.com/md3200 www.dell.com/md3200i THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES

More information

EMC Celerra Replicator V2 with Silver Peak WAN Optimization

EMC Celerra Replicator V2 with Silver Peak WAN Optimization EMC Celerra Replicator V2 with Silver Peak WAN Optimization Applied Technology Abstract This white paper discusses the interoperability and performance of EMC Celerra Replicator V2 with Silver Peak s WAN

More information

Chapter 12: File System Implementation. Operating System Concepts 9 th Edition

Chapter 12: File System Implementation. Operating System Concepts 9 th Edition Chapter 12: File System Implementation Silberschatz, Galvin and Gagne 2013 Chapter 12: File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods

More information

Maximizing Performance of IBM DB2 Backups

Maximizing Performance of IBM DB2 Backups Maximizing Performance of IBM DB2 Backups This IBM Redbooks Analytics Support Web Doc describes how to maximize the performance of IBM DB2 backups. Backing up a database is a critical part of any disaster

More information

IBM MQ Appliance HA and DR Performance Report Version July 2016

IBM MQ Appliance HA and DR Performance Report Version July 2016 IBM MQ Appliance HA and DR Performance Report Version 2. - July 216 Sam Massey IBM MQ Performance IBM UK Laboratories Hursley Park Winchester Hampshire 1 Notices Please take Note! Before using this report,

More information

Performance Monitoring User s Manual

Performance Monitoring User s Manual NEC Storage Software Performance Monitoring User s Manual IS025-32E NEC Corporation 2003-2017 No part of the contents of this book may be reproduced or transmitted in any form without permission of NEC

More information

The VERITAS VERTEX Initiative. The Future of Data Protection

The VERITAS VERTEX Initiative. The Future of Data Protection The VERITAS VERTEX Initiative V E R I T A S W H I T E P A P E R The Future of Data Protection Table of Contents Introduction.................................................................................3

More information

Chapter 12: File System Implementation

Chapter 12: File System Implementation Chapter 12: File System Implementation Silberschatz, Galvin and Gagne 2013 Chapter 12: File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods

More information

Catalogic DPX TM 4.3. ECX 2.0 Best Practices for Deployment and Cataloging

Catalogic DPX TM 4.3. ECX 2.0 Best Practices for Deployment and Cataloging Catalogic DPX TM 4.3 ECX 2.0 Best Practices for Deployment and Cataloging 1 Catalogic Software, Inc TM, 2015. All rights reserved. This publication contains proprietary and confidential material, and is

More information

Dell OpenManage Power Center s Power Policies for 12 th -Generation Servers

Dell OpenManage Power Center s Power Policies for 12 th -Generation Servers Dell OpenManage Power Center s Power Policies for 12 th -Generation Servers This Dell white paper describes the advantages of using the Dell OpenManage Power Center to set power policies in a data center.

More information

VERITAS Storage Foundation for Windows FlashSnap Option

VERITAS Storage Foundation for Windows FlashSnap Option VERITAS Storage Foundation for Windows FlashSnap Option Snapshot Technology for Microsoft Windows Server 2000 and Windows Server 2003 August 13, 2004 1 TABLE OF CONTENTS Introduction...3 Fast Data Recovery...3

More information

Veritas NetBackup Appliance Fibre Channel Guide

Veritas NetBackup Appliance Fibre Channel Guide Veritas NetBackup Appliance Fibre Channel Guide Release 2.7.3 NetBackup 52xx and 5330 Document Revision 1 Veritas NetBackup Appliance Fibre Channel Guide Release 2.7.3 - Document Revision 1 Legal Notice

More information

Outline. Failure Types

Outline. Failure Types Outline Database Tuning Nikolaus Augsten University of Salzburg Department of Computer Science Database Group 1 Unit 10 WS 2013/2014 Adapted from Database Tuning by Dennis Shasha and Philippe Bonnet. Nikolaus

More information

Technical Brief. NVIDIA Storage Technology Confidently Store Your Digital Assets

Technical Brief. NVIDIA Storage Technology Confidently Store Your Digital Assets Technical Brief NVIDIA Storage Technology Confidently Store Your Digital Assets Confidently Store Your Digital Assets The massive growth in broadband connections is fast enabling consumers to turn to legal

More information

CHAPTER. The Role of PL/SQL in Contemporary Development

CHAPTER. The Role of PL/SQL in Contemporary Development CHAPTER 1 The Role of PL/SQL in Contemporary Development 4 Oracle PL/SQL Performance Tuning Tips & Techniques When building systems, it is critical to ensure that the systems will perform well. For example,

More information

Backup and Restore Strategies

Backup and Restore Strategies Backup and Restore Strategies WHITE PAPER How to identify the appropriate life insurance for your data At home, you safeguard against any incident to protect your family, your life, your property everything

More information