DB2 for z/OS Utilities Best Practices Part 2. Haakon Roberts, DB2 for z/OS Development. 2011 IBM Corporation. Transcript of webcast.

Slide 1 (00:00) My name is Haakon Roberts. I work for the DB2 Silicon Valley Lab in California, and I am going to be presenting DB2 for z/OS Utilities Best Practices. If we take a look at the agenda, we'll start with general recommendations for utilities. Then we'll take a look at a set of primary utilities within the DB2 for z/OS utilities suite. We'll take a look at COPY, and include in that COPY's use of data set level FlashCopy, which was introduced in version 10 of DB2. Then we'll look at RECOVER, including the QUIESCE and MODIFY RECOVERY utilities.

Slide 3 (00:18) If we take a look at the agenda, I'll start by going through some general recommendations regarding utilities use for DB2 for z/OS. Then we'll look at some specific utilities areas such as COPY, and COPY's use of data set level FlashCopy that was introduced in version 10 of DB2. Then we'll look at RECOVER, including QUIESCE and MODIFY. We'll look at LOAD and UNLOAD processing, spend quite some time discussing REORG, then RUNSTATS and CHECK, and finally we'll have a look at DSN1COPY and use of the DSN1COPY utility. The general recommendations, COPY, and FlashCopy are in the first part of this presentation; RECOVER, QUIESCE, MODIFY, LOAD, UNLOAD, REORG, RUNSTATS, CHECK, and DSN1COPY are in this second part.

Slide 13 (01:25) Moving on to RECOVER, QUIESCE, and MODIFY.

Slide 14 (01:31) The first thing to note about the RECOVER utility is that it typically consists of two phases: one is restoring the recovery base, and the other is the log apply. Any time we are talking about RECOVER, we are talking about data and application availability, and anything that can be done to reduce the recovery time is going to improve availability for applications and for businesses. Therefore, if we look at what we are doing for RECOVER, we either need to speed up the restore of the data sets or recovery bases, or we need to reduce the log apply time. So let's take a look at our recommendations in this area. The first recommendation is to maximize exploitation of parallel RESTORE and fast log apply. Our recommendation is to recover multiple objects in a single RECOVER statement, because the recovery bases are going to be restored in parallel and, perhaps more importantly, we will perform one scan of the log for that RECOVER utility and will be able to take full advantage of fast log apply, which was introduced in version 6 of DB2, to have parallel log apply across multiple objects in a single RECOVER. Ideally our recommendation would be to specify fewer than 100 objects in a recover list, but we can support many more than that. If multiple RECOVER jobs are being run, avoid running more than 10 RECOVER jobs in a single DB2 subsystem. The reason is that fast log apply will use up to 100 megabytes of DBM1 address space storage, but each RECOVER job itself will only use a maximum of 10 megabytes. So if you run more than 10 RECOVER jobs, the first 10 will be able to acquire fast log apply storage, but the 11th is unlikely to get any and will run doing slow log apply, and you will not take advantage of the performance of fast log apply. Other recommendations are to image copy indexes and include those indexes in the recovery list. That is particularly important for point-in-time recovery, because if you wish to recover data to a prior point in time and you don't include the indexes in that RECOVER statement, then the indexes will be put in rebuild pending, and the index rebuilds can take a considerable length of time. By image copying the indexes and including them in the recovery list, you avoid the need to rebuild the indexes. Next, if you have multiple objects that need to be recovered, and you know that some of the objects are read only and have no updates against them while other objects do have updates, consider splitting off the page sets that don't have any updates and recovering them in a separate RECOVER statement. The reason is that if they don't have any updates that require log apply, the RECOVER utility will restore the image copies, determine from SYSLGRNX that there is no log apply to be done, and the recover is then complete: the objects are available at that point. If those objects are included in a list with other page sets that do have log records to apply, then none of the objects in the list are going to be made available until the entire recovery is completed, and so the objects that could have been made available after the restore of the recovery base will not be available until the entire log apply is complete. Next, for point-in-time recovery, include the entire referential integrity set in the same RECOVER statement to avoid objects being put into check pending, and include base and aux objects in the same RECOVER statement for the same reason. In fact, in version 10 of DB2 we provide an option on the RECOVER statement that allows you to either enforce or not enforce these particular rules. If you are using system-level backups, i.e., the RESTORE SYSTEM utility introduced in version 8 of DB2, and you want to perform object-level recovery from a system-level backup, then the Recovery Expert tool has that capability; in version 9 of DB2, the RECOVER utility itself has that capability. Finally, for point-in-time recovery in version 10 of DB2, consider using the BACKOUT YES option of the RECOVER utility. Now, it is not the utility's job to determine whether it makes more sense to roll back to a prior point in time or to restore an image copy and roll forward through the log. That is a decision you will have to make yourself, or the Recovery Expert tool will provide the necessary recommendations.
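
To make the shape of such a recovery concrete, here is a minimal sketch of a single RECOVER list that restores two tablespaces and an index together; the object names and the log point are invented for illustration:

    RECOVER TABLESPACE DBPROD.TSORDER
            TABLESPACE DBPROD.TSLINES
            INDEX ADMIN.XORDER1
            TOLOGPOINT X'00C8E1A2B3C4'

The recovery bases for all three objects are restored in parallel, a single log scan serves the whole list, and including the image-copied index avoids leaving it in rebuild pending after the point-in-time recovery.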

Slide 15 (07:36) Moving on to slide 15. For the QUIESCE utility: quiesces are typically run periodically to ensure that there is a consistent point to recover back to in case point-in-time recovery is required. In version 9 of DB2, the RECOVER utility was enhanced so that any point-in-time recovery to an RBA or to an LRSN value will ensure that at the end of the recover the data set is transactionally consistent. What I mean by that is that if you recover to a particular RBA in version 9 of DB2, the RECOVER utility will recover to that point in the log, determine what units of work were uncommitted at that point, and then back out those uncommitted units of work. So at the end of the RECOVER, what you have is a consistent data set. For that reason, you might want to consider whether it is really necessary to continue taking quiesce points on a regular basis, since running QUIESCE has an application impact: it drains write claims against the object. So in version 9 of DB2, because of this enhancement to the RECOVER utility, it may no longer be necessary to take periodic quiesce points. On the other hand, suppose what you want is just to take a quiesce point so that you have a mark on the log, so that you know what the RBA was at midday yesterday in case you want to recover back to midday yesterday. Then, instead of running QUIESCE against the object that you want to recover, or may want to recover, one thing you can consider doing is running the QUIESCE utility against DSNDB06.SYSEBCDC. That tablespace just contains SYSIBM.SYSDUMMY1, and taking a quiesce point of it is not going to impact your applications. You will still end up with a quiesce point logged in SYSCOPY, so you can then look at the quiesce row in SYSCOPY and see what the RBA or the LRSN was at that particular point in time, and that RBA or LRSN can then be used for point-in-time recovery of your real data. The other point to note about QUIESCE is that our recommendation is to use WRITE NO, unless you absolutely do have to have pages forced out to disk.
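
As a sketch, the low-impact marker quiesce described above would be coded like this (verify that DSNDB06.SYSEBCDC is still the tablespace holding SYSIBM.SYSDUMMY1 at your DB2 level):

    QUIESCE TABLESPACE DSNDB06.SYSEBCDC WRITE NO

The SYSCOPY row this leaves behind records the RBA or LRSN, which can later be supplied to RECOVER TOLOGPOINT for your real data.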

With respect to MODIFY RECOVERY, you should ensure that you base your MODIFY strategy on your backup strategy, and not vice versa. You do not want objects going into copy pending because the MODIFY RECOVERY utility that you ran removed your last recovery point, your last recovery base. You need to make sure you understand what your backup strategy is: you set your backup strategy based on your recovery time objective, and once you've set your backup strategy, you then set your MODIFY RECOVERY strategy to ensure you no longer keep obsolete recovery information lying around that you would never use for recovery purposes. So consider running MODIFY RECOVERY every time a backup is taken, or at least weekly. In addition, to ensure that MODIFY RECOVERY runs optimally, consider reorging the SYSLGRNX tablespace on a regular basis, in order to ensure optimal performance of MODIFY RECOVERY and that it has no impact on the system when it runs. Take advantage of the new features that were delivered in version 9 of DB2 for MODIFY RECOVERY to say not what it is you want to delete, but what it is you actually want to keep; for example, "I want to keep recovery information for the last 3 image copies." Also bear in mind that MODIFY RECOVERY will not clean up orphan entries in SYSLGRNX, and by orphan entries I mean SYSLGRNX entries for objects that have been dropped. Finally, run the MODIFY RECOVERY utility to delete recovery information from prior to a REORG that materializes row alterations, such as a table where new columns have been added and REORG has been run to materialize those altered columns. Eventually you would want to run the MODIFY RECOVERY utility to have the old recovery information removed, because that will make subsequent REORGs and other utilities more efficient.
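
A minimal sketch of the version 9 keep-style syntax (hypothetical object name):

    MODIFY RECOVERY TABLESPACE DBPROD.TSORDER RETAIN LAST (3)

This expresses what to keep, recovery information back to the last three image copies, rather than what to delete.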

Slide 16 (13:05) Moving on now to the LOAD and UNLOAD utilities.

Slide 17 (13:10) On slide 17: as you would imagine, running the LOAD utility without logging, with reuse of existing data sets, and without the need to build new compression dictionaries is going to make the LOAD utility run more efficiently. In addition, make sure that inline image copy data sets are allocated on disk, and if loading multiple partitions, split the input data set up and drive LOAD partition parallelism in a single LOAD. Use SORTNUM elimination, as was discussed earlier in this talk. In version 9 of DB2, in the maintenance stream, we introduced a new parameter called NUMRECS. NUMRECS is a table-level replacement for the SORTKEYS parameter, and our recommendation would be to take a look at NUMRECS and use it rather than SORTKEYS: it is simpler to use and it is more robust. If loading a partitioned table with a single input data set, consider presorting the data in clustering key order. The LOAD utility does not sort data, but by presorting the data outside of the LOAD utility, in partitioning key order, we have found that performance of the LOAD utility can be significantly improved when loading multiple partitions from a single input data set. Bear in mind that the Utility Enhancement Tool has a presort option that will automatically presort the data before invoking the LOAD, if you wish to purchase that tool or already have it available to you today. In addition to this, for improved performance and reduced CPU consumption, consider using the new option FORMAT INTERNAL, which was delivered this year in the maintenance stream for the LOAD and UNLOAD utilities. The idea here is that if you are unloading from table A and loading the data into table B, and table A and table B have the same table definitions, you should avoid converting the data into external format only to convert it from external format back into internal format to load it into table B. That is what FORMAT INTERNAL does for you: it unloads the data in internal format and avoids all of the row conversion and field conversion that otherwise would need to be done by the UNLOAD and LOAD utilities. Consider taking a look at USS named pipe support; I refer you to the APARs on this particular slide for details. The idea here is that with USS named pipes it is possible to unload to a virtual file in memory, or to populate a virtual file in memory from an application, and then have the LOAD utility pull the data from that virtual file and load it into a DB2 table without landing the data on disk on the z/OS system. Finally, in version 10 of DB2 we introduced hash tables. Hashed access provides very fast access to data for applications, but it means that the tablespace structure is slightly different from normal tablespaces. It also means that the LOAD utility cannot load data in the order in which it resides in the SYSREC data set: each row that is loaded has to go to its specific hash position. As a result, one should not expect loading data into a hash table to perform as well as loading into a non-hash table. However, the Utility Enhancement Tool has been enhanced so that its presort option will sort the data in hash order, and that provides a significant performance improvement when loading into hash tables in version 10 of DB2.
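
As a rough sketch, a LOAD along these lines might look as follows; the DD names, table name, and row count are invented, and FORMAT INTERNAL assumes the input was produced by UNLOAD FORMAT INTERNAL from an identically defined table:

    LOAD DATA INDDN SYSREC FORMAT INTERNAL
         LOG NO REUSE
         COPYDDN (ICOPY1)
         INTO TABLE DBPROD.TORDER NUMRECS 50000000

LOG NO avoids logging, REUSE avoids deleting and redefining the data sets, the inline copy keeps the object out of copy pending, and NUMRECS replaces SORTKEYS at the table level.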

Slide 18 (17:57) Moving on to slide 18. Consider whether you want to use the UNLOAD utility or the High Performance Unload tool. The UNLOAD utility is part of the utilities suite; High Performance Unload is a separately chargeable tool. They often have comparable elapsed times, although HPU often uses less CPU. HPU also has a full SQL interface, and it permits unload from page sets on disk. Next, a quick word about file reference variable processing for LOAD and UNLOAD. If you have LOB data or XML data of any size, and that data needs to be unloaded or loaded, chances are you are using file reference variables. File reference variable performance in version 8 was improved in version 9 with APAR PK. Even so, for file reference variables you have a choice of using members of a PDS or using HFS files, and even though we improved the performance of PDS file reference variables in version 9 of DB2, HFS still performs better in terms of elapsed time. In addition, there is a limit on the number of members that can be created in a PDSE, which could limit the number of records that can be unloaded using PDS FRVs. Null LOBs are handled better than zero-length LOBs, but in version 9 of DB2 that issue is resolved in the maintenance stream. In version 9, as I mentioned, the performance of FRVs tends to be better than in version 8, but the true performance improvement comes in version 10 of DB2, where UNLOAD and LOAD now support VBS (variable blocked spanned) format for the SYSREC data set. That allows the LOB and XML documents to be put inline with the base row in the SYSREC data set and avoids the use of FRVs altogether; it is enabled via the new SPANNED parameter in version 10 of DB2.
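
A minimal sketch of the version 10 spanned unload (hypothetical names; the SYSREC data set would be allocated in VBS record format):

    UNLOAD TABLESPACE DBPROD.TSDOCS
           SPANNED YES
           FROM TABLE DBPROD.TDOCS

With SPANNED YES, the LOB and XML values travel inline with the base rows in SYSREC, so no file reference variables are needed.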

Slide 19 (20:47) Moving on to the REORG utility.

Slide 20 (20:51) If we look at slide 20: obviously our recommendation is to run REORG SHRLEVEL CHANGE for maximum availability. If you are reorging a partition of a partitioned tablespace simply in order to compress the data in that partition, and your table is partitioned by date, consider using LOAD COPYDICTIONARY to copy your compression dictionary from an existing partition into the new partition and avoid having to run the REORG in the first place. The REORG utility was changed in version 9 of DB2 to remove the BUILD2 phase when reorging a subset of partitions where non-partitioned secondary indexes exist. The way that was done was by shadowing the entire NPI, so a REORG of a small subset of partitions with NPIs can actually take longer in version 9 than it took in version 8 of DB2, and its performance can be worsened if the NPI is disorganized, since keys for non-reorged partitions are unloaded in key order. Performance is improved significantly in version 10 of DB2 with index leaf page list prefetch. In addition, further performance improvements are planned to help address this particular issue, and the expectation is that changes will be put into the maintenance stream for version 9 and version 10 of DB2 to provide additional performance improvements for reorging subsets of partitions when non-partitioned secondary indexes exist. The other issue surrounding the removal of the BUILD2 phase is that if you have an NPI, then in version 9 of DB2, and in version 10 also, concurrent REORGs of partitions in the same tablespace are not permitted. The reason is that both REORGs would attempt to shadow the same NPI, and you cannot have two shadows of the same page set at the same time. So in PK87762 we retrofitted from version 10 back into version 9 the ability to specify multiple partition ranges in a single REORG statement. Now in a single REORG you can specify, as per this example, that you want to reorg part 1, part 10, parts 50 through 71, and parts 500 through 900. In version 9 of DB2 we will unload those partitions in parallel and reload them in parallel, and we have fast log apply for the log phase as well. It is much more efficient, and you only get a single processing of the NPIs; that is much better than splitting those partitions up and running them in separate REORGs, which is what would have had to occur prior to PK87762. However, reorging all those partitions in a single REORG statement requires shadowing all of those partitions, and therefore it can use more disk space. As a result, in order to allow customers to decide whether they want to run these in parallel or not, a new PARALLEL keyword was introduced with PM25525, and its default is PARALLEL YES. In addition, we are introducing a new ZPARM that governs the parallelism for a REORG of a subset of partitions when the REORG utility has as its input a LISTDEF specifying a list of partitions for a particular partitioned tablespace. The new ZPARM, introduced in PM37293, governs the PARALLEL behavior so that the PARALLEL keyword does not have to be specified. So in summary, the partition parallelism in version 9 in UNLOAD, RELOAD, and log apply means that a multiple-partition REORG is much more efficient: it is faster, the log phase is better at keeping up with the logging rates, and we only process the NPIs once. So if you have the DASD space, our recommendation is to reorg multiple partitions in a single REORG statement.
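
A sketch of the multiple-partition-range syntax just described (hypothetical tablespace and mapping table names):

    REORG TABLESPACE DBPROD.TSORDER
          PART (1, 10, 50:71, 500:900)
          SHRLEVEL CHANGE
          MAPPINGTABLE ADMIN.MAPTBL
          PARALLEL YES

PARALLEL YES is the default once PM25525 is applied; specify PARALLEL NO if shadowing all of those partitions at once would use too much disk space.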

Slide 21 (26:33) If we now take a look at slide 21: this slide discusses the main recommendations for REORG SHRLEVEL CHANGE. First of all, our recommendation is to use DRAIN ALL rather than DRAIN WRITERS, to minimize application impact. Secondly, use TIMEOUT TERM so that objects get freed up if we hit a timeout on a drain. Next, let us take a look at DRAIN_WAIT and MAXRO. REORG SHRLEVEL CHANGE needs to apply log records, so it does a log scan in the log phase. At some point, the REORG utility is going to determine that it is close enough to the end of the log that it should drain off the writers and then catch up on the last little bit of log. What governs when we try to get the drain is the MAXRO parameter. If MAXRO is set to 30, that says the REORG utility should try to get the drain when we think the remaining log is going to take less than 30 seconds to process. The thing to note here is that we attempt to get the drain at that point, and it may be that it takes us 20 seconds to drain off the claimers. While those claimers are being drained, further updates can be written to the log, so by the time we have actually acquired the drain we are past MAXRO: even though at the time we decided to get the drain we thought there were only 30 seconds of log left, by the time we succeeded in getting the drain we actually have more log to apply than existed at the time we decided to get the drain. That is why, if you want to minimize application impact from the drain processing, the last log iteration, and the switch phase of REORG, you should add DRAIN_WAIT and MAXRO together and set them to something less than your IRLM lock timeout interval. MAXRO says how much log we are going to be processing before we attempt to get the drain; then we need to wait for the drain to occur; and once we have got the drain, we have to catch up on the log, and we still have our switch phase processing that needs to be done before we can allow applications back in to access the reorganized data. So if the idea is to minimize application impact, our recommendation is to set DRAIN_WAIT plus MAXRO to something less than the IRLM lock timeout interval. The difficulty here is to ensure that you do not set MAXRO too low, because if MAXRO is set too low you could end up in a situation where the REORG utility determines that it can never actually catch up on the log and therefore never attempts to acquire the drain; then you could potentially hit the deadline threshold you have set for how long this whole process should take. If we are unable to acquire the drain, we will release the drain attempt, allow applications back in, wait for a period of time, and then try again. That is governed by the RETRY parameter and the RETRY_WAIT parameter, and our recommendation is to use the default of RETRY 6 and a RETRY_WAIT of DRAIN_WAIT times the number of retries.
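
As a sketch, for a subsystem whose IRLM lock timeout is 60 seconds, the drain-related parameters discussed above might be coded like this (object and mapping table names are invented):

    REORG TABLESPACE DBPROD.TSORDER
          SHRLEVEL CHANGE
          MAPPINGTABLE ADMIN.MAPTBL
          DRAIN ALL TIMEOUT TERM
          DRAIN_WAIT 30 MAXRO 20
          RETRY 6 RETRY_WAIT 180

Here DRAIN_WAIT plus MAXRO is 50 seconds, under the 60-second timeout interval, and RETRY_WAIT is DRAIN_WAIT times the number of retries.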

Another option is to consider using MAXRO DEFER. If you have a 30-minute window within which the data has to be reorganized, and yet you are reorganizing a 5-billion-row table, that REORG is not going to complete from start to end in 30 minutes. Therefore it is important to start the REORG earlier; what typically matters is whether it can complete within the 30-minute window. With MAXRO DEFER it is possible to start the REORG some time earlier, hours or even days earlier, and have the REORG utility run along in the background in the log phase, keeping up with the log. Then, when you hit the 30-minute window that you have available, use the ALTER UTILITY command to alter the utility and have it try to complete and get the drain in the window you have made available.
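
A sketch of completing such a deferred REORG from the console when the window opens (the utility ID is invented; check the exact ALTER UTILITY syntax for your DB2 level):

    -ALTER UTILITY (REORGTS1) REORG MAXRO (30)

Lowering MAXRO from DEFER to a real value tells the running REORG to attempt the drain and finish.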

Next, regarding REORG of LOB page sets. LOB tablespaces came along in version 6 of DB2, but REORG of LOB tablespaces had some drawbacks in versions 6, 7, and 8. The primary recommendation is to get to version 9 of DB2 and use the new SHRLEVEL REFERENCE capability that is available in version 9 conversion mode, and then in version 10 of DB2 use REORG SHRLEVEL CHANGE for LOB tablespaces. Bear in mind that in version 10 new function mode, REORG SHRLEVEL NONE of a LOB tablespace, even though it is technically still supported, performs no reorganization: the REORG will complete with return code zero, but no REORG will actually be done. Therefore the strong recommendation is to convert REORGs of LOB tablespaces to either SHRLEVEL REFERENCE or SHRLEVEL CHANGE before moving to version 10 new function mode. Another point to note is that when reorging a PBG that has LOB columns in version 10 of DB2, and the PBG grows new partitions during the REORG, the corresponding newly grown LOB tablespace may be left in copy pending. Next, if using REORG DISCARD on variable-length records, REORG DISCARD performs better with the NOPAD option. And finally, our recommendation is to use inline statistics to gather statistics against objects when running REORG, rather than running a separate RUNSTATS. Bear in mind that in version 10 of DB2 there is an availability improvement for REORG with inline stats, because the catalog update to set the statistics columns is done after we allow applications back in to access the data. In version 9 of DB2, the catalog update for inline statistics by REORG is done prior to releasing the drain, so if the REORG covers many objects, i.e., hundreds or thousands of partitions, the catalog statistics update could have a significant impact on the duration of application unavailability for the REORG.
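
A sketch of gathering statistics inline during the REORG instead of with a separate RUNSTATS (hypothetical object name):

    REORG TABLESPACE DBPROD.TSORDER
          SHRLEVEL REFERENCE
          STATISTICS TABLE (ALL) INDEX (ALL)

The data is read once for both the reorganization and the statistics collection.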

Slide 22 (35:14) Moving on to slide 22, a quick word about REORG INDEX versus REBUILD INDEX. REBUILD INDEX SHRLEVEL CHANGE is provided in version 9 of DB2 and is very good for the creation of new non-unique indexes and for indexes that are already broken or already in rebuild pending. It does not operate against shadow page sets, so it will set rebuild pending if it is not already set. REORG INDEX, however, does operate against a shadow. Rebuilding indexes can be faster than reorging them, particularly if the index is disorganized, but REORG INDEX performance has improved in version 10 due to index leaf page list prefetch.

Slide 23 (36:10)

Slide 24 (36:13) Moving on to slide 24, if we now take a quick look at RUNSTATS. A key thing to note about RUNSTATS is that one should not gather unnecessary statistics. It is important to take a look and see what statistics really need to be gathered, and gather only those. Also, do not use RUNSTATS to gather space statistics; you should be relying on real-time statistics for that instead. And use sampling for RUNSTATS: in version 10 of DB2 we provide page sampling rather than row sampling, and our recommendation would be to use page-level sampling with an automatic sampling rate through specification of the new TABLESAMPLE SYSTEM AUTO parameter for RUNSTATS. In addition, in version 10 of DB2 we provide statistics profiles for tables to simplify RUNSTATS processing, and our recommendation would be to use those in version 10. However, rather than running RUNSTATS, it is more efficient to gather statistics through inline statistics, for example with the REORG utility. And finally, our recommendation is to specify KEYCARD when gathering statistics through RUNSTATS. The index cardinality statistics that this gathers are cheap to collect, and they are heavily used by the DB2 optimizer.
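
A sketch combining these recommendations (hypothetical object name; TABLESAMPLE SYSTEM AUTO is the version 10 page-sampling option described above):

    RUNSTATS TABLESPACE DBPROD.TSORDER
             TABLE (ALL) TABLESAMPLE SYSTEM AUTO
             INDEX (ALL) KEYCARD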

Slide 25 (37:54)

Slide 26 (37:57) Moving on to slide 26 and the CHECK utilities. If all that is required is just a consistency check, then our recommendation would be to run the CHECK utility SHRLEVEL CHANGE. Another option, rather than running the CHECK utilities, would be to use SQL, for example SELECTs with isolation level UR, which do not acquire any locks. One thing to be aware of when running the CHECK utility without SHRLEVEL CHANGE is what happens if it detects an inconsistency. For example, when running CHECK LOB against a LOB tablespace, if it finds an inconsistency it will put the entire LOB tablespace in check pending, even though that one broken LOB may have had no application impact to you prior to that point. For that reason, consider either running CHECK SHRLEVEL CHANGE, which does not set restrictive states (nor would it reset restrictive states), or, if running a CHECK utility such as CHECK DATA or CHECK LOB, consider having a REPAIR utility ready to reset any check pending or ACHKP states. Bear in mind that CHECK INDEX never sets restrictive states if it finds inconsistencies. In version 10 of DB2 this has changed, so now no CHECK utility will set check pending or ACHKP any more; instead, they will reset those states if they find that no inconsistencies exist and check pending or ACHKP is currently set. If running CHECK SHRLEVEL CHANGE, then as I said, the utility will not set check pending or ACHKP, but nor will it reset them. Also bear in mind that the CHECK utilities, running SHRLEVEL CHANGE, will exploit data set level FlashCopy, so make sure that page sets are on FlashCopy-enabled devices. In the maintenance stream we are going to be delivering a ZPARM that will ensure that any CHECK run SHRLEVEL CHANGE can fail if the data set is not on a FlashCopy-enabled device. The alternative is to allow the CHECK utility to continue, but instead of a FlashCopy a slow copy will occur, and even though you are running CHECK SHRLEVEL CHANGE, the data could remain in read-only mode for an extended period of time. In addition, because CHECK utilities running SHRLEVEL CHANGE depend on data set level FlashCopy, there could be an impact on DASD mirroring or on the BACKUP SYSTEM utility, which also uses volume-level FlashCopy. In order to avoid contention and impact to BACKUP SYSTEM or DASD mirroring, a new ZPARM, UTIL_TEMP_STORCLAS, was introduced, which allows the target volumes for the data set level FlashCopy to be directed outside of the DASD mirroring group or outside of the storage group used by BACKUP SYSTEM.
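
A minimal sketch of a non-disruptive consistency check as recommended above (hypothetical object name):

    CHECK DATA TABLESPACE DBPROD.TSORDER SHRLEVEL CHANGE

Because SHRLEVEL CHANGE works against a FlashCopy of the page set, it neither sets nor resets restrictive states and leaves applications running.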

Slide 27 (42:01) Moving on to slide 27. This slide gives a visual depiction of the order in which we would recommend data integrity checking be performed for LOB data. First of all, our recommendation would be to run CHECK LOB against the LOB tablespace. CHECK LOB simply ensures the consistency of the LOB tablespace in isolation. Once CHECK LOB runs clean and the LOB tablespace is consistent, run CHECK INDEX against the aux index to ensure that the aux index matches the LOB tablespace. Once that is consistent, run CHECK DATA with SCOPE AUXONLY and AUXERROR INVALIDATE to determine whether the base data rows match the aux entries. There is no other way to validate base data rows against LOBs: there is no direct access from the base row to the LOB data itself, as all access to the LOB data is via the aux index. Therefore, by ensuring that the LOB data is consistent with the aux index and that the base rows are consistent with the aux index, we can ensure that the base rows are consistent with the LOB data itself.

Slide 28 (43:40) Moving on to slide 28. This is the equivalent for XML objects. The recommendation here would be to run CHECK INDEX against the DOCID index on the base table. Then run CHECK INDEX against the node ID index, which will ensure consistency between the node ID index and the XML data in the XML tablespace. Once they are consistent, run CHECK DATA to validate the base rows against the node ID index. Bear in mind that in version 10 of DB2, CHECK DATA has been enhanced to provide additional XML data integrity checking.
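
A sketch of the LOB checking sequence just described (hypothetical names for the LOB tablespace, the aux index, and the base tablespace):

    CHECK LOB TABLESPACE DBPROD.TSLOB1
    CHECK INDEX (ADMIN.XAUX1)
    CHECK DATA TABLESPACE DBPROD.TSBASE
          SCOPE AUXONLY AUXERROR INVALIDATE

The XML sequence on the next slide follows the same pattern, with the DOCID and node ID indexes in place of the aux index.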

Slide 29 (44:36)

Slide 30 (44:39) Moving on to slide 30. I want to spend a few minutes on DSN1COPY. DSN1COPY is an essential part of the utilities portfolio and is used by a large number of customers. DSN1COPY is a stand-alone utility; as such, it cannot access any control information in the DB2 catalog or anywhere else in DB2. Therefore, there is no policing by DSN1COPY of anything that would be considered a user error, such as DSN1COPYing from a segmented tablespace into a simple tablespace, or DSN1COPYing from a segmented tablespace with SEGSIZE 8 into a segmented tablespace with SEGSIZE 32, for example. That is not policed and cannot be policed by DSN1COPY, and one can expect various abends or other errors to occur when that particular page set is subsequently accessed by applications. Also, all target data sets have to be pre-allocated for multi-piece tablespaces for DSN1COPY. Now, an area to watch out for is BRF-RRF mismatches. If you have a mismatch between basic row format and reordered row format, that is tolerated by SQL but not by a number of utilities, primarily the REORG utility. As an example, it may be that your source data set is basic row format and your target data set is defined as reordered row format. If you DSN1COPY from the source to the target, the target data set is now in fact in BRF, but the catalog and directory information says that it is RRF. SQL can handle that inconsistency, because we have control information in the page set that tells us the data is actually BRF regardless of what the catalog and directory might say. The utilities, however, expect the catalog and directory to reflect the true state of the data, and since they do not, the REORG utility may fail, or various other errors or abends may occur. The expectation is that we will enhance the utilities to handle BRF-RRF mismatches in the future. In the meantime, it is wise when DSN1COPYing to ensure not only that the actual page set definitions are the same, but also that the row formats match: that you are DSN1COPYing RRF to RRF or BRF to BRF. If there is a mismatch between BRF and RRF between the source and the target, it is always possible to use the REORG utility to convert one of them from BRF to RRF, or from RRF back to BRF, before running the DSN1COPY. If that is not possible, for example if the image copy is in BRF, then you can unload from the BRF image copy and load into the RRF target page set instead of using DSN1COPY. Another thing to be careful about is table metadata changes, for example adding new columns. The general recommendation is to REORG at the source before taking a DSN1COPY, particularly if no updates have occurred to that page set since the first alter took place. If the first alter has been done, it will create a new version of the table; but if no data has been inserted or updated in the page set since then, no system page information will exist in that page set, and if you then DSN1COPY to the target, we may not have the system page information available in the page set to allow us to interpret the versioned rows. Therefore the general recommendation is to run a REORG before you run a DSN1COPY. Another option would be, after DSN1COPYing to the target, to use REPAIR VERSIONS to fix the version information at the target site. A new APAR, PM27940, enhances REPAIR VERSIONS so that we will extract system page information from any and all partitions of a partitioned tablespace and preserve it for use by data in other partitions.
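
For illustration, a minimal DSN1COPY job with OBID translation might look roughly like this; every data set name and every DBID, PSID, and OBID pair is invented, and the real values must be taken from the catalog for the source and target objects:

    //DSN1C    EXEC PGM=DSN1COPY,PARM='OBIDXLAT,RESET'
    //STEPLIB  DD DISP=SHR,DSN=DSNA10.SDSNLOAD
    //SYSPRINT DD SYSOUT=*
    //* SYSUT1 is the source page set, SYSUT2 the pre-allocated target
    //SYSUT1   DD DISP=SHR,DSN=DSNCAT.DSNDBD.DBPROD.TSORDER.I0001.A001
    //SYSUT2   DD DISP=OLD,DSN=DSNCAT.DSNDBD.DBTEST.TSORDER.I0001.A001
    //SYSXLAT  DD *
    260,270
    2,2
    15,22
    /*

The SYSXLAT pairs map the source DBID, PSID, and table OBID to their target equivalents. None of the definition mismatches discussed above are policed, so the source and target definitions, including row format, must really match.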

Slide 31 (53:02) Moving on to slide 31. With respect to XML and DSN1COPY, care needs to be taken with respect to the DOCID, because the DOCID is a sequence number that is generated by DB2, and DSN1COPYing to a new target whose DOCID sequence is lower can result in a -803 SQLCODE on insert: DB2 will generate a new DOCID, but that DOCID already exists in the table. Another concern regarding XML is that one cannot DSN1COPY an XML tablespace from one DB2 system to a different DB2 environment. The reason is that the XML data in the XML tablespace is not self-defining; the XML data also requires information in the XMLSTRINGS catalog table to allow its interpretation. So DSN1COPYing XML data to a completely different DB2 system that has a different DB2 catalog will not work, because the XMLSTRINGS catalog table there will not have the necessary information to allow interpretation of the XML documents. DSN1COPYing XML data within a single DB2 subsystem or within a single DB2 data sharing group will work fine.

Slide 32 (54:49) Moving on to slide 32. In summary, our recommendation is to stay reasonably current on DB2 versions and maintenance. Understand what this gives you in terms of utility capability, and then revisit existing utility jobs to see how you can benefit from the new capabilities that we have provided, both in the maintenance stream and in new versions of DB2, in terms of taking advantage of enhancements for availability, performance, and CPU reduction. Thank you.

(55:36)


IBM DB2 Tools updated to unlock the power of IBM DB2 12 and to enhance your IBM DB2 for z/os environments IBM United States Software Announcement 216-326, dated October 4, 2016 IBM DB2 Tools updated to unlock the power of IBM DB2 12 and to enhance your IBM environments Table of contents 1 Overview 26 Technical

More information

Cloning - What s new and faster?

Cloning - What s new and faster? Cloning - What s new and faster? SOURCE TARGET DB2 z/os Database cloning using Instant CloningExpert for DB2 z/os 2011 SOFTWARE ENGINEERING GMBH and SEGUS Inc. 1 Agenda/Content to be addressed Cloning

More information

Deep Dive Into Storage Optimization When And How To Use Adaptive Compression. Thomas Fanghaenel IBM Bill Minor IBM

Deep Dive Into Storage Optimization When And How To Use Adaptive Compression. Thomas Fanghaenel IBM Bill Minor IBM Deep Dive Into Storage Optimization When And How To Use Adaptive Compression Thomas Fanghaenel IBM Bill Minor IBM Agenda Recap: Compression in DB2 9 for Linux, Unix and Windows New in DB2 10 for Linux,

More information

Part VII Data Protection

Part VII Data Protection Part VII Data Protection Part VII describes how Oracle protects the data in a database and explains what the database administrator can do to provide additional protection for data. Part VII contains the

More information

Advanced Design Considerations

Advanced Design Considerations Advanced Design Considerations par Phil Grainger, BMC Réunion du Guide DB2 pour z/os France Mercredi 25 novembre 2015 Hôtel Hilton CNIT, Paris-La Défense Introduction Over the last few years, we have gained

More information

TestBase's Patented Slice Feature is an Answer to Db2 Testing Challenges

TestBase's Patented Slice Feature is an Answer to Db2 Testing Challenges Db2 for z/os Test Data Management Revolutionized TestBase's Patented Slice Feature is an Answer to Db2 Testing Challenges The challenge in creating realistic representative test data lies in extracting

More information

Crossing Over/ Breaking the DB2 Platform Barrier Comparing the Architectural Differences of DB2 on the Mainframe Vs. Distributed Platforms

Crossing Over/ Breaking the DB2 Platform Barrier Comparing the Architectural Differences of DB2 on the Mainframe Vs. Distributed Platforms Crossing Over/ Breaking the DB2 Platform Barrier Comparing the Architectural Differences of DB2 on the Mainframe Vs. Distributed Platforms Agenda Basic Components Terminology Differences Storage Management

More information

Lesson 3 Transcript: Part 2 of 2 Tools & Scripting

Lesson 3 Transcript: Part 2 of 2 Tools & Scripting Lesson 3 Transcript: Part 2 of 2 Tools & Scripting Slide 1: Cover Welcome to lesson 3 of the DB2 on Campus Lecture Series. Today we are going to talk about tools and scripting. And this is part 2 of 2

More information

IBM DB2 11 DBA for z/os Certification Review Guide Exam 312

IBM DB2 11 DBA for z/os Certification Review Guide Exam 312 Introduction IBM DB2 11 DBA for z/os Certification Review Guide Exam 312 The purpose of this book is to assist you with preparing for the IBM DB2 11 DBA for z/os exam (Exam 312), one of the two required

More information

C Exam code: C Exam name: IBM DB2 11 DBA for z/os. Version 15.0

C Exam code: C Exam name: IBM DB2 11 DBA for z/os. Version 15.0 C2090-312 Number: C2090-312 Passing Score: 800 Time Limit: 120 min File Version: 15.0 http://www.gratisexam.com/ Exam code: C2090-312 Exam name: IBM DB2 11 DBA for z/os Version 15.0 C2090-312 QUESTION

More information

Chapter 2. DB2 concepts

Chapter 2. DB2 concepts 4960ch02qxd 10/6/2000 7:20 AM Page 37 DB2 concepts Chapter 2 Structured query language 38 DB2 data structures 40 Enforcing business rules 49 DB2 system structures 52 Application processes and transactions

More information

Product Overview. Technical Summary, Samples, and Specifications

Product Overview. Technical Summary, Samples, and Specifications Product Overview Technical Summary, Samples, and Specifications Introduction IRI FACT (Fast Extract) is a high-performance unload utility for very large database (VLDB) systems. It s primarily for data

More information

CA Database Management Solutions for DB2 for z/os

CA Database Management Solutions for DB2 for z/os CA Database Management Solutions for DB2 for z/os Release Notes Version 17.0.00, Fourth Edition This Documentation, which includes embedded help systems and electronically distributed materials, (hereinafter

More information

Experiences of Global Temporary Tables in Oracle 8.1

Experiences of Global Temporary Tables in Oracle 8.1 Experiences of Global Temporary Tables in Oracle 8.1 Global Temporary Tables are a new feature in Oracle 8.1. They can bring significant performance improvements when it is too late to change the design.

More information

IBM DB2 Log Analysis Tool Version 1.3

IBM DB2 Log Analysis Tool Version 1.3 IBM DB2 Log Analysis Tool Version 1.3 Agenda Who needs a log analysis tool? What is the IBM DB2 Log Analysis Tool? Robust data change reporting Rapid data restore/change reversal Enhancements in Version

More information

Optimizing Testing Performance With Data Validation Option

Optimizing Testing Performance With Data Validation Option Optimizing Testing Performance With Data Validation Option 1993-2016 Informatica LLC. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording

More information

Workload Insights Without a Trace - Introducing DB2 z/os SQL tracking SOFTWARE ENGINEERING GMBH and SEGUS Inc. 1

Workload Insights Without a Trace - Introducing DB2 z/os SQL tracking SOFTWARE ENGINEERING GMBH and SEGUS Inc. 1 Workload Insights Without a Trace - Introducing DB2 z/os SQL tracking 2011 SOFTWARE ENGINEERING GMBH and SEGUS Inc. 1 Agenda What s new in DB2 10 What s of interest for geeks in DB2 10 What s of interest

More information

An Introduction to DB2 Indexing

An Introduction to DB2 Indexing An Introduction to DB2 Indexing by Craig S. Mullins This article is adapted from the upcoming edition of Craig s book, DB2 Developer s Guide, 5th edition. This new edition, which will be available in May

More information

Manual Trigger Sql Server 2008 Update Inserted Rows

Manual Trigger Sql Server 2008 Update Inserted Rows Manual Trigger Sql Server 2008 Update Inserted Rows Am new to SQL scripting and SQL triggers, any help will be appreciated Does it need to have some understanding of what row(s) were affected, sql-serverperformance.com/2010/transactional-replication-2008-r2/

More information

MITOCW ocw apr k

MITOCW ocw apr k MITOCW ocw-6.033-32123-06apr2005-220k Good afternoon. So we're going to continue our discussion about atomicity and how to achieve atomicity. And today the focus is going to be on implementing this idea

More information

Instructor: Craig Duckett. Lecture 04: Thursday, April 5, Relationships

Instructor: Craig Duckett. Lecture 04: Thursday, April 5, Relationships Instructor: Craig Duckett Lecture 04: Thursday, April 5, 2018 Relationships 1 Assignment 1 is due NEXT LECTURE 5, Tuesday, April 10 th in StudentTracker by MIDNIGHT MID-TERM EXAM is LECTURE 10, Tuesday,

More information

Simplify and Improve IMS Administration by Leveraging Your Storage System

Simplify and Improve IMS Administration by Leveraging Your Storage System Simplify and Improve Administration by Leveraging Your Storage System Ron Bisceglia Rocket Software, Inc. August 9, 2011 Session Number: 9406 Session Agenda and Storage Integration Overview System Level

More information

Runstats has always been a challenge in terms of what syntax to use, how much statistics to collect and how frequent to collect these statistics.

Runstats has always been a challenge in terms of what syntax to use, how much statistics to collect and how frequent to collect these statistics. 1 Runstats has always been a challenge in terms of what syntax to use, how much statistics to collect and how frequent to collect these statistics. The past couple of DB2 releases have introduced some

More information

Db2 for z/os: Lies, Damn lies and Statistics

Db2 for z/os: Lies, Damn lies and Statistics Db2 for z/os: Lies, Damn lies and Statistics SEGUS & SOFTWARE ENGINEERING GmbH Session code: A18 05.10.2017, 11:00 Platform: Db2 for z/os 1 Agenda Quotes Quotes Basic RUNSTATS knowledge Basic RUNSTATS

More information

CA Database Management Solutions for DB2 for z/os

CA Database Management Solutions for DB2 for z/os CA Database Management Solutions for DB2 for z/os Release Notes Version 17.0.00, Ninth Edition This Documentation, which includes embedded help systems and electronically distributed materials, (hereinafter

More information

Db2 V12 Gilbert Sieben

Db2 V12 Gilbert Sieben Db2 V12 Migration @KBC Gilbert Sieben Agenda 1. Time line 2. Premigration checks 3. Migration to V12 4. Measurements 5. New Features 6. Lessons learned Company 2 1. Time line Project of 1 year, 300 Mandays,

More information

(Refer Slide Time: 01:25)

(Refer Slide Time: 01:25) Computer Architecture Prof. Anshul Kumar Department of Computer Science and Engineering Indian Institute of Technology, Delhi Lecture - 32 Memory Hierarchy: Virtual Memory (contd.) We have discussed virtual

More information

GSE Belux DB2. Thursday 6 December DB2 V10 upgrade BNP Paribas Fortis

GSE Belux DB2. Thursday 6 December DB2 V10 upgrade BNP Paribas Fortis GSE Belux DB2 Thursday 6 December 2012 DB2 V10 upgrade experience @ BNP Paribas Fortis Agenda Configuration Business Case Install Setup Preparation Move to CM Move to NFM System monitoring 2 Configuration

More information

Improving VSAM Application Performance with IAM

Improving VSAM Application Performance with IAM Improving VSAM Application Performance with IAM Richard Morse Innovation Data Processing August 16, 2004 Session 8422 This session presents at the technical concept level, how IAM improves the performance

More information

DB2 for z/os and OS/390 Performance Update - Part 1

DB2 for z/os and OS/390 Performance Update - Part 1 DB2 for z/os and OS/390 Performance Update - Part 1 Akira Shibamiya Orlando, Florida October 1-5, 2001 M15a IBM Corporation 1 2001 NOTES Abstract: The highlight of major performance enhancements in V7

More information

Vendor: IBM. Exam Code: C Exam Name: DB DBA for Linux UNIX and Windows. Version: Demo

Vendor: IBM. Exam Code: C Exam Name: DB DBA for Linux UNIX and Windows. Version: Demo Vendor: IBM Exam Code: C2090-611 Exam Name: DB2 10.1 DBA for Linux UNIX and Windows Version: Demo QUESTION 1 Due to a hardware failure, it appears that there may be some corruption in database DB_1 as

More information

Deadlocks were detected. Deadlocks were detected in the DB2 interval statistics data.

Deadlocks were detected. Deadlocks were detected in the DB2 interval statistics data. Rule DB2-311: Deadlocks were detected Finding: Deadlocks were detected in the DB2 interval statistics data. Impact: This finding can have a MEDIUM IMPACT, or HIGH IMPACT on the performance of the DB2 subsystem.

More information