HiperDispatch Logical Processors and Weight Management
Fabio Massimo Ottaviani
EPV Technologies
August

1 Introduction

In the last few years, the power and the number of the physical processors available on mainframe hardware have greatly increased, allowing the concentration of a much higher number of LPARs than before on a single machine. A side effect of this high number of LPARs is an increase in the number of defined logical processors compared to the number of physical CPs. These factors tend to reduce the probability that a logical processor is re-dispatched to the same physical processor, and can therefore reuse the instructions and data previously loaded in the Level 1 cache (the cache memory dedicated to each processor). An L1 cache miss causes data and instructions to be loaded from the Level 2 cache (the cache memory shared among all the physical processors packaged in a book). When this happens, performance degradation and overhead occur because access to the L2 cache requires more CPU cycles. If the logical processor has been dispatched to a physical processor belonging to a different book, the required instructions and data have to be loaded from the previously used L2 cache, and performance degradation and overhead can be much worse. HiperDispatch has been designed to minimise the number of L1 cache misses and, when an L1 cache miss occurs, to maximise the probability of finding instructions and data in the L2 cache of the book where the logical processor is dispatched. To reach this goal, a new weight called the polarization weight has been introduced. The polarization weight is a key element of the HiperDispatch design because it is the means z/OS uses to give PR/SM indications on how logical processors should be dispatched to the physical processors. This paper, using real life examples, discusses:
- PR/SM and IRD logical processor and weight management;
- HiperDispatch logical processor and weight management.
2 PR/SM Logical Processors and Weight Management

Logical processors are the LPAR view of physical processors. A logical processor can be defined as dedicated or shared. When an LPAR using dedicated logical processors is activated, a physical processor is assigned to each of them; the LPAR then has exclusive use of these physical processors. This is true for any processor type: standard CPUs, AAPs, IIPs, etc. Dedicated logical processors are not relevant to the scope of this paper, so in the following only shared logical processors will be discussed. When logical processors are defined as shared, an LPAR does not have exclusive use of the physical processors but has to share them with other LPARs.
The maximum number of shared logical processors which can be assigned to each LPAR is the number of physical processors in the shared pool 1. This means that the total number of shared logical processors defined in all the LPARs can be much larger than the number of physical processors to share. For example, if five LPARs are active on an 8-CPU machine, the shared pool is 8 CPUs and each LPAR has 8 logical processors defined, the total number of logical processors is 40. The PR/SM configuration of a 13-CPU machine hosting 5 LPARs is presented in Figure 1. The total number of shared logical processors (SHARED CPUs) is 33, which is 2,54 times the number of physical processors to share.

LPARNAME  SHARED CPUs  LPAR   %      TARGET CPUs
LPAR1     8            520    52,0%  6,76
LPAR2     13           40     4,0%   0,52
LPAR3     2            25     2,5%   0,33
LPAR4     4            230    23,0%  2,99
LPAR5     6            185    18,5%  2,41
Total     33           1.000  100%   13,00

Figure 1

To determine the portion of the shared pool to be used by each LPAR, PR/SM uses LPAR weights (LPAR column). The algorithm used by PR/SM is the following:
- add the weights of all active, sharing LPARs; this total is considered to be 100% of the processing resource available in the shared pool;
- divide each LPAR weight by the total; the resulting percentage (%) is the target share of processing resources for each LPAR.
LPAR1 has a weight of 520, which is 52% of the sum of all LPAR weights. So the LPAR1 target share is 52% of the number of shared CPUs in the pool (13 in this case), which corresponds to the use of 6,76 CPUs (TARGET CPUs). It's interesting to note that LPAR2 has a very low weight but a high number of shared logical processors (SHARED CPUs). That could make sense because PR/SM enforces LPAR weights only when all the LPARs want to use their target share or more. When there is available capacity, one or more LPARs may use more than their targets, up to the number of online shared CPUs. With these definitions LPAR2 may use only 50% of one CPU when in contention (TARGET CPUs) but up to the full power of the machine if the other LPARs are not active or very lightly loaded.
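The weight-to-target-share arithmetic above can be sketched in a few lines of Python. Only LPAR1's weight (520) is stated in the text; the other weights are illustrative values chosen to be consistent with the reported TARGET CPUs and with the 1.000 weight total reported later in the paper.

```python
# Sketch of the PR/SM target-share algorithm described above.
# Only LPAR1's weight (520) is given in the text; the other weights
# are illustrative values consistent with the reported TARGET CPUs.
shared_pool_cpus = 13
weights = {"LPAR1": 520, "LPAR2": 40, "LPAR3": 25, "LPAR4": 230, "LPAR5": 185}

total_weight = sum(weights.values())        # 100% of the shared pool
for lpar, w in weights.items():
    share = w / total_weight                # target share of the pool
    target_cpus = share * shared_pool_cpus  # TARGET CPUs column
    print(f"{lpar}: share {share:.1%}, target {target_cpus:.2f} CPUs")
    # LPAR1 -> share 52.0%, target 6.76 CPUs
```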
Having this flexibility is one of the reasons why many customers designed configurations where each LPAR has all the logical processors of the shared pool assigned. Unfortunately, a high logical-to-physical processor ratio is one of the main sources of PR/SM overhead. In fact, having so many logical processors to manage increases PR/SM's work and makes the re-dispatching of a logical processor to the same physical processor used before much more unlikely.

1 The number of physical processors in the shared pool is the total number of physical processors minus the number of dedicated processors. Since z9 machines, each processor type (CPU, AAP, IIP, ICF and IFL) has its own separate shared pool.

Another important issue to consider is that the number of logical processors and the target share of an LPAR are used by PR/SM to determine the time slice each logical processor can use of a physical processor every time it is dispatched.

LPARNAME  SHARED CPUs  LPAR   %      TARGET CPUs  %CPU
LPAR1     8            520    52,0%  6,76         85%
LPAR2     13           40     4,0%   0,52         4%
LPAR3     2            25     2,5%   0,33         16%
LPAR4     4            230    23,0%  2,99         75%
LPAR5     6            185    18,5%  2,41         40%
Total     33           1.000  100%   13,00

Figure 2

The %CPU column reports the values PR/SM would use in this configuration to set time slice values for the logical processors of each LPAR. They have been calculated dividing TARGET CPUs by SHARED CPUs. The LPAR2 value is only 4%. This means that its logical processors have to be dispatched many times (25) to use the full power of a CPU. Increasing the number of times a unit of work has to be dispatched to complete will increase the PR/SM overhead and, more importantly, the number of times a logical processor has to queue, with consequent performance degradation. This is what the IBM Washington Systems Center calls the "Short CP" effect. In the same machine 2 IIPs are also used. Figure 3 shows the corresponding PR/SM configuration.

[Figure 3: the corresponding IIP configuration with columns LPARNAME, SHARED IIPs, LPAR, %, TARGET IIPs and %IIP; the weight values were lost in transcription. TARGET IIPs are 0,93, 0,23, 0,08, 0,70 and 0,05 for LPAR1 to LPAR5 (total 2,00); the %IIP values are 47%, 11%, 4%, 35% and 2%.]

Starting from z9 hardware, AAPs and IIPs are managed by PR/SM in the same way as standard CPUs.

3 PR/SM Logical Processors and Weight Management with IRD

A possible solution to reduce PR/SM overhead while maintaining the needed flexibility is to use the standard z/OS CONFIG command to reduce or increase the number of logical processors available to each LPAR. Some companies do that manually or by using automation scripts, but normally the variation of the number of logical processors is minimal and predetermined (often based on time shift, day of the week, etc.) and not correlated to system load and application performance.
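A minimal sketch of the %CPU and "Short CP" arithmetic above, using the TARGET CPUs values of Figure 2; LPAR2's 13 shared logical processors are confirmed later in the text, while LPAR1's 8 is inferred from its 85% value.

```python
# %CPU = TARGET CPUs / SHARED CPUs: the share of one physical CP each
# logical processor receives per dispatch. A low value means a unit
# of work needs many dispatches to consume one CP's worth of
# capacity: the "Short CP" effect.
lpars = {            # name: (SHARED logical CPs, TARGET CPUs)
    "LPAR1": (8, 6.76),
    "LPAR2": (13, 0.52),
}
for name, (shared, target) in lpars.items():
    pct_cpu = target / shared
    dispatches = 1 / pct_cpu  # dispatches needed to use one full CP
    print(f"{name}: %CPU {pct_cpu:.0%}, about {dispatches:.0f} dispatches per CP")
```

For LPAR2 this gives 4% and about 25 dispatches, matching the figures quoted in the text.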
A much better solution has been provided by IBM for some years with the Intelligent Resource Director (IRD).
IRD is a set of functions designed to distribute hardware resources based on business importance. They are:
- LPAR Vary CPU Management;
- LPAR Weight Management;
- Dynamic Channel Path Management;
- Channel Subsystem Priority Queuing.
Only the first two are relevant for the scope of this paper:
- LPAR Vary CPU Management is designed to keep online the minimum number of logical processors required by an LPAR to use the capacity that corresponds to its target share;
- LPAR Weight Management is designed to adjust weights for the LPARs belonging to the same IRD cluster in order to better match workload demand and importance.
An IRD cluster is composed of all the LPARs belonging to the same Sysplex and running on the same physical machine.

SYSPLEX   LPARNAME  IRD CLUSTER  SHARED CPUs  LPAR   CURRENT  %      TARGET CPUs  %CPU
SYSPLEX1  LPAR1     CLUSTER1     7            520    492      49,2%  6,40         91%
SYSPLEX1  LPAR2     CLUSTER1     2            40     64       6,4%   0,83         42%
SYSPLEX2  LPAR3     CLUSTER2     2            25     25       2,5%   0,33         16%
SYSPLEX1  LPAR4     CLUSTER1     4            230    234      23,4%  3,04         76%
SYSPLEX3  LPAR5     CLUSTER3     3            185    185      18,5%  2,41         80%
Total                            18           1.000  1.000    100%   13,00

Figure 4

In Figure 4 you can appreciate the effect of IRD on our configuration. Reported values are based on the situation at a specific point in time (10:00 a.m. in this case) because, as described, IRD can dynamically change the number of logical processors and the LPAR weights. The number of logical processors of LPAR1, LPAR2, LPAR4 and LPAR5 has been reduced 2. The total number of shared logical processors (SHARED CPUs) is now 18, which is 1,38 times the number of physical processors to share. The weights of LPAR1, LPAR2 and LPAR4, all belonging to CLUSTER1, have been adjusted (see the CURRENT column). No action has been taken on LPAR3 and LPAR5 weights because they are the only LPARs in CLUSTER2 and CLUSTER3 respectively. When IRD is active, the standard LPAR weight becomes the initial weight. Using that as a starting point, IRD sets the current weight following the indications of WLM, with respect to the minimum and maximum weight values set by the user.
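The weight adjustment can be sketched as follows. The initial CLUSTER1 weights follow the paper's example (total 790); the donated amount and the min/max limits are illustrative assumptions, not values from the paper.

```python
# Sketch of IRD LPAR Weight Management inside one cluster: weight can
# move between LPARs of the same cluster, within user-set min/max
# limits, but the cluster total must never change (otherwise another
# Sysplex would be impacted). Limits and amounts here are assumptions.
initial = {"LPAR1": 520, "LPAR2": 40, "LPAR4": 230}  # CLUSTER1, total 790

def adjust(weights, donor, receiver, amount, min_w=10, max_w=700):
    """Move weight from donor to receiver, respecting min/max limits."""
    new = dict(weights)
    amount = min(amount, new[donor] - min_w, max_w - new[receiver])
    new[donor] -= amount
    new[receiver] += amount
    # The invariant: the cluster total is unchanged.
    assert sum(new.values()) == sum(weights.values())
    return new

current = adjust(initial, donor="LPAR1", receiver="LPAR2", amount=24)
print(current, sum(current.values()))  # total is still 790
```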
It's interesting to note that the total weight of all the LPARs in a cluster cannot change (otherwise the weight adjustment could impact another Sysplex).

2 The minimum number of online logical processors can be specified in the VARYCPUMIN parameter in the IEAOPTxx member of SYS1.PARMLIB. VARYCPUMIN(2) has been specified for all the LPARs in the analysed configuration; this is the reason why the LPAR3 logical processors have not been reduced.
So in Figure 4, the sum of the CLUSTER1 initial weights (LPAR) and current weights (CURRENT) is exactly the same (790). We can say that IRD did a good job by reducing the number of logical processors and, looking at the last column, also the Short CP effect (especially for LPAR2). Unfortunately the IRD design has some important limitations:
1) it provides no information to PR/SM on the way logical processors should be dispatched to reduce cache miss performance degradation and overhead;
2) weight management is only possible for LPARs belonging to the same cluster;
3) it only manages standard CPUs, not AAPs or IIPs.

4 Polarization Weights

A key element in the HiperDispatch design is a new metric called the polarization weight, which represents the weight automatically assigned to a logical processor by z/OS. In the following table a comparison is made among LPAR weights, IRD current weights and polarization weights. Reported values are based on the situation at a specific point in time (10:00 a.m. in this case) because HiperDispatch can dynamically change the number of logical processors and their weights.

[Figure 5: one row per logical processor with columns SYSPLEX, LPARNAME, IRD CLUSTER, ADDR, LPAR weight, CURRENT weight and POW; the numeric values were lost in transcription. LPAR1 has 8 logical processors, LPAR2 13, LPAR3 2, LPAR4 4 and LPAR5 6.]

If HiperDispatch were not active, the LPAR current weights, set by IRD, would be evenly distributed among all the logical processors. So, to allow a comparison, the CURRENT column in Figure 5 has been calculated dividing the LPAR current weight by the number of logical processors 3. It's important to note that IRD is no longer reducing the number of logical processors of any LPAR; so, for example, all 13 logical processors are active in LPAR2. The IRD LPAR Vary CPU Management function is in fact incompatible with HiperDispatch and is therefore disabled when HiperDispatch is active. The polarization weight is reported in the last column (POW). We can note that:
a) the total LPAR weight has not changed;
b) many logical processors have a polarization weight value equal to 0;
c) no logical processor has a polarization weight value greater than the weight corresponding to 100% of a physical processor.

a) HiperDispatch honours either the LPAR weight or the IRD current weight; it only distributes that weight in order to optimise the number of logical processors to use. The sum of the logical processor weights by LPAR gives exactly the same values in the CURRENT and POW columns. In HiperDispatch mode, the intention is to manage work across fewer logical processors. For example, only one logical processor is needed for LPAR2 to obtain the capacity specified by the IRD current weight, so all the other processors have POW values set to 0.
b) The logical processors of an LPAR in HiperDispatch mode will be assigned to one of the following groups:
- high processor share (or high polarity): they have a target share corresponding to 100% of a physical processor;
- medium processor share (or medium polarity): they have a target share greater than 0% and less than 100% of a physical processor; these medium logical processors receive the remainder of the LPAR's share after the allocation of the high-share logical processors; one or two logical processors (from 0,5 to 1,5 physical processors of capacity) will be assigned to this group;
- low processor share (or low polarity): they receive a target share corresponding to 0% of a physical processor; these are considered discretionary logical processors, not needed to allow the LPAR to fully utilise the physical processor resource associated with its weight.

3 As described in the previous chapter, when IRD is active the LPAR weight is considered as the initial weight; a current weight is dynamically set by IRD and used by PR/SM.
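The three groups above can be illustrated with a small sketch. This is not IBM's actual algorithm, just an approximation of the stated rules: whole physical processors become high polarity, a medium group of one or two processors holds between 0,5 and 1,5 processors of capacity, and the rest are low polarity.

```python
# Hedged sketch of splitting an LPAR's target share into high, medium
# and low polarity logical processors, following the rules above.
def polarize(target_cpus, n_logical):
    """Return a polarity label per logical processor."""
    n_high = int(target_cpus)             # whole CPs -> high polarity
    remainder = target_cpus - n_high
    # The medium group must hold 0.5 to 1.5 CPs, so borrow one CP from
    # the high group when the fractional remainder would be below 0.5.
    if remainder < 0.5 and n_high > 0:
        n_high -= 1
    n_medium = 1 if (target_cpus - n_high) <= 1.0 else 2
    n_low = n_logical - n_high - n_medium
    return ["high"] * n_high + ["medium"] * n_medium + ["low"] * n_low

# LPAR4: 3.04 CPUs of target share over 4 logical processors
print(polarize(3.04, 4))  # -> ['high', 'high', 'medium', 'medium']
```

With a target below 3,0 (e.g. 2,95) the same sketch yields two high, one medium and one low polarity processor, matching the behaviour of logical processor 3 discussed in the measurements chapter.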
c) The values reported in the POW column are not physical processor shares but logical processor polarization weights. To calculate the physical processor shares the following algorithm has to be used:
1) sum the weights across all the LPARs;
2) get the number of physical processors in the shared pool;
3) calculate the weight value corresponding to 100% of a physical processor by dividing the weight sum by the number of physical processors in the shared pool;
4) divide each logical processor polarization weight (POW) by the weight value calculated in step 3; the result is the physical processor share.
Based on the configuration in Figure 5:
1) the weight sum is 1.000;
2) the number of physical processors in the shared pool is 13;
3) the weight value corresponding to 100% of a physical processor is 76,9 (1.000 / 13);
4) the PCPU SHARE column in Figure 6 shows the physical processor share for each logical processor.

[Figure 6: the Figure 5 table with an additional PCPU SHARE column reporting the physical processor share of each logical processor; the numeric values were lost in transcription.]
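The four steps above, applied to one LPAR, look like this in Python. The POW values are hypothetical stand-ins for the lost Figure 6 numbers, chosen to sum to 234 (LPAR4's current weight at 10:00 a.m.).

```python
# The four-step PCPU SHARE algorithm above. Shared pool size and total
# weight come from the paper; the POW values are illustrative.
shared_pool_cpus = 13
total_weight = 1000                              # step 1
one_cp_weight = total_weight / shared_pool_cpus  # step 3: 76.9

# Hypothetical polarization weights for LPAR4's four logical
# processors: two high polarity and two medium polarity.
pow_lpar4 = [77, 77, 40, 40]
shares = [p / one_cp_weight for p in pow_lpar4]  # step 4
print([f"{s:.0%}" for s in shares])  # ['100%', '100%', '52%', '52%']
```

The shares total about 304%, consistent with the LPAR4 share profile shown in the measurements chapter.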
It's worth noting that the total PCPU SHARE value is 1.300%, which corresponds to the total number of physical processors in the shared pool (13). AAP and IIP logical processors and their weights are managed in exactly the same way as standard CPUs.

5 HiperDispatch affinity queues and nodes

HiperDispatch provides a more complete and efficient solution than IRD to optimise the number of logical processors used by an LPAR and to maximise the logical processor weights. However, the most important benefit provided by HiperDispatch is the fact that, for the first time, z/OS and PR/SM communicate and work together, through the polarization weight, in order to re-dispatch a unit of work to the same physical processor, or at least to the same group of physical processors, previously used. To accomplish this, the z/OS dispatcher manages work in multiple affinity dispatch queues. All the logical processors associated with the same affinity dispatch queue are considered a logical processor affinity pool; all the logical processors in a pool have the same polarization weight. In the current z/OS HiperDispatch design:
- there is only one dispatch queue associated with medium polarity logical processors;
- no more than four high polarity logical processors can be associated with the same affinity dispatch queue; if more than four logical processors are used, z/OS creates a new affinity dispatch queue and a new high polarity logical processor affinity pool.
With the z10, PR/SM establishes affinity nodes consisting of up to four physical processors, which correspond to the high polarity logical processor affinity pools.
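The queue rules above can be sketched as follows; this is a simplification for illustration (the real pool formation also reflects book topology).

```python
# Sketch of the affinity-queue assignment rules above: high polarity
# logical processors are grouped into pools of at most four, each with
# its own dispatch queue; all medium polarity processors share one queue.
def affinity_pools(high_lps, medium_lps):
    pools = [high_lps[i:i + 4] for i in range(0, len(high_lps), 4)]
    if medium_lps:
        pools.append(medium_lps)  # the single medium-polarity queue
    return pools

# e.g. an LPAR with 6 high and 1 medium polarity logical processors
print(affinity_pools([0, 1, 2, 3, 4, 5], [6]))
# -> [[0, 1, 2, 3], [4, 5], [6]]
```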
In the current HiperDispatch design PR/SM tries to:
- maintain all the physical processors of the same affinity node on the same book;
- dispatch a logical processor to the same physical processor previously used or, alternatively, to a physical processor in the same affinity node;
- dedicate a physical processor to each high polarity logical processor.

6 HiperDispatch Measurements

All the measurements presented in this chapter refer to the SYS4 system running inside LPAR4. The most important new metrics available to control HiperDispatch behaviour are:
- the logical processor's share of a physical processor (derived from the polarization weight as discussed in chapter 4);
- the parked time.

The logical processor's share of a physical processor has to be considered as a target utilisation for PR/SM dispatching, so the logical processor busy may differ from that target depending on the running workloads and on other LPARs' needs. In Figure 7 the SYS4 logical processor share profile is presented.

SYSTEM LOGICAL CPU UTILIZATION - SHARE OF PCPU/HOUR - SYS4 - FRI, 30 MAY
0:   100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100%
1:   100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100%
2:   95%  95%  95%  75%  69%  65%  80%  66%  78%  95%  77%  74%  56%  56%  56%  56%  63%  95%  95%  95%
3:   0%   0%   0%   29%  37%  34%  0%   41%  25%  0%   26%  30%  56%  56%  56%  56%  46%  0%   0%   0%
Tot: 295% 295% 295% 304% 306% 298% 280% 307% 303% 295% 303% 304% 312% 312% 312% 312% 309% 295% 295% 295%

Figure 7

Logical processors 0 and 1 are high polarity while logical processor 2 is medium polarity. Depending on the weight set by IRD, the total LPAR share (in the last row) is less or more than three physical processors. As discussed in chapter 4, HiperDispatch is designed to assign from a minimum of 0,5 up to a maximum of 1,5 physical processors of share to the medium polarity group. In this case, when the total LPAR share is less than 300%, two high polarity and one medium polarity logical processors are enough, so logical processor 3 is considered low polarity. When the total LPAR share is greater than 300%, they are not enough, so logical processor 3 is considered medium polarity. HiperDispatch assigns the same weight to all the LPAR's medium polarity logical processors; this doesn't always seem to happen in our report, but it is only the effect of averaging the values across a 30-minute interval.
PR/SM considers low polarity logical processors as discretionary processors, and it can decide to park them when they are not needed to handle the LPAR workload (not enough load) or are not useful because no physical capacity exists for PR/SM to dispatch them (no available time from other LPARs). In the parked state, discretionary processors do not dispatch work; they are in a long-term wait state. The report in Figure 8 shows that only logical processor 3 is parked part of the time.

SYSTEM LOGICAL CPU UTILIZATION - PARKED TIME/HOUR - SYS4 - FRI, 30 MAY
0:   0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%
1:   0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%
2:   0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%  0%
3:   92% 28% 2%  0%  20% 27% 86% 11% 42% 86% 48% 15% 0%  0%  0%  0%  18% 91% 99% 100%

Figure 8

In a perfect world we'd expect to see the usage of high polarity logical processors close to 100%, the residual LPAR load distributed among the medium polarity logical processors, and the usage of low polarity logical processors close to 0%. Figure 9 shows a more complex situation.
SYSTEM LOGICAL CPU UTILIZATION - BUSY TIME/HOUR - SYS4 - FRI, 30 MAY
0:   71%  93%  94%  92%  75%  84%  82%  73%  72%  66%  55%  66%  54%  65%  66%  57%  62%  59%  41%  35%
1:   78%  94%  95%  93%  82%  88%  86%  80%  79%  71%  64%  76%  67%  75%  76%  69%  71%  65%  50%  44%
2:   72%  64%  57%  65%  66%  69%  74%  65%  69%  69%  65%  74%  62%  65%  64%  62%  66%  67%  53%  45%
3:   5%   41%  56%  65%  50%  49%  9%   55%  36%  10%  33%  62%  61%  64%  64%  63%  55%  7%   0%   0%
Tot: 226% 292% 301% 314% 274% 289% 250% 274% 256% 216% 216% 277% 244% 269% 270% 251% 255% 198% 145% 125%

Figure 9

The high polarity logical processors are busy more than 90% in a few intervals only. When the LPAR load decreases, especially in the afternoon, it seems to be evenly distributed across all the logical processors. The z/OS HiperDispatch component seems not to work perfectly on this system. It could be due to: the system level (z/OS 1.8 in this case); missing PTFs; the type of workload running on the system. It's important to remember that SRBs in the SYSSTC service class essentially bypass the z/OS HiperDispatch management algorithms: they can execute on any available logical processor 4. The IIP logical processor analysis is much simpler. As shown in the following reports, logical processor 14 is medium polarity while logical processor 15 is always low polarity, so logical processor 15 is parked all the time. All the load is satisfied by logical processor 14.
SYSTEM LOGICAL IIP UTILIZATION - LIIP SHARE OF PIIP/HOUR - SYS4 - FRI, 30 MAY
14:  70% in all intervals
15:  0% in all intervals

Figure 10

SYSTEM LOGICAL IIP UTILIZATION - LIIP PARKED TIME/HOUR - SYS4 - FRI, 30 MAY
14:  0% in all intervals
15:  100% in all intervals

Figure 11

SYSTEM LOGICAL IIP UTILIZATION - LIIP BUSY TIME/HOUR - SYS4 - FRI, 30 MAY
14:  14% 28% 30% 21% 19% 25% 16% 22% 14% 7% 16% 22% 10% 13% 10% 12% 16% 12% 4% 7%
15:  0%  0%  0%  0%  0%  0%  0%  0%  0%  0% 0%  0%  0%  0%  0%  0%  0%  0%  0% 0%

Figure 12

4 See "z/OS: Planning Considerations for HiperDispatch Mode" - IBM WSC White Papers - WP
7 Conclusions

HiperDispatch has been essentially designed to avoid the performance degradation and overhead due to L1 and L2 cache misses. HiperDispatch makes z/OS and PR/SM communicate and work together in order to re-dispatch a unit of work to the same physical processor, or at least to the same group of physical processors, previously used. HiperDispatch also provides a more complete and efficient solution than IRD to optimise the number of logical processors used by an LPAR and to maximise the logical processor weights. New specific metrics, such as the polarization weight and the parked time, have been introduced; they have to be fully understood and thoroughly analysed in order to control HiperDispatch behaviour. All the reports presented here are included in EPV for z/OS since version 8.
White Paper DB2 and Memory Exploitation Fabio Massimo Ottaviani - EPV Technologies 1 Introduction For many years, z/os and DB2 system programmers have been fighting for memory: the former to defend the
More informationWLM Work Manager Delays (Part 2) Fabio Massimo Ottaviani EPV Technologies White paper WLM series
WLM Work Manager Delays (Part 2) Fabio Massimo Ottaviani EPV Technologies White paper WLM series In Part 1 an overview of WLM Work Manager and Execution Delay Services has been provided. The Single Address
More informationz/os Performance: Capacity Planning Considerations for zaap Processors
z/os Performance: Capacity Planning Considerations for zaap Processors White Paper November14, 2006 Version 1.6 Washington Systems Center Advanced Technical Support IBM Corporation, 2006 Capacity Planning
More informationzpcr Processor Capacity Reference for IBM Z and LinuxONE LPAR Configuration Capacity Planning Function Advanced-Mode QuickStart Guide zpcr v9.
zpcr Function Overview LPAR Configuration Capacity Planning Function Advanced-Mode QuickStart Guide zpcr v9.1a 1. Display LSPR Processor Capacity Ratios tables Multi-Image table: Provides capacity relationships
More informationz/os Performance Hot Topics Bradley Snyder 2014 IBM Corporation
z/os Performance Hot Topics Bradley Snyder Bradley.Snyder@us.ibm.com Agenda! Performance and Capacity Planning Topics Introduction of z Systems z13 Processor Overview of SMT CPUMF and HIS Support zpcr
More informationManaging LDAP Workloads via Tivoli Directory Services and z/os WLM IBM. Kathy Walsh IBM. Version Date: July 18, 2012
Managing LDAP Workloads via Tivoli Directory Services and z/os WLM IBM Kathy Walsh IBM Version Date: July 18, 2012 This document can be found on the web, www.ibm.com/support/techdocs Under the category
More informationYour Changing z/os Performance Management World: New Workloads, New Skills
Glenn Anderson, IBM Lab Services and Training Your Changing z/os Performance Management World: New Workloads, New Skills Summer SHARE August 2015 Session 17642 Agenda The new world of RMF monitoring RMF
More informationz/os 1.11 and z196 Capacity Planning Issues (Part 2) Fabio Massimo Ottaviani EPV Technologies White paper
z/os 1.11 and z196 Capacity Planning Issues (Part 2) Fabio Massimo Ottaviani EPV Technologies White paper 5 Relative Nest Intensity (RNI) Only three z/os 1.11 benchmarks are available: Low RNI, AVG RNI
More informationCA IDMS 18.0 & 18.5 for z/os and ziip
FREQUENTLY ASKED QUESTIONS CA IDMS 18.0 & 18.5 for z/os and ziip Important October 2013 update ziip (IBM System z Integrated Information Processor) is a specialty mainframe processor designed to help free
More informationz990 Performance and Capacity Planning Issues
z990 Performance and Capacity Planning Issues Cheryl Watson Session 2537; SHARE 104 in Anaheim March 2, 2005 Watson & Walker, Inc. home of Cheryl Watson's TUNING Letter, CPU Chart, BoxScore & GoalTender
More informationWhy is the CPU Time For a Job so Variable?
Why is the CPU Time For a Job so Variable? Cheryl Watson, Frank Kyne Watson & Walker, Inc. www.watsonwalker.com technical@watsonwalker.com August 5, 2014, Session 15836 Insert Custom Session QR if Desired.
More informationExample: CPU-bound process that would run for 100 quanta continuously 1, 2, 4, 8, 16, 32, 64 (only 37 required for last run) Needs only 7 swaps
Interactive Scheduling Algorithms Continued o Priority Scheduling Introduction Round-robin assumes all processes are equal often not the case Assign a priority to each process, and always choose the process
More informationz/os Workload Management (WLM) Update for z/os V2.1 and V1.13
z/os Workload Management (WLM) Update for z/os V2.1 and V1.13 Horst Sinram IBM Germany Research & Development z/os Workload Management 10 Mar 2014 Session 15214 Trademarks Agenda z/enterprise EC12 GA2
More informationziip and zaap Software Update
ziip and zaap Software Update Overview The System z9 and z10 Integrated Information Processor (ziip) is the latest specialty engine for the IBM System z mainframe. The ziip is designed to help improve
More informationz990 and z9-109 Performance and Capacity Planning Issues
z990 and z9-109 Performance and Capacity Planning Issues Cheryl Watson Session 501; CMG2005 in Orlando December 8, 2005 Watson & Walker, Inc. home of Cheryl Watson's TUNING Letter, CPU Chart, BoxScore
More informationUnderstanding Simultaneous Multithreading on z Systems (post-announcement)
nderstanding Simultaneous Multithreading on z Systems (post-announcement) Bob Rogers 9/10/2015 Copyright NewEra Software and Robert Rogers, 2015, All rights reserved. 1 Abstract Simultaneous Multithreading
More informationLecture Topics. Announcements. Today: Advanced Scheduling (Stallings, chapter ) Next: Deadlock (Stallings, chapter
Lecture Topics Today: Advanced Scheduling (Stallings, chapter 10.1-10.4) Next: Deadlock (Stallings, chapter 6.1-6.6) 1 Announcements Exam #2 returned today Self-Study Exercise #10 Project #8 (due 11/16)
More informationECE519 Advanced Operating Systems
IT 540 Operating Systems ECE519 Advanced Operating Systems Prof. Dr. Hasan Hüseyin BALIK (10 th Week) (Advanced) Operating Systems 10. Multiprocessor, Multicore and Real-Time Scheduling 10. Outline Multiprocessor
More informationIBM Mobile Workload Pricing Opportunity or Problem?
IBM Mobile Workload Pricing Opportunity or Problem? Fabio Massimo Ottaviani EPV Technologies June 2014 1 Introduction On May 6th 2014 IBM announced Mobile Workload Pricing for z/os (MWP). This new pricing
More informationSession 8861: What s new in z/os Performance Share 116 Anaheim, CA 02/28/2011
Marianne Hammer IBM Corporation Poughkeepsie, New York hammerm@us.ibm.com Session 8861: What s new in z/os Performance Share 116 Anaheim, CA 02/28/2011 Trademarks IBM Corporation 2009 IBM, the IBM logo
More informationz/os Heuristic Conversion of CF Operations from Synchronous to Asynchronous Execution (for z/os 1.2 and higher) V2
z/os Heuristic Conversion of CF Operations from Synchronous to Asynchronous Execution (for z/os 1.2 and higher) V2 z/os 1.2 introduced a new heuristic for determining whether it is more efficient in terms
More informationKey Metrics for DB2 for z/os Subsystem and Application Performance Monitoring (Part 1)
Key Metrics for DB2 for z/os Subsystem and Application Performance Monitoring (Part 1) Robert Catterall IBM March 12, 2014 Session 14610 Insert Custom Session QR if Desired. The genesis of this presentation
More informationz/os Workload Management (WLM) Update for z/os V2.1 and V1.13
z/os Workload Management (WLM) Update for z/os V2.1 and V1.13 Horst Sinram - STSM, z/os Workload and Capacity Management IBM Germany Research & Development August 2014 Session 15714 Insert Custom Session
More informationMultiprocessor and Real- Time Scheduling. Chapter 10
Multiprocessor and Real- Time Scheduling Chapter 10 Classifications of Multiprocessor Loosely coupled multiprocessor each processor has its own memory and I/O channels Functionally specialized processors
More informationz/vm Data Collection for zpcr and zcp3000 Collecting the Right Input Data for a zcp3000 Capacity Planning Model
IBM z Systems Masters Series z/vm Data Collection for zpcr and zcp3000 Collecting the Right Input Data for a zcp3000 Capacity Planning Model Session ID: cp3kvmxt 1 Trademarks The following are trademarks
More informationDB2 Data Sharing Then and Now
DB2 Data Sharing Then and Now Robert Catterall Consulting DB2 Specialist IBM US East September 2010 Agenda A quick overview of DB2 data sharing Motivation for deployment then and now DB2 data sharing /
More informationIBM Corporation
1 Trademarks 3 Agenda Concepts Importance levels Displaceable capacity Free capacity WLM Sysplex Routing Services IWMWSYSQ IWMSRSRS IWM4SRSC Basic capacity-based weights and additional influencers Observations,
More informationMeasuring zseries System Performance. Dr. Chu J. Jong School of Information Technology Illinois State University 06/11/2012
Measuring zseries System Performance Dr. Chu J. Jong School of Information Technology Illinois State University 06/11/2012 Outline Computer System Performance Performance Factors and Measurements zseries
More informationzpcr Capacity Sizing Lab Sessions 2110/2111 IBM Advanced Technical Support August 26, 2009 John Burg Brad Snyder
IBM Advanced Technical Support zpcr Capacity Sizing Lab Sessions 2110/2111 August 26, 2009 John Burg Brad Snyder Materials created by John Fitch and Jim Shaw IBM Washington Systems Center 1 2 Advanced
More informationMaking System z the Center of Enterprise Computing
8471 - Making System z the Center of Enterprise Computing Presented By: Mark Neft Accenture Application Modernization & Optimization Strategy Lead Mark.neft@accenture.com March 2, 2011 Session 8471 Presentation
More informationCPU MF Counters Enablement Webinar
Advanced Technical Skills (ATS) North America MF Counters Enablement Webinar June 14, 2012 John Burg Kathy Walsh IBM Corporation 1 MF Enablement Education Part 2 Specific Education Brief Part 1 Review
More informationIBM Tivoli OMEGAMON XE on z/os Version 5 Release 1. User s Guide SC
IBM Tivoli OMEGAMON XE on z/os Version 5 Release 1 User s Guide SC27-4028-00 IBM Tivoli OMEGAMON XE on z/os Version 5 Release 1 User s Guide SC27-4028-00 Note Before using this information and the product
More informationIBM CICS Transaction Server V4.2
IBM CICS Transaction Server V4.2 A Comparison of CICS QR and OTE Performance March 2012 IBM Hardware Acceleration Lab Nicholas C. Matsakis Wei K. Liu Greg Dyck Terry Borden Copyright IBM Corporation 2012
More informationIBM. MVS Planning: Workload Management. z/os. Version 2 Release 3 SC
z/os IBM MVS Planning: Workload Management Version 2 Release 3 SC34-2662-30 Note Before using this information and the product it supports, read the information in Notices on page 259. This edition applies
More informationNon IMS Performance PARMS
Non IMS Performance PARMS Dave Viguers dviguers@us.ibm.com Edited By: Riaz Ahmad IBM Washington Systems Center Copyright IBM Corporation 2008 r SMFPRMxx Check DDCONS Yes (default) causes SMF to consolidate
More informationWLM Top 10 Things That Confuse You the Most!
SHARE, August 2011, Orlando WLM Top Ten Things That Confuse You the Most! Glenn Anderson, IBM Technical Training Session 10007 2011 IBM Corporation WLM Top 10 Things That Confuse You the Most! 1. How does
More informationz/os Performance HOT Topics Share, Winter 2008 Session: 2500
IBM Advanced Technical Support z/os Performance HOT Topics Share, Winter 2008 Session: 2500 Kathy Walsh IBM Corporation Advanced Technical Support Trademarks and Disclaimers AIX* AIX 5L* BladeCenter Chipkill
More informationzpcr Capacity Sizing Lab Exercise
Page 1 of 35 zpcr Capacity Sizing Lab Part 2 Hands On Lab Exercise John Burg Function Selection Window Page 2 of 35 Objective You will use zpcr (in Advanced Mode) to define a customer's current LPAR configuration
More informationFramework for Doing Capacity Sizing on System z Processors
Advanced Technical Skills (ATS) North America Framework for Doing Capacity Sizing on System z Processors Seattle Share: Session 2115 Bradley Snyder Email Address: bradley.snyder@us.ibm.com Phone: 972-561-6998
More informationOptimizing your Batch Window
Optimizing your Batch Window Scott Drummond (spd@us.ibm.com) Horst Sinram (sinram@de.ibm.com) IBM Corporation Wednesday, August 9, 2011 Session 9968 Agenda What differentiates batch workloads from online
More informationUnum s Mainframe Transformation Program
Unum s Mainframe Transformation Program Ronald Tustin Unum Group rtustin@unum.com Tuesday August 13, 2013 Session Number 14026 Unum Unum is a Fortune 500 company and one of the world s leading employee
More informationEvolution of CPU and ziip usage inside the DB2 system address spaces
Evolution of CPU and ziip usage inside the DB2 system address spaces Danilo Gipponi Fabio Massimo Ottaviani EPV Technologies danilo.gipponi@epvtech.com fabio.ottaviani@epvtech.com www.epvtech.com Disclaimer,
More informationToday s class. Scheduling. Informationsteknologi. Tuesday, October 9, 2007 Computer Systems/Operating Systems - Class 14 1
Today s class Scheduling Tuesday, October 9, 2007 Computer Systems/Operating Systems - Class 14 1 Aim of Scheduling Assign processes to be executed by the processor(s) Need to meet system objectives regarding:
More informationIntroduction. JES Basics
Introduction The Job Entry Subsystem (JES) is a #11 IN A SERIES subsystem of the z/os operating system that is responsible for managing jobs. The two options for a job entry subsystem that can be used
More informationAnnouncements. Program #1. Program #0. Reading. Is due at 9:00 AM on Thursday. Re-grade requests are due by Monday at 11:59:59 PM.
Program #1 Announcements Is due at 9:00 AM on Thursday Program #0 Re-grade requests are due by Monday at 11:59:59 PM Reading Chapter 6 1 CPU Scheduling Manage CPU to achieve several objectives: maximize
More informationAdvanced Topics UNIT 2 PERFORMANCE EVALUATIONS
Advanced Topics UNIT 2 PERFORMANCE EVALUATIONS Structure Page Nos. 2.0 Introduction 4 2. Objectives 5 2.2 Metrics for Performance Evaluation 5 2.2. Running Time 2.2.2 Speed Up 2.2.3 Efficiency 2.3 Factors
More informationMeasuring VMware Environments
Measuring VMware Environments Massimo Orlando EPV Technologies In the last years many companies adopted VMware as a way to consolidate more Windows images on a single server. As in any other environment,
More informationIBM p5 and pseries Enterprise Technical Support AIX 5L V5.3. Download Full Version :
IBM 000-180 p5 and pseries Enterprise Technical Support AIX 5L V5.3 Download Full Version : https://killexams.com/pass4sure/exam-detail/000-180 A. The LPAR Configuration backup is corrupt B. The LPAR Configuration
More informationzenterprise exposed! Part 1: The Intersection of WLM, RMF, and zmanager Performance Management
SHARE, August 2011, Orlando zenterprise exposed! Part 1: The Intersection of WLM, RMF, and zmanager Performance Management Glenn Anderson, IBM Technical Training Session 10002 Agenda zenterprise Workload
More informationIBM Z: Technical Overview of HW and SW Mainframe Evolution Information Length: Ref: 2.0 Days ES82G Delivery method: Classroom. Price: INR.
IBM Z: Technical Overview of HW and SW Mainframe Evolution Information Length: Ref: 2.0 Days ES82G Delivery method: Classroom Overview Price: INR This course is designed to provide an understanding of
More informationCapacity Estimation for Linux Workloads. David Boyes Sine Nomine Associates
Capacity Estimation for Linux Workloads David Boyes Sine Nomine Associates 1 Agenda General Capacity Planning Issues Virtual Machine History and Value Unique Capacity Issues in Virtual Machines Empirical
More informationCPU MF Counters Enablement Webinar
Advanced Technical Skills (ATS) North America CPU MF Counters Enablement Webinar John Burg Kathy Walsh May 2, 2012 1 Announcing CPU MF Enablement Education Two Part Series Part 1 General Education Today
More informationMultiprocessor and Real-Time Scheduling. Chapter 10
Multiprocessor and Real-Time Scheduling Chapter 10 1 Roadmap Multiprocessor Scheduling Real-Time Scheduling Linux Scheduling Unix SVR4 Scheduling Windows Scheduling Classifications of Multiprocessor Systems
More informationIBM Technical Brief. IBM System z9 ziip Measurements: SAP OLTP, BI Batch, SAP BW Query, and DB2 Utility Workloads. Authors:
IBM Technical Brief IBM System z9 ziip Measurements: SAP OLTP, BI Batch, SAP BW Query, and DB2 Utility Workloads Authors: Seewah Chan Veng K. Ly Mai N. Nguyen Howard E. Poole Michael R. Sheets Akira Shibamiya
More informationzpcr Capacity Sizing Lab Part 2 Hands on Lab
Advanced Technical Skills (ATS) North America zpcr Capacity Sizing Lab Part 2 Hands on Lab SHARE - Session 9098 March 2, 2011 John Burg Brad Snyder Materials created by John Fitch and Jim Shaw IBM 49 2011
More informationBenefit of Asynch I/O Support Provided in APAR PQ86769
IBM HTTP Server for z/os Benefit of Asynch I/O Support Provided in APAR PQ86769 A review of the performance results realized in a benchmarking effort where the key was supporting large numbers of persistent
More informationFramework for Doing Capacity Sizing for System z Processors
IBM Advanced Technical Support - WSC Framework for Doing Capacity Sizing for System z Processors Summer 2009 Share session: 2115 Bradley Snyder Email Address: bradley.snyder@us.ibm.com Phone: 972-561-6998
More informationKey Metrics for DB2 for z/os Subsystem and Application Performance Monitoring (Part 1)
Robert Catterall, IBM rfcatter@us.ibm.com Key Metrics for DB2 for z/os Subsystem and Application Performance Monitoring (Part 1) New England DB2 Users Group September 17, 2015 Information Management 2015
More informationDB2 for z/os Distributed Data Facility Questions and Answers
DB2 for z/os Distributed Data Facility Questions and Answers Michigan DB2 Users Group Robert Catterall, IBM rfcatter@us.ibm.com May 11, 2016 2016 IBM Corporation Agenda DDF monitoring and tuning DDF application
More informationWebSphere Application Server Base Performance
WebSphere Application Server Base Performance ii WebSphere Application Server Base Performance Contents WebSphere Application Server Base Performance............. 1 Introduction to the WebSphere Application
More informationUnderstanding The Importance Of Workload Manager And DB2
IBM Software Group Understanding The Importance Of Workload Manager And DB2 Ed Woods / IBM Corporation 2009 IBM Corporation Agenda Workload Manager Overview Important WLM Concepts And Terminology How DB2
More informationLinux on z Systems. IBM z/vm 6.3 HiperDispatch - Polarization Modes and Middleware Performance
Linux on z Systems IBM z/vm 6.3 HiperDispatch - Polarization Modes and Middleware Performance Before using this information and the product it supports, read the information in Notices on page 23. Edition
More informationCh 4 : CPU scheduling
Ch 4 : CPU scheduling It's the basis of multiprogramming operating systems. By switching the CPU among processes, the operating system can make the computer more productive In a single-processor system,
More informationConcurrent VSAM access for batch and CICS
Concurrent VSAM access for batch and CICS Transparent VSAM file sharing A white paper from: Finding a better way to solve batch issues: concurrent VSAM file sharing When batch processes cause CICS applications
More informationz Processor Consumption Analysis, or What Is Consuming All The CPU? 14744
z Processor Consumption Analysis, or What Is Consuming All The CPU? 14744 Peter Enrico Email: Peter.Enrico@EPStrategies.com z/os Performance Education, Software, and Managed Service Providers Creators
More informationPJ Dynamic CPU Capacity
PJ44591 - Dynamic CPU Capacity Michael Shershin TPF Development lab z/tpf TPF Users Group, Austin, TX April 22-25, 2018 2018 IBM Corporation Dynamic CPU Capacity I-stream Cap Users can handle a sustained
More informationClustering Techniques A Technical Whitepaper By Lorinda Visnick
Clustering Techniques A Technical Whitepaper By Lorinda Visnick 14 Oak Park Bedford, MA 01730 USA Phone: +1-781-280-4000 www.objectstore.net Introduction Performance of a database can be greatly impacted
More informationz/os Performance Hot Topics
z/os Performance Hot Topics Glenn Anderson IBM Lab Services and Tech Training IBM Systems Technical Events ibm.com/training/events Copyright IBM Corporation 2017. Technical University/Symposia materials
More information