AIX Performance Tuning


AIX Performance Tuning
Virtual User Group - May 24, 2018
Jaqui Lynch

Agenda
Architecting
CPU
Memory tuning
Starter Set of Tunables
I/O (Time Permitting)
Volume Groups and File Systems
AIO and CIO
Flash Cache

Architecting

Have a Plan - why are you doing this?
1. Describe the problem.
2. Measure where you're at (baseline). Use your own scripts, nmon, perfpmr - be consistent.
3. Recreate the problem while getting diagnostic data (perfpmr, your own scripts, etc.).
4. Analyze the data.
5. Document potential changes and their expected impact, then group and prioritize them. Remember that one small change that only you know about can cause significant problems, so document ALL changes.
6. Make the changes. Group changes that go together if it makes sense to do so, but don't go crazy.
7. Measure the results and analyze whether they had the expected impact; if not, then why not?
8. Is the problem still the same? If not, return to step 1. If it's the same, return to step 3.
This may look like common sense, but in an emergency that is the first thing to go out the window.
Remember tuning is an iterative process. You may not get it right the first time, but over time and with a plan you can make things better.

Things to Consider
Keep cores less than the socket size: try to keep the profile of an LPAR within a socket, but definitely within a node.
Too many cores and threads puts more stress on the hypervisor managing them.
Bring-up order: largest LPARs first.
If not all memory DIMMs are full, keep memory less than the socket size: again, try to keep the profile of an LPAR within a socket.
Understand the difference in latency between dedicated and shared processor cores.
Context matters - there are no hard and fast rules, as workload and other factors matter.

Help the Hypervisor!
Help the hypervisor cleanly place partitions when they are first defined and activated.
Define dedicated partitions first. Define large partitions first. Within the shared pool, define large partitions first.
At system (not partition) IPL, PowerVM will allocate resources cleanly.
Do not set the maximum memory setting too high, as you will waste memory.
Fill your memory DIMMs to get maximum bandwidth, and don't mix memory DIMMs of different speeds.
Don't assign more cores to an LPAR than there are physical cores in the server or the pool.

Memory & CPU Tips
Avoid having chips without DIMMs. Attempt to fill every chip's DIMM slots, activating as needed.
The hypervisor tends to avoid activating cores without local memory.
(Diagram courtesy of IBM: sockets shown with their cores and DIMM slots, some DIMM banks filled and some empty.)
As an example, on a 48-core E880 node with 512GB, each of the 4 sockets has 128GB and 12 cores. If you keep LPARs at <=12 cores and <=128GB then you are less likely to cross a socket boundary. At 49 cores you are crossing a node boundary.
IBM has memory plugging rules about how memory is placed across the sockets to try and help with this.

Low hanging fruit
Check change control - was anything changed?
Do I have any hardware errors in errpt?
Does lsps -a or lsps -s show a lot of page space used?
Is my system approaching 100% busy?
If in a shared pool, am I constantly over entitlement, or constantly folding/unfolding VPs?
Is the ratio of SYS% more than USR%?
Does my batch window extend into my online day?
Is there unexplained I/O wait?
Are my CPUs and threads being used fairly evenly?
Is the I/O fairly well spread between disks and adapters?
Any full filesystems, especially /var or / or /usr?
Error messages: /etc/syslog.conf will tell you where they are. Look at errpt - a lot of problems are made clear there.
Check Fix Central in case it is a known bug: ibm.com/support/fixcentral/
Do the same at the firmware history site in case it is fixed in the next firmware update.
Know how to use PerfPMR before you need to. (A quick first-pass check sketch follows.)

What makes it go slow?
Obvious: not enough CPU, not enough memory, not enough disk I/O, not enough network I/O.
Not so obvious:
AIX tuning
Oracle/DB2 parameters - log placement, SGA, buffers
Read vs write characteristics, adapter placement, overloading bus speeds
Throttling effects - e.g., single thread dependency
Application errors
Background processes (backups, batch processing) running during peak online times
Concurrent access to the same files
Changes in shared resources
Hardware errors
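As a minimal sketch of that first pass, the checks above can be scripted; these are all standard AIX commands, but the grep strings and head count are my own choices, not from the deck:

    # first-pass health check - run as root
    errpt | head -20                   # recent hardware/software errors
    lsps -s                            # total paging space and percent used
    vmstat -v | grep -i blocked        # blocked I/O counters (cumulative since boot)
    df -g / /usr /var                  # full filesystems?
    oslevel -s                         # AIX level, to compare against Fix Central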

How to measure
What is important - response time or throughput?
Response time is the elapsed time between when a request is submitted and when the response from that request is returned. Examples: the time for a database query, the time it takes to echo characters to the terminal, the time it takes to access a Web page. How much time does my user wait?
Throughput is a measure of the amount of work that can be accomplished over some unit of time. Examples: database transactions per minute, file transfer speed in KB per second, file read or write KB per second, Web server hits per minute.

Performance Support Flowchart
(Cartoon flowchart, courtesy of XKCD.)

Performance Support Flowchart - the real one
(Flowchart: monitor system performance and check against requirements. If performance meets the stated goals, continue normal operations. If there is a performance problem, check in turn whether the system is CPU bound, memory bound, I/O bound, network bound, or sharing resources; take the corresponding actions, and run additional tests if none of those apply.)

CPU

Dispatching in shared pool
A VP gets dispatched to a core; the first time this happens, that becomes its home node. All SMT threads for the VP go with the VP.
The VP runs to the end of its entitlement. If it has more work to do and no one else wants the core, it gets more. If it has more work to do but other VPs want the core, it gets context switched and put on the home node run queue.
If it can't get serviced in a timely manner it goes to the global run queue and ends up running somewhere else, but its data may still be in the memory on the home node core.

Understand SMT
SMT threads dispatch via a Virtual Processor (VP).
Overall more work gets done (throughput), but individual threads run a little slower.
SMT1: largest unit of execution work. SMT2: smaller unit of work, but provides a greater amount of execution work per cycle. SMT4: smallest unit of work, but provides the maximum amount of execution work per cycle.
On POWER7, a single thread cannot exceed 65% utilization. On POWER6 or POWER5, a single thread can consume 100%.
Understand thread dispatch order: primary, then secondary, then tertiary SMT threads.
(Diagram courtesy of IBM.)

POWER5/6 vs POWER7/8 SMT Utilization
POWER6 SMT2: Htc0 busy + Htc1 idle reports 100% busy; Htc0 busy + Htc1 busy reports 100% busy.
POWER7 SMT2: Htc0 busy + Htc1 idle reports ~70% busy; both busy reports 100% busy.
POWER7 SMT4: Htc0 busy + Htc1-Htc3 idle reports ~63% busy, rising through ~77% and ~88% as more threads become busy, up to 100%. busy = user% + system%.
POWER8 SMT4: Htc0 busy + Htc1-Htc3 idle reports ~60% busy.
The POWER7 reporting (70% at SMT2, 63% at SMT4) tries to show potential spare capacity; this escaped most people's attention.
A VM goes 100% busy at entitlement and stays at 100% from there on, up to 10x more CPU.
At SMT4 100% busy the 1st CPU is reported as 63% busy; the 2nd, 3rd and 4th LCPUs each report 12% idle time, which is approximate.
POWER8 Notes
The uplift from SMT2 to SMT4 is about 30%; the uplift from SMT4 to SMT8 is about 7%. POWER9 is a bigger uplift. Check published rperf numbers (the per-server rperf table by SMT mode was not captured in this transcription).
Nigel Griffiths - Power7 Affinity - Sessions 19 and 20, PowerVM VUG

POWER5/6 vs POWER7/8 Virtual Processor Unfolding
A Virtual Processor is activated at a different utilization threshold on P5/P6 than on P7.
P5/P6 loads the 1st and 2nd SMT threads to about 80% utilization and then unfolds a VP.
P7 loads the first thread on the VP to about 50% and then unfolds a VP. Once all VPs are unfolded, the 2nd SMT threads are used; once the 2nd threads are loaded, the tertiaries are used.
This is called raw throughput mode. Why? Raw throughput provides the highest per-thread throughput and best response times, at the expense of activating more physical cores.
Both systems report the same physical consumption. This is why some people see more cores being used on P7 than on P6/P5, especially if they did not reduce VPs when they moved the workload across. HOWEVER, idle time will most likely be higher.
I call the P5/P6 method "stack and spread" and the P7 method "spread and stack".

Scaled Throughput
Available on P7 and higher with AIX v6.1 TL08 and AIX v7.1 TL02.
Dispatches more SMT threads to a VP core before unfolding additional VPs; tries to make it behave a bit more like P6.
Raw provides the highest per-thread throughput and best response times at the expense of activating more physical cores.
Scaled provides the highest core throughput at the expense of per-thread response times and throughput. It also provides the highest system-wide throughput per VP because tertiary thread capacity is not left on the table.
schedo -p -o vpm_throughput_mode=
  0 - Legacy Raw mode (default)
  1 - Enhanced Raw mode with a higher threshold than legacy
  2 - Scaled mode, use primary and secondary SMT threads
  4 - Scaled mode, use all four SMT threads
  8 - Scaled mode, use eight SMT threads (POWER8, AIX v7.1 required)
Dynamic tunable. SMT-unfriendly workloads could see an enormous per-thread performance degradation. (A check/change sketch follows.)

More on Dispatching
How dispatching works - example: 1 core with 6 VMs assigned to it.
VPs for the VMs on the core get dispatched (consecutively) and their threads run.
As each VM runs, the cache is cleared for the new VM.
When entitlement is reached, or the VM runs out of work, the CPU is yielded to the next VM.
Once all VMs are done, the system determines if there is time left. Assume our 6 VMs take 6ms, so 4ms is left. The remaining time is assigned to still-running VMs according to weights, the VMs run again, and so on.
Problem: if entitlement is too low, the dispatch window for the VM can be too low. If a VM runs multiple times in a 10ms window it does not run at full speed, as the cache has to be warmed up each time. If entitlement is higher, the dispatch window is longer and the cache stays warm longer - fewer cache misses.
Nigel Griffiths - Power7 Affinity - Sessions 19 and 20, PowerVM VUG
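A small sketch of checking and changing the mode, assuming your AIX level exposes the tunable; mode 2 is just the example value here:

    schedo -a | grep vpm_throughput_mode    # show the current mode
    schedo -p -o vpm_throughput_mode=2      # scaled mode, primary+secondary (-p persists across reboots)
    schedo -p -o vpm_throughput_mode=0      # return to the raw-mode default

Because it is dynamic, the change takes effect immediately; test on a non-production LPAR first given the per-thread degradation warning above.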

Entitlement and VPs
The utilization calculation for CPU is different between POWER5, 6 and POWER7. VPs are also unfolded sooner (at lower utilization levels than on P6 and P5). You may also see high VCSW in lparstat.
This means that on POWER7 and higher you need to pay more attention to VPs. You may see more cores activated at lower utilization levels, but you will see higher idle. If only primary SMT threads are in use then you have excess VPs.
Try to avoid this issue by reducing VP counts and using realistic entitlement-to-VP ratios. 10x or 20x is not a good idea; try setting entitlement to 0.6 or 0.7 of VPs. Ensure workloads never run consistently above 100% entitlement. Too little entitlement means too many VPs will be contending for the cores. (A quick ratio check follows.)
NOTE: VIO server entitlement is critical - SEAs scale by entitlement, not VPs.
All VPs have to be dispatched before one can be redispatched.
Performance may (in most cases, will) degrade when the number of Virtual Processors in an LPAR exceeds the number of physical processors. The same applies to VPs in a shared pool LPAR - these should not exceed the cores in the pool.

Avoiding Problems
Stay current. There are known memory issues with 6.1 tl9 sp1 and 7.1 tl3 sp1.
Java 7.1 SR1 is the preferred Java for POWER7 and POWER8. Java 6 SR7 is the minimum on POWER7, but you should go to Java 7.
WAS: refer to Section 8.3 of the Performance Optimization and Tuning Techniques Redbook.
HMC v8, required for POWER8, does not support servers prior to POWER6.
Remember not all workloads run well in the shared processor pool - some are better dedicated: apps with polling behavior, CPU-intensive apps (SAS, HPC), latency-sensitive apps (think trading systems).
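To sanity-check the entitlement-to-VP ratio on a running LPAR, something along these lines works (the field names are as printed by lparstat -i):

    lparstat -i | egrep 'Type|Mode|Entitled Capacity|Online Virtual CPUs'
    # aim for Entitled Capacity around 0.6-0.7 x Online Virtual CPUs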

lparstat 30 2 - SPP
System configuration: type=Shared mode=Uncapped smt=4 lcpu=72 mem=319488MB psize=17 ent=12.00
(%user %sys %wait %idle physc %entc lbusy app vcsw phint columns; the sample values were not captured.)
lcpu=72 and smt=4 means I have 72/4 = 18 VPs, but the pool is only 17 cores - BAD.
psize = processors in the shared pool.
lbusy = % occupation of the LCPUs at the system and user level.
app = available physical processors in the pool.
vcsw = virtual context switches (virtual processor preemptions).
phint = phantom interrupts received by the LPAR - interrupts targeted to another partition that shares the same physical processor. I.e. the LPAR does an I/O so cedes the core; when the I/O completes the interrupt is sent to the core, but a different LPAR is running, so it says "not for me".
NOTE: you must set "Allow performance information collection" on the LPARs to see good values for app, etc. Required for shared pool monitoring.

lparstat 30 2 - Dedicated
System configuration: type=Dedicated mode=Capped smt=4 lcpu=80 mem=524288MB
System configuration: type=Dedicated mode=Capped smt=8 lcpu=32 mem=80640MB
lcpu=80 and smt=4 means I have 80/4 = 20 cores; lcpu=32 and smt=8 means 4 cores.
lbusy = % occupation of the LCPUs at the system and user level.
lparstat -h 30 2 adds:
%hypv - the percentage of physical processor consumption spent making hypervisor calls. Context matters - high %hypv means very little if CPU utilization is very low.
hcalls - the average number of hypervisor calls that were started.

lparstat -E 30 2 - Dedicated
Lets you check the frequency the server is running at.
System configuration: type=Dedicated mode=Capped smt=8 lcpu=32 mem=80640MB Power=Disabled
Physical Processor Utilisation shows Actual and Normalised user/sys/wait/idle plus frequency: GHz [101%] in both columns here, so this POWER8 is set up to run the same frequency all the time.

Using sar -mu -P ALL (POWER7 & SMT4)
AIX with ent=10 and 16 VPs, so per-VP physc entitled is about 0.63.
System configuration: lcpu=64 ent=10.00 mode=Uncapped
(Per-CPU %usr %sys %wio %idle physc %entc lines were here; values not captured.)
Above entitlement on average - increase entitlement?
So we see we are using about 12.7 cores, which is 127.1% of our entitlement. This is the sum of all the physc lines.
cpu0-3 = proc0 = VP0.
You may see a U line if in the SPP and there is unused LPAR capacity (compared against entitlement).

mpstat -s - SMT8 example
mpstat -s 1 1
System configuration: lcpu=64 ent=10.0 mode=Uncapped
The output shows the utilization breakdown across the VPs (Proc*) and their SMT threads (cpu*), e.g. Proc4 at 84.01% split as cpu4 42.41%, cpu5 24.97%, cpu6 8.31%, cpu7 8.32%; Proc8 at 81.42% split as cpu8 39.47%, cpu9 25.97%, cpu10 7.99%, cpu11 7.99%; and at the end cpu60 62.63%, cpu61 13.22%, cpu62 11.63%, cpu63 11.63% (some columns were not captured).
Proc* are the virtual CPUs; cpu* are the logical CPUs (SMT threads).

nmon Summary - dedicated cores
(screenshot placeholder)

nmon Summary - shared cores
(screenshot placeholder)

lparstat bbbl tab in nmon
lparno 3, lparname gandalf
CPU in sys 24, Virtual CPU 16, Logical CPU 64, smt threads 4, capped 0
min Virtual 8, max Virtual 20, min Logical 8, max Logical 80
min Capacity 8, max Capacity 16, Entitled Capacity 10
min/max/online Memory MB (values not captured), Pool CPU 16, Weight 150, pool id (not captured)
Compare VPs to the pool size: the LPAR should not have more VPs than the pool size.

Entitlement and VPs - from the lpar tab in nmon
LPAR always above entitlement - increase entitlement.
(chart placeholder)

Entitlement and VPs - from the lpar tab - VIOS
(chart placeholder)

CPU by thread from the cpu_summ tab in nmon
SMT4 example: note that mostly the primary thread is used, plus some secondary - we should possibly reduce cores/VPs.
SMT8 example with all threads busy: maybe add cores.
(chart placeholders)

vmstat -IW 60 2 - Shared
System configuration: lcpu=12 mem=24832MB ent=2.00
(kthr/memory/page/faults/cpu columns: r b p w avm fre fi fo pi po fr sr in sy cs us sy id wa pc ec)
Note pc=2.42 is about 121% of the 2.00 entitlement.
When looking at system-time-to-user-time ratios, remember on a VIO server that high system time is most likely normal, as the VIO handles all the I/O and network and really has little normal user-type work.
-I shows an I/O-oriented view and adds the p column: the number of threads waiting for I/O messages to raw devices.
-W adds the w column (only valid with -I as well): the number of threads waiting for filesystem direct I/O (DIO) and concurrent I/O (CIO).
r column: average number of runnable threads (ready but waiting to run, plus those running). This is the global run queue - use mpstat -w and look at the rq field to get the run queue for each logical CPU.
b column: average number of threads placed in the VMM wait queue (awaiting resources or I/O).

vmstat -IW - Dedicated
Busysvr on 5/1/2018, POWER8. Has 4 memory pools and computational was 67.1%; minfree=1024, maxfree value not captured.
(output placeholder)

Shared Processor Pool Monitoring
Turn on "Allow performance information collection" on the LPAR properties - this is a dynamic change.
topas -C: the most important value is app - available pool processors. This represents the current number of free physical cores in the pool.
nmon: option p for pool monitoring. To the right of PoolCPUs there is an unused column, which is the number of free pool cores.
nmon analyser: LPAR tab.
lparstat: shows the app column and poolsize.

topas -C
Shows a pool size of 16 with all 16 available.
Monitor VCSW as a potential sign of insufficient entitlement.
(screenshot placeholder)

nmon Analyser LPAR tab - shared pool LPARs only
(chart placeholder)

MEMORY

Memory Types
Persistent: backed by filesystems.
Working storage: dynamic; includes executables and their work areas; backed by page space. Shows as avm in vmstat -I (multiply by 4096 to get bytes instead of pages), or as %comp in nmon analyser, or as the percentage of memory used for computational pages in vmstat -v.
ALSO NOTE: if %comp is near or above 97% then you will be paging and need more memory.
Prefer to steal from persistent storage, as it is cheap: minperm, maxperm, maxclient, lru_file_repage and page_steal_method all impact these decisions.

Memory with lru_file_repage=0
minperm=3: always try to steal from filesystems if filesystems are using more than 3% of memory.
maxperm=90: soft cap on the amount of memory that filesystems or network can use. A superset - includes things covered in maxclient as well.
maxclient=90: hard cap on the amount of memory that JFS2 or NFS can use - a SUBSET of maxperm.
lru_file_repage goes away in later TLs of v7 - it is still there, but you can no longer change it.
All AIX systems post AIX v5.3 (tl04, I think) should have these 3 set. On v6.1 and v7 they are set by default.
Check /etc/tunables/nextboot to make sure they are not overridden from the defaults on v6.1 and v7.

/etc/tunables/nextboot - VALUES YOU MIGHT SEE
If you see other parameters, check they are not restricted and that they are still valid.
vmo:
  maxfree = "2048"
  minfree = "1024"
no:
  tcp_recvspace = "262144"
  udp_recvspace = "655360"
  udp_sendspace = "65536"
  tcp_sendspace = "262144"
  rfc1323 = "1"
ioo:
  j2_minpagereadahead = "2"
  j2_maxpagereadahead = "1024"
  j2_dynamicbufferpreallocation = "256"

page_steal_method
Default in 5.3 is 0; in 6 and 7 it is 1.
What does 1 mean? lru_file_repage=0 tells the LRUD to try and steal from filesystems.
Memory is split across mempools; an LRUD manages a mempool and scans it to free pages.
0 - scan all pages; 1 - scan only filesystem pages.

page_steal_method Example
500GB memory: 50% used by file systems (250GB), 50% used by working storage (250GB), mempools = 5.
So we have at least 5 LRUDs, each controlling about 100GB of memory.
Set to 0: each scans all 100GB of memory in its pool.
Set to 1: each scans only the 50GB in its pool used by filesystems.
This reduces the CPU used by scanning. When combined with CIO this can make a significant difference.

Correcting Paging
From vmstat -v: a high count of "paging space I/Os blocked with no psbuf".
lsps output on the above system, which was paging before changes were made to the tunables:
lsps -a
  Page Space  Physical Volume  Volume Group  Size     %Used  Active  Auto  Type
  paging01    hdisk3           pagingvg      16384MB  25     yes     yes   lv
  paging00    hdisk2           pagingvg      16384MB  25     yes     yes   lv
  hd6         hdisk0           rootvg        16384MB  25     yes     yes   lv
lsps -s
  Total Paging Space 49152MB, Percent Used 1%
Can also use vmstat -I and vmstat -s. Page spaces should be balanced.
NOTE: the VIO server comes with 2 different-sized page datasets on one hdisk; on recent levels there are 2 x 1024MB files.
Best Practice
More than one page volume, all the same size, including hd6.
Page spaces must be on different disks to each other.
Do not put them on hot disks. Mirror all page spaces that are on internal or non-RAIDed disk.
If you can't make hd6 as big as the others, then swap it off after boot. (A rebalance sketch follows.)
All real paging is bad.

Memory Breakdown - svmon -G (UNIT is MB)
Columns: size, inuse, free, pin, virtual, available, mmode for memory and pg space; work/pers/clnt/other breakdown for pin and in use; plus a PageSize breakdown (s 4 KB, m 64 KB, L 16 MB, S 16 GB with PoolSize, inuse, pgsp, pin, virtual). The numeric values were not captured.
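A hedged sketch of rebalancing page spaces along those best-practice lines; the sizes, VG and hdisk names below are examples only:

    lsps -a                             # list page spaces and usage
    mkps -a -n -s 64 pagingvg hdisk4    # new page space of 64 LPs, activate now (-n) and at boot (-a)
    chps -s 32 paging00                 # grow an existing page space by 32 LPs
    swapoff /dev/hd6                    # deactivate hd6 after boot if it cannot match the others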

Looking for Problems
lssrad -av
mpstat -d
topas -M
svmon: try the -G -O unit=auto,timestamp=on,pgsz=on,affinity=detail options and look at the Domain affinity section of the report.
tprof/gprof, trace, etc.

Memory Problems
Look at computational memory use: it shows as avm in vmstat -I (multiply by 4096 to get bytes instead of pages).
System configuration: lcpu=48 mem=32768MB ent=0.50
(vmstat columns: r b p w avm fre fi fo pi po fr sr in sy cs us sy id wa pc ec)
The AVM above is about 3.08GB, which is about 9% of the 32GB in the LPAR.
It also shows as %comp in nmon analyser, or as the percentage of memory used for computational pages in vmstat -v.
NOTE: if %comp is near or above 97% then you will be paging and need more memory.
Try: svmon -P -Osortseg=pgsp -Ounit=MB | more - this shows the processes using the most page space, in MB.
You can also try: svmon -P -Ofiltercat=exclusive -Ofiltertype=working -Ounit=MB | more

nmon memnew tab
(chart placeholder)

nmon memuse tab
(chart placeholder)

Affinity
LOCAL: same SRAD, within the same chip - shows as S3.
NEAR: different SRAD within the same node (intra-node) - shows as S4.
FAR: SRAD on another node (inter-node) - shows as S5.
The command is lssrad -av, or you can look at mpstat -d. The topas M option shows them as Localdisp%, Neardisp%, Fardisp%.
The further the distance, the longer the latency.
Problems you may see: an SRAD has CPUs but no memory, or vice versa; CPU or memory unbalanced.
Note: on single-node systems far dispatches are not as concerning.
To correct, look at new firmware, entitlements and LPAR memory sizing. You can also look at the Dynamic Platform Optimizer (DPO).

mpstat -d - Example from a POWER8 S814
b814aix1: mpstat -d
System configuration: lcpu=48 ent=0.5 mode=Uncapped
(Columns: cpu cs ics bound rq push S3pull S3grd S0rd S1rd S2rd S3rd S4rd S5rd ilcs vlcs S3hrd S4hrd S5hrd - local, near and far dispatch values.)
The above is for a single-socket system (S814), so I would expect to see everything local (S3hrd).
On a multi-socket or multi-node system, pay attention to the numbers under near and far.

E850 - 4-socket mpstat -d
(screenshot placeholder)

Starter set of tunables - Memory
For AIX v5.3: no need to set memory_affinity=0 after 5.3 tl05.
MEMORY:
vmo -p -o minperm%=3
vmo -p -o maxperm%=90
vmo -p -o maxclient%=90
vmo -p -o minfree=960 (we will calculate these)
vmo -p -o maxfree=1088 (we will calculate these)
vmo -p -o lru_file_repage=0
vmo -p -o lru_poll_interval=10
vmo -p -o page_steal_method=1
For AIX v6 or v7, including VIOS: the memory defaults are already correct except minfree and maxfree.
If you upgraded from a previous version of AIX using migration, then you need to check the settings afterwards.

vmstat -v Output - healthy
uptime: 02:03PM up 39 days, 3:06, 2 users, load average: 17.02, 15.35, ...
  memory pools (count not captured)
  3.0 minperm percentage
  90.0 maxperm percentage
  14.9 numperm percentage
  14.9 numclient percentage
  90.0 maxclient percentage
  66 pending disk I/Os blocked with no pbuf              (pbufs)
  0 paging space I/Os blocked with no psbuf              (page space)
  1972 filesystem I/Os blocked with no fsbuf             (JFS)
  527 client filesystem I/Os blocked with no fsbuf       (NFS/VxFS)
  613 external pager filesystem I/Os blocked with no fsbuf (JFS2)
numclient=numperm, so most likely the I/O being done is JFS2 or NFS or VxFS; based on the blocked I/Os it is clearly a system using JFS2.
This is a fairly healthy system, as it has been up 39 days with few blockages.

vmstat -v Output - Not Healthy
  3.0 minperm percentage
  90.0 maxperm percentage
  45.1 numperm percentage
  45.1 numclient percentage
  90.0 maxclient percentage
  (large) pending disk I/Os blocked with no pbuf             (LVM - use the lvmo command to correct)
  (large) paging space I/Os blocked with no psbuf            (VMM - add page spaces or increase their size and reboot)
  2048 file system I/Os blocked with no fsbuf                (JFS - FS layer - ioo numfsbufs)
  238 client file system I/Os blocked with no fsbuf          (NFS/VxFS - FS layer)
  (large) external pager file system I/Os blocked with no fsbuf (JFS2 - FS layer - ioo j2_dynamicbufferpreallocation)
numclient=numperm, so most likely the I/O being done is JFS2 or NFS or VxFS; based on the blocked I/Os it is clearly a system using JFS2. It is also having paging problems, and the pbufs also need reviewing.
Check uptime to see how long the system has been up, as the stats are cumulative from boot. Context matters: if this system has been up only a few days or weeks, this is a serious problem; if it has been up for a long time, then you have no idea when these occurred.
So run vmstat -v several times, maybe 8 hours apart, and compare them to get growth rates. (A snapshot loop sketch follows.)
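Since the counters are cumulative from boot, the growth rate matters more than the absolute number. A crude way to capture timestamped snapshots every 8 hours (the log path is my own choice):

    while true; do
        date >> /tmp/vmstatv.log
        vmstat -v >> /tmp/vmstatv.log
        sleep 28800    # 8 hours
    done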

lvmo -a Output
Sometimes the "pending disk I/Os blocked with no pbuf" line from vmstat -v only includes rootvg, so use lvmo -a to double check:
  vgname = rootvg
  pv_pbuf_count = 512
  total_vg_pbufs = 1024
  max_vg_pbuf_count = (not captured)
  pervg_blocked_io_count = 0      <- this is rootvg
  pv_min_pbuf = 512
  global_blocked_io_count = (large)  <- this is all VGs together
Use lvmo -v xxxxvg -a for the other VGs. On this system the pervg_blocked_io_count (blocked vs total_vg_pbufs) for nimvg, sasvg and backupvg showed blocking (values not captured).
lvmo -v sasvg -o pv_pbuf_count=2048 - do this for each VG affected, NOT GLOBALLY. (A survey loop follows.)

Memory Pools and the fre column
The fre column in vmstat is a count of all the free pages across all the memory pools.
When you look at fre you need to divide it by the number of memory pools, then compare the result to maxfree and minfree. This will help you determine if you are happy, page stealing or thrashing.
You can see high values in fre but still be paging - you have to divide the fre column by mempools.
In the example below, if maxfree=2000 and we have 10 memory pools, then we only have 990 pages free in each pool on average. With minfree=960 we are page stealing and close to thrashing.
(kthr/memory/page/faults/cpu headers; fre=9902 in the sample.)
Assuming 10 memory pools (you get this from vmstat -v): 9902/10 = 990, so we have about 990 pages free per memory pool. If maxfree is 2000 and minfree is 960, then we are page stealing and very close to thrashing.
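A loop like the following surveys pbuf blocking per online VG; sasvg and the 2048 value are the example from the slide, not a universal recommendation:

    for vg in $(lsvg -o); do
        echo "== $vg =="
        lvmo -v $vg -a | egrep 'pbuf_count|blocked'
    done
    # then raise pbufs per affected VG, not globally:
    # lvmo -v sasvg -o pv_pbuf_count=2048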

Calculating minfree and maxfree
vmstat -v | grep "memory pools"  ->  3 memory pools
vmo -a | grep free  ->  maxfree = 1088, minfree = 960
The calculation is:
  minfree = max(960, (120 * lcpus) / memory pools)
  maxfree = minfree + (max(maxpgahead, j2_maxPageReadahead) * lcpus) / memory pools
So if I have the following: memory pools = 3 (from vmo -a or kdb), j2_maxPageReadahead = 128, 6 CPUs with SMT on so lcpu = 12, then:
  minfree = max(960, (120 * 12) / 3) = max(960, 480) = 960
  maxfree = minfree + (128 * 12) / 3 = 960 + 512 = 1472
I would probably bump this to 1536 rather than using 1472 (a nice power of 2). (A shell version of this calculation follows.)
The difference between minfree and maxfree should be no more than 1K, per IBM.
If you over-allocate these values it is possible that you will see high values in the fre column of a vmstat and yet still be paging.

/etc/tunables/nextboot Example
vmo:
  minfree = "1024"
  maxfree = "2048"
no:
  tcp_recvspace = "262144"
  udp_recvspace = "655360"
  udp_sendspace = "65536"
  tcp_sendspace = "262144"
  rfc1323 = "1"
ioo:
  j2_maxpagereadahead = "256"
  j2_dynamicbufferpreallocation = "256"
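The calculation translates directly into shell arithmetic; set the three inputs from vmstat -v and ioo -a for your system (the values below are the slide's example, not defaults):

    #!/bin/ksh
    lcpu=12     # logical CPUs
    pools=3     # memory pools (vmstat -v)
    j2ra=128    # j2_maxPageReadahead (ioo -a)
    minfree=$(( (120 * lcpu) / pools ))
    [ $minfree -lt 960 ] && minfree=960
    maxfree=$(( minfree + (j2ra * lcpu) / pools ))
    echo "suggested minfree=$minfree maxfree=$maxfree"
    # apply with: vmo -p -o minfree=$minfree -o maxfree=$maxfree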

I/O - Time Permitting

Rough Anatomy of an I/O
LVM requests a PBUF: a pinned memory buffer that holds the I/O request in the LVM layer.
It is then placed into an FSBUF. There are 3 types, also pinned: filesystem (JFS), client (NFS and VxFS), and external pager (JFS2).
If paging, PSBUFs are needed (also pinned) - these are used for I/O requests to and from page space.
The I/O is then queued to an hdisk (queue_depth).
It is then queued to an adapter (num_cmd_elems).
The adapter queues it to the disk subsystem.
Additionally, every 60 seconds the sync daemon (syncd) runs to flush dirty I/O out to filesystems or page space.

I/O Wait - and why it is not necessarily useful
SMT2 example for simplicity:
The system has 7 threads with work; the 8th has nothing, so it is not shown.
The system has 3 threads blocked (the red threads in the diagram). SMT is turned on.
There are 4 threads ready to run, so they get dispatched, each using 80% user and 20% system.
The metrics would show:
  %user = 0.8 * 4 / 4 = 80%
  %sys = 0.2 * 4 / 4 = 20%
Idle will be 0%, as no core is waiting to run threads.
I/O wait will be 0%, as no core is idle waiting for I/O to complete - something else got dispatched to that core.
SO: we have I/O wait BUT we don't see it. Conversely, if all threads were blocked but there was nothing else to run, then we would see very high I/O wait.

What is iowait? Lessons to learn
iowait is a form of idle time. It is simply the percentage of time the CPU is idle AND there is at least one I/O still in progress (started from that CPU).
The iowait value seen in the output of commands like vmstat, iostat, and topas is the iowait percentage across all CPUs averaged together.
This can be very misleading!
High I/O wait does not mean that there is definitely an I/O bottleneck.
Zero I/O wait does not mean that there is not an I/O bottleneck.
A CPU in the I/O wait state can still execute threads if there are any runnable threads.

Basics
Data layout will have more impact than most tunables - plan in advance.
Large hdisks are evil. I/O performance is about bandwidth and reduced queuing, not size: 10 x 50GB or 5 x 100GB hdisks are better than 1 x 500GB. Larger LUN sizes may also mean larger PP sizes, which is not great for lots of little filesystems.
You need to separate different kinds of data, i.e. logs versus data.
The issue is queue_depth: there are in-process and wait queues for hdisks. The in-process queue contains up to queue_depth I/Os, and the hdisk driver submits I/Os to the adapter driver.
The adapter driver also has in-process and wait queues. SDD and some other multipath drivers will not submit more than queue_depth I/Os to an hdisk, which can affect performance. The adapter driver submits I/Os to the disk subsystem.
The default client queue depth for vscsi is 3: chdev -l hdisk? -a queue_depth=20 (or some good value).
The default client queue depth for NPIV is set by the multipath driver in the client.

More on queue depth
Disk and adapter drivers each have a queue to handle I/O; queues are split into in-service (aka in-flight) and wait queues.
I/O requests in the in-service queue are sent to storage, and the slot is freed when the I/O completes. I/O requests in the wait queue stay there until an in-service slot is free.
queue_depth is the size of the in-service queue for the hdisk. The default for a vscsi hdisk is 3; the default for NPIV or direct attach depends on the HAK (host attach kit) or MPIO drivers used.
num_cmd_elems is the size of the in-service queue for the HBA.
The maximum in-flight I/Os submitted to the SAN is the smallest of: the sum of the hdisk queue depths, the sum of the HBA num_cmd_elems, and the maximum in-flight I/Os submitted by the application.
For HBAs, num_cmd_elems defaults to 200 typically; the maximum range is 2048 to 4096 depending on the storage vendor.
As of AIX v7.1 tl2 (or 6.1 tl8) num_cmd_elems is limited to 256 for VFCs - see 01.ibm.com/support/docview.wss?uid=isg1IV63282. (A queue-depth survey sketch follows.)
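Surveying and adjusting queue depths can be scripted; hdisk5 and the value 20 are examples, and -P defers the change to the next reboot (needed if the disk is in use):

    for d in $(lsdev -Cc disk -F name); do
        echo "$d queue_depth=$(lsattr -El $d -a queue_depth -F value)"
    done
    chdev -l hdisk5 -a queue_depth=20 -P    # confirm the value with your storage vendor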

iostat -Dl
System configuration: lcpu=32 drives=67 paths=216 vdisks=0
(Columns per hdisk: %tm_act, bps, tps, bread, bwrtn; read rps/avgserv/minserv/maxserv; write wps/avgserv/minserv/maxserv; queue avgtime/mintime/maxtime/avgwqsz/avgsqsz/sqfull. The per-hdisk values were not captured.)
tps: transactions per second - transfers per second to the adapter.
avgserv: average service time.
avgtime: average time in the wait queue.
avgwqsz: average wait queue size - if regularly >0, increase queue_depth.
avgsqsz: average service queue size (waiting to be sent to the disk) - it can't be larger than queue_depth for the disk.
sqfull: rate of I/Os submitted to a full queue, per second.
Look at iostat -aD for adapter queues.
If avgwqsz > 0 or sqfull is high, then increase queue_depth. Also look at avgsqsz.
Per IBM, average I/O sizes: read = bread/rps, write = bwrtn/wps.
Also try iostat -RDTl int count, e.g. iostat -RDTl 30 5 does 5 x 30-second snaps. Plus sar -d and nmon -D.

Adapter Queue Problems
Look at the BBBF tab in nmon Analyser, or run the fcstat command. fcstat -D provides better information, including high-water marks that can be used in calculations.
Adapter device drivers use DMA for I/O. From fcstat on each fcs device - NOTE these counters are cumulative since boot:
  FC SCSI Adapter Driver Information
  No DMA Resource Count: 0
  No Adapter Elements Count: 2567
  No Command Resource Count: (large) - the number of times since boot that I/O was temporarily blocked waiting for resources, e.g. num_cmd_elems too low.
No DMA resource - adjust max_xfer_size. No adapter elements - adjust num_cmd_elems. No command resource - adjust num_cmd_elems. (A per-adapter check loop follows.)
If using NPIV, make the changes on the VIO and the client, not just the VIO. Reboot the VIO prior to changing client settings.
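Checking every FC adapter for the three shortage counters can be scripted; the grep strings match the fcstat output fields shown above:

    for a in $(lsdev -Cc adapter -F name | grep '^fcs'); do
        echo "== $a =="
        fcstat $a | egrep 'No DMA Resource|No Adapter Elements|No Command Resource'
    done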

Adapter Tuning
fcs0 attributes:
  bus_intr_lvl 115 - Bus interrupt level (False)
  bus_io_addr 0xdfc00 - Bus I/O address (False)
  bus_mem_addr 0xe... - Bus memory address (False; value truncated)
  init_link al - INIT Link flags (True)
  intr_priority 3 - Interrupt priority (False)
  lg_term_dma 0x800000 - Long term DMA (True)
  max_xfer_size 0x100000 - Maximum Transfer Size (True) (16MB DMA)
  num_cmd_elems 200 - Maximum number of COMMANDS to queue to the adapter (True)
  pref_alpa 0x1 - Preferred AL_PA (True)
  sw_fc_class 2 - FC Class for Fabric (True)
Changes I often make (test first):
  max_xfer_size 0x200000 - gives a 128MB DMA area for data I/O
  num_cmd_elems 1024 - often I raise this to 2048; check with your disk vendor
lg_term_dma is the DMA area for control I/O. Check these are OK with your disk vendor!!!
chdev -l fcs0 -a max_xfer_size=0x200000 -a num_cmd_elems=1024 -P
chdev -l fcs1 -a max_xfer_size=0x200000 -a num_cmd_elems=1024 -P
At AIX 6.1 TL2 and later, VFCs will always use a 128MB DMA memory area even with the default max_xfer_size - I change it anyway for consistency.
As of AIX v7.1 tl2 (or 6.1 tl8) there is an effective num_cmd_elems limit of 256 for VFCs - see 01.ibm.com/support/docview.wss?uid=isg1IV63282. This limitation was lifted for NPIV and the maximum is now 2048, provided you are at 6.1 tl9 (IV76258), 7.1 tl3 (IV76968) or 7.1 tl4 (IV76270).
Remember to make changes to both VIO servers and the client LPARs if using NPIV; the VIO server setting must be at least as large as the client setting.

HBA max_xfer_size
The default is 0x100000 /* default io_dma of 16MB */.
After that, 0x200000, 0x400000 or 0x800000 gets you 128MB.
Above that the driver checks the bus type, and you may get 256MB or 128MB.
There are also some adapters that support very large max_xfer sizes, which can possibly allocate 512MB.
VFC adapters inherit this from the physical adapter (generally).
Unless you are driving really large I/Os, max_xfer_size is rarely changed.

fcstat -D Output
lsattr -El fcs8: lg_term_dma, max_xfer_size (hex values not captured), num_cmd_elems 2048.
fcstat -D fcs8
FIBRE CHANNEL STATISTICS REPORT: fcs8
...
FC SCSI Adapter Driver Queue Statistics:
  High water mark of active commands: 512
  High water mark of pending commands: 104
FC SCSI Adapter Driver Information:
  No DMA Resource Count: 0
  No Adapter Elements Count: (large)
  No Command Resource Count: 0
Adapter Effective max transfer value: (not captured)
(Some lines removed to save space.)
Per Dan Braden: set num_cmd_elems to at least high-water active + high-water pending, here 512 + 104 = 616.

Tunables

Starter set of I/O tunables
The parameters below should be reviewed and changed as needed.
PBUFS: use the new way - the lvmo command.
JFS2:
ioo -p -o j2_maxpagereadahead=128 - the default may need to be changed for sequential I/O; dynamic. The difference between minfree and maxfree should be greater than this value.
ioo -p -o j2_dynamicbufferpreallocation=16 - the maximum is 256; 16 means 16 x 16k slabs, or 256k. The default may need tuning, but it is dynamic. This replaces tuning j2_nbufferperpagerdevice until it is at the max.

Other Interesting Tunables
These are set as options in /etc/filesystems for the filesystem:
noatime - why write a record every time you read or touch a file? A mount command option; use it for redo and archive logs.
Release behind (throw data out of file system cache): rbr - release behind on read; rbw - release behind on write; rbrw - both.
log=NULL
Read the various AIX Difference Guides: ...bin/searchsite.cgi?query=aix+and+differences+and+guide
When making changes to /etc/filesystems, use chfs to make them stick. (An example follows.)
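For example, to make noatime and release-behind stick via /etc/filesystems (the filesystem name here is hypothetical):

    chfs -a options=noatime,rbrw /archivelogs
    # remount, then verify:
    mount | grep archivelogs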

filemon
Uses trace, so don't forget to STOP the trace.
It can provide the following information: CPU utilization during the trace; most active files; most active segments; most active logical volumes; most active physical volumes; most active files process-wise; most active files thread-wise.
Sample script to run it:
  filemon -v -o abc.filemon.txt -O all -T ... ; sleep 60 ; trcstop
OR
  filemon -v -o abc.filemon2.txt -O pv,lv -T ... ; sleep 60 ; trcstop
(The -T trace buffer size argument was not captured.)

filemon -v -O pv,lv - Most Active Logical Volumes
(util / #rblk / #wblk / KB/s per volume for /dev/gandalfp_ga71_lv (/ga71), /dev/gandalfp_ga73_lv (/ga73), /dev/misc_gm10_lv (/gm10), /dev/gandalfp_ga15_lv (/ga15), /dev/gandalfp_ga10_lv (/ga10), /dev/misc_gm15_lv (/gm15), /dev/misc_gm73_lv (/gm73), /dev/gandalfp_ga20_lv (/ga20), /dev/misc_gm72_lv (/gm72), /dev/misc_gm71_lv (/gm71). Numeric columns were not captured.)

filemon -v -O pv,lv - Most Active Physical Volumes
(util / #rblk / #wblk / KB/s per volume for /dev/hdisk20, hdisk21, hdisk22, hdisk97, hdisk99, hdisk12 and hdisk102 - all MPIO FC devices. Numeric columns were not captured.)

Asynchronous I/O and Concurrent I/O

Async I/O - AIX v6 and v7
No more smit panels and no AIO servers started at boot; kernel extensions are loaded at boot.
AIO servers go away if there is no activity for 300 seconds.
Normally you only need to tune maxreqs.
ioo -a -F | more (AIO section):
  aio_active = 0, aio_maxservers = 30, aio_minservers = 3, aio_server_inactivity = 300 (aio_maxreqs value not captured)
  posix_aio_active = 0, posix_aio_maxservers = 30, posix_aio_minservers = 3, posix_aio_server_inactivity = 300 (posix_aio_maxreqs value not captured)
  Restricted tunables: aio_fastpath = 1, aio_fsfastpath = 1, aio_kprocprio = 39, aio_multitidsusp = 1, aio_sample_rate = 5, aio_samples_per_cycle = 6, posix_aio_fastpath = 1, posix_aio_fsfastpath = 1, posix_aio_kprocprio = 39, posix_aio_sample_rate = 5, posix_aio_samples_per_cycle = 6
pstat -a | grep aio
  22 a 1608e ... aioppool
  24 a 1804a ... aiolpool
You may see some aioservers on a busy system.
The maxservers values are per LCPU, so for lcpu=10 and maxservers=100 you get 1000 aioservers.
AIO applies to both raw I/O and file systems. Grow maxservers as you need to.

PROCAIO tab in nmon
The maximum seen was 192 aioservers, but the average was much less.
(chart placeholder)

DIO and CIO
DIO - Direct I/O
Around since AIX v5.1, and also in Linux. Used with JFS; CIO is built on it.
Effectively bypasses filesystem caching to bring data directly into application buffers.
Does not like compressed JFS or BF (large file enabled) filesystems - performance will suffer due to the requirement for 128kb I/O (after 4MB).
Reduces CPU and eliminates the overhead of copying data twice. Reads are asynchronous.
No filesystem readahead, no lrud or syncd overhead, no double buffering of data.
Inode locks are still used. Benefits heavily random access workloads.

CIO - Concurrent I/O
AIX only, not in Linux; only available in JFS2.
Allows performance close to raw devices. Designed for apps (such as RDBMSs) that enforce write serialization at the application level.
Allows non-use of inode locks. Implies DIO as well.
Benefits heavy update workloads; speeds up writes significantly. Saves memory and CPU by avoiding double copies.
No filesystem readahead, no lrud or syncd overhead, no double buffering of data.
Not all apps benefit from CIO and DIO - some are better with filesystem caching, and some are safer that way.
When to use it: database .dbf files, redo logs, control files and flashback log files - NOT Oracle binaries or archive log files. (A mount example follows.)
You can get stats using the vmstat -IW flags.
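A minimal example of mounting a JFS2 filesystem with CIO for redo logs; /ora/redo is a hypothetical filesystem, and you should test with your database before rolling this out:

    mount -o cio /ora/redo           # one-off mount with CIO
    chfs -a options=cio /ora/redo    # persist the option in /etc/filesystems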

Demoted I/O
Check the w column in vmstat -IW.
A CIO write fails (is demoted) because the I/O is not aligned to the FS blocksize, i.e. the app writes 512-byte blocks but the FS has 4096-byte blocks; the write ends up getting redone.
Demoted I/O consumes more kernel CPU and more physical I/O.
To find demoted I/O (if JFS2):
  trace -aj 59B,59C ; sleep 2 ; trcstop ; trcrpt -o directio.trcrpt
  grep -i demoted directio.trcrpt
Look in the report for the demoted I/Os.

Flash Cache
A read-only cache using SSDs. Reads will be processed from the SSDs; writes go direct to the original storage device.
Current limitations: only 1 cache pool and 1 cache partition, and it takes time for the cache to warm up.
See Nigel Griffiths' blog and Manoj Kumar's article (links not captured).
Prereqs: server-attached SSDs, or flash attached using SAS or from the SAN; AIX 7.1 tl4 sp2 or 7.2 tl0 sp0 minimum; a minimum of 4GB extra memory for every LPAR that has cache enabled. Cache devices are owned either by an LPAR or a VIO server.
Do not use this if your SAN disk is already front-ended by flash.
Ensure the following filesets are installed - lslpp -l | grep Cache (on a 7.2 system):
  bos.pfcdd.rte COMMITTED Power Flash Cache
  cache.mgt.rte COMMITTED AIX SSD Cache Device

Flash Cache Diagram
(Diagram taken from Nigel Griffiths' blog.)

Flash Cache Monitoring
cache_mgt monitor start / cache_mgt monitor stop
cache_mgt get -h -s: gets stats since caching started, but with no average - it lists stats for every source disk, so if you have 88 of them it is a very long report.
pfcras -a dump_stats: undocumented, but provides the same statistics averaged for the last 60 and 3600 seconds. It provides an overall average, then the stats for each disk.
The meaning of the statistics fields can be found in the IBM documentation.
Known Problems
cache_mgt list gets a core dump if there are >70 disks in the source - IV93772 (ibm.com/support/docview.wss?crawler=1&uid=isg1IV93772).
PFC_CAC_NOMEM error - IV91971 (ibm.com/support/docview.wss?uid=isg1IV91971): D47E07BC P U ETCACHE NOT ENOUGH MEMORY TO ALLOCATE. I saw this when I had 88 source disks and went from 4 to 8 target SSDs; the system had 512GB memory and only 256GB was in use. We were waiting on an ifix when we upgraded to AIX 7.2 tl01 sp2 - IJ00115s2a is the new ifix.
PFC_CAC_DASTOOSLOW - claims DAS DEVICE IS SLOWER THAN SAN DEVICE. This is on hold till we install the apar for the PFC_CAC_NOMEM error.
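Setup is roughly the sequence below, per my reading of the cache_mgt documentation - treat it as a sketch and verify the syntax against your AIX level's man page; all device, pool and partition names are examples:

    cache_mgt pool create -d hdisk10 -p cmpool0          # SSD becomes the cache pool
    cache_mgt partition create -p cmpool0 -s 80G -P cm1part1
    cache_mgt partition assign -t hdisk2 -P cm1part1     # hdisk2 = SAN source disk
    cache_mgt cache start -t hdisk2                      # begin caching reads
    cache_mgt monitor start                              # start statistics collection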

Flash Cache - pfcras output
(screenshot placeholder)

OTHER

Parameter Settings - Summary
(The per-version default columns for AIX v5.3/v6/v7 were not captured; the "SET ALL TO" values match the starter sets earlier in this deck.)
NETWORK (no):
  rfc1323 - set to 1
  tcp_sendspace - set to 262144 (1Gb)
  tcp_recvspace - set to 262144 (1Gb)
  udp_sendspace - set to 65536
  udp_recvspace - set to 655360
MEMORY (vmo):
  minperm% - set to 3
  maxperm% - set to 90 (JFS, NFS, VxFS, JFS2)
  maxclient% - set to 90 (JFS2, NFS)
  lru_file_repage - set to 0
  lru_poll_interval - set to 10
  minfree - use the calculation shown earlier
  maxfree - use the calculation shown earlier
  page_steal_method - 0 on v5.3, 0/1 (TL-dependent) on v6, 1 on v7 - set to 1
JFS2 (ioo):
  j2_maxpagereadahead - as needed
  j2_dynamicbufferpreallocation - as needed
For AIX v6 and v7, memory settings should be left at the defaults; minfree and maxfree are the exceptions.
Do not modify restricted tunables without opening a PMR with IBM first.

Tips to keep out of trouble
Monitor errpt.
Check that the performance apars have all been installed - yes, this means you need to stay current.
Keep firmware up to date. In particular, look at the firmware history for your server to see if there are performance problems fixed. Information on the firmware updates can be found at ibm.com/support/fixcentral/; firmware history, including release dates, is on the POWER8 midrange, POWER8 high-end and POWER9 midrange firmware history pages.
Ensure the software stack is current.
Ensure compilers are current and that compiled code turns on optimization.
To get true MPIO, run the correct multipath software.
Ensure the system is properly architected (VPs, memory, entitlement, etc).
Take a baseline before and after any changes. DOCUMENTATION!
Note: you can't mix 512-byte and 4k disks in a VG.

nmon Monitoring
nmon -ft -AOPV^dMLW -s 15 -c 120 grabs a 30-minute nmon snapshot:
  A is async I/O, M is memory pages, t is top processes, L is large pages, O is SEA on the VIO, P is paging space, V is disk volume group, d is disk service times, ^ is fibre adapter stats, W is workload manager statistics if you have WLM enabled.
If you want a 24-hour nmon, use: nmon -ft -AOPV^dMLW -s 150 -c 576
You may need to enable accounting on the SEA first - this is done on the VIO: chdev -dev ent* -attr accounting=enabled
You can use entstat/seastat or topas/nmon to monitor - this is done on the VIOS: topas -E, nmon -O
The VIOS performance advisor also reports on the SEAs.

Thank you for your time
If you have questions, please email me at: jaqui@circle4.com
Thanks to Joe Armstrong for his awesome work organizing these webinars.

Useful Links
Jaqui Lynch articles: ...lynch/
Jay Kruemke on Twitter: chromeaix
Nigel Griffiths on Twitter: mr_nmon
Gareth Coates on Twitter: power_gaz
Jaqui's YouTube channel - movie replays
IBM US Virtual User Group
Power Systems UK User Group

References
Saravanan Devendran, Processor Utilization in AIX: ...%20Systems/page/Understanding%20CPU%20utilization%20on%20AIX
SG24 Redbook: PowerVM Virtualization - Introduction and Configuration
SG24 Redbook: PowerVM Virtualization - Managing and Monitoring
SG24 Redbook: Power Systems Performance Optimization
Tip on Maximizing the Value of P7 and P7+ through Tuning and Optimization
Dan Braden, queue depth articles: 03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD... (document numbers truncated)

Backup Slides

Logical Processors
Logical processors represent SMT threads.
(Diagram: with SMT on, vmstat shows lcpu=4 for a 2-core LPAR; with SMT off, lcpu=2. Logical processors (L) map onto virtual processors (V), which the hypervisor dispatches onto physical cores. Two dedicated LPARs map their VPs directly onto 2 cores each; two shared LPARs, one with V=0.6+0.6, PU=1.2, weight=128 and one with V=0.4+0.4, PU=0.8, weight=192, share cores from the pool. The layering is Logical (SMT threads) -> Virtual -> Physical.)


More information

OS-caused Long JVM Pauses - Deep Dive and Solutions

OS-caused Long JVM Pauses - Deep Dive and Solutions OS-caused Long JVM Pauses - Deep Dive and Solutions Zhenyun Zhuang LinkedIn Corp., Mountain View, California, USA https://www.linkedin.com/in/zhenyun Zhenyun@gmail.com 2016-4-21 Outline q Introduction

More information

Live Partition Mobility

Live Partition Mobility Live Partition Mobility Jaqui Lynch lynchj@forsythe.com Presentation at: http://www.circle4.com/forsythe/lpm2014.pdf 1 1 Why Live Partition Mobility? LPM LIVE PARTITION MOBILITY Uses Server Consolidation

More information

IBM Exam A Virtualization Technical Support for AIX and Linux Version: 6.0 [ Total Questions: 93 ]

IBM Exam A Virtualization Technical Support for AIX and Linux Version: 6.0 [ Total Questions: 93 ] s@lm@n IBM Exam A4040-101 Virtualization Technical Support for AIX and Linux Version: 6.0 [ Total Questions: 93 ] IBM A4040-101 : Practice Test Question No : 1 Which of the following IOS commands displays

More information

Vendor: IBM. Exam Code: C Exam Name: Power Systems with POWER8 Scale-out Technical Sales Skills V1. Version: Demo

Vendor: IBM. Exam Code: C Exam Name: Power Systems with POWER8 Scale-out Technical Sales Skills V1. Version: Demo Vendor: IBM Exam Code: C9010-251 Exam Name: Power Systems with POWER8 Scale-out Technical Sales Skills V1 Version: Demo QUESTION 1 What is a characteristic of virtualizing workloads? A. Processors are

More information

IBM SYSTEM POWER7. PowerVM. Jan Kristian Nielsen Erik Rex IBM Corporation

IBM SYSTEM POWER7. PowerVM. Jan Kristian Nielsen Erik Rex IBM Corporation IBM SYSTEM POWER7 PowerVM Jan Kristian Nielsen jankn@dk.ibm.com - +45 28803310 Erik Rex Rex@dk.ibm.com - +45 28803326 PowerVM: Virtualization Without Limits Reduces IT infrastructure costs Improves service

More information

OPS-23: OpenEdge Performance Basics

OPS-23: OpenEdge Performance Basics OPS-23: OpenEdge Performance Basics White Star Software adam@wss.com Agenda Goals of performance tuning Operating system setup OpenEdge setup Setting OpenEdge parameters Tuning APWs OpenEdge utilities

More information

IBM EXAM QUESTIONS & ANSWERS

IBM EXAM QUESTIONS & ANSWERS IBM 000-106 EXAM QUESTIONS & ANSWERS Number: 000-106 Passing Score: 800 Time Limit: 120 min File Version: 38.8 http://www.gratisexam.com/ IBM 000-106 EXAM QUESTIONS & ANSWERS Exam Name: Power Systems with

More information

Technical Notes IBM Oracle International Competency Center (ICC)

Technical Notes IBM Oracle International Competency Center (ICC) Technical Notes IBM Oracle International Competency Center (ICC) August 22, 2012 email address: IBM POWER7 AIX and Oracle Database performance considerations Introduction. This document is intended to

More information

OPERATING SYSTEM. Chapter 9: Virtual Memory

OPERATING SYSTEM. Chapter 9: Virtual Memory OPERATING SYSTEM Chapter 9: Virtual Memory Chapter 9: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating Kernel Memory

More information

DB2 is a complex system, with a major impact upon your processing environment. There are substantial performance and instrumentation changes in

DB2 is a complex system, with a major impact upon your processing environment. There are substantial performance and instrumentation changes in DB2 is a complex system, with a major impact upon your processing environment. There are substantial performance and instrumentation changes in versions 8 and 9. that must be used to measure, evaluate,

More information

Performance Sentry VM Provider Objects April 11, 2012

Performance Sentry VM Provider Objects April 11, 2012 Introduction This document describes the Performance Sentry VM (Sentry VM) Provider performance data objects defined using the VMware performance groups and counters. This version of Performance Sentry

More information

First-In-First-Out (FIFO) Algorithm

First-In-First-Out (FIFO) Algorithm First-In-First-Out (FIFO) Algorithm Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1 3 frames (3 pages can be in memory at a time per process) 15 page faults Can vary by reference string:

More information

Chapter 10: Mass-Storage Systems

Chapter 10: Mass-Storage Systems COP 4610: Introduction to Operating Systems (Spring 2016) Chapter 10: Mass-Storage Systems Zhi Wang Florida State University Content Overview of Mass Storage Structure Disk Structure Disk Scheduling Disk

More information

IBM V7000 Unified R1.4.2 Asynchronous Replication Performance Reference Guide

IBM V7000 Unified R1.4.2 Asynchronous Replication Performance Reference Guide V7 Unified Asynchronous Replication Performance Reference Guide IBM V7 Unified R1.4.2 Asynchronous Replication Performance Reference Guide Document Version 1. SONAS / V7 Unified Asynchronous Replication

More information

Parallels Virtuozzo Containers

Parallels Virtuozzo Containers Parallels Virtuozzo Containers White Paper Parallels Virtuozzo Containers for Windows Capacity and Scaling www.parallels.com Version 1.0 Table of Contents Introduction... 3 Resources and bottlenecks...

More information

Reducing Hit Times. Critical Influence on cycle-time or CPI. small is always faster and can be put on chip

Reducing Hit Times. Critical Influence on cycle-time or CPI. small is always faster and can be put on chip Reducing Hit Times Critical Influence on cycle-time or CPI Keep L1 small and simple small is always faster and can be put on chip interesting compromise is to keep the tags on chip and the block data off

More information

IBM p5 and pseries Enterprise Technical Support AIX 5L V5.3. Download Full Version :

IBM p5 and pseries Enterprise Technical Support AIX 5L V5.3. Download Full Version : IBM 000-180 p5 and pseries Enterprise Technical Support AIX 5L V5.3 Download Full Version : https://killexams.com/pass4sure/exam-detail/000-180 A. The LPAR Configuration backup is corrupt B. The LPAR Configuration

More information

!! What is virtual memory and when is it useful? !! What is demand paging? !! When should pages in memory be replaced?

!! What is virtual memory and when is it useful? !! What is demand paging? !! When should pages in memory be replaced? Chapter 10: Virtual Memory Questions? CSCI [4 6] 730 Operating Systems Virtual Memory!! What is virtual memory and when is it useful?!! What is demand paging?!! When should pages in memory be replaced?!!

More information

Best Practices. Deploying Optim Performance Manager in large scale environments. IBM Optim Performance Manager Extended Edition V4.1.0.

Best Practices. Deploying Optim Performance Manager in large scale environments. IBM Optim Performance Manager Extended Edition V4.1.0. IBM Optim Performance Manager Extended Edition V4.1.0.1 Best Practices Deploying Optim Performance Manager in large scale environments Ute Baumbach (bmb@de.ibm.com) Optim Performance Manager Development

More information

Oracle Database on AIX - Key Ingredients for a Successful Relationship

Oracle Database on AIX - Key Ingredients for a Successful Relationship Oracle Database on AIX - Key Ingredients for a Successful Relationship... and improved communication between System Administrator, DBA and SAN team Ralf Schmidt-Dannert Executive IT Specialist, IBM 9/18/2012

More information

... Tuning AIX for Oracle Hyperion and Essbase Products Support documentation for Oracle Service.

... Tuning AIX for Oracle Hyperion and Essbase Products Support documentation for Oracle Service. Tuning AIX for Oracle Hyperion and Essbase Products Support documentation for Oracle Service......... Jubal Kohlmeier IBM STG Oracle Applications Enablement November 2013 Copyright IBM Corporation, 2013.

More information

Version 5.6 May, 2004 IBM / Oracle International Competency Center San Mateo, California IBM

Version 5.6 May, 2004 IBM / Oracle International Competency Center San Mateo, California IBM Oracle 9i on IBM AIX5L Tips & Considerations Version 5.6 May, 2004 IBM / Oracle International Competency Center San Mateo, California IBM Contents Introduction...3 Operating system Considerations...4 Maintenance-Level

More information

MySQL Performance Optimization and Troubleshooting with PMM. Peter Zaitsev, CEO, Percona

MySQL Performance Optimization and Troubleshooting with PMM. Peter Zaitsev, CEO, Percona MySQL Performance Optimization and Troubleshooting with PMM Peter Zaitsev, CEO, Percona In the Presentation Practical approach to deal with some of the common MySQL Issues 2 Assumptions You re looking

More information

Power Systems with POWER8 Scale-out Technical Sales Skills V1

Power Systems with POWER8 Scale-out Technical Sales Skills V1 Power Systems with POWER8 Scale-out Technical Sales Skills V1 1. An ISV develops Linux based applications in their heterogeneous environment consisting of both IBM Power Systems and x86 servers. They are

More information

Technical Documentation Version 7.4. Performance

Technical Documentation Version 7.4. Performance Technical Documentation Version 7.4 These documents are copyrighted by the Regents of the University of Colorado. No part of this document may be reproduced, stored in a retrieval system, or transmitted

More information

Configuring Short RPO with Actifio StreamSnap and Dedup-Async Replication

Configuring Short RPO with Actifio StreamSnap and Dedup-Async Replication CDS and Sky Tech Brief Configuring Short RPO with Actifio StreamSnap and Dedup-Async Replication Actifio recommends using Dedup-Async Replication (DAR) for RPO of 4 hours or more and using StreamSnap for

More information

Lesson 1: Using Task Manager

Lesson 1: Using Task Manager 19-2 Chapter 19 Monitoring and Optimizing System Performance Lesson 1: Using Task Manager Task Manager provides information about the programs and processes running on your computer and the performance

More information

Anthony AWR report INTERPRETATION PART I

Anthony AWR report INTERPRETATION PART I Anthony AWR report INTERPRETATION PART I What is AWR? AWR stands for Automatically workload repository, Though there could be many types of database performance issues, but when whole database is slow,

More information

Oracle on HP Storage

Oracle on HP Storage Oracle on HP Storage Jaime Blasco EMEA HP/Oracle CTC Cooperative Technology Center Boeblingen - November 2004 2003 Hewlett-Packard Development Company, L.P. The information contained herein is subject

More information

Oracle Performance on M5000 with F20 Flash Cache. Benchmark Report September 2011

Oracle Performance on M5000 with F20 Flash Cache. Benchmark Report September 2011 Oracle Performance on M5000 with F20 Flash Cache Benchmark Report September 2011 Contents 1 About Benchware 2 Flash Cache Technology 3 Storage Performance Tests 4 Conclusion copyright 2011 by benchware.ch

More information

Exadata X3 in action: Measuring Smart Scan efficiency with AWR. Franck Pachot Senior Consultant

Exadata X3 in action: Measuring Smart Scan efficiency with AWR. Franck Pachot Senior Consultant Exadata X3 in action: Measuring Smart Scan efficiency with AWR Franck Pachot Senior Consultant 16 March 2013 1 Exadata X3 in action: Measuring Smart Scan efficiency with AWR Exadata comes with new statistics

More information

The Oracle Database Appliance I/O and Performance Architecture

The Oracle Database Appliance I/O and Performance Architecture Simple Reliable Affordable The Oracle Database Appliance I/O and Performance Architecture Tammy Bednar, Sr. Principal Product Manager, ODA 1 Copyright 2012, Oracle and/or its affiliates. All rights reserved.

More information

10 MONITORING AND OPTIMIZING

10 MONITORING AND OPTIMIZING MONITORING AND OPTIMIZING.1 Introduction Objectives.2 Windows XP Task Manager.2.1 Monitor Running Programs.2.2 Monitor Processes.2.3 Monitor System Performance.2.4 Monitor Networking.2.5 Monitor Users.3

More information

Introduction to Linux features for disk I/O

Introduction to Linux features for disk I/O Martin Kammerer 3/22/11 Introduction to Linux features for disk I/O visit us at http://www.ibm.com/developerworks/linux/linux390/perf/index.html Linux on System z Performance Evaluation Considerations

More information

Lesson 2: Using the Performance Console

Lesson 2: Using the Performance Console Lesson 2 Lesson 2: Using the Performance Console Using the Performance Console 19-13 Windows XP Professional provides two tools for monitoring resource usage: the System Monitor snap-in and the Performance

More information

OS and Hardware Tuning

OS and Hardware Tuning OS and Hardware Tuning Tuning Considerations OS Threads Thread Switching Priorities Virtual Memory DB buffer size File System Disk layout and access Hardware Storage subsystem Configuring the disk array

More information

Let s Tune Oracle8 for NT

Let s Tune Oracle8 for NT Let s Tune Oracle8 for NT ECO March 20, 2000 Marlene Theriault Cahill Agenda Scope A Look at the Windows NT system About Oracle Services The NT Registry About CPUs, Memory, and Disks Configuring NT as

More information

Estimate performance and capacity requirements for Access Services

Estimate performance and capacity requirements for Access Services Estimate performance and capacity requirements for Access Services This document is provided as-is. Information and views expressed in this document, including URL and other Internet Web site references,

More information

The Art and Science of Memory Allocation

The Art and Science of Memory Allocation Logical Diagram The Art and Science of Memory Allocation Don Porter CSE 506 Binary Formats RCU Memory Management Memory Allocators CPU Scheduler User System Calls Kernel Today s Lecture File System Networking

More information

Inside the PostgreSQL Shared Buffer Cache

Inside the PostgreSQL Shared Buffer Cache Truviso 07/07/2008 About this presentation The master source for these slides is http://www.westnet.com/ gsmith/content/postgresql You can also find a machine-usable version of the source code to the later

More information

Embedded Systems Dr. Santanu Chaudhury Department of Electrical Engineering Indian Institute of Technology, Delhi

Embedded Systems Dr. Santanu Chaudhury Department of Electrical Engineering Indian Institute of Technology, Delhi Embedded Systems Dr. Santanu Chaudhury Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 13 Virtual memory and memory management unit In the last class, we had discussed

More information

OS and HW Tuning Considerations!

OS and HW Tuning Considerations! Administração e Optimização de Bases de Dados 2012/2013 Hardware and OS Tuning Bruno Martins DEI@Técnico e DMIR@INESC-ID OS and HW Tuning Considerations OS " Threads Thread Switching Priorities " Virtual

More information

1 of 8 14/12/2013 11:51 Tuning long-running processes Contents 1. Reduce the database size 2. Balancing the hardware resources 3. Specifying initial DB2 database settings 4. Specifying initial Oracle database

More information

Actual4Test. Actual4test - actual test exam dumps-pass for IT exams

Actual4Test.   Actual4test - actual test exam dumps-pass for IT exams Actual4Test http://www.actual4test.com Actual4test - actual test exam dumps-pass for IT exams Exam : 000-180 Title : P5 and pseries Enterprise Technical Support Aix 5L V5.3 Vendors : IBM Version : DEMO

More information

TA7750 Understanding Virtualization Memory Management Concepts. Kit Colbert, Principal Engineer, VMware, Inc. Fei Guo, Sr. MTS, VMware, Inc.

TA7750 Understanding Virtualization Memory Management Concepts. Kit Colbert, Principal Engineer, VMware, Inc. Fei Guo, Sr. MTS, VMware, Inc. TA7750 Understanding Virtualization Memory Management Concepts Kit Colbert, Principal Engineer, VMware, Inc. Fei Guo, Sr. MTS, VMware, Inc. Disclaimer This session may contain product features that are

More information

Ref: Chap 12. Secondary Storage and I/O Systems. Applied Operating System Concepts 12.1

Ref: Chap 12. Secondary Storage and I/O Systems. Applied Operating System Concepts 12.1 Ref: Chap 12 Secondary Storage and I/O Systems Applied Operating System Concepts 12.1 Part 1 - Secondary Storage Secondary storage typically: is anything that is outside of primary memory does not permit

More information

HP ProLiant DL380 Gen8 and HP PCle LE Workload Accelerator 28TB/45TB Data Warehouse Fast Track Reference Architecture

HP ProLiant DL380 Gen8 and HP PCle LE Workload Accelerator 28TB/45TB Data Warehouse Fast Track Reference Architecture HP ProLiant DL380 Gen8 and HP PCle LE Workload Accelerator 28TB/45TB Data Warehouse Fast Track Reference Architecture Based on Microsoft SQL Server 2014 Data Warehouse Fast Track (DWFT) Reference Architecture

More information

Using Transparent Compression to Improve SSD-based I/O Caches

Using Transparent Compression to Improve SSD-based I/O Caches Using Transparent Compression to Improve SSD-based I/O Caches Thanos Makatos, Yannis Klonatos, Manolis Marazakis, Michail D. Flouris, and Angelos Bilas {mcatos,klonatos,maraz,flouris,bilas}@ics.forth.gr

More information

DB2 Warehouse Version 9.5

DB2 Warehouse Version 9.5 DB2 Warehouse Version 9.5 Installation Launchpad GC19-1272-00 DB2 Warehouse Version 9.5 Installation Launchpad GC19-1272-00 Note: Before using this information and the product it supports, read the information

More information

Switching to Innodb from MyISAM. Matt Yonkovit Percona

Switching to Innodb from MyISAM. Matt Yonkovit Percona Switching to Innodb from MyISAM Matt Yonkovit Percona -2- DIAMOND SPONSORSHIPS THANK YOU TO OUR DIAMOND SPONSORS www.percona.com -3- Who We Are Who I am Matt Yonkovit Principal Architect Veteran of MySQL/SUN/Percona

More information

OpenVMS Performance Update

OpenVMS Performance Update OpenVMS Performance Update Gregory Jordan Hewlett-Packard 2007 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice Agenda System Performance Tests

More information

Experiences with VIOS support for IBM i

Experiences with VIOS support for IBM i Experiences with VIOS support for IBM i Pete Stephen, Power Systems / AIX Architect Sirius Computer Solutions Pete.Stephen@siriuscom.com Agenda VIO overview Virtualization Trends. PowerVM and how do I

More information

Measuring VMware Environments

Measuring VMware Environments Measuring VMware Environments Massimo Orlando EPV Technologies In the last years many companies adopted VMware as a way to consolidate more Windows images on a single server. As in any other environment,

More information

NEC Express5800 A2040b 22TB Data Warehouse Fast Track. Reference Architecture with SW mirrored HGST FlashMAX III

NEC Express5800 A2040b 22TB Data Warehouse Fast Track. Reference Architecture with SW mirrored HGST FlashMAX III NEC Express5800 A2040b 22TB Data Warehouse Fast Track Reference Architecture with SW mirrored HGST FlashMAX III Based on Microsoft SQL Server 2014 Data Warehouse Fast Track (DWFT) Reference Architecture

More information

Chapter 9: Virtual Memory

Chapter 9: Virtual Memory Chapter 9: Virtual Memory Silberschatz, Galvin and Gagne 2013 Chapter 9: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating

More information