Characterizing IBM Power Systems POWER7+ and Solid State Drive Performance with Oracle's JD Edwards EnterpriseOne


John Brock, Dan Sundt
IBM Rochester
March 2014

Copyright IBM Corporation, 2014. All Rights Reserved. All trademarks or registered trademarks mentioned herein are the property of their respective holders.

Table of contents

Change history
Abstract
Introduction
Test design
Test environment
Interactive scalability
   Conclusions
Comparison of short and long-running batch impact
   Conclusions
Summary
Resources
   IBM i and IBM Power Systems information
   IBM and JD Edwards EnterpriseOne whitepapers
About the authors
Appendix
   Appendix 1 - CPU utilization
   Appendix 2 - CPU utilization by job
   Appendix 3 - Memory page demand
   Appendix 4 - Jobs created/destroyed
   Appendix 5 - Physical disk I/O rates
   Appendix 6 - Average device operations rate
   Appendix 7 - List of transactions used
Trademarks and special notices

Change history

Version   Date         Editor       Editing description
1.0       09/10/2013   John Brock   Original
1.1       11/07/2013   Dan Sundt    Editing and new content
1.2       01/03/2014   John Brock   Revised for POWER7+ and SSDs
1.3       02/13/2014   Dan Sundt    Editing
1.4       02/20/2014   John Brock   Added tables of transactions used

Abstract

This paper studies the performance of a system whose disk storage consists entirely of solid state drives (SSDs): an IBM Power Systems model 740 server with the latest POWER7+ processor technology, version 7.1 of the IBM i operating system, and Oracle's JD Edwards EnterpriseOne application software. Comparisons are made to a previously published study of an IBM Power Systems model 740 server with POWER7 processor technology and conventional disk drives (Oracle's JD Edwards EnterpriseOne IBM POWER7 performance characterization, D. Webster, January 2012, http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102069).

The intended audience for this paper is system administrators operating a JD Edwards EnterpriseOne environment on a server running the IBM i operating system.

Introduction

This paper presents the results of performance characterization tests based on the combination of an IBM Power Systems model 740 server with the latest POWER7+ processor technology, the IBM i 7.1 operating system, IBM Technology for Java with the Java Development Kit (JDK) 1.6 32-bit JVM, IBM WebSphere Application Server Network Deployment (ND) edition version 7.0, and JD Edwards EnterpriseOne applications version 9.0.2 with tools version 8.98.4. The tests were conducted on a partition of twelve cores and 192 GB of memory to match the system used in the previous study.

This paper explores performance differences between the POWER7 and POWER7+ processors and between conventional hard disk drives (HDDs) and SSDs, in terms of response time and throughput for the JD Edwards EnterpriseOne application, illustrated with the JD Edwards EnterpriseOne Day in the Life (DIL) test kit.

Test design

The first goal of the tests was to compare the interactive performance of the latest POWER7+ processor-based system using 1000, 2000, 3000, and 4000 users to the performance of the previously tested POWER7 processor-based system. The interactive portion of the DIL test kit was used to provide a workload comparable to previously run performance tests.

The second goal of the tests was to compare combined interactive and batch performance to the previously tested system. A load of 4000 interactive users was created using the DIL test kit, and then both short-running and long-running batch workloads (universal batch engines, or UBEs) from the kit were submitted. Runs were made with only short-running UBEs, only long-running UBEs, and a combination of both.

The DIL test kit may not accurately represent a typical customer's production environment; however, it has been used in past performance studies by IBM and thus provides a similar workload for comparison to those previous studies.

The short-running UBEs were submitted with CL programs to a subsystem that allowed eight UBEs to run simultaneously in addition to the submitting CL programs. The short-running UBEs ran at priority 40. The average number of UBEs completed per minute was used as the measurement, counted by querying the JD Edwards EnterpriseOne F986110 table. The long-running UBEs were submitted to the default QBATCH subsystem and ran at priority 50. The long-running UBEs were measured by counting the number of GL post records processed by the R09801 General Ledger Post transaction.
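The completion-rate measurement described above can be sketched with a short helper. This is a minimal illustration only: the actual test counted rows in the F986110 job control table, whose layout is not reproduced here, and the timestamps below are invented sample data.

```python
from datetime import datetime


def ubes_per_minute(completion_times):
    """Average UBE completions per minute over the measurement window,
    given a list of completion timestamps (e.g. extracted from the
    JD Edwards F986110 job control table)."""
    if len(completion_times) < 2:
        raise ValueError("need at least two completions to form a window")
    times = sorted(completion_times)
    elapsed_min = (times[-1] - times[0]).total_seconds() / 60.0
    return len(times) / elapsed_min


# Hypothetical sample: five completions spread over two minutes
stamps = [datetime(2014, 1, 3, 13, 57, 0),
          datetime(2014, 1, 3, 13, 57, 30),
          datetime(2014, 1, 3, 13, 58, 0),
          datetime(2014, 1, 3, 13, 58, 30),
          datetime(2014, 1, 3, 13, 59, 0)]
print(ubes_per_minute(stamps))  # 2.5
```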

Test environment

The test used a single IBM Power Systems model 740 server (8205-E6D, feature code EPCR) with sixteen POWER7+ cores running at 4.2 GHz. Twelve cores were allocated to the testing partition. Eight feature code 58B8 SSDs of 387 GB each were allocated to the partition with RAID-5 protection. The system had 256 GB of memory, with 192 GB allocated to the partition.

The CPW rating of the system was 120,000, or approximately 7,500 CPW per core. Multiplying by the twelve allocated cores gives a calculated partition CPW of approximately 90,000. Usable disk storage was 2,716 GB and varied between 40% and 45% full during the tests.

The JD Edwards EnterpriseOne environment was configured as an all-in-one configuration, with the web, application, batch, and database functions all residing within the same partition. The partition was configured with two memory pools: the machine and base pools. WebSphere Application Server fix pack 7.0.0.27 was installed, and the system was at the latest available IBM i 7.1 cumulative and group fix packs.

Table 1 below shows a summary of the test environment configuration.

                                 POWER7+ system      Twelve-core          Compared
                                 (740, 8205-E6D,     testing              POWER7
                                 16-core)            partition            system
Processor technology             POWER7+             POWER7+              POWER7
Clock rate                       4.2 GHz             4.2 GHz              3.7 GHz
CPW                              120,000             90,000               77,200
CPW/core                         7,500               7,500                6,433
Memory                           256 GB              192 GB               192 GB
Internal disk arms, SSD          8                                        8
Internal disk arms, HDD          0                                        48
Operating system                 IBM i 7.1                                IBM i 7.1
WebSphere Application Server ND  7.0.0.27                                 7.0.0.27
EnterpriseOne application/tools  9.0.2/8.98.4                             9.0.2/8.98.4

Table 1. Test environment hardware and software configuration

Interactive scalability

To measure the scalability of the server with an interactive user workload, runs were made with 1000, 2000, 3000, and 4000 users. The runs used a static configuration sized for 4000 users. Call object kernels were sized at one kernel per 25 users (160 kernels). WebSphere ND was used to create an eight-node cluster corresponding to eight JVMs, for a ratio of 500 users per JVM. JVM heap memory was a minimum of 436 MB and a maximum of 1,744 MB.
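The sizing arithmetic used above (per-core CPW, call object kernels, and JVM count) can be reproduced with a short script. The ratios of 25 users per kernel and 500 users per JVM are the values used in this study, not general recommendations, and the helper name is illustrative only.

```python
def partition_sizing(system_cpw, system_cores, partition_cores,
                     users, users_per_kernel=25, users_per_jvm=500):
    """Reproduce this study's sizing arithmetic for a testing partition."""
    cpw_per_core = system_cpw / system_cores
    return {
        "cpw_per_core": cpw_per_core,
        "partition_cpw": cpw_per_core * partition_cores,
        "call_object_kernels": users // users_per_kernel,
        "jvms": users // users_per_jvm,
    }


# Values from this paper: 16-core 120,000 CPW system, 12-core partition,
# 4000 interactive users
sizing = partition_sizing(system_cpw=120_000, system_cores=16,
                          partition_cores=12, users=4000)
print(sizing)
# {'cpw_per_core': 7500.0, 'partition_cpw': 90000.0,
#  'call_object_kernels': 160, 'jvms': 8}
```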

Interactive user workload results are shown in Figure 1 below. The interactive response times at these user levels are essentially the same for both systems.

Response times for interactive user load (seconds)

Concurrent users     1000     2000     3000     4000
740 POWER7 HDD       0.085    0.091    0.118    0.127
740 POWER7+ SSD      0.110    0.105    0.106    0.109

Figure 1: POWER7 and POWER7+ JD Edwards EnterpriseOne interactive response times

Figure 2 shows the CPU utilization of the major components of the JD Edwards EnterpriseOne all-in-one environment. The system was lightly loaded at these user levels, with a maximum of approximately 30% CPU utilization at 4000 users. The constant slope of the line graphing maximum CPU utilization indicates very linear scaling.

The breakout of CPU utilization was done using IBM i Collection Services data and iDoctor reports. A CPU-utilization-by-generic-job-name report was used to summarize the data. The logic category was all jobs beginning with JDENET*; database was all jobs beginning with QSQSRV*; web was the jobs beginning with the name of the WebSphere ND application servers (in this case AS_695*) plus the HTTP servers (in this case ND695*).

As can be seen in the graph, the JDE kernels are always the top consumer, followed closely by the web component, 85-90% of which is the WebSphere ND application server jobs. The database jobs are a significantly smaller portion of the overall CPU consumption.
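The component breakout described above, which groups CPU consumption by generic job-name prefix, can be sketched as follows. The prefix-to-component mapping comes from this paper; the job names and CPU figures in the sample are hypothetical, standing in for real IBM i Collection Services data.

```python
from collections import defaultdict

# Prefix -> component mapping used in this study's breakout
COMPONENTS = {
    "JDENET": "logic",     # JD Edwards kernel jobs
    "QSQSRV": "database",  # SQL server jobs
    "AS_695": "web",       # WebSphere ND application servers
    "ND695": "web",        # HTTP servers
}


def cpu_by_component(samples):
    """Sum CPU by component, given (job_name, cpu_seconds) pairs."""
    totals = defaultdict(float)
    for job, cpu in samples:
        component = next((c for prefix, c in COMPONENTS.items()
                          if job.startswith(prefix)), "other")
        totals[component] += cpu
    return dict(totals)


# Hypothetical extract of per-job CPU seconds
samples = [("JDENET_K01", 120.0), ("JDENET_K02", 115.0),
           ("QSQSRVR01", 30.0), ("AS_695_N1", 95.0), ("ND695HTTP", 12.0)]
print(cpu_by_component(samples))
# {'logic': 235.0, 'database': 30.0, 'web': 107.0}
```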

Figure 2: POWER7+ with SSDs interactive workload CPU utilization by component (12-core Power 740 POWER7+ server with SSDs; web, logic, DB, and total CPU utilization for 1000 to 4000 concurrent interactive users)

Figure 3: POWER7 and POWER7+ interactive workload CPU utilization comparison (1000 to 4000 concurrent users)

Figure 3 above compares the total CPU utilization of the two systems. Although there was no discernible response time difference between the systems, there is a significant difference in CPU utilization. As can be seen in Figure 3, the POWER7+ system with SSDs uses much less of the available CPU capacity to perform the same work: 60% to 70% of what would be required on the previously tested POWER7 system.

Conclusions

The interactive scenario shows:

- As the number of users increased, we observed a near-constant response time of 0.1 seconds. This is approximately the same as the previously tested system, which suggests some minimum response time is being approached for single interactive transactions.
- Increases in interactive load scale linearly, as was seen in the earlier study.
- The POWER7+ system with SSDs can process the same load at much lower CPU utilization. We believe this is mostly attributable to the SSDs; if the higher clock speed of the POWER7+ processor were the major differentiator, we would have expected an improvement in average response time.

Comparison of short and long-running batch impact

All batch scenarios were run with a base load of 4000 interactive users. Four separate scenarios were run:

- 8 short-running UBEs
- 8 long-running UBEs
- 4 short-running and 4 long-running UBEs
- 8 short-running and 8 long-running UBEs

The system was configured to run the QSQSRVR database server jobs in the application subsystems, separating the QSQSRVR jobs serving the interactive workload, the short-running UBEs, and the long-running UBEs. This was necessary so that the QSQSRVR jobs ran at the priority of the job they served (20 for interactive, 40 for the short UBEs, and 50 for the long UBEs). During tuning runs it was observed that interactive performance was harmed by batch-job database activity when all QSQSRVR jobs ran in QSYSWRK at priority 20.

Figure 4: POWER7+ with SSDs JD Edwards EnterpriseOne combined workload performance (12-core Power 740 POWER7+ server with SSDs; CPU utilization of the batch, logic, and web components and average response time for the interactive-only, 8 short UBE, 8 long UBE, 4 short and 4 long UBE, and 8 short and 8 long UBE scenarios, each with 4000 interactive users)

Figure 4 above illustrates the results of the various scenarios. Interactive response time was between 0.10 and 0.14 seconds, and was higher in scenarios with short UBEs and higher CPU utilization. This negative impact appears to be due to the short UBE workload, as scenarios with higher CPU utilization but no short UBE workload performed approximately the same as the purely interactive workload. The short UBEs submit more transactions than the long UBEs, making the logic component slightly larger in short UBE scenarios and likely impacting interactive transaction scheduling. As would be expected with the constant 4000-user interactive component and the batch workloads submitted by CL programs, the web component is almost constant across the five scenarios at 12% of the CPU usage.

Figure 5: Comparison of POWER7+ with SSDs and POWER7 with HDDs for JD Edwards EnterpriseOne combined workloads (total CPU utilization and interactive response time for the 4 short/4 long and 8 short/8 long UBE scenarios with 4000 interactive users)

Figure 5 above compares POWER7+ with SSDs interactive transaction response time and total CPU utilization for two mixed-workload cases to the previously tested POWER7 system with hard disk drives. Interactive response time was reduced by approximately one third, and total CPU utilization was reduced by 10%.

Note that the CPU utilization numbers in Figure 4 are much lower than for the same tests shown in Figure 5. For example, CPU utilization for the 8 short and 8 long-running UBE test was 57% in Figure 4 but 77% in Figure 5. This is due to the nature of the reports used. IBM i Collection Services data was used to obtain the values. For the by-component reports of Figure 4, the data analyzed covers the entire run of the benchmark, including ramp-up and ramp-down. For the total utilization of Figure 5, the values cover the first 60 minutes after ramp-up completed, and are therefore higher and more representative of the actual system load. The component reports are still useful for indicating the relative amount of time spent in each component, and conclusions can be drawn by comparing them to other component reports.
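The gap between a whole-run average and a steady-state average can be illustrated with a small helper. The trace below is invented; the study derived its values from IBM i Collection Services intervals.

```python
def mean_utilization(samples, start=None, end=None):
    """Average CPU utilization over (minute, percent) samples,
    optionally restricted to a [start, end) window of minutes."""
    window = [pct for minute, pct in samples
              if (start is None or minute >= start)
              and (end is None or minute < end)]
    return sum(window) / len(window)


# Invented trace: 10 minutes of light ramp-up, then 60 minutes of steady load
trace = [(m, 20.0) for m in range(10)] + [(m, 77.0) for m in range(10, 70)]

whole_run = mean_utilization(trace)                  # includes ramp-up
steady = mean_utilization(trace, start=10, end=70)   # 60 min after ramp-up
print(round(whole_run, 1), steady)  # 68.9 77.0
```

Including the lightly loaded ramp-up drags the whole-run average well below the steady-state figure, which is why the Figure 5 values are higher than those of Figure 4.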

Figure 6: Comparison of POWER7+ with SSDs and POWER7 with HDDs for JD Edwards EnterpriseOne mixed workloads (short-running UBEs completed per minute and GL post records per minute for the 4 short/4 long and 8 short/8 long UBE scenarios with 4000 interactive users)

Figure 6 above compares the throughput characteristics of the POWER7+ and SSD environment to the previously tested POWER7 system with hard disk drives. Two mixed-transaction scenarios are shown: the first with 4 short-running and 4 long-running UBEs, the second with 8 short-running and 8 long-running UBEs. Both scenarios had a base load of 4000 interactive users. As shown previously in Figure 4, the interactive response time for these scenarios was between 0.10 and 0.14 seconds.

The columns in Figure 6 show the average number of short-running UBEs completed per minute. The lines show the number of records per minute posted by the long-running General Ledger Post transaction. Throughput improved in both scenarios. The number of short-running UBEs completed per minute more than doubled, for example from 39 UBEs per minute to 97 UBEs per minute in the first scenario. The first scenario also processed approximately 50% more long-running transaction records (23,200 compared to 15,800), and this percentage is higher in the second scenario. Fewer total long-running transactions are processed in the second scenario because, as system load increases, the effect of setting the long-running transactions to a lower priority (50) becomes more apparent.
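The improvement figures quoted above follow directly from the measured rates in Figure 6, for example:

```python
def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100.0


# Short-running UBE completions per minute, first scenario (Figure 6)
print(round(pct_change(39, 97)))          # 149 -> more than doubled
# GL post records per minute, first scenario (Figure 6)
print(round(pct_change(15_800, 23_200)))  # 47 -> roughly the "50% more" cited
```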

Conclusions

In these mixed UBE scenarios, the conclusions from the previous POWER7 report still hold:

- The batch workloads had minimal impact on interactive response time. The short-running UBEs had some small effect due to shared transaction processing.
- As CPU utilization increases, UBE throughput is impacted. Short-running UBE throughput continued to increase even though long-running UBE throughput decreased.
- It can be important to use multiple priorities for batch jobs to ensure throughput for critical jobs.

Additionally, new conclusions can be drawn about the POWER7+ and SSD environment:

- Interactive response time is not going to see a large improvement.
- Batch throughput can be improved significantly in an all-SSD environment.
- CPU utilization can be reduced to some extent.

The amount of improvement seen in another environment depends on the profile of the transactions being run. Workloads similar to the JD Edwards EnterpriseOne DIL kit should expect similar results; however, a different transaction mix may yield quite different results.

Summary

The results of this performance characterization reflect the following about JD Edwards EnterpriseOne when run on IBM Power Systems POWER7+ processor-based servers with IBM i:

- The system scales linearly across environments of 1000, 2000, 3000, and 4000 interactive users.
- An interactive response time of less than 0.15 seconds can be maintained with a mixed batch and interactive workload.
- Batch throughput can increase significantly in an all-SSD environment, depending on the transaction mix.
- System CPU utilization is reduced, potentially allowing more users in the same environment or the consolidation of applications onto the same environment.

Resources

IBM i and IBM Power Systems information

- IBM i: http://www.ibm.com/systems/i
- IBM i 7.1 Information Center: http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp
- IBM i on IBM developerWorks: https://www.ibm.com/developerworks/ibmi/
- IBM Power Systems: http://www-03.ibm.com/systems/power
- IBM Power 740 Express server overview: http://www-03.ibm.com/systems/power/hardware/740/index.html
- IBM i Solution Editions: http://www-03.ibm.com/systems/power/hardware/solutioneditions/ibmi/index.html
- IBM i Solution Edition for JD Edwards solution data sheet: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/webindex/prs4143
- JD Edwards EnterpriseOne Solutions from Oracle on IBM i: http://ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/i%20erp1/page/enterpriseone
- IBM Hardware Sizing Questionnaires for JD Edwards applications: http://ibm.com/erp/sizing
- IBM Redbooks: http://www.redbooks.ibm.com

IBM and JD Edwards EnterpriseOne whitepapers

- IBM Power Systems with IBM i single core server tuning guide for JD Edwards EnterpriseOne: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/webindex/wp102059
- IBM Power Systems with IBM i performance and tuning tips for Oracle's JD Edwards EnterpriseOne WebSphere-based HTML servers: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/webindex/wp101777

- IBM Power Systems with IBM i Performance and Tuning Tips for Oracle's JD Edwards EnterpriseOne 9.0: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/webindex/wp101504
- IBM Power Systems with IBM i using Solid State Drives to boost your Oracle's JD Edwards EnterpriseOne performance: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/webindex/wp102061
- Oracle's JD Edwards EnterpriseOne IBM POWER7 performance characterization: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/webindex/wp102069
- IBM i Solution Edition for Oracle's JD Edwards EnterpriseOne performance benchmark results: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/webindex/wp101731
- Oracle's JD Edwards EnterpriseOne scaling with IBM POWER6, POWER7, and IBM i: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/webindex/wp101555

About the authors

John Brock is a Senior Software Engineer and a member of the IBM i Enterprise Resource Planning software development team in Rochester, Minnesota. He works closely with JD Edwards development teams on issues related to the installation, operation, and performance of the JD Edwards World and EnterpriseOne products on IBM i servers. He can be contacted at jcbrock@us.ibm.com.

Dan Sundt has over twenty-three years of experience with AS/400, iSeries, System i, and Power Systems servers running IBM i. He is currently a member of the IBM Americas Advanced Technical Skills (ATS) Solutions Center, specializing in technical sales support for JD Edwards solutions that run on IBM hardware and software. Dan works with customers in areas such as sizing JD Edwards on IBM infrastructure and system architecture and design. He started his career in the IBM Rochester Support Center and has been in the ATS organization since 2000. He can be contacted at dansundt@us.ibm.com.

Appendix

Additional report data is included here to provide comparison data for future study. The reports were produced from IBM i Collection Services data using the iDoctor Collection Services Investigator.

Appendix 1 - CPU utilization

4000 user

4000 user, 8 short UBE, 0 long UBE
Short UBEs started at 13:57; other reports skip the interactive ramp-up and start at that point.

4000 user, 0 short UBE, 8 long UBE
Long UBEs were started at 7:44 and ramped up by 7:47; other reports skip the ramp-up and start at that point.

4000 user, 8 short UBE, 8 long UBE
Short UBEs started at 19:34 and long UBEs at 19:42; other reports skip the ramp-up and start at 19:43.

Appendix 2 - CPU utilization by job

4000 user

4000 user, 8 short UBE, 0 long UBE
The short UBE submission CL programs were running as QDFTJOB.

4000 user, 0 short UBE, 8 long UBE

4000 user, 8 short UBE, 8 long UBE

Appendix 3 - Memory page demand

4000 user

4000 user, 8 short UBE, 0 long UBE
The cycling in demand reflects the batch CL programs that submit the short UBEs being released and held to control the queue of short UBE jobs.

4000 user, 0 short UBE, 8 long UBE

4000 user, 8 short UBE, 8 long UBE

Appendix 4 - Jobs created/destroyed

4000 user

4000 user, 8 short UBE, 0 long UBE

4000 user, 0 short UBE, 8 long UBE

4000 user, 8 short UBE, 8 long UBE

Appendix 5 - Physical disk I/O rates

4000 user

4000 user, 8 short UBE, 0 long UBE

4000 user, 0 short UBE, 8 long UBE

4000 user, 8 short UBE, 8 long UBE

Appendix 6 - Average device operations rate

4000 user

4000 user, 8 short UBE, 0 long UBE

4000 user, 0 short UBE, 8 long UBE

4000 user, 8 short UBE, 8 long UBE

Appendix 7 - List of transactions used

Interactive transactions

All 17 DIL kit transaction scripts were used:

H03B102E (Apply Receipts)
H0411I (Supplier Ledger Inquiry)
H051141E (Daily Time Entry)
H17500E (Case Management Add)
H31114U (W.O. Completion)
H3411AE (MRP Messages (WO Orders))
H3411BE (MRP Messages (OP Orders))
H3411CE (MRP Messages (OT Orders))
H4113E (Inventory Transfer)
H42101E (S.O. Entry 10 line items)
H42101U (S.O. Update)
H4310E (P.O. Entry 25 line items)
H4312U (P.O. Receipts)
H4314U (Voucher Match)
H4915AU (Ship Confirm Approval only)
H4915CE (Ship Confirm Confirm/Ship only)
H4915CU (Ship Confirm Confirm and Change entry)

Short-running UBEs

R0004P (UDC Records Type Print)
R0006P (Business Unit Report)
R00067 (Business Unit Translation Report)
R0008P (Date Patterns Report)
R0010P (Company Constants Report)
R0012P1 (AAI Report)
R0014 (Payment Terms Report)
R0018P (Tax Detail Report)
R00425 (Organization Structure Report)
R01402W (Who's Who Report)
R03B155 (A/R Summary Analysis)
R03B31 (Activity Log Report)
R41411 (Select Items for Count 1 item)
R42072 (Price Category Print)

Long-running UBEs

R09801 (GL Post)
R31410 (Work Order Processing)
R31802A (MFG Acct Journal)
R42520 (Print Pick Slips)
R42565 (Sales Order Invoicing)
R42800 (Sales Order Update)
R43500 (Purchase Order Print)
R4981 (Freight Update)

Trademarks and special notices

Copyright IBM Corporation 2014. All rights reserved.

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

IBM, the IBM logo, AS/400, iSeries, Power, Power Systems, POWER7, POWER7+, System i, and WebSphere are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both.

Java is a registered trademark of Oracle and/or its affiliates. Other company, product, or service names may be trademarks or service marks of others.

The information provided in this document is distributed "AS IS" without any warranty, either express or implied. The information in this document may include technical inaccuracies or typographical errors. All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Photographs shown are of engineering prototypes. Changes may be incorporated in production models.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product, and use of those Web sites is at your own risk.