SAP NetWeaver
SAP ENTERPRISE PORTAL
Scalability Study - Windows

ABOUT SAP ENTERPRISE PORTAL

SAP Enterprise Portal is a key component of the SAP NetWeaver platform. SAP Enterprise Portal is the industry's most comprehensive portal solution, providing a complete portal infrastructure along with bundled knowledge management and collaboration capabilities. It provides people-centric integration of all types of enterprise information, including SAP and non-SAP applications, structured and unstructured data, and Web content. It is based on open standards, such as Web services, and supports Java 2 Platform, Enterprise Edition (J2EE) and Microsoft .NET technology. Business content delivered with SAP Enterprise Portal speeds portal implementation and reduces the cost of integrating existing IT systems.

SAP Enterprise Portal provides employees, supply chain partners, customers, and other user communities with immediate, secure, and role-based access to key information and applications across the extended enterprise. Because information and applications are unified through the portal, users can identify and address business issues faster, more effectively, and at lower cost, creating measurable benefits and strategic advantages.

ABOUT THIS STUDY

A major strength of SAP Enterprise Portal lies in its ability to handle large numbers of users in enterprise-scale implementations. This study comprises tests that check the scalability and stability of the portal on Windows Server. The aim of the study is to demonstrate the linear scalability of SAP Enterprise Portal. The performance tests of the portal on a given hardware platform are based on defined content and user behavior, and measure CPU usage, server response times, and cluster scalability.

TEST CONFIGURATION

Hardware Configuration

System: HP ProLiant BL20p G3 blade servers with dual 3.2 GHz Intel Xeon(TM) processors and 4 GB RAM. All system landscape components are installed on the same platform. For more information, see: http://h184.www1.hp.com/products/servers/proliant-bl/p-class/2p/specifications-g3.html

Network:
- Portal, database, and directory server: 1-Gbit LAN
- Clients: 100-Mbit LAN

Comments: No firewalls or reverse proxies are used. The portal is accessed via HTTP. No load balancer is used. Load generated by Mercury Interactive LoadRunner is addressed directly to the blade units.

Software Configuration

Operating System: Microsoft Windows 2003 Enterprise Server
Database: Oracle 9.2.0.5 on Microsoft Windows 2003
Application Server: SAP J2EE 6.40 SR1 (NW04 SP9)
Portal: SAP Enterprise Portal 6.0 SR1 (NW04 SP9)
Directory Server: Sun ONE 5.1 on a Microsoft Windows 2003 server, 2,000 named users in the test group
JDK: Sun j2sdk 1.4.2_06
Load Generator: Mercury Interactive LoadRunner 8.0

CLUSTER CONFIGURATION

The system landscape contains one central system, four dialog instances (each containing 2 server nodes), one Oracle DB, and one directory server (7 blade servers altogether), all servers with 4 GB RAM.

- 1 central system: Windows, two server nodes; not put under load in the tests
- 4 dialog instances: Windows, two server nodes each
- 1 Oracle DB: Windows 2003 server
- 1 directory server: Sun ONE 5.1 on a Windows 2003 server

SAP Web Application Server (Java) Configuration

Each server node per dialog instance is configured with the following JVM settings (see SAP Note 723909 for information about the JVM settings):

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
-XX:+UseParNewGC
-XX:+DisableExplicitGC
-XX:SoftRefLRUPolicyMSPerMB=1
-XX:NewSize=170M -XX:MaxNewSize=170M
-XX:PermSize=192M -XX:MaxPermSize=192M
-XX:SurvivorRatio=2
-XX:TargetSurvivorRatio=90
-Xms1024M -Xmx1024M

Oracle Database Configuration

It is very important to optimize the Oracle database. Before running the real scenarios, a preliminary test was executed against the Oracle database in order to check whether the configuration was optimal for running 2,000 concurrent users. The preliminary test showed that additional configuration was needed to optimize Oracle performance and to prevent other bottlenecks across the landscape before running the real scenario.

Tuned for 2,000 users:
- Shared Pool: 5MB
- Buffer Cache: 16MB
- PGA: 3MB
- Processes: 6
- Sessions: 65
- dml_locks: 1
- open_cursors: 1

Performance optimizations applied:
- All logs were switched to error level.
- The DSR service was disabled using the offline configuration editor.

It is very important to apply the optimizations listed above, because log and trace severity levels and the DSR service have a great impact on overall system performance.

Additional optimizations (not applied here) that can significantly improve system performance:
- SAP J2EE compression
- SAP J2EE content expiration settings
- JavaScript and CSS optimization
- Keep-Alive configuration
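To translate the JVM settings above into a concrete heap picture, the short sketch below (not part of the original study) applies the standard HotSpot generation-sizing rules to these flags:

public class HeapLayout {
    public static void main(String[] args) {
        final int heapMb = 1024;        // -Xms1024M / -Xmx1024M
        final int newMb = 170;          // -XX:NewSize=170M / -XX:MaxNewSize=170M (about 1/6 of the heap)
        final int survivorRatio = 2;    // -XX:SurvivorRatio=2: eden is 2x each survivor space
        // Young generation = eden + 2 survivor spaces = (ratio + 2) survivor-sized chunks.
        double survivorMb = (double) newMb / (survivorRatio + 2);
        double edenMb = survivorMb * survivorRatio;
        System.out.printf("old generation: %d MB%n", heapMb - newMb);
        System.out.printf("eden: %.1f MB, survivor spaces: 2 x %.1f MB%n", edenMb, survivorMb);
        // -XX:TargetSurvivorRatio=90 lets each survivor space fill to 90 percent
        // before HotSpot adapts the tenuring threshold.
    }
}

Running this prints an 854 MB old generation and an 85 MB eden with two 42.5 MB survivor spaces, which is the layout the portal server nodes ran with.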

GLOSSARY

Response time: a key performance indicator; the time it takes a system to react to a given input. The response time includes the transmission time, the processing time, the time for searching records, and the transmission time back to the originator.

Scalability: the capability to increase resources to yield a linear (ideally) increase in service capacity. The key characteristic of a scalable application is that additional load only requires additional resources, rather than extensive modification of the application itself.

Scale-out scenario: testing scalability of hardware resources while adding more dialog instances on additional machines.

Scale-in scenario: testing scalability of hardware resources while adding more dialog instances on the same machine.

Think time: the time between a page being returned and the next user page request.

Transaction: a group of processing steps that are treated as a single activity to achieve a desired result. The steps in a transaction can be of various types (database queries, client-server events, etc.).

GC: Garbage Collection; a thread that runs to reclaim memory by destroying objects that can no longer be referenced. In this document, GC denotes a minor collection, which recycles memory in the young generation.

FGC: Full Garbage Collection; a collection that recycles memory in both the young and old generations. An FGC usually takes much longer than a minor GC.
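To make the GC/FGC distinction concrete, the following minimal sketch (not part of the study) counts minor and full collections in a log written by the -verbose:gc, -XX:+PrintGCDetails, and -XX:+PrintGCTimeStamps flags used in the test configuration. It keys off the "Full GC" marker that HotSpot prints for major collections and reads the first pause figure on each line, which is close enough for a rough overhead estimate:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLogScan {
    // Matches a pause figure such as "0.0312 secs]" on a collection line.
    private static final Pattern PAUSE = Pattern.compile("([0-9]+\\.[0-9]+) secs\\]");

    public static void main(String[] args) throws Exception {
        int minor = 0, full = 0;
        double minorSecs = 0.0, fullSecs = 0.0;
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            for (String line; (line = in.readLine()) != null; ) {
                Matcher m = PAUSE.matcher(line);
                if (!m.find()) continue;                  // not a collection line
                double secs = Double.parseDouble(m.group(1));
                if (line.contains("Full GC")) { full++; fullSecs += secs; }    // young + old generations
                else if (line.contains("GC")) { minor++; minorSecs += secs; }  // young generation only
            }
        }
        System.out.printf("minor GCs: %d (%.1f s), full GCs: %d (%.1f s)%n",
                minor, minorSecs, full, fullSecs);
    }
}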

TEST SEQUENCE

The Employee Self-Service Portal Scenario (based on the ESS benchmark package)

The Business Package for Employee Self-Service (ESS) is used as the test scenario. It has a number of iViews that allow employees to create, display, and change their own data, such as department, name, and address. This scenario focuses on concurrent users with the employee role (workset Employee Self-Service) and their top-level navigation behavior in the portal. From a technical point of view, this scenario enables us to examine the performance of the portal platform (running on the NetWeaver Java stack) while launching business transactions. The backend influence on performance has been omitted, as there is no connection to a backend system.

Interaction Steps
1. Log on to the portal - Welcome Page (includes 3 iViews)
2. Choose Working Time - Record Working Time
3. Choose Leave Request
4. Choose Leave Request Overview
5. Choose Personal Information - Personal Data
6. Choose Address
7. Choose Bank Information
8. Choose Benefits and Payment - Paycheck Inquiry
9. Log off from the portal

After the user logs on to the portal, a series of transactions with a specific think time (a pause between transactions) is started. At the end of the test sequence, the user logs off.

Transactions in the tests can be divided into two types:
- Business transactions, simulating navigation in the ESS workset: three sets of the seven business transactions are launched per loop (steps 2 to 8, three times).
- Login/logoff transactions, carried out once per loop. The login process is more complicated than the ESS business transactions because it consists of user authentication, static and dynamic content delivery from the server to the client, portal navigation, and the Welcome page with its 3 iViews.

The directory server contains a pool of 2,000 different users, loaded randomly by the LoadRunner script.

Think time 10 seconds: the real think time is randomly spread between 5 and 15 seconds; thus one loop contains 2 + 7*3 = 23 transactions and takes between 23*5 and 23*15 seconds (assuming that transaction response times are shorter than the think time).

Think time 60 seconds: the real think time is randomly spread between 30 and 90 seconds; thus one loop contains 23 transactions and takes between 23*30 and 23*90 seconds (assuming that transaction response times are shorter than the think time).

Two users are added every 7 seconds during the user ramp-up stage (for instance, it takes about 10 minutes to load 170 users).
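The loop arithmetic above can be restated in a few lines; this sketch simply recomputes the figures quoted in the text:

public class LoopTiming {
    public static void main(String[] args) {
        // Login/logoff pair + 3 rounds of the 7 business steps = 23 transactions per loop.
        int steps = 2 + 7 * 3;
        for (int think : new int[] {10, 60}) {
            // Real think time is spread between 0.5x and 1.5x the nominal value.
            System.out.printf("think time %d s: one loop takes %d to %d s%n",
                    think, steps * think / 2, steps * think * 3 / 2);
        }
        // Ramp-up: 2 users every 7 seconds.
        System.out.printf("ramp-up to 170 users: about %.1f minutes%n", (170 / 2) * 7 / 60.0);
    }
}

For the 10-second think time this yields loops of 115 to 345 seconds; for 60 seconds, 690 to 2070 seconds; and a ramp-up of roughly 10 minutes to 170 users, matching the figures above.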

TESTS OVERVIEW

This document describes the tests performed on the portal using the Employee Self-Service (ESS) scenario, based on two different think times (10 seconds and 60 seconds) and a heap size of 1024M. Each test consisted of measurements for two scenarios: scalability and stability.

Scalability

The measurements for the scalability scenario simulated the logon activity of users until CPU usage reached 60-70 percent. At a think time of 10 seconds, 170 users logged on per dialog instance, and at a think time of 60 seconds, 500 users logged on per dialog instance. At this level of CPU usage, another dialog instance is added to accommodate more users logging into the system. This was repeated until the portal landscape consisted of four dialog instances with eight server nodes. Both tests lasted about 2-3 hours (until users completed logging on to the fourth instance). The tests demonstrate system scalability by adding extra dialog instances (each dialog instance hosting the same number of users).

The scalability scenario contains the following information:
- Cluster scalability (system resources vs. test time)
- Consolidation of system resources per dialog instance
- Average CPU utilization by dialog instance
- User scalability on a single dialog instance
- Scalability per dialog instance

Stability

The test measures the overall system behavior over time, when the average CPU usage is ~60 percent. A gradual increase in the number of users was executed on all application nodes in parallel. The test lasted 6 hours.

The stability scenario contains the following information:
- Average response times for transactions
- Transaction behavior (response times) during the 6-hour load test
- JVM memory handling
- Consumption of system resources

Why Measure Both Scalability and Stability?

These two tests have different meanings in the performance world. The scalability test shows the ability to add dialog instances on additional machines to meet the needs of users as the demand for portal resources increases. The stability test focuses on how steady the overall system is (response times, VM memory behavior, and system resources). Generally, stability tests last longer (~6 hours in our case) than scalability tests (~2-3 hours in our case), in order to check the behavior of the system over time.
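The scale-out procedure described above reduces to a simple rule: load users onto the current dialog instance until CPU utilization reaches the 60-70 percent band, then start the next instance. The toy sketch below is illustrative only; the linear CPU model is an assumption calibrated to the 170-users-at-~60-percent result, not a measurement:

public class ScaleOutRule {
    // Assumed linear model: about 170 users drive one dialog instance to about 60 percent CPU.
    static double cpuPercent(int users) {
        return users * (60.0 / 170.0);
    }

    public static void main(String[] args) {
        int instances = 1, usersOnCurrent = 0, total = 0;
        while (true) {
            usersOnCurrent += 2;                       // ramp-up adds 2 users at a time
            total += 2;
            if (cpuPercent(usersOnCurrent) >= 60.0) {  // threshold from the test procedure
                if (instances == 4) break;             // landscape tops out at 4 dialog instances
                instances++;                           // bring up the next dialog instance
                usersOnCurrent = 0;
            }
        }
        System.out.printf("instances: %d, total users: %d%n", instances, total);
    }
}

Run to completion, the rule lands at 4 instances and 680 users, which is the think time 10 configuration reported in the results.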

Preliminary Tests

A preliminary test was performed using different numbers of users and different content (internal SAP content consisting of different types of iViews, navigation structures, and pages). The preliminary test was performed in order to evaluate the system setup and to check for possible bottlenecks. In addition, it helped us obtain the specific settings to configure in order to optimize the performance of the database (DB), the operating system (OS), and the J2EE engine.

Capacity Tests

A preliminary test was carried out using a set of different numbers of users, in order to obtain the suitable number of users that should run on each dialog instance until CPU utilization reaches ~60-65 percent.

Summary

This document describes two tests, each of which consisted of two scenarios. It provides information about the results and their interpretation for the following tests:

1. Think time 10 seconds, heap size 1024M
   1.1. Scalability
   1.2. Stability
2. Think time 60 seconds, heap size 1024M
   2.1. Scalability
   2.2. Stability

The following sections provide the results, showing how linear the portal's scalability is and how the system behaves under load. In our test landscape, on this specific hardware, the tests showed that 170 concurrent users should run on each dialog instance if the think time is 10 seconds, and 500 concurrent users per dialog instance if the think time is 60 seconds. LoadRunner scripts were changed accordingly, in order to run the exact number of users on each dialog instance.

TEST RESULTS

Scenario 1.1 - Scalability Test, ESS Scenario with Think Time 10 Seconds, Heap Size 1024M

Scalability on a Single Dialog Instance (DI)

The result of this test on a single DI shows the user scalability. With 170 users, CPU utilization reaches around 60 percent. The graph below shows a single-instance example where CPU utilization reaches 57.8 percent. Note that the dips in the slope are attributed to the fact that some transactions are heavier than others (such as Login). The curve is relatively flat up to the point of 166 users, where it becomes sharper.

[Figure: User Scalability (for 170 users) - CPU utilization (percent) vs. number of users]

Scalability on a Cluster with 4 Dialog Instances

The scalability on four (4) DIs with regard to the number of users is linear, with 170 users per DI. This should be viewed in conjunction with the cluster scalability, DI system resource consolidation, and average CPU utilization graphs to see that at 170 users per DI, all DIs behave the same over time, with CPU utilization in a closely tied band, except for the first (1) and second (2) DIs, which break above the rest after twelve (12) minutes of the test. Further, the CPU utilization of the central instance, directory server, and Oracle DB remains very low, not exceeding 3.5 percent during the test. Note that the linear scalability can be achieved due to the lack of bottlenecks in the resources described above.

[Figure: Instance scalability - concurrent users (170, 340, 510, 680) vs. number of dialog instances (1 to 4)]

This graph describes the behavior of four (4) dialog instances with 170 concurrent users on each DI. During this test, each DI was loaded with users, and once it reached 170 users the next DI was launched. The CPU utilization for each DI is between 45 and 70 percent, with some spikes of the second (2) DI to 80 percent. This is due to the nature of the test think time, which creates many requests in a short period of time. There is no significant impact or degradation in CPU utilization on any DI when a new DI is up and running with 170 users. At the bottom of the graph, the CPU utilization of the central instance, directory server, and Oracle DB is shown; the utilization of these system resources is insignificant, peaking at 3.5 percent.

[Figure: Cluster Scalability (system resources vs. test time) - CPU utilization (percent) over elapsed time 0:00 to about 2:13]

The consolidated graph (from the graph above) of the cluster CPU utilization provides a clearer view of the behavior of each DI during the ramp-up.

In the graph below, there is a break from the band after 12 minutes by the first (1) and second (2) DIs, from a level of 60 percent to a level of 80 percent CPU utilization. After two minutes they return to the pattern of the other DIs.

[Figure: DI system resources consolidation - CPU utilization (percent) over elapsed time 0:00 to 0:15]

Finally, the average CPU utilization of a DI in the cluster makes it possible to zoom in and examine the behavior of DI scalability. At its peak, the CPU utilization is at 66.4 percent.

[Figure: Average CPU utilization by dialog instances - CPU utilization (percent) over elapsed time 0:00 to 0:15]

Scenario 1.2 - Stability Test, ESS Scenario with Think Time 10 Seconds, Heap Size 1024M

Response Times

The average server response time differs from transaction to transaction (see the figure below). Transactions in this test can be divided into two types:
- Login to the portal - 0.53 seconds. Response times here are longer than for the other transactions because login is a complicated process consisting of user authentication, static and dynamic content delivery from the server, portal navigation, and loading of the Welcome page with its 3 iViews.
- Portal navigation (business transactions) - less than 0.2 seconds. Basic navigation in the ESS workset.

Optimizing the portal by configuring SAP J2EE compression, content expiration timeout, Keep-Alive, and JavaScript files can significantly decrease both response times and network traffic, and might have an impact especially on the login transaction. There are several other parameters that influence the marshalling of the data and hence the response time. These

include dependencies on network latency and bandwidth, connections to back-end systems, and so on. The numbers will differ when all these parameters are taken into consideration.

[Figure: Average transaction response times (sec.) - Open URL 0.531; Login 0.214; Record Working Time 0.144; Leave Request 0.141; Leave Request Overview 0.175; Personal Info 0.181; Address 0.14; Bank Information 0.139; Paycheck Inquiry 0.141; Logoff 0.4]

The figure below shows response times of transactions versus test time (6 hours, not including ramp-up). Server response times remain flat across the entire test. Some scaling can be seen at certain points, but response time remains under 0.7 seconds. The spikes may be caused by VM memory behavior (GC and FGC occurrences) and other system resources such as the database, directory server, and so on.

[Figure: Transaction response times during the 6-hour test]

System Resources

The garbage collector has a major impact on overall application performance and can become a bottleneck; pauses (because of FGC) are times when an application appears unresponsive because garbage collection is going on. The average number of Full GCs per server node (there are 8 nodes) in this test is 13.1. The average Full GC time is 4 seconds, while the maximum minor GC time is only 0.31 seconds. The total time for all Full GCs that occurred during the load test is 52.2 seconds on average (0.2 percent of total time). Total GC time is 1147 seconds, which is 5 percent of total test time (7 hours including user ramp-up).

[Figure: JVM memory handling (GC) - number of FGCs: 13.125; FGC share of total time: 0.2225 percent; GC share of total time: 4.92875 percent]

When measuring application performance, it is very important to have a broad view and to monitor all components in the system landscape. The figure below shows that the average database, directory server, and central instance CPU utilization remains very low during the 6-hour load test.

[Figure: System resources - average CPU utilization (percent): SCS 1.1; dialog instances 60.55; database 1.4; LDAP 2.83]

CPU utilization of system landscape components - the average CPU utilization results for the 4 dialog instances:
- Dialog Instance 1: 64 percent
- Dialog Instance 2: 59.7 percent
- Dialog Instance 3: 58.2 percent
- Dialog Instance 4: 60.3 percent
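As a cross-check on the GC figures above, the arithmetic can be restated directly (a sketch over the reported numbers; the study's own percentages are computed against its exact test duration, so they differ slightly):

public class GcOverhead {
    public static void main(String[] args) {
        double avgFullGcs = 13.1;       // average Full GCs per server node (8 nodes)
        double avgFullGcSecs = 4.0;     // average Full GC pause
        double totalGcSecs = 1147.0;    // all collections, per node
        double testSecs = 7 * 3600.0;   // about 7 hours including ramp-up
        double fullGcSecs = avgFullGcs * avgFullGcSecs;  // ~52.4 s, matching the ~52.2 s reported
        System.out.printf("Full GC time: %.1f s (%.2f%% of the test)%n",
                fullGcSecs, 100 * fullGcSecs / testSecs);
        System.out.printf("all GC time: %.0f s (%.1f%% of the test)%n",
                totalGcSecs, 100 * totalGcSecs / testSecs);
    }
}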

Scenario 2.1 - Scalability Test, ESS Scenario with Think Time 60 Seconds, Heap Size 1024M

Scalability on a Single Dialog Instance (DI)

The result of this test on a single DI shows the user scalability. With 500 users, CPU utilization reaches about 55 percent. The graph below shows a single-instance example where CPU utilization is at 55 percent. Note that the dips in the slope are attributed to the fact that some transactions are heavier than others (such as Login). The curve is relatively flat up to the point of 350 users, with CPU utilization up to 17 percent; from the point of about 370 users it becomes steeper.

[Figure: User Scalability (for 500 users) - CPU utilization (percent) vs. number of users]

Scalability on a Cluster with 4 Dialog Instances

The scalability on four (4) DIs with regard to the number of users is linear, with 500 users per DI. This should be viewed in conjunction with the cluster scalability, DI system resource consolidation, and average CPU utilization graphs to see that at 500 users per DI, all DIs behave the same over time, with CPU utilization in a closely tied band, except for the first (1) and second (2) DIs, which break above the rest after twelve (12) minutes of the test. Further, the CPU utilization of the central instance, directory server, and Oracle DB remains very low, at most 3 percent during the test. Note that the linear scalability can be achieved due to the lack of bottlenecks in the resources described above.

[Figure: Instance scalability - concurrent users (500, 1000, 1500, 2000) vs. number of dialog instances (1 to 4)]

This graph describes the behavior of four (4) dialog instances with 500 concurrent users on each DI. During this test, each DI was loaded with users, and once it reached 500 users the next DI was launched. The CPU utilization for each DI is between 65 and 75 percent. There is no apparent impact or degradation in CPU utilization on any DI when a new DI is up and running with 500 users. At the bottom of the graph, the CPU utilization of the central instance, directory server, and Oracle DB is shown; the utilization of these system resources is insignificant, peaking at 3 percent.

[Figure: Cluster Scalability (system resources vs. test time) - CPU utilization (percent) over elapsed time 0:00 to about 3:02]

The consolidated graph (from the graph above) of the cluster CPU utilization provides a clearer view of the behavior of each DI during the ramp-up. All DIs follow the same pattern.

[Figure: DI system resources consolidation - CPU utilization (percent) over elapsed time 0:00 to 0:38]

Finally, the average CPU utilization of a DI in the cluster makes it possible to zoom in and examine the behavior of DI scalability. At its peak, the CPU utilization is at 68.6 percent.

[Figure: Average CPU utilization by dialog instances - CPU utilization (percent) over elapsed time 0:00 to 0:38]

Scenario 2.2 - Stability Test, ESS Scenario with Think Time 60 Seconds, Heap Size 1024M

Response Times

The average server response time differs from transaction to transaction (see the figure below). Transactions in this test can be divided into two types:
- Login to the portal - 0.52 seconds. Response times here are longer than for the other transactions because login is a complicated process consisting of user authentication, static and dynamic content delivery from the server, portal navigation, and loading of the Welcome page with its 3 iViews.
- Portal navigation (business transactions) - less than 0.2 seconds. Basic navigation in the ESS workset.

Optimizing the portal by configuring SAP J2EE compression, content expiration timeout, Keep-Alive, and JavaScript files can significantly decrease both response times and network traffic, and might have an impact especially on the login transaction. There are several other parameters that influence the marshalling of the data and hence the response time. These include dependencies on network latency and bandwidth, connections to back-end systems, and so on. The numbers will differ when all these parameters are taken into consideration.

.6.5.4.3.2.1. Response times (sec.).523.192.143.139.168.172.183.139.137.41 Open URL Login Record Working Time Leave Request Leave Request Overview Personal Info Address Bank Information Paycheck Inquiry Logoff Average transactions response times The figure below shows response times of transactions versus test time (6 hours not including ramp-up). Server response times remain flat across the entire test. You can realize scaling at some points and it reach.9 seconds but overall response time remains under.6 seconds. The spike might happen because of VM memory behavior (GC and FGC occurrences) and other system resources such as database, directory server and etc... Total time for all Full GCs occurred during the load test is 41.6 seconds in average (.15 percent of total time). Total GC time is 928 seconds and this is 3.5 percent of total test time (7 hours including users ramp-up). 12 1 8 6 4 2 1.375 Number of FGC JVM Memory handling (GC).15375 % of FGC out of total time JVM memory handling 3.48625 % of GC out of total time When measuring application performance it is very important to have a broad vision and monitor all components in the system landscape. You can see in figure below that average database, directory server and central instances CPU are not under load and remain very low during 6 hours load test. 8 7 System resources 68.65 6 5 4 3 Transactions response times during 6 hours test System Resources 2 1.67 Average SCS CPU time Average DI CPU time.89 1.84 Average CPU Database Average CPU LDAP Garbage Collector has major impact on overall application performance and can become a bottleneck; pauses (because of FGC) are the times when an application appears unresponsive because garbage collection is going on. The average number of Full GCs per each server node (we have 8 nodes) in this test are 1.3. Average Full GC time is 4 seconds while maximum minor GC time is only.4 seconds. CPU utilization in system landscape components Here are the average CPU utilization results for 4 dialog instances Dialog Instance 1 67.8 percent Dialog Instance 2 67.9 percent Dialog Instance 3 67.1 percent Dialog Instance 4 69.46 percent 14

INTERPRETATION OF THE RESULTS AND COMMENTS

Linear Scalability

The test results indicate that additional dialog instances provide linear scalability. The system maintained stable behavior when instances were added (CPU utilization was equal on all instances during all test phases: ramp-up, long load, and so on). There were no scalability bottlenecks: to support increasing user demand, J2EE instances are easily and dynamically added to the configuration as needed. Resources are optimally utilized by distributing the overall request load among the servers. Traffic is distributed across multiple SAP J2EE instances in order to balance the load placed on the SAP J2EE engine. This allows the capacity of a Web site to be scaled by simply adding more dialog instances.

What Is a Reasonable Think Time?

It can be 3 seconds or 3 hours, depending on context (for example, a data operator typing a few numbers into a simple form and saving it, versus a data analyst fetching data from a database and then analyzing it offline). In our tests we decided to use two think times: 10 seconds and 60 seconds. A 10-second think time describes very high user activity and is an unlikely scenario. A 60-second think time represents high user activity on the one hand, and makes it possible to ramp up a couple of hundred users per instance on the other. This may seem like a long time; however, one must consider the average over an entire work day.

Think time 10: based on the measurements provided in the study, the tested 4-server configuration supported 680 concurrent users, and CPU utilization reached 66.4 percent.

Think time 60: based on the measurements provided in the study, the tested 4-server configuration supported 2,000 concurrent users, and CPU utilization reached 68.6 percent.

Transaction response times are very similar in both tests.

System resources: the table below summarizes the differences in system resource usage between the two tests:

                        Think Time 10    Think Time 60
Number of FGCs          13.1             10.3
Average FGC time        4 s              4 s
Max minor GC time       0.31 s           0.4 s
Total FGC time          52.2 s           41.6 s
Total GC time           1147 s           928 s

The main difference is in the number of FGC occurrences and the total GC time. A heavier load executes on the system during the first test; many more new objects are created, and this is the reason why the GC and FGC times are greater than in the second test.
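The think-time comparison can also be read through the interactive response-time law N = X * (R + Z), where N is the number of concurrent users, X the throughput, R the response time, and Z the think time. The back-of-the-envelope sketch below (not part of the original study) shows the steady-state request rate each configuration implies per dialog instance:

public class ImpliedThroughput {
    static void report(int users, double respSecs, double thinkSecs) {
        // X = N / (R + Z), the interactive response-time law.
        System.out.printf("N=%d, Z=%.0f s -> X = %.1f requests/s per dialog instance%n",
                users, thinkSecs, users / (respSecs + thinkSecs));
    }

    public static void main(String[] args) {
        report(170, 0.2, 10.0);  // think time 10 s: ~16.7 requests/s
        report(500, 0.2, 60.0);  // think time 60 s: ~8.3 requests/s
    }
}

Longer think time lowers the per-user request rate, which is why each dialog instance can host far more users at a comparable CPU level.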