Performance Extrapolation for Load Testing Results of Mixture of Applications

Subhasri Duttagupta, Manoj Nambiar
Tata Innovation Labs, Performance Engineering Research Center
Tata Consultancy Services, Mumbai, India

Abstract—Load testing of IT applications faces the challenge of providing high-quality test results that represent performance in production-like scenarios without incurring the high cost of commercial load testing tools. It would help IT projects to be able to test with a small number of users and extrapolate to scenarios with a much larger number of users. Such an extrapolation strategy, when applied to a mixture of application workloads running in a shared server environment, must take into consideration application characteristics (CPU/IO intensive, memory bound) as well as the server capabilities. The goal is to predict the performance of the mixture workload, the maximum throughput offered by the application mix and the maximum number of users supported by the system before the throughput starts degrading. In this paper, we propose an extrapolation strategy that analyses a system workload mix based on its service demand on various resources and extrapolates its performance using simple empirical modeling techniques. Moreover, its ability to extrapolate the throughput of an application mixture even if there is a change in the mixture can help in capacity planning of the system.

Keywords—extrapolation; load testing; S-curve; multiple job classes; mixture of applications

I. INTRODUCTION

A complex multi-tiered IT application comprises multiple transactions of various characteristics and is deployed in a complex distributed environment. Before the application is launched, load testing is performed to ensure the application meets the SLA. The application performance characteristics depend on many aspects such as workload characteristics, the number of users in the system, background load and server hardware configurations. Firstly, test environment results for a small number of users cannot be directly mapped to a production environment where the system load may be hundreds or thousands of times higher. It is also not feasible to create a production-like test environment due to the high cost involved; load testing software licensed for a limited number of virtual users adds to the problem. The second issue is to accurately characterize the production server workload. In the case of critical enterprise applications, every production server workload has distinct performance characteristics in storage access, processing power, and memory requirements that affect the scalability of the application. Moreover, in some organizations, different workloads frequently run side by side on the same hardware. In such a situation, rather than the demand of an individual workload, it is the aggregate demand of the multiple classes of workloads running together that decides the bottlenecks of the server. Besides, the application access pattern may undergo a change or shift during operation of the system, resulting in a change in the mixture of production workloads, which in turn may necessitate redoing the entire testing exercise. Thus, to estimate the application performance accurately under production workloads of different characteristics, we require an extrapolation strategy that takes into account load testing results for a certain workload mixture and allows us to systematically predict the performance for a larger workload of the same or a different mixture.
This paper proposes such an extrapolation strategy, which does not require knowledge of the application functionality but is able to predict the performance of the system for varied workloads. The significant contributions of the paper are as follows:
- Given the throughput of the application and the utilization of various system resources while performing the load test only for a small number of users (e.g., 5 4 users), the proposed extrapolation technique is able to extrapolate throughput for more than 6 users, thus reducing the load test time drastically.
- The extrapolation strategy is applicable to mixed workload scenarios where individual workloads may have very different characteristics in terms of system resource requirements.
- The ingredients of our solution are simple mathematical tools such as linear regression and the statistical S-curve. Thus, using two previously known techniques, the proposed solution is able to extrapolate application performance without any details of the system functionality.
- The proposed solution is verified with a number of sample applications and over a number of server configurations.
The paper is organized as follows: Section 2 outlines the related work and Section 3 formulates the specific problem of extrapolation. Section 4 introduces the basic extrapolation strategy and shows the result of extrapolation for a sample application. Section 5 discusses how the extrapolation strategy can be applied to a multi-application workload. The paper is concluded in Section 6.

II. RELATED WORK

Two well-known approaches for extrapolating from a test environment to a production configuration are discrete-event simulation modeling and analytical modeling. Extrapolation using simulation modeling [3] involves representing each of the components of the infrastructure in the simulation and implementing the business function flow through the system. Analytical models based on various queuing networks [1], [8] can be cost-effective solutions. The authors in [6] demonstrate how model building along with load testing information can help in making an application ready for deployment. A hybrid methodology combining layered queuing networks and industry benchmarks is proposed in [5] for extrapolating performance measures of an application in case of any hardware resource changes. But in all these cases, model building requires knowledge of the application, whereas in our strategy an application can be treated as a black box and only the load testing results are required for extrapolation. Besides, earlier extrapolation techniques have not been tested with a variety of applications or tried out on various server platforms.

III. PROBLEM FORMULATION

The paper considers load testing of an IT application that is accessed by N users, as shown in Fig. 1. It is assumed that the IT system may comprise multiple applications, and these N users may comprise users accessing more than one application or different transactions of the same application. The mixture of users is known a priori, i.e., if there are two applications, the percentage of users accessing each application is known beforehand.

Figure 1. Load testing of an IT system.

In an IT system, users submit requests and wait for responses. The average response time of a request is denoted by the symbol R. A user typically spends time entering the details of the request or reviewing the responses; the time that a user spends outside of waiting for a response is referred to as think time. The average think time of a user is denoted by the symbol Z. The number of requests per unit of time (usually seconds) is the throughput of the system, denoted by the symbol X. Both X and R are functions of N. The problem of extrapolation can then be defined as follows: Given the actual throughput and response time X and R of the system for a small number of users in a specific deployment scenario, the extrapolation technique must provide an estimate of the performance of the system for a larger number of users. Given a certain mixture of users (workload mixture), the extrapolation technique should be able to provide the performance metrics for larger loads even if the workload mixture changes in future. In this paper, we deal with a mixture of multiple applications, but the same strategy is applicable to complex business applications with multiple transactions. The second scenario is commonly referred to as multiple job classes. The extrapolation strategy assumes that the server configuration on which the applications are running, and on which the initial performance metrics are gathered, remains unchanged for the larger number of users. Thus, the performance extrapolation of a set of applications is performed only in terms of load.

IV. EXTRAPOLATION OF LOAD TESTING OF INDIVIDUAL APPLICATIONS

In our earlier work [7], we proposed the basic extrapolation strategy that uses a combination of linear regression and the statistical S-curve and is capable of predicting the maximum throughput as well as the maximum number of users that can be supported by the application.
In this section, this strategy is explained briefly and the main steps are illustrated using two sample applications. In this paper our earlier strategy is extended to multiple applications, where the transactions performed by users of one application may vary significantly from those of users of another application. The proposed performance extrapolation technique takes two sets of input: 1. Load testing results of the application for a small number of users (typically below 5); it requires throughput for at least four distinct numbers of users. 2. Utilization information for four hardware resources (CPU, disk, network and memory) gathered from all the servers while performing the load test.

A. Load Testing Setup

We perform load testing on various applications. All load testing is done with Apache Tomcat 6.0 as the application server and MySQL 5.5 as the database server, hosted on a machine different from the application server. Load testing is done using FASTEST [2], a framework for automated system performance testing based on Grinder that provides a single report of the load test correlating different metrics. Our proposed strategy is tested with various sample applications, all of which are tested with the three server configurations given in Table I. These servers are categorized into high-, mid- and small-range servers based on the number of CPUs, available RAM and amount of disk space. The sample applications include iBATIS JPetStore [4], an e-commerce J2EE benchmark; a telecom reporting application on mobile

usage with a star schema; and e-quiz, an online quizzing system used to identify and reward the best technical talent in a large IT company.

TABLE I. SERVER CATEGORIES FOR SAMPLE APPLICATIONS
High-range servers: 8-core 2.66 GHz Xeon CPU with 1 MB L2 cache, 8 GB physical RAM
Mid-range servers: Quad-core AMD Opteron 2.19 GHz CPU with 2 MB L2 cache, 4 GB RAM; quad-core Sun Fire V890 1.5 GHz UltraSPARC IV+, 16 GB RAM
Low-range servers: Intel Core Duo 2.33 GHz CPU with 4 MB cache, 2 GB RAM

B. Linear Regression and S-curve

The throughput of a system is limited by either hardware or software bottlenecks. Before a system encounters any bottleneck, the throughput increases linearly with the number of concurrent users. This indicates that linear extrapolation is an obvious choice for predicting the throughput of a system. Fig. 2 shows the result of extrapolation using linear regression, where the x-axis gives the number of users and the y-axis gives the throughput in pages/sec. Fig. 2 also shows the actual load testing results of the JPetStore application from 1 users to 4 users. Throughputs from 1 users to 4 users are used by linear regression to extrapolate the throughput up to 4 users. We observe that the predicted throughput provides high accuracy until the number of users reaches 2. As the throughput starts to saturate, the rate of increase of the throughput reduces, but the extrapolated throughput does not show this trend. This specific problem is addressed by an alternative technique, namely the statistical S-curve.

Figure 2. Extrapolation of throughput using various techniques.

Mathematical S-curves are sigmoid functions with the shape of the letter S. These curves are used to represent the rate at which the performance of a technology improves or the market penetration of a product happens over time. Implicit in the S-curve are assumptions of slow initial growth, subsequent rapid growth, followed by declining growth closer to the saturation level. The characteristic of initial increase followed by saturation makes the S-curve a natural choice for extrapolating throughput. If the number of users for load testing is N, then the following formula represents the throughput X using the S-curve:

X = X_max / [1 + a exp(-bN)]   (1)

Here X_max gives the maximum throughput a system can achieve, and the constants a and b are estimated using initial throughput values from the load testing tool. The same Fig. 2 shows the throughput obtained from extrapolation using the S-curve. This technique uses the actual throughput from 1 users to 4 users and predicts the throughput for the remaining 5 users to 4 users. The maximum throughput is taken as 595 pages/sec and is derived based on service demand as outlined in the next section. It can be observed that the S-curve has the problem of a steep rate of increase: from 5 users to 13 users, the throughput increases from 14 pages/sec to 541 pages/sec.

Finally, we propose an alternative solution referred to as Mixed mode, which uses a combination of linear regression and the S-curve. The regression method provides better accuracy for a smaller number of users; it is used initially to predict the throughput until the predicted throughput reaches a certain threshold (X_th). This threshold indicates the load beyond which there is a declining rate of growth for throughput. Beyond this point, the S-curve is used to predict the throughput, as sketched below.
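The following Python fragment is an illustration of the Mixed mode idea only, not the authors' implementation: linear regression is used as long as its prediction stays below the threshold X_th, and the S-curve of (1) takes over beyond it. The function names are assumptions, and X_th is taken as half of X_max, following the description given later in the conclusions; X_max itself comes from the service demand analysis of the next subsection.

```python
import numpy as np

def fit_s_curve(users, throughput, x_max):
    """Fit X = x_max / (1 + a*exp(-b*N)) to the measured points by
    linearizing Eq. (1): log(x_max/X - 1) = log(a) - b*N."""
    y = np.log(x_max / np.asarray(throughput, dtype=float) - 1.0)
    slope, intercept = np.polyfit(users, y, 1)
    return np.exp(intercept), -slope          # a, b

def mixed_mode_throughput(users, throughput, x_max, n_predict):
    """Predict throughput at n_predict users: linear regression while the
    prediction is below X_th (taken here as x_max / 2), S-curve beyond it."""
    x_th = x_max / 2.0
    slope, intercept = np.polyfit(users, throughput, 1)   # linear regression fit
    linear = slope * n_predict + intercept
    if linear <= x_th:
        return linear
    a, b = fit_s_curve(users, throughput, x_max)
    return x_max / (1.0 + a * np.exp(-b * n_predict))
```

Feeding in the throughput measured at the initial load levels together with the service-demand-based X_max yields a prediction for any larger number of users.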
Fig. 2 shows the performance of all three techniques, and it can be observed that the accuracy of extrapolation using Mixed mode exceeds that of the two other techniques. The Mixed mode technique combines the benefits of the two techniques and incurs a small prediction error (less than 5%) for any number of users. Details of the algorithm can be found in [7]. Extrapolation using Mixed mode requires an estimate of the maximum throughput, which is discussed below.

C. Maximum Throughput Computation using Service Demand

The objective is to estimate the maximum throughput achieved by an application in the multi-tiered environment while performing the load test. This is done by calculating the service demand of different resources. In a typical load testing scenario, the application and database may run on different servers, so the resource set includes the CPU, memory, disk and network associated with all the machines involved. During load testing, a sample application script is run for a certain duration and the resource utilization on each of the servers is captured. At the beginning of load testing, the number of virtual users is slowly increased until it reaches the desired number of users; this duration is referred to as the ramp-up period. Similarly, the number of users is reduced gradually before the end of the test until it drops to zero; this duration is referred to as the ramp-down period. For computing resource utilization, it is essential to exclude these two periods and include only the duration over which the number of users remains approximately constant.
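As a minimal illustration (the sample structure and names below are assumptions, not FASTEST functionality), restricting the utilization average to this steady-state window could look like:

```python
def steady_state_utilization(samples, ramp_up_end, ramp_down_start):
    """Average (timestamp, utilization) samples, keeping only the window in
    which the number of virtual users remains approximately constant."""
    window = [u for t, u in samples if ramp_up_end <= t <= ramp_down_start]
    return sum(window) / len(window)
```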

If the average utilization of a specific resource r during the observed period is U_r and the average throughput obtained in the load test is X units/sec, then the service demand of that resource is given by:

D_r = U_r / X   (2)

For example, if the average utilization of the disk is 67% and the average throughput is 400 pages/sec, then the service demand of the disk is D_r = 0.67/400 = 1.68 ms. Another technique to compute the service demand of a resource is outlined in [7], where a sample web application script is run in single-user mode over a fixed duration and resource usage statistics are gathered (in seconds) for loading a single page or performing a single transaction. The resource with the maximum service demand among all the servers is the one that saturates first when the number of users or the number of transactions is increased. If the maximum service demand is denoted by D_max and the sum of the service demands of all the hardware resources by D, then the maximum throughput X_max for N users and think time Z satisfies:

X_max = min( N / (D + Z), 1 / D_max )   (3)

The first term limits the throughput at lighter load and the second term limits it at higher load. For the JPetStore application on a small-range server, the disk is the resource with the maximum service demand of 1.68 ms, and the maximum throughput from the second term of (3) is 1/(1.68 ms) = 595 pages/sec.

D. Knee of the Curve

The maximum throughput provides the upper bound on the throughput that the application can deliver. But it is also important to know the number of users at which the throughput curve starts to saturate. This specific load identifies the knee of the throughput curve and is denoted by N*. Using the two bounds on throughput, i.e., the bounds at light load and at heavy load in (3), N* is obtained by equating these two bounds. Thus,

N* = (D + Z) / D_max

For the telecom reporting application on a mid-range server, Z is taken as 5.0 sec and the network service demand is 0.66 ms; hence, neglecting D relative to Z, N* = 5.0/0.00066, which is approximately 7576 users. Throughput extrapolation is done at least till N* users.

V. PERFORMANCE EXTRAPOLATION OF MIXTURE OF APPLICATIONS

In this section, we consider a situation where multiple applications with very different resource demands run simultaneously on the same server. The service demand of the mixture is obtained by taking a weighted average of the service demands of the individual applications, where the weights reflect the proportion of the workload corresponding to a specific application. We consider three resources (CPU, disk and network); the service demand of memory is handled differently. If the service demands of these three resources are known for applications 1 and 2, then the service demand of each resource for the mixture is obtained as follows:

D_CPU = w_1 D_CPU1 + w_2 D_CPU2   (4)

where w_i reflects the fraction of the workload belonging to the i-th application and D_CPU1, D_CPU2 are the CPU service demands of the two applications. Since the w_i are fractions of the total workload, they add up to 1. Similarly, the service demands for disk and network can be obtained by taking weighted averages of D_Disk1, D_Disk2 and D_Net1, D_Net2. As for individual applications, it is the service demand of the workload mixture that decides the maximum throughput and the maximum number of users that can be supported. First, we verify through actual testing this method of computing the service demand of a mixture of applications. Secondly, the maximum throughput X_max is computed from the maximum service demand and is used in the proposed Mixed mode extrapolation strategy.
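Collecting the relations above, the following Python sketch (the function names are assumptions, not part of FASTEST or any tool cited here) evaluates the service demand of (2), the mixture demand of (4), and the throughput and knee bounds derived from (3), with the JPetStore disk figures quoted earlier as a check:

```python
def service_demand(utilization, throughput):
    """Eq. (2): D_r = U_r / X, in seconds per request."""
    return utilization / throughput

def mixture_demand(weights, demands):
    """Eq. (4): weighted average of per-application demands for one resource."""
    return sum(w * d for w, d in zip(weights, demands))

def max_throughput(n_users, demands, think_time):
    """Eq. (3): X_max = min(N / (D + Z), 1 / D_max), with D = sum of demands."""
    return min(n_users / (sum(demands) + think_time), 1.0 / max(demands))

def knee_users(demands, think_time):
    """N* = (D + Z) / D_max, obtained by equating the two bounds of Eq. (3)."""
    return (sum(demands) + think_time) / max(demands)

# JPetStore on the small-range server: 67% disk utilization at 400 pages/sec
d_disk = service_demand(0.67, 400)   # 0.001675 s, i.e. about 1.68 ms
print(1.0 / d_disk)                  # ~597 pages/sec (595 with the rounded 1.68 ms)
```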
TABLE II. SERVICE DEMAND (IN MS) OF THE TELECOM, JPETSTORE AND MIXTURE WORKLOADS FOR CPU, DISK AND NETWORK ON THE MID-RANGE AND SMALL-RANGE SERVERS

Table II shows the service demands of the two applications, the telecom reporting application and JPetStore, on a Sun mid-range server for the three resources. In the multi-class scenario, 50% of the workload belongs to telecom whereas 50% belongs to JPetStore. The telecom reporting application has high service demands for network and CPU and a very low service demand for disk. On the other hand, JPetStore is an I/O-bound job and has a disk service demand of 2.56 ms. The disk service demand for the mixture workload can be obtained using (4) as follows:

D_Disk = 0.5 x 2.56 + 0.5 x 0.1 = 1.33 ms

From the table we verify that the workload mixture indeed has a disk service demand of 1.3 ms. When the mixture changes, this value changes as the weights applied to the individual service demands change. Next, the maximum throughput is obtained from the disk service demand as X_max = 1/D_Disk = 744 pages/sec. This value is used in the extrapolation of throughput for the mixture of workloads in which both applications have an equal share. Fig. 3 shows the extrapolated throughput and response time using the Mixed mode technique along with the actual load testing results. It can be verified that even for a mixture of applications, the Mixed mode extrapolation technique is able to provide more than 90% accuracy.
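The same computation, written out for the 50/50 mixture discussed above, is shown below as a hypothetical, self-contained sketch (demands in seconds; small differences from the figures quoted in the text come from rounding):

```python
# 50/50 telecom + JPetStore mixture on the mid-range server
w_telecom, w_petstore = 0.5, 0.5
d_disk_mix = w_telecom * 0.1e-3 + w_petstore * 2.56e-3   # Eq. (4): 1.33e-3 s
x_max_mix = 1.0 / d_disk_mix                             # ~752 pages/sec (the paper reports 744)
n_star = 5.0 / d_disk_mix                                # knee for Z = 5 s, with D negligible vs. Z
print(d_disk_mix, x_max_mix, n_star)
```

With the weights as the only inputs that change, the same few lines give the demand and the throughput ceiling for any other telecom/JPetStore split.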

Figure 3. Extrapolation of throughput for mixture of applications.

The service demand of the applications mixture can also be used to find the maximum number of users supported. For Z = 5.0 sec, N* = 5.0/D_Disk, which is approximately 3759 users. In Fig. 3, the maximum throughput is obtained for 45 users, beyond which the throughput is expected to degrade. Though the method of computing service demand is known from queueing theory, it was not used earlier to predict the throughput and the maximum number of users supported for a mixture of applications.

A. Applications with a Common Bottleneck Resource

In this section, we consider a scenario where two applications, iBATIS JPetStore and the telecom reporting application, run on a small-range server. The mixture of the system workload is changed in order to find out its effect on the overall throughput. In Fig. 4, we show the extrapolated throughput as the percentage of the workload belonging to the telecom application varies from 20% to 80%. It can be seen that as the percentage of the telecom application is increased, the maximum throughput of the combined workload is higher and the overall extrapolated throughput is also higher. For both of these applications, CPU is the resource with the maximum service demand. However, for the telecom application, the CPU demand is lower. For a mixture of workloads, the maximum throughput depends on the service demand of the mixture, and the throughput is higher as the percentage of the application with the lower resource demand is increased. The percentage of the workload belonging to the telecom application is varied between 20%, 50% and 80%. The maximum throughputs for these cases are 771, 865 and 13 pages/sec, respectively. Thus, for a load of 4 users and 20% telecom workload, the total throughput is 682 pages/sec, and it is 737 pages/sec when the percentage increases to 80%. As we know X_max for the individual applications, it is possible to extrapolate the throughput for any other mixture of workloads. Thus, Mixed mode extrapolation is capable of predicting the performance of a system even based on its future usage pattern.

Figure 4. Extrapolation for applications with a common bottleneck resource.

B. Applications with Different Bottlenecks

This section deals with a situation where the mixture of workload is such that different applications have different bottleneck resources. A mixture of JPetStore and the telecom reporting application runs simultaneously on a mid-range AMD server. For the JPetStore application, I/O becomes the bottleneck and the service demand of the disk is highest, whereas for the telecom application, the network becomes the bottleneck. The disk service demand is 1.6 ms for JPetStore whereas the network service demand is 0.7 ms for the telecom application. Thus, for a mixture of these two applications, the throughput is lowest when the percentage of the telecom application is just 10%, and the throughput increases significantly as more of the workload belongs to the telecom application. In Fig. 5, the extrapolated throughput is shown for three scenarios. For 50% of the load belonging to the telecom application, the throughput is 826 pages/sec for N = 4 users, 19% higher than the throughput (672 pages/sec) obtained when 10% of the load belongs to the telecom application. Workload belonging to the telecom application contends for network usage whereas workload belonging to JPetStore contends for I/O.
As the applications have different bottlenecks, a higher throughput is achieved when the workload consists of an equal percentage of both applications. For N = 3 users, JPetStore gives a throughput of 54 pages/sec and the telecom application provides a throughput of 623 pages/sec, whereas the mixture workload with equal percentages provides a throughput of 1147 pages/sec for 6 users, and the mixture can support more users. This result can be useful for obtaining higher throughput even in a virtualized environment where multiple applications run on a common shared server.

Figure 5. Extrapolation for applications with different bottleneck resources.

C. Applications with Bottleneck Resources on Different Servers

The third scenario we consider is when the applications have their bottleneck resources on different servers. This occurs when the workload is a mixture of the telecom

reporting application and the e-quiz application. For the telecom application, the network is the bottleneck on the application server, and for the e-quiz application, the CPU is the bottleneck on the database server. Thus, the workloads of the different applications do not contend for the same resource, and the throughput of one application is mostly not affected by the other application. Fig. 6 shows the extrapolated throughput for three different mixtures of the two applications as they run on a high-range server. In the e-quiz application, users view the questions, take a test and then submit their results. This application requires a larger think time (Z = 2 sec) than the telecom reporting application (Z = 4 sec). Thus, in load testing, the maximum throughput for e-quiz is much lower. When this application constitutes 80% of the workload, the throughput for 5 users is 56 pages/sec, and as we increase the percentage of the workload belonging to the telecom application, a higher throughput of 13 pages/sec is obtained.

Figure 6. Extrapolation when bottleneck resources are on different servers.

VI. CONCLUSIONS

Load testing of IT projects attempts to ensure that an application meets its SLA before it is actually launched in the production environment. But the limitations of load testing are its applicability for a large number of users, the lack of knowledge about the exact production workload characteristics, etc. This paper proposes an extrapolation strategy for load testing results which allows one to obtain the throughput and response time of an application for a large number of users. The strategy uses initial load testing results and the service demand computed from the utilization statistics of hardware resources. The proposed solution uses linear regression until the throughput reaches about half of the maximum throughput; then it uses the statistical S-curve to extrapolate the throughput. The paper considers a mixture of application workloads having different resource demands. It presents the formula for computing the service demand of a mixture of multiple applications and demonstrates how the Mixed mode extrapolation strategy can be applied to obtain the throughput for a mixture of workloads. Depending on the bottleneck resources and their locations, the maximum throughput of the mixture can vary. The strategy allows extrapolation of any mixture of applications provided the service demand information of the individual applications is available. This can cut down the load testing time drastically and help in analyzing different scenarios without actually performing the test. Further, incorporating this tool with a capacity planning model could speed up the process of making an application ready for deployment. This paper still leaves a few areas to be explored in future. The proposed technique is currently going through the process of validation in virtualized environments and on the cloud. The technique can be further extended to situations where hardware configurations change to reflect the production environment. This will truly bridge the gap between the test and production environments.

REFERENCES

[1] A. M. Ahmed, "An efficient performance extrapolation for queuing models in transient analysis," in Proceedings of the 37th Conference on Winter Simulation, 2005.
[2] A. Khanapurkar, S. Malan, and M.
Nambiar, "A Framework for Automated System Performance Testing," in Proceedings of the Computer Measurement Group's Conference, 2010.
[3] H. Arsham, "Performance extrapolation in discrete-event systems simulation," International Journal of Systems Science, vol. 27, no. 9, 1996.
[4] JPetStore Application.
[5] N. Tiwari and K. C. Nair, "Performance Extrapolation that uses Industry Benchmarks with Performance Models," in Proceedings of the Symposium on Performance Evaluation of Computer and Telecommunication Systems, 2010.
[6] R. Gimarc, A. Spellmann, and J. Reynolds, "Moving Beyond Test and Guess: Using Modeling with Load Testing to Improve Web Application Readiness," in Proceedings of the Computer Measurement Group's Conference, 2004.
[7] S. Duttagupta and R. Mansharamani, "Extrapolation Tool for Load Testing Results," in Proceedings of the International Symposium on Performance Evaluation of Computer Systems and Telecommunication Systems (SPECTS), 2011.
[8] S. Kounev and A. Buchmann, "Performance modeling and evaluation of large-scale J2EE applications," in Proceedings of the Computer Measurement Group's Conference, 2003.


Esri Best Practices: Tuning, Testing, and Monitoring. Andrew Sakowicz, Frank Pizzi,

Esri Best Practices: Tuning, Testing, and Monitoring. Andrew Sakowicz, Frank Pizzi, Esri Best Practices: Tuning, Testing, and Monitoring Andrew Sakowicz, asakowicz@esri.com Frank Pizzi, fpizzi@esri.com Process and tools Process and tools Esri tools Process and tools Esri tools Tools download

More information

A Quantitative Model for Capacity Estimation of Products

A Quantitative Model for Capacity Estimation of Products A Quantitative Model for Capacity Estimation of Products RAJESHWARI G., RENUKA S.R. Software Engineering and Technology Laboratories Infosys Technologies Limited Bangalore 560 100 INDIA Abstract: - Sizing

More information

A Simple Model for Estimating Power Consumption of a Multicore Server System

A Simple Model for Estimating Power Consumption of a Multicore Server System , pp.153-160 http://dx.doi.org/10.14257/ijmue.2014.9.2.15 A Simple Model for Estimating Power Consumption of a Multicore Server System Minjoong Kim, Yoondeok Ju, Jinseok Chae and Moonju Park School of

More information

IX: A Protected Dataplane Operating System for High Throughput and Low Latency

IX: A Protected Dataplane Operating System for High Throughput and Low Latency IX: A Protected Dataplane Operating System for High Throughput and Low Latency Belay, A. et al. Proc. of the 11th USENIX Symp. on OSDI, pp. 49-65, 2014. Reviewed by Chun-Yu and Xinghao Li Summary In this

More information

Reducing Disk Latency through Replication

Reducing Disk Latency through Replication Gordon B. Bell Morris Marden Abstract Today s disks are inexpensive and have a large amount of capacity. As a result, most disks have a significant amount of excess capacity. At the same time, the performance

More information

PowerVault MD3 SSD Cache Overview

PowerVault MD3 SSD Cache Overview PowerVault MD3 SSD Cache Overview A Dell Technical White Paper Dell Storage Engineering October 2015 A Dell Technical White Paper TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS

More information

Was ist dran an einer spezialisierten Data Warehousing platform?

Was ist dran an einer spezialisierten Data Warehousing platform? Was ist dran an einer spezialisierten Data Warehousing platform? Hermann Bär Oracle USA Redwood Shores, CA Schlüsselworte Data warehousing, Exadata, specialized hardware proprietary hardware Introduction

More information

Copyright 2009 by Scholastic Inc. All rights reserved. Published by Scholastic Inc. PDF0090 (PDF)

Copyright 2009 by Scholastic Inc. All rights reserved. Published by Scholastic Inc. PDF0090 (PDF) Enterprise Edition Version 1.9 System Requirements and Technology Overview The Scholastic Achievement Manager (SAM) is the learning management system and technology platform for all Scholastic Enterprise

More information

Parallels Virtuozzo Containers

Parallels Virtuozzo Containers Parallels Virtuozzo Containers White Paper Parallels Virtuozzo Containers for Windows Capacity and Scaling www.parallels.com Version 1.0 Table of Contents Introduction... 3 Resources and bottlenecks...

More information

T E C H N I C A L S A L E S S O L U T I O N S

T E C H N I C A L S A L E S S O L U T I O N S Product Management Document InterScan Web Security Virtual Appliance Customer Sizing Guide September 2010 TREND MICRO INC. 10101 N. De Anza Blvd. Cupertino, CA 95014 www.trendmicro.com Toll free: +1 800.228.5651

More information

N-Model Tests for VLSI Circuits

N-Model Tests for VLSI Circuits 40th Southeastern Symposium on System Theory University of New Orleans New Orleans, LA, USA, March 16-18, 2008 MC3.6 N-Model Tests for VLSI Circuits Nitin Yogi and Vishwani D. Agrawal Auburn University,

More information

PROPORTIONAL fairness in CPU scheduling mandates

PROPORTIONAL fairness in CPU scheduling mandates IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. XX, NO. XX, MAY 218 1 GVTS: Global Virtual Time Fair Scheduling to Support Strict on Many Cores Changdae Kim, Seungbeom Choi, Jaehyuk Huh, Member,

More information

In examining performance Interested in several things Exact times if computable Bounded times if exact not computable Can be measured

In examining performance Interested in several things Exact times if computable Bounded times if exact not computable Can be measured System Performance Analysis Introduction Performance Means many things to many people Important in any design Critical in real time systems 1 ns can mean the difference between system Doing job expected

More information

Annex 10 - Summary of analysis of differences between frequencies

Annex 10 - Summary of analysis of differences between frequencies Annex 10 - Summary of analysis of differences between frequencies Introduction A10.1 This Annex summarises our refined analysis of the differences that may arise after liberalisation between operators

More information

When, Where & Why to Use NoSQL?

When, Where & Why to Use NoSQL? When, Where & Why to Use NoSQL? 1 Big data is becoming a big challenge for enterprises. Many organizations have built environments for transactional data with Relational Database Management Systems (RDBMS),

More information

EsgynDB Enterprise 2.0 Platform Reference Architecture

EsgynDB Enterprise 2.0 Platform Reference Architecture EsgynDB Enterprise 2.0 Platform Reference Architecture This document outlines a Platform Reference Architecture for EsgynDB Enterprise, built on Apache Trafodion (Incubating) implementation with licensed

More information

ADAPTIVE AND DYNAMIC LOAD BALANCING METHODOLOGIES FOR DISTRIBUTED ENVIRONMENT

ADAPTIVE AND DYNAMIC LOAD BALANCING METHODOLOGIES FOR DISTRIBUTED ENVIRONMENT ADAPTIVE AND DYNAMIC LOAD BALANCING METHODOLOGIES FOR DISTRIBUTED ENVIRONMENT PhD Summary DOCTORATE OF PHILOSOPHY IN COMPUTER SCIENCE & ENGINEERING By Sandip Kumar Goyal (09-PhD-052) Under the Supervision

More information