RECORD LEVEL CACHING: THEORY AND PRACTICE


Dr. H. Pat Artis
Performance Associates, Inc.
72-687 Spyglass Lane
Palm Desert, CA 92260
(760) 346-0310
drpat@perfassoc.com

Abstract: In this paper, we will examine experimental data collected for a 3990-6 responding to an I/O workload comprised of random read/write pairs. Read/write pair workloads are typical of business critical online transaction processing (OLTP) workloads such as credit card or ATM applications. The experimental data will be used as a basis to examine past theoretical explanations as well as to comment on the efficacy of IBM's new record level cache one (RLC I) facility.

1 Introduction

Since the introduction of cached control units in 1981, users have enjoyed the benefits of substantially reduced device service times for read-oriented workloads that demonstrate a high locality of reference. Briefly, the term locality of reference describes the tendency of a sequence of I/O operations to make multiple references to one or just a few tracks over a brief period of time. To exploit this behavior, cache control units incorporate a memory resource to store recently referenced tracks. When a read-miss occurs, the cache control unit loads more than just the record which caused the read-miss in an effort to reduce the number of physical I/O operations associated with future read requests. Typically, the requested record and the remainder of the records on the track are staged into cache. However, a variety of other algorithms may also be employed to optimize the quantity of data staged in the event of a cache-miss or prestaged in anticipation of future I/O requests. These algorithms are discussed in Section 3. Read requests which are serviced from the cache, i.e., those that avoid a reference to the disk, are called read-hits.

While the 3880-x3 increased the number of workloads for which read caching was effective, caching writes presented data integrity issues. Specifically, when a control unit caches a write operation, it must accept the data integrity responsibility for the write until the data is actually written to disk. While the performance benefits of write caching were seductive, the microcode of the 3880-x3 control units only supported read caching. To ensure data integrity and address write performance, the 3990-3 incorporated nonvolatile storage (NVS) as well as traditional cache. When a write-hit or format write is cached, data is stored in both NVS and normal cache until it is written to disk. Hence, a failure of either resource during this brief interval (prior to destage) cannot result in the loss of data. To protect the data from environmental factors (e.g., power failures), a battery backup system was incorporated in the 3990-3 to maintain the data in NVS for 48 hours. Similar control unit implementations have been introduced by Amdahl, Hitachi, and StorageTek.

Unfortunately, some of the most critical workload types were still resistant to performance improvement by caching. Specifically, online transaction processing (OLTP) workloads that have very low read-hit fractions and high write percentages often run worse when cached, even with traditional 3990-3/6 class control units. In March of 1994, IBM introduced record level caching one (RLC I) to extend the benefits of caching to OLTP workloads. In this paper, we will examine the performance characteristics of record level caching in comparison with the normal and bypass cache modes of operation.
2 Caching Issues for OLTP Workloads

To understand the requirement for record level caching, it is useful to create a mental image of the type of I/O workload that presents the greatest problems to traditional caching schemes. Consider the case of a bank with an online system that supports thousands of ATMs. Customers randomly arrive at the different ATM terminals, enter their account number by swiping their cash card through a reader, and then enter their PIN to validate the transaction. If you think about it for a minute, it is clear that the read for your account number should be a read-miss: your cash card has been in your pocket (i.e., not in use for a long time) and, out of the hundreds of thousands of accounts the bank services, only a few other customer accounts reside on the same disk track as yours.

Copyright 1995 Performance Associates, Inc., all rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written consent of the copyright owner. Specific permission is granted to the Computer Measurement Group to publish this paper in their 1995 Winter Transactions.
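The read-miss argument above can be made quantitative. The following is a small simulation sketch (mine, not from the paper; the track counts are illustrative assumptions) showing that under uniformly random references, the read-hit ratio of an LRU track cache settles near the ratio of cache size to active data size:

    from collections import OrderedDict
    import random

    def simulate_hit_ratio(total_tracks=250_000, cache_tracks=12_500,
                           accesses=200_000, seed=1):
        """Estimate the read-hit ratio of an LRU track cache under a
        uniformly random reference pattern (illustrative sizes only)."""
        rng = random.Random(seed)
        lru = OrderedDict()              # keys ordered oldest -> newest
        hits = 0
        for _ in range(accesses):
            track = rng.randrange(total_tracks)
            if track in lru:
                hits += 1
                lru.move_to_end(track)   # refresh recency on a hit
            else:
                if len(lru) >= cache_tracks:
                    lru.popitem(last=False)   # evict least recently used track
                lru[track] = None
        return hits / accesses

    # With a cache 1/20th the size of the active data, the simulated hit
    # ratio settles near 5%: essentially every ATM account read is a miss.
    print(f"simulated read-hit ratio: {simulate_hit_ratio():.3f}")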

Figure 1. Define Extent Options
[The figure contrasts three read-miss scenarios on a 12-record track: Normal (DEF EXT NOR READ) reads record 4, passes the record to the channel, and stages to the end of the track; Sequential (DEF EXT SEQ READ) passes the record to the channel and prestages 3 to 15 tracks (full cylinder stage); RLC I (DEF EXT RLC READ) reads record 5, passes the record to the channel, and potentially stages only the record.]

On a read-miss, the traditional mode of operation (normal caching) is to stage the record and the remainder of the track into memory. If we assume a 3390 track architecture and a 4K record (12 records per track), on average you will request the 6th or 7th record on a track, and the control unit will then stage the requested record along with the remaining records on the track in anticipation of future I/O operations. Unfortunately, in a random environment like the ATM workload we have discussed, there is little or no probability that the other records on the track will ever be referenced in the near term. Hence, the control unit stages six and one-half times more data into cache than is actually required to service your request. As a result of this excess staging, the back-end data paths of the control unit can become saturated, resulting in degraded performance.

However, the staged track provides some benefit when the write half of your transaction occurs, i.e., when the transaction debits your account at the end of the transaction to reflect your new balance. On a 3990-3/6 class controller, this write will be a write-hit only if the underlying track is still in cache when the write occurs, so that the track format can be verified without reading the track layout again from the disk. Unfortunately, from the perspective of traditional cached controllers, some databases employ large buffers and deferred write back schemes to improve performance. In the case of deferred write back, if the delay before the write exceeds the residency time [1] of the cache control unit, the write half of the read/write pair will also be a write-miss since the underlying track will have aged out of the cache by the time the write occurs. [2] Since a 3990 class control unit normally stages the remainder of the track on a write-miss too, the back-end data paths become even more saturated with data traffic that is unlikely to be of benefit to any future I/O operations.
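The six-and-one-half figure follows directly from the track geometry. As a quick check (my arithmetic, assuming the requested record is uniformly distributed over the 12 records of the track and that normal caching stages from the requested record through the end of the track):

$$
E[\text{records staged}] \;=\; \frac{1}{12}\sum_{k=1}^{12}(12-k+1) \;=\; \frac{1}{12}\sum_{j=1}^{12} j \;=\; \frac{13}{2} \;=\; 6.5
$$

That is, on average 6.5 records cross the back-end paths to satisfy a request for a single record.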

3 Define Extent Options

When an I/O request is issued by the operating system, the channel program is preceded by a define extent and a locate record. [3] The define extent provides the control unit with advance information about the probable access pattern of future I/Os in the stream. Figure 1 provides an overview of the normal, sequential, and record level cache one (RLC I) define extents.

For most operations, the define extent specifies normal cache replacement. In the top of the figure, a read is issued for record 4 of a track that contains 12 records. In the case shown in the figure, record 4 is passed to the channel and records 4 through 12 are staged into the cache. In the event that one or more of these records are later accessed (i.e., before the track leaves the cache on a least recently used (LRU) basis), the resulting read-hits will improve performance. In the same fashion, if any of the records are written, the writes will also be write-hits, contributing to a decrease in overall response time. As we have already noted, if none of the other records on the track are ever accessed, then the back-end control unit resources employed to stage the track are wasted.

The second example in the figure denotes the sequential mode of operation. Clearly, normal cache replacement would be unsuitable for sequential access patterns since most sequential files are blocked half track. That is, a read-miss would occur for the first record on every track followed by a read-hit for the second record, yielding a 50% read-hit ratio, which is far from ideal. Moreover, reading a large file sequentially would tend to flush all of the other data from the cache since, under normal cache replacement, tracks age from the cache on an LRU basis. Hence, when the define extent specifies sequential, the control unit prestages from 3 to 15 tracks in anticipation of future read requests. Unlike LRU schemes where previously used tracks age out of cache, the track images in cache are immediately freed (and reused) after they are read. In the ideal case, a job might be able to read an entire file from cache after the first read-miss started the prestage process.

The third example in the figure denotes the record level cache one (RLC I) mode of operation. To qualify for IBM's record level cache one, a data set must meet one of two criteria: 1) the track must have a regular data format to be eligible for write caching without verification of the underlying track format, or 2) the RLC I function must be enabled in the define extent by the DCME function of DFSMS/MVS for record level caching of reads. [4] The term regular data format describes data sets that have a fixed block size and a constant number of records per track. This condition is imperative since a RLC I data set need not verify the track format via a read operation before accepting a write. To meet the second criterion, the algorithms in DCME maintain a dynamic access history by job and data set to identify record level cache candidates. With RLC I, the control unit responds to a read-miss by staging, at most, only the requested record (see note 2). As shown in the lower third of the figure, the read request for record 5 does not result in the staging of records 6 through 12 on the track, as would have been the case with normal cache replacement. In addition, if record 5 is later rewritten, the write will be a write-hit regardless of whether or not the underlying track has aged out of the cache (or was ever present) since the track has a regular data format.
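The three policies can be summarized as a simple staging-cost function. The sketch below is mine, not IBM microcode; it only counts the records moved across the back-end paths on a read-miss under each define extent option, with the sequential prestage depth (3 to 15 tracks) left as a parameter:

    def records_staged(mode, requested_record, records_per_track=12,
                       prestage_tracks=3):
        """Records moved on a read-miss for one request (illustrative model).

        mode: 'normal'     - stage requested record through end of track
              'sequential' - stage the track, then prestage additional tracks
              'rlc1'       - stage at most the requested record
        requested_record: 1-based record number within the track
        """
        if mode == "normal":
            return records_per_track - requested_record + 1
        if mode == "sequential":
            return records_per_track + prestage_tracks * records_per_track
        if mode == "rlc1":
            return 1
        raise ValueError(f"unknown define extent mode: {mode}")

    # For the Figure 1 examples:
    #   records_staged('normal', 4)  -> 9 (records 4 through 12)
    #   records_staged('rlc1', 5)    -> 1 (only record 5)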
Simply stated, when RLC I is enabled for a write operation, a write-miss cannot occur since the track layout need not be verified before the write is accepted by the cache control unit. Hence, RLC I conceptually addresses the problems of excess control unit back-end utilization as well as deferred write backs for OLTP workloads.

4 Theoretical Explanations

Historically, there have been two theoretical explanations of the performance problems encountered by OLTP workloads employing normal cache replacement: subsequent I/O delay and excess back-end utilization. The subsequent I/O delay explanation is similar to the concept of stepping on your own feet. That is, the application issues a request for another record on the same track while the control unit is still staging the remainder of the track for the prior I/O request. Under this explanation, the subsequent I/O request is PENDed by the control unit since the logical device is still busy staging data. For this explanation to be true, the interarrival time of I/Os to the device would have to be less than one-half of the revolution time (an average stage) plus the service time for the device. The second explanation is back-end utilization. That is, the excess utilization of the control unit's back-end data paths staging records which will never be used results in delays for all of the other I/O requests being processed by the control unit. Unlike the subsequent I/O delay explanation, these delays are a function of the aggregate arrival rate to the control unit rather than the interarrival time of I/Os to any specific device. Both of these potential theoretical explanations will be investigated in the experiment described in the following section.

Note 2: When DCME detects that the read-hit ratio for a data set being accessed by a job is lower than 10%, not even the record is staged into cache. Note that not even the record need be in cache for writes to be treated as write-hits for a RLC I managed data set.
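The subsequent I/O delay condition can be stated compactly. Using $T_{rev}$ for the device revolution time, $T_{svc}$ for the device service time, and $T_{arr}$ for the interarrival time of I/Os to the device (the symbols are mine, not the paper's), a later request can collide with the stage triggered by a prior request only if

$$
T_{arr} \;<\; \tfrac{1}{2}\,T_{rev} + T_{svc}
$$

since an average stage lasts about half a revolution. Section 7 applies this test to the measured data.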

5 Experimental Design

A simple experiment was designed to test the effectiveness of RLC I as well as to investigate which of the two theoretical explanations best describes the behavior observed during the experiment. The experiment was conducted using a tool developed by the author, the PAI/O Driver, which was designed to evaluate the performance of DASD subsystems. The experiments were conducted on an enhanced 3990-6 at engineering change level C. The control unit was configured with one gigabyte of cache, 32 megabytes of NVS, and sixty-four devices. A 401 cylinder data set, formatted with 4K blocks, was allocated on each device. Hence, the total data active behind the control unit was approximately 20 gigabytes.

The same reference pattern was employed for each data set. Specifically, a read operation was issued to a randomly selected block on a randomly selected cylinder, i.e., a read-miss. The program then issued a write to the same block, i.e., a write-hit. After the write half of the read/write pair was completed, the PAI/O Driver selected another random cylinder for the subsequent read/write pair. The average seek distance between the read/write pairs was five cylinders. Using the timing facilities of the PAI/O Driver, the arrival rate of each device in the DASD subsystem was ramped from 4 to 16 I/Os per second in steps of two. For sixty-four devices, this corresponds to a target subsystem rate ranging from 256 to 1,024 I/Os per second. Each of the experimental rates was synchronized with the system's RMF measurement interval so that the experiment results could easily be evaluated using data from RMF and the Cache RMF Reporter. The test series was repeated three times with the define extents set to bypass cache, normal cache replacement, and record level caching to evaluate the relative performance of each option. To ensure that the outcomes were not a result of some other variable, initial cache conditions, or random chance, the entire series of tests was repeated several times.

6 Results

Table 1 provides an overview of the test results comparing the relative benefits of normal cache replacement, record level caching, and bypass cache for read/write pairs where 100% of the reads are probable misses (see note 3). The first column of the table provides the subsystem I/O rate and the remaining three columns provide the average subsystem response time in milliseconds for the define extent options. Blanks in the table indicate that the subsystem could not sustain the desired I/O rate with the specified define extent option.

Table 1. Average Subsystem Response Time (Milliseconds)
[Table values omitted; columns: Subsystem I/O Rate; Normal Cache Replacement; Record Level Cache; Bypass Cache.]

The second column of the table provides the results for normal cache replacement. While the results at low subsystem I/O rates (i.e., less than 250 I/Os per second) were superior to the bypass cache results shown in the fourth column, the average cache response time was actually worse than bypass cache at higher arrival rates. Moreover, the subsystem began to saturate at approximately 495 I/Os per second, as can be seen in Figure 2. Although the subsystem was able to sustain slightly higher I/O rates, the response time curve for normal cache replacement demonstrates a classic saturation profile. The third column of the table provides the results for record level caching. As can be seen in the table, record level caching is superior to both normal cache replacement and bypass cache for I/O rates of up to 618 4K blocks per second.
It is important for the reader to note that this I/O rate is comprised of 314 reads and 314 writes. Since the back-end path utilization was significantly lower, because only 4K blocks rather than half-tracks were being transferred, back-end utilization obviously was not the cause of the subsystem saturation. Investigation of the Cache RMF Reporter data revealed that the number of fast write delays substantially increased when the aggregate NVS destage rate exceeded 1 megabyte per second. At the 618 I/Os per second saturation point, the aggregate NVS destage rate was 1.2 megabytes per second. Repeated experiments revealed this as a maximum limit for the configuration being tested. In fact, after this limit was reached, the aggregate sustainable I/O rate for the controller significantly decreased, as is depicted in Figure 2.

Note 3: The reader may wish to note that it is physically impossible to achieve a zero read-hit percentage due to the residual data content of the cache controller. In our experiments, the data sets were roughly twenty times larger than the control unit's one gigabyte cache. Hence, it is reasonable to expect approximately a 5% gratuitous read-hit rate simply based on the ratio of the size of the cache to the aggregate size of the data sets.
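For readers who want to experiment with a similar reference pattern, here is a minimal sketch of the Section 5 workload as described. The PAI/O Driver itself is a proprietary tool, so the generic device interface, the function names, and the open-loop pacing below are illustrative assumptions, not its actual API:

    import random
    import time

    CYLINDERS = 401                  # data set size used in the experiment
    BLOCKS_PER_CYLINDER = 15 * 12    # assumed 3390 geometry: 15 tracks of 12 4K blocks

    class RamDevice:
        """Trivial in-memory stand-in so the sketch runs without real DASD."""
        def __init__(self):
            self.blocks = {}
        def read(self, cylinder, block):
            return self.blocks.get((cylinder, block), b"\x00" * 4096)
        def write(self, cylinder, block, data):
            self.blocks[(cylinder, block)] = data

    def read_write_pairs(device, io_rate, duration_seconds, seed=None):
        """Issue random read/write pairs against one device.

        Each pair mimics the paper's pattern: a read to a randomly selected
        block on a randomly selected cylinder (a probable read-miss),
        followed by a write to the same block. A pair is two I/Os, so the
        loop paces itself at io_rate / 2 pairs per second.
        """
        rng = random.Random(seed)
        pair_interval = 2.0 / io_rate        # seconds between pairs
        deadline = time.monotonic() + duration_seconds
        while time.monotonic() < deadline:
            cylinder = rng.randrange(CYLINDERS)
            block = rng.randrange(BLOCKS_PER_CYLINDER)
            data = device.read(cylinder, block)    # read half of the pair
            device.write(cylinder, block, data)    # rewrite the same block
            time.sleep(pair_interval)              # crude pacing; ignores service time

    # Ramping each device from 4 to 16 I/Os per second in steps of two,
    # as in the experiment:
    for rate in range(4, 17, 2):
        read_write_pairs(RamDevice(), rate, duration_seconds=1)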

Figure 2. Average Subsystem Response Time (Milliseconds)
[Plot of response time (msec) versus SSCH rate for the normal cache replacement, record level cache, and bypass cache runs. Workload: IBM record level cache, 4K blocks, 100% read/write pairs, 100% read-miss. Configuration: IBM 3990-6, 1GB cache/32MB NVS, 401-cylinder data sets, four 17MB ESCON channels; measured 25 June 1994.]

The final column of the table provides the results for bypass cache. While the bypass cache results were inferior to both normal cache replacement and record level caching at rates less than 250 I/Os per second, bypass is superior to normal cache replacement at higher rates. Bypass cache provides the highest aggregate throughput for the 3990-6 due to the NVS destage bottleneck discussed in the prior paragraph.

7 Observations

Based on the experimental data collected during this study, it is clear that back-end utilization is the proper explanation of the poor response time which was characteristic of OLTP workloads that employed normal cache replacement. This conclusion is based on the fact that saturation occurred at a rate of approximately 8 I/Os per device per second. Since this corresponds to an interarrival time of approximately 125 milliseconds between subsequent I/Os at a response time of 39.2 milliseconds, we must reject the assertion that the delay is a result of a subsequent I/O encountering the completion of the stage event for a prior I/O.

The reader should also note that the results presented in this paper are ideal results for the DASD subsystem tested since the I/Os were uniformly distributed. If the I/Os had not been uniformly distributed, the maximum results would likely have been defined by the saturation of a few devices at a far lower aggregate arrival rate. In addition, microcode updates which have been made since the tests were conducted in late June of 1994, or changes which may be made in the future, could result in performance improvements for the same engineering test series. Hence, these results should be viewed as a performance observation at a point in time rather than as an absolute prediction of any future characteristics of RLC I or the 3990-6.

From a broader perspective, one fact is abundantly clear. IBM's record level cache one (RLC I) works as advertised and extends the benefits of caching to OLTP workloads that have been historically resistant to caching with IBM's prior cache control unit offerings.
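Plugging the measured values into the collision condition from Section 4 makes the rejection explicit. Assuming the nominal 3390 revolution time of roughly 14.1 milliseconds (a published device characteristic, not a number taken from this paper):

$$
T_{arr} \approx \frac{1000\ \text{ms}}{8\ \text{I/Os per second}} \approx 125\ \text{ms}
\;\gg\;
\tfrac{1}{2}(14.1\ \text{ms}) + 39.2\ \text{ms} \approx 46\ \text{ms}
$$

so a subsequent I/O almost never arrives while the prior stage is still in flight. The NVS bottleneck is likewise consistent with simple arithmetic: at saturation the workload included 314 writes per second, and $314 \times 4\ \text{KB} \approx 1.3\ \text{MB/s}$, in line with the measured 1.2 megabyte per second destage limit reported by the Cache RMF Reporter data.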

References

[1] McNutt, B. and J. W. Murray, "A Multiple Workload Approach to Cache Planning," Proceedings of CMG '87.
[2] Houtekamer, G. E. and H. P. Artis, MVS I/O Subsystems: Configuration Management and Performance Analysis, McGraw-Hill.
[3] Fairchild, W., "The Anatomy of a Thoroughly Modern I/O," Proceedings of CMG '90.
[4] Berger, J. A., "DFSMS: Dynamic Cache Management Enhancement," CMG Spring Transactions.
[5] Artis, H. P., "DASD Subsystems: Evaluating the Performance Envelope," CMG Spring Transactions, 1994.
