SONAS Best Practices and options for CIFS Scalability


A guide to achieving high levels of CIFS scalability on a SONAS system

June 2013

Contents

Common Internet File System (CIFS) File Serving
Maximum Number of Active Concurrent CIFS Connections
SONAS System Configuration
    Interface Node Configuration
        Processor and Memory Configuration
        Networking Adapter Configuration
    Storage Planning and Configuration
Protocol Specific Configuration to Maximize Concurrent CIFS Connections
    Leases, Locking and Share Modes
        Leases
        Locking
        Share Modes
    Home Directory Exports Using Substitution Variables
    Sharing Files and Directories Among CIFS Clients
        CIFS Share Coherency Options
Other Considerations
    Planning for Fail-over Scenarios and Upgrade
    Scheduling Advanced Functions for Data Management
    Tuning and Investigating Performance Concerns
References

Common Internet File System (CIFS) File Serving

The IBM Scale Out Network Attached Storage (SONAS) system provides CIFS file serving to Windows client computers via CIFS shares defined on the SONAS system. A maximum of 1000 CIFS shares may be defined on each SONAS system. Having a large number of Windows/CIFS clients (thousands) concurrently accessing the CIFS shares in a SONAS system requires planning to ensure that the configuration of the system can support the planned number of active concurrent CIFS connections from the CIFS clients. This paper describes methods to achieve a highly scalable CIFS environment. It does not guarantee a particular response time for any specific connection, as response time varies with a number of factors, most significantly the workload driven by each connection.

Maximum Number of Active Concurrent CIFS Connections

Each SONAS interface node (including the integrated management node) is capable of handling a large number of active concurrent CIFS connections. The exact number of concurrent CIFS connections per interface node depends on many factors, including the number of processors and the amount of memory installed in the interface node, the I/O workload, the storage configuration, and any advanced functions configured to run while the node is serving a high number of CIFS connections. Advanced functions include operations such as creating and deleting file set or file system level snapshots, TSM or NDMP backup/restore processing, async replications, Active Cloud Engine (ACE) WAN caching, and others.

For planning purposes, IBM recommends that you plan on no more than 2500 active concurrent CIFS connections and no more than a total of 4000 connections per SONAS interface node. This recommendation is based on traditional Windows home directory workloads and testing performed by IBM on the SONAS release. SONAS software does not impose this maximum CIFS connection limit, because the actual maximum number that is achievable on a given SONAS system may vary based on a number of factors that are described later. However, if the number of concurrent connections goes beyond the recommended maximum, or beyond the limit that your SONAS system is capable of supporting, CIFS clients may experience longer response times, session disconnects, or other symptoms. If you see these symptoms, IBM recommends that you expand your SONAS system configuration or reduce the number of connections, assuming all the best practices described in this document have already been implemented to the fullest extent.
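As a rough sizing sketch, the 2500-connection guideline can be turned into an interface node count. The 10,000-connection figure below is purely illustrative, and the extra node of headroom anticipates the fail-over considerations discussed later in this paper:

```shell
# Illustrative sizing: divide the planned number of active CIFS
# connections by the recommended 2500 active connections per interface
# node (rounding up), then add one node of headroom for fail-over.
planned=10000
per_node=2500
nodes=$(( (planned + per_node - 1) / per_node + 1 ))
echo "$nodes"   # -> 5
```

This is only a first approximation; the actual node count should also account for workload profile, storage configuration and the advanced functions scheduled on the system.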

SONAS System Configuration

Two important factors in determining the maximum number of active CIFS connections per interface node that your SONAS system can support are the configuration of the SONAS interface nodes and the underlying configuration of the storage system(s) on which the file system containing the CIFS share (or shares) resides.

Interface Node Configuration

Processor and Memory configuration

To achieve the maximum possible number of active concurrent CIFS connections per interface node, IBM recommends that the SONAS interface nodes be configured with the maximum number of processors and the maximum amount of memory. For the current SONAS interface node (2851-SI2) this is two 2.66GHz Intel Xeon 6-core processors (Feature Code 0102 adds the second processor) and 144GB of memory (five of Feature Code 1003).

Networking adapter configuration

In addition, to achieve the maximum possible number of concurrent CIFS connections per interface node, IBM recommends that the SONAS interface nodes be configured with the maximum number of 10GbE networking adapters. For the current SONAS interface node this is two dual-port 10Gb Universal Converged Networking Adapters (two of Feature Code 1102).

Storage planning and configuration

The storage configuration must be carefully planned based on the overall workload characteristics intended for the SONAS system. Factors to consider in planning the storage configuration include the expected response time during peak and off-peak hours, workload profiling to understand the data access requirements in terms of both throughput (MB/second) and I/O operations per second (IOPS), and planning for adequate storage resources not just for normal network file serving but also for advanced functions. Metadata-intensive workloads also need to be considered during storage planning.
The storage configuration details include the number of storage nodes, the number of disk storage subsystems, the number and types of disk drives, the total raw/usable storage capacity, and the projected I/O throughput and IOPS.

The recommended maximum number of active concurrent CIFS connections assumes that the file system containing the CIFS share (or shares) has a minimum of twelve (12) file system disks (known as GPFS Network Shared Disks, or NSDs) for metadata and data usage, regardless of the workload characteristics. The required number of file system disks must be determined based on the factors described above. Generally, in a SONAS environment each file system disk corresponds to a single SCSI logical unit (LUN) on a single RAID-6 8+P+Q array on a set of ten physical disk drives, such as high performance (10K or 15K RPM) SAS disk drives. In some SONAS configurations this may not be the case, such as SONAS Gateway configurations attached to external disk storage systems like the IBM XIV or IBM DCS3700. Having a file system reside on fewer NSDs, or on disks mapped to RAID arrays composed of slower disk drives, can result in a lower number of active concurrent CIFS connections per interface node. Contact your IBM sales representative, client representative or IBM Business Partner for assistance in determining a suitable storage configuration that will support the performance and capacity needs of your network attached storage (NAS) environments.

Protocol specific configuration to maximize concurrent CIFS connections

Leases, Locking and Share Modes

If the CIFS shares are only being accessed by Windows clients using the CIFS protocol (and are not enabled for access via other NAS file protocols, such as NFS, FTP, or HTTPS), then it is highly recommended that you disable inter-protocol level leases, locking and share modes to achieve the maximum number of concurrent connections. Disabling these locking modes still ensures data consistency within the CIFS protocol while avoiding the unnecessary overhead incurred to ensure consistency across multiple NAS protocols.

Leases

Leases are enabled by default when a CIFS share is created.
When leases are enabled, clients accessing a file over other NAS protocols can break the opportunistic lock of a CIFS client, so the CIFS client is informed when another client begins accessing the same file at the same time using a non-CIFS protocol. Disabling this feature provides a slight performance increase each time a file is opened, but it increases the risk of data corruption when files are accessed over multiple NAS protocols concurrently without this inter-protocol synchronization. If files are accessed using the CIFS protocol alone and leases are disabled, opportunistic locks are maintained by the CIFS protocol using a method that is less performance intensive. Leases can be disabled for a particular CIFS share by specifying the --cifs "leases=no" option on the mkexport or chexport commands.

Locking

Locking is enabled by default when a CIFS share is created. When locking is enabled, before a byte range lock is granted to a CIFS client, a determination is made as to whether a byte range file control lock is already present on the requested portion of the file. Clients that access the same file using another NAS protocol, such as NFS, are able to determine whether a CIFS client has set a lock on that file. For a share that is only accessed by CIFS clients, it is highly recommended to disable inter-protocol level byte-range locking to enhance CIFS file serving performance. Inter-protocol level locking can be disabled for a particular CIFS share by specifying the --cifs "locking=no" option on the mkexport or chexport commands.

Share Modes

The CIFS protocol allows an application to permit simultaneous access to a file by defining share modes when the file is first opened, which can be any combination of SHARE_READ, SHARE_WRITE, and SHARE_DELETE. If no share mode is specified, all simultaneous attempts by another application or client to open the file in a manner that conflicts with the existing open mode are denied, even if the user has the appropriate permissions granted by share and file system access control lists.
The sharemodes option is enabled by default when a CIFS share is created. When enabled, the share modes specified by CIFS clients are respected by other NAS protocols. When disabled, the share modes apply only to access by CIFS clients, and clients using all other NAS protocols are granted or denied access to a file without regard to any share mode defined by a CIFS client. If the share/export is not being accessed by clients using other network file protocols (such as NFS), then it is highly recommended that the --cifs "sharemodes=no" option be specified on the mkexport or chexport commands.

For additional information about these options and other performance and data integrity related options, see the following section of the SONAS Information Center: Administering->Managing->Managing shares and exports->Creating shares and exports->CIFS and NFS data integrity options

Note: If your environment requires data sharing over multiple protocols and these options cannot be disabled, you may not be able to achieve the maximum active CIFS connections per node. In that case, consider adding additional interface nodes as well as increased storage bandwidth.

Home directory exports using substitution variables

Having a large number of Windows users all concurrently accessing the same CIFS share can lead to performance bottlenecks, because Windows clients automatically open the root folder of a share when connecting. In a home directory environment, it is recommended that substitution variables be used when creating CIFS exports for home directories. For example, home directory exports can be created using the %U substitution variable, representing the user name, on the mkexport command (mkexport home /ibm/gpfs0/.../%U --cifs).
For additional information about substitution variables, see the following section of the SONAS Information Center: Administering->Managing->Managing shares and exports->Creating shares and exports->Using substitution variables

Sharing files and directories among CIFS clients

If your environment calls for extensive file and directory sharing among a large number of users, such as a large set of department documents, you may experience performance slowdowns. In this type of environment, it is possible to improve performance based on the specific needs of the environment. Consider the following options to optimize performance:

1. Limit all sharing to a single interface node, or to as few interface nodes as possible. Restricting the CIFS connections that share data to a single interface node helps reduce internal communication among the interface nodes in your SONAS system.

2. When sharing a directory among a large set of CIFS clients, distribute the workload into subdirectories where possible, to reduce the number of CIFS connections simultaneously accessing the same directory.

3. Utilize the CIFS share coherency options as described in the following section.

CIFS share coherency options

The SONAS system provides an advanced option, called coherency, to further increase the performance of CIFS workloads. This option controls the data consistency requirements for a CIFS share. While the options described earlier affect cross-protocol interaction, this option applies when a share is being accessed by CIFS clients only. Changing the default value of yes can improve performance. However, this option must only be considered after all other options have been utilized. Extreme caution must be taken to determine the right setting for your data, as it impacts data integrity. Each share must be evaluated individually to determine the right setting, and this option should be limited to as few CIFS shares as possible. The applications must ensure that files and directories are not modified by multiple processes at the same time, and that file content is not read while another process is still writing the file; alternatively, the application must coordinate all file accesses to avoid conflicts. The coherency option can be changed for a particular CIFS share by specifying the --cifs "coherency={yes|no|nodirs|norootdir}" option on the mkexport or chexport commands.
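As a sketch, for a home-directory style share accessed only via CIFS, directory-lock coherency could be relaxed for just the share root (the share name home is illustrative; the semantics of each value are described below):

```shell
# Keep lock coherency for all files and subdirectories, but disable
# directory-lock synchronization for the share's root directory only.
chexport home --cifs "coherency=norootdir"
```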
norootdir: Setting coherency=norootdir disables synchronization of directory locks for the root directory of the specified share, but keeps lock coherency for all files and directories within and underneath the share root. This option is useful for scenarios in which a large set of connections access different subdirectories within the same share. The most common scenario for this value is when a single share is used for the home directories of a large number of users, such as /ibm/gpfs0/homeroot, which then contains a subdirectory for each user.

nodirs: Setting coherency=nodirs disables synchronization of directory locks across the cluster nodes, but leaves lock coherency enabled for files. This option is useful if data sharing does not depend on changes to directory attributes, such as timestamps, or on a consistent view of the directory contents.

yes: Setting coherency=yes enables cross-node lock coherency for both directories and files. This is the default setting.

no: Setting coherency=no completely disables cross-node lock coherency for both directories and files. It should only be used with applications that guarantee data consistency, and only after all other options to enhance performance have been exhausted.

Other Considerations

Planning for fail-over scenarios and upgrade

When planning for a SONAS system that will have a high number of active concurrent CIFS connections, sufficient consideration must be given to the potential performance impact during fail-over scenarios. In the event that a SONAS interface node fails, the IP addresses hosted by that interface node are re-located to other SONAS interface nodes, and CIFS client re-connections are redistributed among the remaining SONAS interface nodes. If the failed interface node hosts only one IP in the network group, all connections served by that interface node are moved to the interface node taking over the IP, increasing the number of connections served by that node. The recommended best practice is to assign multiple IPs to each interface node.
Then, in the event of an interface node failure, all affected IPs are re-distributed among the available interface nodes, distributing the CIFS connections across all nodes rather than shifting the entire workload from the failed interface node to a single other node and causing workload imbalance.

Therefore, when planning a SONAS system that will have a high number of active concurrent CIFS connections, some buffer (in terms of maximum active concurrent CIFS connections) needs to be factored into the overall system configuration to account for the potential performance implications during these fail-over scenarios.

During the SONAS software upgrade process, IP addresses are frequently re-located and, depending on the SONAS system configuration, multiple interface nodes could be suspended at once to minimize the upgrade time, leaving fewer interface nodes to serve the various protocol clients, including CIFS. Therefore, the maximum number of active CIFS connections cannot be sustained during the SONAS software upgrade process. You should plan SONAS software upgrades during off-peak hours or schedule a maintenance window to minimize the impact on clients accessing the SONAS system. If it is not possible to find a long enough maintenance window for a SONAS software upgrade, consult with your IBM representative to discuss the alternative of upgrading at a slower pace, one node at a time. For information on upgrade planning, refer to the section Planning->Planning for software maintenance in the SONAS Information Center.

Scheduling advanced functions for data management

In most environments, it is typical to have an off-peak window of time at some point during the day that can be utilized to perform data management tasks such as nightly backup, snapshots and asynchronous replication. Ensure that you have some period of lower CIFS file serving activity and that this time window is sufficient for the desired advanced functions to complete.
When running advanced functions that require a file system policy scan, such as backup, asynchronous replication, GPFS policy invocations or Active Cloud Engine (ACE) cache pre-population, schedule them sufficiently far apart to allow adequate time for each policy scan to complete, so that two policy scans do not overlap.

- You can adjust the scheduling of TSM backups using the mktask, lstask and rmtask CLI commands.
- You can adjust the scheduling of async replications using the mkrepltask, lsrepltask and rmrepltask CLI commands.
- You can adjust the scheduling of file movement, migration and deletion policies using the mkpolicytask and rmpolicytask CLI commands.

In a typical system, a gap of a couple of hours between two advanced functions should suffice. However, you should review the logs of each function to ensure its policy scan completes before the next scheduled advanced function starts. If necessary, make adjustments such as increasing the time gap, adding more interface nodes, adding more disks for metadata, or adding Solid State Disks (SSDs) for metadata in SONAS gateway configurations.

As you plan data management tasks for your SONAS system, you need to ensure adequate resources are available to complete all of them. If these tasks do not complete in the expected time window, or impact the overall performance of the system during peak hours, consider adding additional resources (such as dedicated interface nodes for backup, or additional storage resources to improve storage response time) to eliminate bottlenecks.

Tuning and investigating performance concerns

If the SONAS system begins to experience performance problems related to the high number of active concurrent CIFS connections on each interface node, the following actions can improve the maximum number of active concurrent CIFS connections that can be supported by the entire SONAS system:

- Use the SONAS performance center GUI or the lsperfdata CLI command to investigate which physical resources (CPU, memory, networking, disks) in the system are under high utilization, to gain insight into the physical system resource that may be inhibiting or limiting performance.
- Ensure the SONAS interface nodes are configured with the maximum number of processors, memory and networking adapters.
- Add more SONAS interface nodes to your system.

- Move certain advanced functions, such as TSM or NDMP backups and async replications, to periods of time when CIFS file serving activity will be lower.
- Reduce the frequency at which file set and/or file system level snapshots are created and deleted, especially during the periods of highest CIFS user activity.
- Investigate and tune the performance of the underlying disk storage systems containing the file systems on which the CIFS shares reside. This should include the following:
  - Ensure that the file system disks belonging to a given storage system are appropriately distributed between the pair of SONAS storage nodes to which that storage system is attached. One half of the file system disks in a given storage system should have one of the storage nodes identified as the primary NSD server, and the other half should have the other SONAS storage node in the pair assigned as the primary NSD server. The lsdisk CLI command with the -v (verbose) option shows the SONAS storage nodes that are the primary and secondary NSD server for each file system disk.
  - If GPFS metadata replication or data replication is being used, ensure that you have assigned the GPFS file system disks to failure groups in a manner that reasonably balances the I/O and data across a given set of file system disks, RAID arrays and disk storage systems. The lsdisk CLI command shows the failure group to which each file system disk is assigned; the chdisk CLI command can be used to change it.
  - If the underlying disk storage systems on which the file system resides are becoming a performance bottleneck, consider adding more physical resources to those disk storage systems, such as more cache memory, more disk drives, more RAID arrays and more GPFS file system disks for the file system(s) containing the CIFS shares.
- If the existing disk storage systems on which the GPFS file system resides have reached their limit in terms of capacity and/or performance, consider adding more disk storage systems and extending the GPFS file system (on which the CIFS shares reside) by adding new file system disks (residing on the new disk storage systems) to it.

References

- SONAS Concepts, Architecture, and Planning Guide Redbook, IBM publication number SC
- SONAS Implementation Guide Redbook, IBM publication number SC
- SONAS Copy Services Asynchronous Replication Best Practices, Version 1.4
- SONAS Active Cloud Engine (ACE) White Paper


More information

Performance Characterization of the Dell Flexible Computing On-Demand Desktop Streaming Solution

Performance Characterization of the Dell Flexible Computing On-Demand Desktop Streaming Solution Performance Characterization of the Dell Flexible Computing On-Demand Desktop Streaming Solution Product Group Dell White Paper February 28 Contents Contents Introduction... 3 Solution Components... 4

More information

IBM řešení pro větší efektivitu ve správě dat - Store more with less

IBM řešení pro větší efektivitu ve správě dat - Store more with less IBM řešení pro větší efektivitu ve správě dat - Store more with less IDG StorageWorld 2012 Rudolf Hruška Information Infrastructure Leader IBM Systems & Technology Group rudolf_hruska@cz.ibm.com IBM Agenda

More information

Samba in a cross protocol environment

Samba in a cross protocol environment Mathias Dietz IBM Research and Development, Mainz Samba in a cross protocol environment aka SMB semantics vs NFS semantics Introduction Mathias Dietz (IBM) IBM Research and Development in Mainz, Germany

More information

SurFS Product Description

SurFS Product Description SurFS Product Description 1. ABSTRACT SurFS An innovative technology is evolving the distributed storage ecosystem. SurFS is designed for cloud storage with extreme performance at a price that is significantly

More information

Evaluating Cloud Storage Strategies. James Bottomley; CTO, Server Virtualization

Evaluating Cloud Storage Strategies. James Bottomley; CTO, Server Virtualization Evaluating Cloud Storage Strategies James Bottomley; CTO, Server Virtualization Introduction to Storage Attachments: - Local (Direct cheap) SAS, SATA - Remote (SAN, NAS expensive) FC net Types - Block

More information

Stellar performance for a virtualized world

Stellar performance for a virtualized world IBM Systems and Technology IBM System Storage Stellar performance for a virtualized world IBM storage systems leverage VMware technology 2 Stellar performance for a virtualized world Highlights Leverages

More information

NetVault Backup Client and Server Sizing Guide 2.1

NetVault Backup Client and Server Sizing Guide 2.1 NetVault Backup Client and Server Sizing Guide 2.1 Recommended hardware and storage configurations for NetVault Backup 10.x and 11.x September, 2017 Page 1 Table of Contents 1. Abstract... 3 2. Introduction...

More information

SONAS Performance: SPECsfs benchmark publication

SONAS Performance: SPECsfs benchmark publication SONAS Performance February 2011 SONAS Performance: SPECsfs benchmark publication February 24, 2011 SPEC and the SPECsfs Benchmark SPEC is the Standard Performance Evaluation Corporation. SPEC is a prominent

More information

Benefits of Multi-Node Scale-out Clusters running NetApp Clustered Data ONTAP. Silverton Consulting, Inc. StorInt Briefing

Benefits of Multi-Node Scale-out Clusters running NetApp Clustered Data ONTAP. Silverton Consulting, Inc. StorInt Briefing Benefits of Multi-Node Scale-out Clusters running NetApp Clustered Data ONTAP Silverton Consulting, Inc. StorInt Briefing BENEFITS OF MULTI- NODE SCALE- OUT CLUSTERS RUNNING NETAPP CDOT PAGE 2 OF 7 Introduction

More information

DATA PROTECTION IN A ROBO ENVIRONMENT

DATA PROTECTION IN A ROBO ENVIRONMENT Reference Architecture DATA PROTECTION IN A ROBO ENVIRONMENT EMC VNX Series EMC VNXe Series EMC Solutions Group April 2012 Copyright 2012 EMC Corporation. All Rights Reserved. EMC believes the information

More information

Experiences in Clustering CIFS for IBM Scale Out Network Attached Storage (SONAS)

Experiences in Clustering CIFS for IBM Scale Out Network Attached Storage (SONAS) Experiences in Clustering CIFS for IBM Scale Out Network Attached Storage (SONAS) Dr. Jens-Peter Akelbein Mathias Dietz, Christian Ambach IBM Germany R&D 2011 Storage Developer Conference. Insert Your

More information

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure Nutanix Tech Note Virtualizing Microsoft Applications on Web-Scale Infrastructure The increase in virtualization of critical applications has brought significant attention to compute and storage infrastructure.

More information

IBM Active Cloud Engine/Active File Management. Kalyan Gunda

IBM Active Cloud Engine/Active File Management. Kalyan Gunda IBM Active Cloud Engine/Active File Management Kalyan Gunda kgunda@in.ibm.com Agenda Need of ACE? Inside ACE Use Cases Data Movement across sites How do you move Data across sites today? FTP, Parallel

More information

Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage

Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage A Dell Technical White Paper Dell Database Engineering Solutions Anthony Fernandez April 2010 THIS

More information

NetVault Backup Client and Server Sizing Guide 3.0

NetVault Backup Client and Server Sizing Guide 3.0 NetVault Backup Client and Server Sizing Guide 3.0 Recommended hardware and storage configurations for NetVault Backup 12.x September 2018 Page 1 Table of Contents 1. Abstract... 3 2. Introduction... 3

More information

CA485 Ray Walshe Google File System

CA485 Ray Walshe Google File System Google File System Overview Google File System is scalable, distributed file system on inexpensive commodity hardware that provides: Fault Tolerance File system runs on hundreds or thousands of storage

More information

Hitachi HQT-4210 Exam

Hitachi HQT-4210 Exam Volume: 120 Questions Question No: 1 A large movie production studio approaches an HDS sales team with a request to build a large rendering farm. Their environment consists of UNIX and Linux operating

More information

Nimble Storage Adaptive Flash

Nimble Storage Adaptive Flash Nimble Storage Adaptive Flash Read more Nimble solutions Contact Us 800-544-8877 solutions@microage.com MicroAge.com TECHNOLOGY OVERVIEW Nimble Storage Adaptive Flash Nimble Storage s Adaptive Flash platform

More information

VoltDB vs. Redis Benchmark

VoltDB vs. Redis Benchmark Volt vs. Redis Benchmark Motivation and Goals of this Evaluation Compare the performance of several distributed databases that can be used for state storage in some of our applications Low latency is expected

More information

Virtualization of the MS Exchange Server Environment

Virtualization of the MS Exchange Server Environment MS Exchange Server Acceleration Maximizing Users in a Virtualized Environment with Flash-Powered Consolidation Allon Cohen, PhD OCZ Technology Group Introduction Microsoft (MS) Exchange Server is one of

More information

EMC Celerra CNS with CLARiiON Storage

EMC Celerra CNS with CLARiiON Storage DATA SHEET EMC Celerra CNS with CLARiiON Storage Reach new heights of availability and scalability with EMC Celerra Clustered Network Server (CNS) and CLARiiON storage Consolidating and sharing information

More information

IBM Spectrum NAS, IBM Spectrum Scale and IBM Cloud Object Storage

IBM Spectrum NAS, IBM Spectrum Scale and IBM Cloud Object Storage IBM Spectrum NAS, IBM Spectrum Scale and IBM Cloud Object Storage Silverton Consulting, Inc. StorInt Briefing 2017 SILVERTON CONSULTING, INC. ALL RIGHTS RESERVED Page 2 Introduction Unstructured data has

More information

DELL EMC UNITY: BEST PRACTICES GUIDE

DELL EMC UNITY: BEST PRACTICES GUIDE DELL EMC UNITY: BEST PRACTICES GUIDE Best Practices for Performance and Availability Unity OE 4.5 ABSTRACT This white paper provides recommended best practice guidelines for installing and configuring

More information

Global Locking. Technical Documentation Global Locking

Global Locking. Technical Documentation Global Locking Lock The purpose of the feature is to prevent conflicts when two or more users attempt to change the same file on different Nasuni Filers. If you enable the feature for a directory and its descendants,

More information

White Paper. Extending NetApp Deployments with stec Solid-State Drives and Caching

White Paper. Extending NetApp Deployments with stec Solid-State Drives and Caching White Paper Extending NetApp Deployments with stec Solid-State Drives and Caching Contents Introduction Can Your Storage Throughput Scale to Meet Business Demands? Maximize Existing NetApp Storage Investments

More information

Warsaw. 11 th September 2018

Warsaw. 11 th September 2018 Warsaw 11 th September 2018 Dell EMC Unity & SC Series Midrange Storage Portfolio Overview Bartosz Charliński Senior System Engineer, Dell EMC The Dell EMC Midrange Family SC7020F SC5020F SC9000 SC5020

More information

I/O CANNOT BE IGNORED

I/O CANNOT BE IGNORED LECTURE 13 I/O I/O CANNOT BE IGNORED Assume a program requires 100 seconds, 90 seconds for main memory, 10 seconds for I/O. Assume main memory access improves by ~10% per year and I/O remains the same.

More information

ASN Configuration Best Practices

ASN Configuration Best Practices ASN Configuration Best Practices Managed machine Generally used CPUs and RAM amounts are enough for the managed machine: CPU still allows us to read and write data faster than real IO subsystem allows.

More information

LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance

LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance 11 th International LS-DYNA Users Conference Computing Technology LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance Gilad Shainer 1, Tong Liu 2, Jeff Layton

More information

A Comparative Study of Microsoft Exchange 2010 on Dell PowerEdge R720xd with Exchange 2007 on Dell PowerEdge R510

A Comparative Study of Microsoft Exchange 2010 on Dell PowerEdge R720xd with Exchange 2007 on Dell PowerEdge R510 A Comparative Study of Microsoft Exchange 2010 on Dell PowerEdge R720xd with Exchange 2007 on Dell PowerEdge R510 Incentives for migrating to Exchange 2010 on Dell PowerEdge R720xd Global Solutions Engineering

More information

Data Protection for Cisco HyperFlex with Veeam Availability Suite. Solution Overview Cisco Public

Data Protection for Cisco HyperFlex with Veeam Availability Suite. Solution Overview Cisco Public Data Protection for Cisco HyperFlex with Veeam Availability Suite 1 2017 2017 Cisco Cisco and/or and/or its affiliates. its affiliates. All rights All rights reserved. reserved. Highlights Is Cisco compatible

More information

QLE10000 Series Adapter Provides Application Benefits Through I/O Caching

QLE10000 Series Adapter Provides Application Benefits Through I/O Caching QLE10000 Series Adapter Provides Application Benefits Through I/O Caching QLogic Caching Technology Delivers Scalable Performance to Enterprise Applications Key Findings The QLogic 10000 Series 8Gb Fibre

More information

EMC XTREMCACHE ACCELERATES ORACLE

EMC XTREMCACHE ACCELERATES ORACLE White Paper EMC XTREMCACHE ACCELERATES ORACLE EMC XtremSF, EMC XtremCache, EMC VNX, EMC FAST Suite, Oracle Database 11g XtremCache extends flash to the server FAST Suite automates storage placement in

More information

Backup and archiving need not to create headaches new pain relievers are around

Backup and archiving need not to create headaches new pain relievers are around Backup and archiving need not to create headaches new pain relievers are around Frank Reichart Senior Director Product Marketing Storage Copyright 2012 FUJITSU Hot Spots in Data Protection 1 Copyright

More information

64-Bit Aggregates. Overview and Best Practices. Abstract. Data Classification. Technical Report. GV Govindasamy, NetApp April 2015 TR-3978

64-Bit Aggregates. Overview and Best Practices. Abstract. Data Classification. Technical Report. GV Govindasamy, NetApp April 2015 TR-3978 Technical Report 64-Bit Aggregates Overview and Best Practices GV Govindasamy, NetApp April 2015 TR-3978 Abstract Earlier releases of NetApp Data ONTAP used data block pointers in 32-bit format which limited

More information

DocuShare 6.6 Customer Expectation Setting

DocuShare 6.6 Customer Expectation Setting Customer Expectation Setting 2011 Xerox Corporation. All Rights Reserved. Unpublished rights reserved under the copyright laws of the United States. Contents of this publication may not be reproduced in

More information

B.H.GARDI COLLEGE OF ENGINEERING & TECHNOLOGY (MCA Dept.) Parallel Database Database Management System - 2

B.H.GARDI COLLEGE OF ENGINEERING & TECHNOLOGY (MCA Dept.) Parallel Database Database Management System - 2 Introduction :- Today single CPU based architecture is not capable enough for the modern database that are required to handle more demanding and complex requirements of the users, for example, high performance,

More information

GETTING GREAT PERFORMANCE IN THE CLOUD

GETTING GREAT PERFORMANCE IN THE CLOUD WHITE PAPER GETTING GREAT PERFORMANCE IN THE CLOUD An overview of storage performance challenges in the cloud, and how to deploy VPSA Storage Arrays for better performance, privacy, flexibility and affordability.

More information

DELL EMC ISILON F800 AND H600 I/O PERFORMANCE

DELL EMC ISILON F800 AND H600 I/O PERFORMANCE DELL EMC ISILON F800 AND H600 I/O PERFORMANCE ABSTRACT This white paper provides F800 and H600 performance data. It is intended for performance-minded administrators of large compute clusters that access

More information

RAIDIX Data Storage Solution. Clustered Data Storage Based on the RAIDIX Software and GPFS File System

RAIDIX Data Storage Solution. Clustered Data Storage Based on the RAIDIX Software and GPFS File System RAIDIX Data Storage Solution Clustered Data Storage Based on the RAIDIX Software and GPFS File System 2017 Contents Synopsis... 2 Introduction... 3 Challenges and the Solution... 4 Solution Architecture...

More information

Condusiv s V-locity VM Accelerates Exchange 2010 over 60% on Virtual Machines without Additional Hardware

Condusiv s V-locity VM Accelerates Exchange 2010 over 60% on Virtual Machines without Additional Hardware openbench Labs Executive Briefing: March 13, 2013 Condusiv s V-locity VM Accelerates Exchange 2010 over 60% on Virtual Machines without Additional Hardware Optimizing I/O for Increased Throughput and Reduced

More information

Microsoft Office SharePoint Server 2007

Microsoft Office SharePoint Server 2007 Microsoft Office SharePoint Server 2007 Enabled by EMC Celerra Unified Storage and Microsoft Hyper-V Reference Architecture Copyright 2010 EMC Corporation. All rights reserved. Published May, 2010 EMC

More information

EMC Business Continuity for Microsoft Applications

EMC Business Continuity for Microsoft Applications EMC Business Continuity for Microsoft Applications Enabled by EMC Celerra, EMC MirrorView/A, EMC Celerra Replicator, VMware Site Recovery Manager, and VMware vsphere 4 Copyright 2009 EMC Corporation. All

More information

Chapter 12: File System Implementation

Chapter 12: File System Implementation Chapter 12: File System Implementation Chapter 12: File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management Efficiency

More information

Exam : Title : High-End Disk for Open Systems V2. Version : DEMO

Exam : Title : High-End Disk for Open Systems V2. Version : DEMO Exam : 000-968 Title : High-End Disk for Open Systems V2 Version : DEMO 1.An international company has a heterogeneous IBM storage environment with two IBM DS8700 systems in a Metro Mirror relationship.

More information

Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage

Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage Database Solutions Engineering By Raghunatha M, Ravi Ramappa Dell Product Group October 2009 Executive Summary

More information

Benefits of Automatic Data Tiering in OLTP Database Environments with Dell EqualLogic Hybrid Arrays

Benefits of Automatic Data Tiering in OLTP Database Environments with Dell EqualLogic Hybrid Arrays TECHNICAL REPORT: Performance Study Benefits of Automatic Data Tiering in OLTP Database Environments with Dell EqualLogic Hybrid Arrays ABSTRACT The Dell EqualLogic hybrid arrays PS6010XVS and PS6000XVS

More information

Storage Designed to Support an Oracle Database. White Paper

Storage Designed to Support an Oracle Database. White Paper Storage Designed to Support an Oracle Database White Paper Abstract Databases represent the backbone of most organizations. And Oracle databases in particular have become the mainstream data repository

More information

VERITAS Storage Foundation 4.0 TM for Databases

VERITAS Storage Foundation 4.0 TM for Databases VERITAS Storage Foundation 4.0 TM for Databases Powerful Manageability, High Availability and Superior Performance for Oracle, DB2 and Sybase Databases Enterprises today are experiencing tremendous growth

More information

System recommendations for version 17.1

System recommendations for version 17.1 System recommendations for version 17.1 This article contains information about recommended hardware resources and network environments for version 17.1 of Sage 300 Construction and Real Estate. NOTE:

More information

GFS: The Google File System. Dr. Yingwu Zhu

GFS: The Google File System. Dr. Yingwu Zhu GFS: The Google File System Dr. Yingwu Zhu Motivating Application: Google Crawl the whole web Store it all on one big disk Process users searches on one big CPU More storage, CPU required than one PC can

More information

EMC Backup and Recovery for Microsoft SQL Server

EMC Backup and Recovery for Microsoft SQL Server EMC Backup and Recovery for Microsoft SQL Server Enabled by Microsoft SQL Native Backup Reference Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information

More information

vsan 6.6 Performance Improvements First Published On: Last Updated On:

vsan 6.6 Performance Improvements First Published On: Last Updated On: vsan 6.6 Performance Improvements First Published On: 07-24-2017 Last Updated On: 07-28-2017 1 Table of Contents 1. Overview 1.1.Executive Summary 1.2.Introduction 2. vsan Testing Configuration and Conditions

More information

Performance comparisons and trade-offs for various MySQL replication schemes

Performance comparisons and trade-offs for various MySQL replication schemes Performance comparisons and trade-offs for various MySQL replication schemes Darpan Dinker VP Engineering Brian O Krafka, Chief Architect Schooner Information Technology, Inc. http://www.schoonerinfotech.com/

More information

Storage Optimization with Oracle Database 11g

Storage Optimization with Oracle Database 11g Storage Optimization with Oracle Database 11g Terabytes of Data Reduce Storage Costs by Factor of 10x Data Growth Continues to Outpace Budget Growth Rate of Database Growth 1000 800 600 400 200 1998 2000

More information

EsgynDB Enterprise 2.0 Platform Reference Architecture

EsgynDB Enterprise 2.0 Platform Reference Architecture EsgynDB Enterprise 2.0 Platform Reference Architecture This document outlines a Platform Reference Architecture for EsgynDB Enterprise, built on Apache Trafodion (Incubating) implementation with licensed

More information

EMC VPLEX Geo with Quantum StorNext

EMC VPLEX Geo with Quantum StorNext White Paper Application Enabled Collaboration Abstract The EMC VPLEX Geo storage federation solution, together with Quantum StorNext file system, enables a global clustered File System solution where remote

More information

Synology Alex Wang CEO, Synology America

Synology Alex Wang CEO, Synology America Win a DS718+! Share what you see today using #Synology2019NYC Once posted, send the link to synology2019nyc@synology.com Entries close October 19, 2018 The winner will be notified via e-mail Synology 2019

More information

AUTOMATING IBM SPECTRUM SCALE CLUSTER BUILDS IN AWS PROOF OF CONCEPT

AUTOMATING IBM SPECTRUM SCALE CLUSTER BUILDS IN AWS PROOF OF CONCEPT AUTOMATING IBM SPECTRUM SCALE CLUSTER BUILDS IN AWS PROOF OF CONCEPT By Joshua Kwedar Sr. Systems Engineer By Steve Horan Cloud Architect ATS Innovation Center, Malvern, PA Dates: Oct December 2017 INTRODUCTION

More information

Exchange Server 2007 Performance Comparison of the Dell PowerEdge 2950 and HP Proliant DL385 G2 Servers

Exchange Server 2007 Performance Comparison of the Dell PowerEdge 2950 and HP Proliant DL385 G2 Servers Exchange Server 2007 Performance Comparison of the Dell PowerEdge 2950 and HP Proliant DL385 G2 Servers By Todd Muirhead Dell Enterprise Technology Center Dell Enterprise Technology Center dell.com/techcenter

More information

EI 338: Computer Systems Engineering (Operating Systems & Computer Architecture)

EI 338: Computer Systems Engineering (Operating Systems & Computer Architecture) EI 338: Computer Systems Engineering (Operating Systems & Computer Architecture) Dept. of Computer Science & Engineering Chentao Wu wuct@cs.sjtu.edu.cn Download lectures ftp://public.sjtu.edu.cn User:

More information

Improve Web Application Performance with Zend Platform

Improve Web Application Performance with Zend Platform Improve Web Application Performance with Zend Platform Shahar Evron Zend Sr. PHP Specialist Copyright 2007, Zend Technologies Inc. Agenda Benchmark Setup Comprehensive Performance Multilayered Caching

More information

Exam : S Title : Snia Storage Network Management/Administration. Version : Demo

Exam : S Title : Snia Storage Network Management/Administration. Version : Demo Exam : S10-200 Title : Snia Storage Network Management/Administration Version : Demo 1. A SAN architect is asked to implement an infrastructure for a production and a test environment using Fibre Channel

More information

Chapter 10: Mass-Storage Systems

Chapter 10: Mass-Storage Systems Chapter 10: Mass-Storage Systems Silberschatz, Galvin and Gagne 2013 Chapter 10: Mass-Storage Systems Overview of Mass Storage Structure Disk Structure Disk Attachment Disk Scheduling Disk Management Swap-Space

More information

EMC CLARiiON CX3 Series FCP

EMC CLARiiON CX3 Series FCP EMC Solutions for Microsoft SQL Server 2005 on Windows 2008 EMC CLARiiON CX3 Series FCP EMC Global Solutions 42 South Street Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com www.emc.com Copyright 2008

More information

Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell EqualLogic Storage Arrays

Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell EqualLogic Storage Arrays Dell EqualLogic Best Practices Series Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell EqualLogic Storage Arrays A Dell Technical Whitepaper Jerry Daugherty Storage Infrastructure

More information

Virtualizing SQL Server 2008 Using EMC VNX Series and VMware vsphere 4.1. Reference Architecture

Virtualizing SQL Server 2008 Using EMC VNX Series and VMware vsphere 4.1. Reference Architecture Virtualizing SQL Server 2008 Using EMC VNX Series and VMware vsphere 4.1 Copyright 2011, 2012 EMC Corporation. All rights reserved. Published March, 2012 EMC believes the information in this publication

More information