Accelerate with ATS: SVC DH8 and V7.3 Code Updates
Byron Grossnickle, N.A. Storage Specialty Team
Advanced Technical Skills (ATS) North America


Agenda
- New 2145-DH8 hardware
- Software enhancements in SVC Version 7.3

SAN Volume Controller 2145-DH8 Hardware

Enhancements in SVC DH8
- No separate node UPS required: avoids mis-cabling issues and data-center daisy-chained UPS concerns
- Dual, redundant (N+1) PSUs; no external redundant AC power switch
- Two boot drives: boot data is mirrored, so the node still boots in the presence of a drive failure; dump data is striped for performance
- Superior system setup: no need to input IP addresses via a front panel
- Up to 12 host I/O ports, allowing traffic separation and a variety of port types

SVC DH8 Front View (photo): two 300GB 10K SAS boot drives, Battery 1, Battery 2, system indicators.

SVC DH8 Internal View (photo): PCIe riser cards, PSUs, DIMMs, CPU, fans, boot drives, batteries.

SVC DH8 Hardware Overview
- 8-core CPU with 32GB of memory for SVC: Intel Xeon E5-2650 v2, 2.6 GHz (Ivy Bridge)
- Minimum of 1 host interface card (HIC) for I/O; a 2nd I/O HIC and a SAS HIC can be added on this CPU
- System battery; two boot/dump drives
- The 2nd CPU is optional and comes with an extra 32GB of memory; it is required for Real-time Compression (RTC) and to access the 3rd I/O HIC
- At least one Compression Accelerator card (PCIe Gen3) is required for RTC
- Note: PCIe Gen3 is roughly double the PCIe Gen2 used in previous models (985MB/s vs. 500MB/s per lane, full duplex); 8 lanes per slot gives about 8GB/s per slot

SVC DH8 Rear View (photo): management ports, PCIe expansion slots 1-6, 750W PSUs, 1Gb iSCSI ports, 4 USB ports, technician port.

SVC Flexibility

  Item                          | Min                  | Max in R1
  CPU                           | 1x 8-core Ivy Bridge | 2x 8-core Ivy Bridge
  Memory                        | 32GB                 | 64GB
  FC / 10Gbps Ethernet cards    | 1                    | 3*
  12Gb SAS cards                | 0                    | 1
  Compression accelerator cards | 0                    | 2
  Boot drives                   | 2                    | 2

Supported GA configurations (3 variants):

  Memory | CPUs      | I/O cards | Compression cards | Compression support
  32GB   | 1x 8-core | 1 to 2    | 0                 | No
  64GB   | 2x 8-core | 1 to 3*   | 0                 | No
  64GB   | 2x 8-core | 1 to 3*   | 1 to 2            | Yes

* Three FC cards are supported, but only one 10Gbps Ethernet card in R1.
* The extra 32GB of RAM and the right-hand card slots require a 2nd CPU to be installed; without it, the user can't use the extra memory or half of the expansion slots.
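To confirm which of these variants a given node actually has, the CLI can report a node's hardware configuration. A minimal sketch, assuming a node named node1 (the exact output fields vary by code level):

  # Show the hardware configuration of a node: CPUs, memory, adapter cards
  svcinfo lsnodehw node1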

SVC Flexibility: Slot Assignments

  CPU 1 attach (top of node)  | CPU 2 attach
  Slot 1: I/O card (FC only)  | Slot 4: Compression Accelerator card
  Slot 2: I/O card            | Slot 5: I/O card
  Slot 3: SAS (for expansion) | Slot 6: Compression Accelerator card

- There must be at least one host I/O card, but it does not have to be in a particular slot.
- The 10Gbps Ethernet card will not fit in PCIe expansion slot 1 or slot 4; this will be fixed in the future.
- The SAS card should be in slot 3.
- It is harder to remove an SFP from slot 1 or slot 4, so if there is only one HIC and one microprocessor it is best to put the HIC in slot 2.
- A compression card can be in any slot connected to the second microprocessor (i.e., in PCIe riser card assembly 2, nearest the PSUs).

10Gb Ethernet Card (FCoE and iSCSI)
- The new 4-port 10GbE card is supported only in the new SVC DH8 and the Storwize V7000 2076-524.
- The card is delivered with the SFPs fitted, unless it is a FRU.
- SVC 7.3.0 supports only 1x 10GbE adapter in each of these platforms.
- Only IBM-supported 10Gb SFPs should be used.
- Each adapter port has an amber and a green LED to indicate port status (the fault LED is not used in 7.3.0).

  Green LED | Meaning
  On        | Link established
  Off       | No link

- iSCSI access to volumes is possible via the customer's 10Gbps Ethernet network.
- FCoE frame routing should be done by the FCoE switch; SVC doesn't support multi-hop FCoE.
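As a minimal CLI sketch of bringing up iSCSI on one of these ports (the node ID, port ID, and addresses here are hypothetical):

  # Assign an IP address to Ethernet port 3 of node 1 for iSCSI host access
  svctask cfgportip -node 1 -ip -mask -gw 3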

Compression Accelerator Adapter
- Up to 2 compression accelerator adapters can be installed; each additional adapter improves I/O performance when using compressed volumes.
- Intel QuickAssist technology is used; IBM is the first in the industry to integrate this technology into its products.
- The 2nd CPU and the extra 32GB of memory are compulsory with the compression accelerator adapter.
- At least one compression accelerator adapter is compulsory if users wish to use compression on the SVC DH8: in an I/O group containing an SVC DH8 with no compression accelerator, an attempt to create the first compressed volume will fail. The addnode command will likewise fail when trying to add an SVC DH8 without a compression accelerator to an I/O group that has compressed volumes.
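For context, a compressed volume is created with the -compressed flag on mkvdisk; it is this step that fails if the I/O group's DH8 nodes lack a compression accelerator. A minimal sketch (the pool and volume names are hypothetical):

  # Create a 100GB compressed, thin-provisioned volume in pool Pool0
  svctask mkvdisk -mdiskgrp Pool0 -size 100 -unit gb -rsize 2% -autoexpand -compressed -name cvol0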

SVC DH8: Compression Support
- Base hardware: no RTC support.
- Hardware option 1 (2nd CPU, 1x Compression Accelerator adapter, +32GB memory): 8 cores dedicated to RTC; 1 Compression Accelerator adapter; 38GB of memory for the compression stack (32 additional + 6 from the SVC stack).
- Hardware option 2 (adds a 2nd Compression Accelerator): 8 cores dedicated to RTC (same as option 1); 2 Compression Accelerator adapters (doubles bandwidth); 38GB of memory for the compression stack (32 additional + 6 from the SVC stack).
- Note: the 2nd CPU is required to open the PCIe lanes as well as to schedule traffic into and out of the compression accelerator cards.

SVC DH8 Expansion Enclosure 2145-24F
- R1 (7.3.0) supports 2 expansion enclosures per I/O group.
- Ports 1 and 3 of the 12Gb SAS card can be used to attach 2U24 expansion enclosures of flash drives.
- The expansion enclosures are physically identical to the V7000 Gen2 expansion enclosures but have a different product ID: the SVC DH8 cannot use V7000 Gen2 expansion enclosures, and the V7000 Gen2 cannot use the SVC DH8 expansion enclosure.

Expansion Enclosures SAS Attach (diagram): Node 1 and Node 2 of the I/O group each attach to Expansion Enclosure 1 and Expansion Enclosure 2.

SVC CG8 vs. DH8

  Attribute (per node)  | SVC CG8                                          | SVC DH8
  CPU                   | 2x 6-core Westmere                               | 2x 8-core Ivy Bridge
  Controller memory     | 24GB to 48GB                                     | 32GB to 64GB
  Host I/O              | 2x 1GbE; 4x to 8x 8Gb FC; 2x 10GbE (2-card max)  | 3x 1GbE; 0 to 12x 8Gb FC; 0 to 4x 10GbE (3 I/O card max)
  Compression resources | 8 cores (with 2nd CPU fitted)                    | 8 cores (with 2nd CPU fitted); 1 or 2 Compression Accelerator cards
  Drive expansion       | 4 flash drives local to node (RAID 0,1,10 only)  | 48 flash drives shared by 2 nodes (RAID 0,1,5,6,10)
  SAS fabric            | 6Gb SAS                                          | 12Gb SAS

Technician Port (1)
- The technician port is marked with a "T" (it is Ethernet port 4).
- The technician port is used for initialization of the system: as soon as the system is installed and the user connects to the technician port, they are directed to the new init tool welcome panel.
- The port runs a dedicated DHCP server to facilitate service/maintenance and out-of-box setup, in lieu of the front panel.
- The service IP will NOT be associated with the technician port; it will continue to be assigned to Ethernet port 1 (the lowest Ethernet port for management).
- If the user's laptop has DHCP configured (nearly all do), it will configure itself automatically and bring up the initialization screen.
- If not, set the laptop's Ethernet adapter to an IP in the range 192.168.0.2 to 192.168.0.20.
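If DHCP is off, a static address from the stated range works. A minimal sketch for a Linux laptop (the interface name eth0 is an assumption):

  # Give the laptop an address the technician port will accept (192.168.0.2 to 192.168.0.20)
  sudo ip addr add 192.168.0.2/24 dev eth0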

Technician Port (2) (screenshots): 1) example when the enclosure has a stored cluster ID while attempting to create a cluster; 2) waiting panel while the system initialization completes.

Technician Port (3) (screenshot).

SVC DH8: Hardware Upgrade (1)
- The existing system software must be at a version that supports the new node: if a node is being replaced by a 2145-DH8, the system software version must be v7.3.0 or later.
- If the node being replaced is a CG8, CF8, or 8A4 and the replacement node is a DH8, the replacement node must have a four-port FC card in slot 1. If the node being replaced has a second I/O card in addition to the required FC card, the replacement node must have the same card in slot 2.
- The SVC DH8 uses the new 80c product ID, which provides a new WWNN/WWPN scheme. Native WWPNs follow 500507680c<S><P>XXXX, where <S> is the PCIe slot number (1-6), <P> is the port number in that slot (1-4), and XXXX is the sequence number of the SVC DH8 assigned at manufacturing, which may be changed by the user if needed for migration. The WWNN uses 0 for the <S> and <P> fields.
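Reading the scheme off a concrete value may help; the sequence number 1A2B below is purely hypothetical:

  WWPN for PCIe slot 2, port 3, sequence number 1A2B:  500507680C231A2B
  WWNN for the same node (slot and port fields zeroed): 500507680C001A2B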

SVC DH8: Hardware Upgrade (2) (diagrams): the new WWPN scheme for the SVC DH8, and upgrading to the SVC DH8.

Best Practice Port Designations (diagram).

SVC 2-Node (1 I/O Group) Performance

  Metric           | SVC CG8 | SVC DH8
  Cache Read MB/s  | 6,050   | 17,000
  Cache Write MB/s | 3,500   | 7,000
  Cache Read IOPs  | 800,000 | 1,150,000
  Cache Write IOPs | 300,000 | 500,000
  Disk Read MB/s   | 5,380   | 14,000
  Disk Write MB/s  | 2,800   | 4,000
  Disk Read IOPs   | 365,000 | 700,000
  Disk Write IOPs  | 115,000 | 200,000
  70/30 Mixed IOPs | 200,000 | 395,000

Summary: the DH8 delivers 2x the IOPs and up to 3x the MB/s of the CG8.
- SVC tests used FlashSystem 840 and 820 backend storage controllers.
- The DH8 figures include all 3 FC I/O cards; bandwidth scales linearly from 1 through 3 cards.
- 2 cards are required for maximum IOPs; 1 card gives approximately half, or roughly the CG8 equivalent.

SVC Compression Performance (One I/O Group)

  Workload (compressed) | SVC CG8      | New SVC DH8
  Read Miss IOPS        | 2,600-50,000 | 71,000-175,000
  Write Miss IOPS       | 1,200-16,000 | 28,000-115,000
  DB-like               | 2,200-40,000 | 59,000-149,000

- Compressed performance shows a range depending on I/O distribution.
- Compressed performance is better than uncompressed in some cases because of fewer I/Os to the drives and additional cache benefits.

Statements of Direction
- IBM intends to enhance the new SVC engine and new Storwize V7000 to support 16Gb Fibre Channel connectivity.
- The second CPU with 32GB memory feature on the SVC Storage Engine Model DH8 provides a performance benefit only when Real-time Compression is used. IBM intends to enhance IBM Storwize Family Software for SVC to extend support of this feature to also benefit uncompressed workloads.

SAN Volume Controller V7.3 Updates

Storwize Family Software Version 7.3
- New Storwize V7000 unit: 2x performance, 2x connectivity, up to 1,056 drives (clustered); can be clustered with Gen1 models
- New cache design
- Easy Tier v3
- Storage pool balancing
- Miscellaneous enhancements

New cache design: why re-architect?
- More scalable for the future: required for supporting more volumes, more nodes in the cluster, 64-bit user addressing beyond 28GB, larger memory sizes in nodes/canisters, and more CPU cores
- Reduces the number of IOPs that copy services drive directly to the backend storage (most beneficial to Storwize systems)
- Minimizes FlashCopy prepare time to a second or less
- RtC benefits from the cache underneath it
- Read-only cache mode, in addition to the read/write or none available today
- The preferred node of a volume can be switched within the same I/O group

Cache Architecture pre-v7.3.x (diagram): host I/O flows through the front end, remote copy, and a single cache, which sits above FlashCopy, volume mirror, and TP/RtC, with virtualization, RAID 1/5/6/10, and the backend below (FWL = forwarding layer).

Cache Architecture V7.3.x (diagram): the single cache is split in two; the upper cache sits just below the front end and remote copy, while the lower cache sits beneath FlashCopy, volume mirror, and TP/RtC, directly above virtualization, RAID 1/5/6/10, and the backend (FWL = forwarding layer).

Upper Cache
- A simple two-way write cache between the node pair of the I/O group; this is its primary function
- Receives the write, transfers it to the secondary node of the I/O group, and destages to the lower cache
- Very limited read cache; read caching is mainly provided by the lower cache
- Same sub-millisecond response time; partitioned the same way as the original cache

Lower Cache
- An advanced two-way write cache between the node pair of an I/O group
- The primary read cache
- Write caching for host I/O as well as advanced-function I/O
- Read/write caching sits beneath the copy services, for vastly improved performance in FlashCopy, Thin Provisioning, RtC, and Volume Mirroring

Upper Cache Allocation (fixed)
- 4GB V3700: 128MB
- All other platforms: 256MB
- The rest of the cache is designated to the lower cache

Changing the preferred node in 7.3
- In 7.3 the movevdisk command can be used to change the preferred node within the I/O group (see the sketch below)
- Prior to 7.3, this could not be done without using Non-Disruptive Volume Move (NDVM) between I/O groups
- If no new I/O group is specified, the volume stays in the same I/O group but changes to the specified preferred node
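A minimal CLI sketch (the volume and node names are hypothetical):

  # Change the preferred node of volume vol0 to node2 without moving I/O groups
  svctask movevdisk -node node2 vol0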

SVC Enhanced Stretch Cluster, Old Cache Design (diagram): write data is cached on the preferred node at Site 1 and mirrored to the non-preferred node at Site 2, then destaged to the volume mirror copies at both sites (Copy 1 at Site 1, Copy 2 at Site 2); the data is replicated twice over the ISL.

SVC Enhanced Stretch Cluster, New Cache Design 7.3 (diagram): the upper caches exchange the write data together with location information, and the mirror reply also carries location; the data itself is replicated only once across the ISL, after which a token write-data message with location drives each node's lower caches (LC_1, LC_2) to destage the local copy (Copy 1 preferred / Copy 2 non-preferred at Site 1, the reverse at Site 2).

Stretch Cluster, Old Cache, with Compression at Both Sites (diagram): the uncompressed write is mirrored between the node caches and compressed separately at each site before being destaged through the MDisk forwarding layer; data is replicated twice over the ISL, once compressed and once uncompressed.

Enhanced Stretch Cluster with Compression at Both Sites, 7.3 (diagram): the uncompressed write is mirrored between the upper caches (UCA); RtC then changes the buffer location and invalidates the UCA location, and the compressed write data for each copy is exchanged between the lower caches (LCA1, LCA2) before destage; data is replicated three times over the ISL, once uncompressed and twice compressed.

Easy Tier v3: Support for up to 3 Tiers
- Supports any combination of 1-3 tiers
- MDisks in SVC will always show up as Enterprise tier; unless you are using the SSD expansion drawer, you must designate the tier on SVC
- On other members of the Storwize family, the tier of internal disk is known
- ENT is Enterprise 15K/10K SAS or FC; NL is NL-SAS 7.2K or SATA

Supported combinations:

  Tier 0    | Tier 1 | Tier 2
  Flash/SSD | ENT    | NL
  Flash/SSD | ENT    | NONE
  Flash/SSD | NL     | NONE
  NONE      | ENT    | NL
  Flash/SSD | NONE   | NONE
  NONE      | ENT    | NONE
  NONE      | NONE   | NL

Easy Tier: Workload Skew Drives Benefits (chart of percent of workload, small I/Os and MB, vs. percent of extents): 50% of the extents do 10% of the MB and virtually no random IOPS, while about 5% of the extents generate 58% of the random IOPS and 33% of the MB.

Easy Tier v3: Planning
- Deploy flash and enterprise disk for performance; grow capacity with low-cost disk
- Easy Tier moves data automatically between tiers: active data migrates up, less active data migrates down
- New volumes will use extents from Tier 1 initially; if there is no free Tier 1 capacity, Tier 2 will be used if available, otherwise capacity comes from Tier 0
- It is best to keep some free extents in the pool; Easy Tier will attempt to keep some free per tier
- Plan for one extent times the number of MDisks in the storage pool, plus 16, as Easy Tier will try to keep some extents free in Tiers 0 and 1 if possible. E.g., with 20 MDisks in an Easy Tier storage pool of either two or three MDisk tiers: (20 * 1) + 16 = 36 extents free in the pool if possible
- As long as one extent is free in the pool, Easy Tier can operate; if there are no free extents in the pool, nothing will change until more capacity is added to the pool

Easy Tier v3: Automated Storage Pool Balancing
- Any storage medium has a performance threshold: once IOPS on an MDisk exceed this threshold, I/O response time increases significantly
- Knowing the performance threshold, Easy Tier can:
  - Avoid overloading MDisks by migrating extents
  - Protect an upper tier's performance by demoting extents when the upper tier's MDisks are overloaded
  - Balance workload within tiers based on utilization
- An XML file records each MDisk's threshold so that intelligent migration decisions can be made automatically

Easy Tier v3: Automated Storage Pool Balancing (continued)
- The XML files have stanzas for various drive classes, RAID types/widths, and workload characteristics to determine MDisk thresholds
- Internal drives on Storwize systems are known to the software, so there are more specific stanzas for them
- For externally virtualized LUNs, the software doesn't know what is behind them, so thresholds are based on the controller

SVC Requires Hints
- SVC knows which storage array a particular MDisk comes from, but that is all: SVC does NOT own the disks and therefore does not definitively know their performance characteristics, unlike the other members of the Storwize family, which own their drives
- By default, all MDisks are marked as Enterprise; you must manually designate the tier to which each MDisk belongs: Flash, Enterprise, or Near-line (see the sketch below)
- From these two things, Easy Tier uses the XML file to know how hard to drive a particular MDisk
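A minimal CLI sketch of designating a tier (the MDisk name is hypothetical, and the exact tier keywords, here the 7.3-era ssd / enterprise / nearline, vary by code level):

  # Mark an externally virtualized MDisk as flash so Easy Tier drives it accordingly
  svctask chmdisk -tier ssd mdisk7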

Easy Tier Adjustments
If Easy Tier happens to guess wrong, the load Easy Tier places on a particular MDisk can be adjusted with the chmdisk command from the command line, as sketched below.
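A hedged sketch of that adjustment (the -easytierload parameter and its low/medium/high/very_high values are from the 7.3-era CLI, and the MDisk name is hypothetical):

  # Tell Easy Tier to put a heavier load on this MDisk than its default rating
  svctask chmdisk -easytierload high mdisk7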

Easy Tier v3: Automated Storage Pool Balancing Example

Configuration:
- Drives: 24x 300GB 15K RPM drives
- MDisks: 3 RAID-5 arrays; total MDisk size 5.44TB
- Volumes: Vol_0, Vol_1, Vol_2, Vol_3, each 32GB (total volume size 128GB); all volumes are created on MDisk0 initially

Performance improved by balancing the workload across all 3 MDisks. This is provided as basic storage functionality, with no requirement for an Easy Tier license.

Easy Tier v3: STAT Tool
- Provides recommendations on adding additional tier capacity, and the performance impact
- Tier 0: flash; Tier 1: enterprise disk (15K and 10K); Tier 2: near-line disk (7.2K)
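A rough sketch of feeding the tool, offered as an assumption about the workflow rather than a documented procedure (the cluster IP, the heat-file name pattern, and the /dumps/easytier location are all assumptions):

  # Copy the Easy Tier heat file off the config node, then open it with STAT on a workstation
  pscp -unsafe superuser@cluster_ip:/dumps/easytier/dpa_heat.* .
  STAT.exe dpa_heat.node1.data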

Easy Tier v3: Workload Skew Curve
- Generates a skew report of the workload
- The workload skew report can be read directly by Disk Magic

Easy Tier v3: Workload Categorization (chart): extents in each pool and tier are categorized as Active, ActiveLG, Low, Inactive, or Unallocated.

Easy Tier v3: Data Movement Daily Report
- Generates a daily (24-hour) CSV-formatted report of Easy Tier data movements

Miscellaneous Enhancements
- All pool settings can now be changed from the GUI
- Read-only cache mode on volumes (see the sketch below)
- 512 compressed volumes per I/O group are now allowed with the 2145-DH8
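A minimal CLI sketch of the new cache mode (the volume name is hypothetical; readwrite and none remain the other options):

  # Put a volume's cache into read-only mode
  svctask chvdisk -cache readonly vol0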
