Slide 0 Welcome to this Web Based Training session introducing the ETERNUS DX80 S2, DX90 S2, DX410 S2 and DX440 S2 storage systems from Fujitsu.

Slide 1 This training module is divided into six main chapters. The chapter "Product Lineup" introduces the ETERNUS DX S2 models and their main features. In the chapter "High Performance" we will see which features contribute to the enhanced performance of the S2 generation models. ETERNUS DX S2 introduces many possibilities for flexible expandability. Operability of the S2 models is even easier than before. ETERNUS has always been known for its reliability; here too the S2 models introduce new functionality. The chapter "System Modules and Components" concludes this Web Based Training with an overview of the modular design of the ETERNUS DX S2.

Slide 2 Let us start with the first chapter, "Product Lineup". It gives you a brief overview of the new product features and a comparison with the earlier models, followed by a specification overview.

Slide 3 ETERNUS DX S2 models are designed to meet the needs of all environments from small to large. The major new and enhanced functions are: Higher performance; the 2nd generation of the ETERNUS DX delivers faster I/O and more throughput. Scalability has been enhanced in the following ways: disk expansion possibilities have been extended, more host connections are supported and cache capacity can be extended further than before. Host server connection options have been extended to include 10 Gbit/s iSCSI and FCoE interfaces. Internally the hard disk drives communicate with the Controller Modules via a faster SAS 2.0 backend bus. The ETERNUS DX S2 models are easier to manage thanks to the identical Graphical User Interface and enhanced management features. Reliability features have been improved in the DX400 models by introducing a new Cache Protector mechanism and by supporting remote copy functionality between all DX models that have this support, from the Entry DX90 to the Enterprise DX8000 models. User management now has an improved role-based concept that provides better security in managing the ETERNUS DX S2 systems. Green storage features have been improved by minimizing overall power consumption and by occupying less rack space.

Slide 4 ETERNUS DX S2 models give the customer an optimal storage platform that balances cost, performance and capacity even better than the earlier models. Here you can see a table showing a more detailed comparison of the ETERNUS DX80 S2 and ETERNUS DX90 S2.

The main differences between these two models are the 4 versus 8 Gigabytes of cache capacity and the maximum number of disks: 120 for the DX80 S2 and 240 for the DX90 S2. The ETERNUS DX80 S2 and DX90 S2 can use both 3.5" and 2.5" Enclosures, also in a mixed configuration. Further technical details are explained later in this session.

Slide 5 As with the Entry models, the main difference between the two Midrange models is the maximum cache size and the maximum number of disks. Unlike the DX80 and DX90, the Controller Module of the DX400 does not hold any disk drives, and therefore the minimum configuration is always one Controller Enclosure and one Drive Enclosure. The Drive Enclosures with 3.5 inch and 2.5 inch form factors are the same for both the Entry and Midrange models.

Slide 6 The new ETERNUS DX S2 family is based on a uniform Fujitsu brand design concept. Each maintainable module has a green touch point to easily identify the exchangeable component. All modules use unified LED coloring: green signals normal status and amber an error situation.

Slide 7 A comparison with the earlier ETERNUS DX Entry models shows the improvements in performance, functionality and number of disk drives. The basic model DX60 is not changed at the moment. The DX80 is now replaced by the DX80 S2. The maximum number of physical host connections per ETERNUS DX Entry system is now eight. The maximum cache capacity is still 4 Gigabytes, and the maximum number of disk drives has not changed either. Thin Provisioning optimizes the utilization of available storage and is now, with the S2 models, also available in the ETERNUS DX Entry systems. The DX90 is replaced by the DX90 S2 and supports, as before, a maximum of eight host connectivity ports. The cache capacity is increased to eight Gigabytes and the total number of hard disk drives is now 240. Thin Provisioning is also available for the DX90 S2 as a new feature.

Slide 8 Comparing the Midrange S2 models with their predecessors shows significant improvements in the following areas: with the DX410 S2 the maximum number of host ports has increased to 16, the maximum cache size has doubled to 16 Gigabytes and the maximum number of disks has more than quadrupled to 480. Let's now have a look at the equally impressive enhancements that the DX440 S2 brings along: host ports have doubled to 32, the maximum cache size is now an impressive 96 Gigabytes and the maximum drive count has more than doubled to 960.
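The maxima narrated on the last two slides can be collected into a small lookup table; here is a minimal Python sketch (the figures come from slides 7 and 8 above, the data structure and function name are our own):

```python
# Maximum host ports, cache (GB) and drive counts per ETERNUS DX S2 model,
# as narrated on slides 7 and 8 of this training module.
S2_MAXIMA = {
    "DX80 S2":  {"host_ports": 8,  "cache_gb": 4,  "drives": 120},
    "DX90 S2":  {"host_ports": 8,  "cache_gb": 8,  "drives": 240},
    "DX410 S2": {"host_ports": 16, "cache_gb": 16, "drives": 480},
    "DX440 S2": {"host_ports": 32, "cache_gb": 96, "drives": 960},
}

def growth(model_a: str, model_b: str, metric: str) -> float:
    """Factor by which `metric` grows when stepping up from model_a to model_b."""
    return S2_MAXIMA[model_b][metric] / S2_MAXIMA[model_a][metric]

print(growth("DX410 S2", "DX440 S2", "cache_gb"))  # 6.0
```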

Slide 9 This table shows the product specifications in comparison to the previous ETERNUS DX80 and DX90 models. A single core Intel Xeon CPU with a higher clock rate of 1.73 GHz contributes in part to the better throughput performance. The DX90 S2 can be configured with a maximum of 4 GB of cache memory per Controller Module; thus, in a dual CM configuration the maximum total cache capacity is 8 GB. Fibre Channel over Ethernet is a new host interface type in the ETERNUS DX80 S2 and DX90 S2 models. For the iSCSI host interface a 10 Gigabit per second option is now also available. For small businesses or for small cluster configurations the DX80 S2 and DX90 S2 continue to be available with a SAS host interface with a connection speed of three or six Gigabits per second. The Controller Modules of both the DX80 and DX90 have four slots for Channel Adapter modules; a minimum of two CAs must be fitted. The backend bus system used between the Controller and Drive Enclosures was in the past a SAS 1.0 bus with a maximum bandwidth of three Gigabits per second, which with the DX Entry S2 models is now doubled to six Gigabits per second. As already mentioned, the maximum number of disks has doubled to 240 for the DX90 S2. Please note that in mixed use - when using both 3.5" and 2.5" disks - the maximum number of disks is determined by the maximum number of connected Drive Enclosures, which cannot exceed nine. A customer can create up to 2048 logical units in a DX80 S2. With the DX90 S2 the maximum number of LUNs that can be configured is quadrupled compared with the earlier DX90 system. The supported maximum number of Host Bus Adapters is also quadrupled, per port and in total, compared to the earlier ETERNUS DX Entry systems.
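The narrated doubling of the backend link rate - and, as slide 12 below will narrate, of the inter-CM PCI Express bus - follows directly from standard line-rate arithmetic. A small sketch, assuming the usual 8b/10b encoding of these links (the encoding factor is general serial-link knowledge, not stated on the slides):

```python
# Effective payload bandwidth of the serial links mentioned in this module.
# Both SAS 1.0/2.0 and PCIe Gen1/Gen2 use 8b/10b encoding: 10 line bits
# carry 8 payload bits, so the usable rate is line_rate * 0.8.
def payload_gbit_s(line_rate_gbit_s: float, lanes: int = 1) -> float:
    return line_rate_gbit_s * 0.8 * lanes

print(payload_gbit_s(3.0))     # SAS 1.0 backend:  2.4 Gbit/s per link
print(payload_gbit_s(6.0))     # SAS 2.0 backend:  4.8 Gbit/s - doubled
print(payload_gbit_s(2.5, 4))  # PCIe Gen1 x4:     8.0 Gbit/s per direction
print(payload_gbit_s(5.0, 4))  # PCIe Gen2 x4:    16.0 Gbit/s - doubled
```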

Slide 10 Here is an overview of the improvements that are introduced with the ETERNUS DX400 S2 models. Especially with the DX440 S2, the increased cache size boosts the performance of the system considerably. For cache protection a fast but low power Solid State Disk is used instead of normal hard disks. Fibre Channel over Ethernet is now available as a new host interface option, and the iSCSI host interface is now available at a speed of 10 Gigabits per second. For all host interface types the number of host ports per Controller Module has been increased; this not only allows more physical host connections but also indicates that the system performance can deal with a higher number of host servers. The SAS 2.0 drive interface enables 6 Gigabit per second transfer rates per disk, a further contributor to increased system performance. A new RAID level - RAID 5 plus 0 - has been introduced to allow choosing an ideal RAID level for every possible usage. Expandability has been considerably improved by supporting more Drive Enclosures and also by introducing 2.5 inch disks for the DX Midrange models. Through the increased storage density the DX systems occupy less rack space and consequently also reduce the storage footprint in the data center - yet another ETERNUS contribution towards Green IT. Following the increased overall storage capacity, the system connectivity has also been considerably enhanced by allowing more logical disks to be accessed by a larger number of Host Bus Adapters.

Slide 11 The next section shows how the new ETERNUS DX models have managed to improve their overall throughput performance.

Slide 12 When comparing the system throughput, both the Entry and Midrange models show remarkable improvement. Let's see which factors contribute to this. First of all, the internal bus used by the two Controller Modules for internal communication is now based on the latest PCI Express technology - PCIe Gen2 - with a 4 lane architecture. PCIe Gen2 doubles the bandwidth of PCIe Gen1. Secondly, the backend bus: it is of type SAS 2.0, which doubles the internal bandwidth between the Controller Module and the hard disk drives to six Gigabits per second. The latest DDR3 RAM chips, which run at an 866 MHz clock rate in the Entry models and at 1.066 GHz in the Midrange models, improve the performance of the cache memory. All of these performance increasing features are especially effective for sequential I/O, which is typical for applications running on a file server.

Slide 13 The Intel Xeon CPU that operates the Controller Module now provides better performance, especially in random I/O operations.

Slide 14 The next chapter in this Web Based Training shows the improved scalability of the new DX S2 models. It also explains the upgrade possibilities, the improved interface flexibility and how the host connectivity can be easily expanded. As the last item we will touch on the subject of Solid State Drives.

Slide 15 Scalability is one of the areas where the S2 generation models are superior. Let us start with the DX Entry models. A basic configuration consists of one Controller Enclosure that can be either of the 2.5 or 3.5 inch type. The maximum system configuration for the DX80 S2 consists of a total of 120 drives. If all drives are of the 3.5 inch type, the basic system can be expanded with a total of 9 Drive Enclosures; when 2.5 inch Enclosures are used, the maximum drive count is already reached with 4 Drive Enclosures. All Drive Enclosure expansions can be done without interrupting normal operation. The maximum disk configuration for the DX90 S2 is 240 drives, but please note that this can only be achieved using 2.5 inch drives, as the maximum number of Drive Enclosures is reached with one Controller Enclosure and 9 Drive Enclosures. Mixed use of 2.5 and 3.5 inch Enclosures is supported; in this case the given maximum numbers of disks and Enclosures define the maximum configuration. A sketch of this enclosure arithmetic follows below.
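These expansion rules fit in a few lines of code; a minimal sketch, assuming 12 drives per 3.5 inch enclosure and 24 per 2.5 inch enclosure as narrated on slides 9 and 15 (the function name is ours):

```python
# Drive-count check for an ETERNUS DX80 S2 / DX90 S2 expansion plan.
# Enclosure capacities and system limits as narrated on slides 9 and 15:
# 12 drives per 3.5" enclosure, 24 per 2.5", at most 9 Drive Enclosures.
DRIVES_PER_ENCLOSURE = {"3.5": 12, "2.5": 24}
MAX_DRIVE_ENCLOSURES = 9
MAX_DRIVES = {"DX80 S2": 120, "DX90 S2": 240}

def check_config(model: str, ce_type: str, de_types: list[str]) -> int:
    """Validate a configuration and return its total drive count."""
    if len(de_types) > MAX_DRIVE_ENCLOSURES:
        raise ValueError("at most 9 Drive Enclosures are supported")
    total = DRIVES_PER_ENCLOSURE[ce_type] + sum(
        DRIVES_PER_ENCLOSURE[t] for t in de_types)
    if total > MAX_DRIVES[model]:
        raise ValueError(f"{total} drives exceed the {model} maximum")
    return total

# DX90 S2 fully built out: only possible with 2.5" enclosures (24 + 9*24 = 240).
print(check_config("DX90 S2", "2.5", ["2.5"] * 9))  # 240
```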

Slide 16 As with the Entry models, with the Midrange models it is also possible to start small and expand later as the storage requirements grow. With the DX400 models the minimum configuration is one Controller Enclosure and one Drive Enclosure. Naturally, all Enclosure expansions can be done without interrupting normal operation. The maximum configuration contains 20 Drive Enclosures with the DX410 S2 and 40 DEs with the DX440 S2. As with the Entry models, a mixed configuration with both Enclosure types is possible, but the given maximum number of Enclosures cannot be exceeded.

Slide 17 The previous slides focused on explaining how a system can be expanded by adding Drive Enclosures, but they also implied that it is possible to upgrade the Controller Enclosure and/or migrate the user data to a new configuration. Please note that the upgrade feature is only supported from the DX80 S2 onwards.

Slide 18 With the S2 models, ETERNUS DX can be used in virtually any operating environment, from direct attached SAS to 10 Gigabit Fibre Channel over Ethernet. The table shows an overview of the connectivity options per model; the new interface type for the Entry models is Fibre Channel over Ethernet at 10 Gigabits, the iSCSI interface speed has been increased to 10 Gigabits, and for the SAS interface in the Entry models a 6 Gigabit per second option is now available.

Slide 19 Here is a brief description of the available interface options and which usages they typically suit best. A Fibre Channel Storage Area Network is typically used with SAN switches and can range from a single system to a large scale enterprise data center. FCoE combines the Fibre Channel SAN and the Ethernet LAN environments; this allows FC to use 10 Gb LAN cabling while preserving the Fibre Channel protocol. iSCSI is an abbreviation for Internet Small Computer System Interface. Unlike Fibre Channel, which requires a dedicated infrastructure, iSCSI can utilize existing and cost effective network infrastructures. SAS is an abbreviation for Serial Attached SCSI, which is a Direct Attached Storage interface type. It is normally only used in very small environments where the application server and the storage system are directly connected.

Slide 20 The ETERNUS DX S2 models can be easily expanded not only in storage capacity but also by adding host interface ports, including the option to mix different interface types. Assuming we have a basic configuration - either DX Entry or Midrange - the first expansion option is to add Channel Adapters to the existing Controller Modules. As the DX400 models can be fitted with more Channel Adapters than the Entry models, they also offer the option of increasing the total number of CAs in several steps. The other expansion option is to mix different interface types - for example Fibre Channel and iSCSI. In this example two different types of Channel Adapters have been fitted in each Controller Module. Please note that for redundancy purposes the CMs need to be identically configured.

Slide 21 The cache memory configuration of the Entry models is very straightforward; they have only one memory slot per Controller Module, and this slot is always occupied with a two Gigabyte memory module in the DX80 S2 models and a four Gigabyte module in the DX90 S2 models. The Midrange models offer the possibility to configure the cache memory: in the DX410 S2 between eight and 16 Gigabytes across the two Controllers, with memory modules of two Gigabytes capacity. The slots are populated in pairs, first slots zero and two and then slots one and three. With the DX440 S2 the available DIMM capacities are four and eight Gigabytes, and the slots are populated three at a time in the shown order.

Slide 22 This slide provides an overview of the configuration rules for the Channel Adapters. For more detailed information refer to the Installation and Operation User's Guide. The CAs must be of the same topology type and reside in the same slot positions in both CMs. CAs must be installed starting from slot position zero. There are no restrictions regarding mixing different types of CAs. A DX440 S2 with a 100 volt mains supply is limited to two CAs per CM.

Slide 23 Earlier DX models had some restrictions regarding the use of SSD drives; these restrictions are now history with the S2 models. SSDs can be configured with any RAID type. Any number of SSDs can be configured within the limitations for the total number of disks and the total number of RAID Groups. However, it is worth mentioning that the high I/O performance of the SSDs dictates a practical limit for the number of SSDs in a single system: enough concurrently used disks can saturate the host interfaces, and beyond that point the SSDs are no longer able to deliver their maximum performance.

Slide 24 Here are some further notes regarding the use of SSDs. Mixing SSDs and SAS drives in the same RAID Group would void the performance benefit of the SSDs; therefore concatenating SSDs with SAS disks is not supported. Eco-mode can be used to spin down rarely used disks; with a drive that has no spindle this cannot be done, and secondly it is also not necessary, as the power consumption is low anyway. A Hot Spare for a RAID Group with SSDs must also be of SSD type. This is good to bear in mind when configuring a system for an order. These rules are sketched in code below.
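The two SSD rules just narrated - no mixing of SSD and SAS drives in one RAID Group, and a Hot Spare whose type matches - lend themselves to a small validation sketch; a minimal example, with names of our own choosing:

```python
# Validation of the SSD configuration rules narrated on slide 24.
def validate_raid_group(member_types: list[str], hot_spare_type: str) -> None:
    """member_types: drive type per RAID Group member, e.g. ["SSD", "SSD"]."""
    kinds = set(member_types)
    # Rule 1: concatenating SSDs with SAS disks is not supported.
    if "SSD" in kinds and len(kinds) > 1:
        raise ValueError("SSDs must not be mixed with other drive types")
    # Rule 2: a Hot Spare for an SSD RAID Group must itself be an SSD.
    if kinds == {"SSD"} and hot_spare_type != "SSD":
        raise ValueError("Hot Spare for an SSD RAID Group must be an SSD")

validate_raid_group(["SSD"] * 4, "SSD")        # fine
validate_raid_group(["SAS", "SAS"], "SAS")     # fine
# validate_raid_group(["SSD", "SAS"], "SSD")   # raises: mixed types
```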

Slide 25 In this next chapter we will have a look at the functionality related to multiple paths between the host and the ETERNUS.

Slide 26 All ETERNUS DX Entry and Midrange systems support an assigned access path: each RAID Group and its Volumes are assigned to a particular CM. For setting up the access path, the Host Response setting can be switched in the ETERNUS Web GUI between Active/Active and Active-Active Preferred Path. With this, the operating system of the host is able to decide how the Volumes should be addressed. With Active-Active Preferred Path, only the CM that is assigned the Volumes of a particular RAID Group is seen as the preferred path. That means a particular LUN is only accessed through the CM that the Volume has been assigned to, while the other CM is used only if the preferred CM is no longer available. With Active/Active, all LUNs are accessed alternating over both CMs; that means the host sees both CMs of the ETERNUS as preferred paths. Please note that when the host is connected to the ETERNUS DX over two paths, a Multipath driver is always needed to allow the operating system to differentiate between the preferred path and the failover path. Under Windows these paths are called the optimized path and the non-optimized path. Failing to use a Multipath driver would make the ETERNUS LUNs appear twice in Windows. When a RAID Group is created it is assigned to one of the existing CMs. This means that under normal circumstances all I/O to the Volumes that reside in this particular RAID Group is carried out by this particular CM. When an I/O request arrives at the CM that is not assigned to the LUN, the request is passed on to the other CM internally in the ETERNUS DX; this has a slight impact on performance. RAID Groups - and consequently the Volumes - are always assigned to a particular CM when the RAID Group is created using the ETERNUS Web GUI, either manually or automatically. It is possible to change this setting at any time later.

Slide 27 Whenever a host server is connected to the ETERNUS DX over two paths - meaning two HBAs, two cables, two CMs - it is recommended to use a Multipath driver. If not, regardless of the ETERNUS Host Response setting, the operating system of the host server detects the ETERNUS LUNs once per host interface. A Multipath driver is installed in the host server, and consequently the server is able to address the Volumes using either of the physical access paths. A Multipath driver does not change the ETERNUS internal assignment of the LUNs; LUN1 remains assigned to CM0 and LUN2 to CM1. The Multipath driver can use both access paths in parallel to access a particular LUN, but typically the driver uses only the assigned path, except when the assigned path is not available due to a hardware failure. In that case all LUNs are accessed through the remaining access path. From the server point of view, using both access paths in parallel is called Multipath for load balancing. When only the assigned path is used it is called Multipath failover, meaning that the alternative path is only used when the primary path is no longer available.
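The path-selection behavior described on these slides can be modeled in a few lines. This is a deliberately reduced sketch of what a real Multipath driver does, not Fujitsu's implementation; the LUN and CM names follow the slides, the function is ours:

```python
# Simplified model of Multipath failover as described on slides 26-28:
# each LUN is owned by one CM; I/O uses the owning CM's path while it is
# available and falls back to the surviving CM otherwise.
LUN_OWNER = {"LUN1": "CM0", "LUN2": "CM1"}   # set when the RAID Group is created

def select_path(lun: str, available_cms: set[str]) -> str:
    preferred = LUN_OWNER[lun]
    if preferred in available_cms:
        return preferred                      # active (optimized) path
    surviving = available_cms - {preferred}
    if surviving:
        return surviving.pop()                # standby (non-optimized) path
    raise RuntimeError("no access path to " + lun)

print(select_path("LUN1", {"CM0", "CM1"}))    # CM0 - preferred path
print(select_path("LUN1", {"CM1"}))           # CM1 - failover after CM0 loss
```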

Slide 28 This slide shows the principle of Multipath driver functionality when it is used for failover. Within the ETERNUS, the green LUN1 is assigned to CM0, and thus the CA port of CM0 provides the active path and the respective CA port of CM1 the standby path. The same is valid for the blue LUN2, but this time CM1 provides the active path and CM0 the standby path. When both CMs are fully functional, the I/O is sent by the host only over the active path, meaning directly to the CM that owns the RAID Group where the addressed LUN resides. This configuration is redundant against HBA, Fibre Channel cable, CA or even complete CM failure, meaning that both LUNs remain available should one or more components of either physical path fail. Just for clarification: each HBA is connected to a CM with one Fibre Channel cable.

Slide 29 This slide explains what happens internally in the ETERNUS DX when one of the two CMs fails. Should a complete Controller Module fail, the ETERNUS internal connection to the RAID Groups of this CM is lost but immediately taken over by the surviving CM. Consequently, all RAID Groups are then accessed by one CM.

Slide 30 The ETERNUS DX S2 reliability is delivered by redundant components, hot maintenance features, a cache backup mechanism, the Boot-up and Utility Device and an improved remote copy function. We will have a brief look at these reliability features in the next chapter.

Slide 31 We will use this block diagram to have a look at how the ETERNUS DX provides high reliability by utilizing redundant hardware components. All main components of the ETERNUS DX can be configured redundantly. For the Entry models it is also possible to have a single controller configuration; the Midrange models always have a minimum of two Controller Modules. In the diagram the Controller Modules are abbreviated CM; in the Drive Enclosure the respective module is called the I/O Module. For both the Controller Enclosure and the Drive Enclosure, redundancy means that the system continues operating even when one Module or Power Supply fails. The cached data from the host server is secured within the ETERNUS DX S2 by mirroring the cache memory between the two CMs. In case one CM fails, the cached data is not lost, as the surviving CM holds a copy of it. The disk drives are dual ported and thus connected to two independent access paths. If a CM or an IOM fails, the drives always continue to be accessible through the surviving Module. All main components are hot swappable, meaning they can be replaced without stopping normal operation. Drive Enclosures can be added and RAID Groups can be expanded or changed without stopping normal operation. Controller firmware can also be upgraded on the fly without stopping normal operation. There are two prerequisites, however: firstly, the system must have at least two Controller Modules, and secondly, a multipath configuration is needed to maintain uninterrupted server access.
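The cache mirroring just described - each CM holding a copy of its partner's cached writes, which is also the basis of slide 33 below - can be illustrated with a toy model; a deliberately simplified sketch, with class and function names of our own invention:

```python
# Toy model of the cache mirroring described on slides 31 and 33:
# each CM's memory is split into a local area and a mirror area that
# holds a copy of the partner CM's local area.
class ControllerModule:
    def __init__(self, name: str):
        self.name = name
        self.local: dict[str, bytes] = {}    # this CM's own cached writes
        self.mirror: dict[str, bytes] = {}   # copy of the partner's writes

def cached_write(owner: ControllerModule, partner: ControllerModule,
                 block: str, data: bytes) -> None:
    """A write is complete only once it exists in both CMs."""
    owner.local[block] = data
    partner.mirror[block] = data

cm0, cm1 = ControllerModule("CM0"), ControllerModule("CM1")
cached_write(cm0, cm1, "blk42", b"payload")
# If CM0 fails now, CM1 still holds the data in its mirror area:
assert cm1.mirror["blk42"] == b"payload"
```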

Slide 32 The block diagram and the respective functionality of the DX400 models are similar to those of the Entry models, with the exception of the Cache Protector circuitry being powered, in the case of a power failure, by a traditional battery pack instead of the super capacitor used in the Entry models.

Slide 33 The Controller Modules in the ETERNUS DX improve host I/O transaction performance by the use of cache memory. To ensure that data in the cache memory is not lost when one CM fails, in dual CM configurations the cache memory is always mirrored between the two CMs. For this purpose the memory is divided into two areas: local and mirror. The Entry models can optionally be configured with two CMs; the Midrange models always have two CMs. Should a CM fail in a dual CM configuration, system operation continues using the mirrored cache data on the surviving CM. After the defective CM is replaced, the cache memory contents are reconstructed to their original layout. Another potential threat of losing the cache content is a mains power blackout. All ETERNUS DX systems utilize a mechanism called Cache Guard to back up the cache content in the case of a mains power failure. The DX80 S2 and DX90 S2 have a slightly different system for backing up the cached data in the case of a mains power failure: in the Entry models Cache Guard is powered by a mechanism based on a large capacity capacitor, and the data is backed up to a NAND Flash memory. In the Midrange models Cache Guard gets its power from rechargeable batteries and the data is stored on a Solid State Disk. Both mechanisms deliver the same functionality: keep the cache memory in each Controller Module powered as long as it takes to copy the cached data to a nonvolatile memory, from where it can be copied back to the cache memory as soon as the mains power is restored.

Slide 34 The traditional way to back up cached data in the case of a power blackout is to continue supplying power to the cache from a battery backup unit. If the blackout lasts long enough, the battery pack uses up all its power and consequently the cache memory content is lost. The ETERNUS DX models have an advanced cache protection mechanism. For backing up the cached data, the DX Entry and Midrange models use slightly different mechanisms: the Entry models use a mechanism powered by a super capacitor and the Midrange models a mechanism powered by a battery pack. The reason for the Midrange models still using a battery pack is the large cache size, which could not be powered by a System Capacitor Unit. Should the system lose mains power completely due to a power blackout, an automatic cache backup procedure is started. All necessary components are supplied with power long enough for the cached data to be saved in a Flash memory in the Entry models, or in a BUD in the Midrange models. BUD is short for Back-up and Utility Disk; the Midrange models use an SSD for the BUD. After the cached data is securely saved, power is no longer needed, as the Flash or SSD holds the data without power. After the power is restored, the backup power supply needs to be charged to full again. With the Entry models the SCU charges to full very quickly; the Midrange models require somewhat more time.
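The power-failure sequence just described is essentially a two-step procedure; here is a minimal sketch of that sequence (function names and the dict-based stores are ours, chosen only to make the order of operations explicit):

```python
# Sketch of the Cache Guard sequence from slides 33-34: on mains failure,
# backup power holds the cache alive just long enough to copy it to a
# non-volatile store (NAND Flash on Entry models, the SSD-based BUD on
# Midrange models); on power restore, the content is copied back.
def on_mains_failure(cache: dict, nonvolatile: dict) -> None:
    nonvolatile.update(cache)     # SCU/battery powers this copy operation
    cache.clear()                 # afterwards the DRAM may lose power safely

def on_mains_restore(cache: dict, nonvolatile: dict) -> None:
    cache.update(nonvolatile)     # contents return to cache memory
    # ...meanwhile the SCU (fast) or battery pack (slower) recharges.

cache, flash = {"blk7": b"dirty data"}, {}
on_mains_failure(cache, flash)
on_mains_restore(cache, flash)
assert cache["blk7"] == b"dirty data"
```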

Slide 35 The Boot-up and Utility Device - also known as the BUD - is a new feature introduced in the S2 models. There is one BUD in each Controller Module. In effect, the BUD replaces the system disks that were used in the earlier models; it is used for the same purposes. As far as the earlier DX80 and DX90 models are concerned, it also stores information that was previously kept in the Compact Flash device. The BUD is implemented slightly differently in the Entry and Midrange models: in the DX80 S2 and DX90 S2 it is a USB device and a separate Flash memory is used for Cache Guard, whereas in the DX410 S2 and DX440 S2 it is an SSD and is also used for the Cache Guard functionality.

Slide 36 To finish off this chapter, let us summarize the features of all the members of the ETERNUS DX family. The diagram shows the different ETERNUS DX models in relation to their positioning and functionality. The DX60 S2 is the entry-level model. The DX80 S2 is best suited for small business. The features of the DX90 S2 cover usages from small to midsize business, and the DX400 S2 continues from there, extending towards the Enterprise level. The ETERNUS DX8700 S2 is positioned in the Enterprise business area. All ETERNUS models support Advanced Copy features like Clones, Snapshots and Replication; please refer to the respective product specification sheet for details. The volume replication feature - or REC - between two ETERNUS DX systems is a built-in feature of the ETERNUS DX90 S2, DX400 S2 and DX8700 S2; to enable the functionality a license key needs to be purchased. To configure and manage the local and remote Advanced Copy features, the additional ETERNUS SF software suite is necessary. ETERNUS SF Express is a management suite for the ETERNUS DX Entry models, from the DX60 to the DX90 S2. AdvancedCopy Manager and Storage Cruiser are management suites for all the DX models. Thin Provisioning is a method for optimizing the utilization of available storage: it allows disk space to be easily allocated on a just-enough and just-in-time basis. To enable this functionality a separate license key needs to be purchased. Data integrity is guaranteed from the smallest model up to the Enterprise model by a wide choice of RAID levels. Customers can decide individually per usage to enable either controller or disk based data encryption. Full system redundancy is either built in or configurable for all the DX models.
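Thin Provisioning, as summarized above, decouples the capacity a host sees from the capacity actually consumed. The following is a generic illustration of that accounting idea, not Fujitsu's implementation; all names and the block granularity are our own assumptions:

```python
# Generic illustration of the Thin Provisioning idea from slide 36:
# a volume advertises its full logical size, but physical space is
# drawn from the pool just-in-time, only when blocks are first written.
class ThinVolume:
    def __init__(self, logical_gb: int):
        self.logical_gb = logical_gb          # size the host sees
        self.written_blocks: set[int] = set() # blocks actually backed

    def write(self, block: int) -> None:
        self.written_blocks.add(block)        # allocate on first write

    @property
    def physical_gb(self) -> float:
        return len(self.written_blocks) * 0.001  # assume 1 GB = 1000 blocks

vol = ThinVolume(logical_gb=100)   # host sees 100 GB immediately
for block in range(5_000):         # but only 5 GB is ever written...
    vol.write(block)
print(vol.logical_gb, vol.physical_gb)  # 100 5.0 - pool holds just 5 GB
```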

Slide 37 The last chapter of this Web Based Training covers the system modules, components and features of the ETERNUS DX S2 models.

Slide 38 This block diagram illustrates the internal components of the ETERNUS DX80 S2 and DX90 S2. Each Controller Module can hold up to two Channel Adapter cards. These are the interface cards connecting to the host servers, matching the customer's network topology. Mixed use of different CA interface topology cards in one CM is supported. The CA is a Field Replaceable Unit. Each Controller Module has an embedded Intel Xeon single core CPU with a clock rate of 1.73 GHz. The memory is based on DDR3 DIMM technology. It is mainly used for caching host data, but also as memory capacity for system internal functionality. Cache memory redundancy is provided by the pairing CM. The memory DIMM is also an FRU. BUD is short for "Boot-up and Utility Drive". It is a USB memory device and its content is mirrored with the pairing CM. The BUD essentially substitutes the functionality of the system disks. It is an FRU. The non-volatile memory saves the cache contents in case of a mains power failure: when a power failure is detected, a dedicated LSI is activated to transfer the cached data to a non-volatile Flash memory. The I/O Controller is responsible for the data transfer to and from the disks. By default, one of the CMs is responsible for the disks assigned to a particular RAID Group. In case the direct path from the IOC via the Expander to its assigned disks is defective, the IOC can re-route via an internal SAS 2.0 bus to the other CM's Expander, from where it is again able to access its assigned disks. Each DX Entry Controller Module has an embedded System Capacitor Unit (SCU) providing reserve power for the cache backup circuitry in case of a power failure. The SCU is based on an electric double layer capacitor and has a substantially shorter charging time and a longer lifetime compared to a battery. The SAS 2.0 Expanders connect the access paths between the Controller Modules and the disk drives in the Controller Enclosure. All Enclosures are chained externally using SAS 2.0 Expanders. Each Enclosure has two redundant Power Supply Units. All DX S2 PSUs comply with the 80 PLUS Silver certification, which means that the PSU operates under all circumstances with a minimum of 85 per cent efficiency. The PSUs contain variable speed fans that are used for cooling the Enclosure. The PSU is an FRU. The midplane connects the different exchangeable modules in the front and in the back of the Enclosure. The IOM6 - or I/O Module 6 - controls the Drive Enclosure and all its components. The IOM6 is an FRU.

Slide 39 The internal architecture of the DX410 S2 is rather similar to that of the DX80 and DX90 S2, but there are some differences that we will now have a closer look at. Each CM supports up to four Channel Adapter cards. The BUD is a Solid State Disk instead of a USB memory device. The PCI Express bus connecting the CMs provides more bandwidth with its two times eight lanes. The Xeon CPU has two cores with one thread each. The I/O Controller has two independent channels. The Midrange models also use DDR3 DIMMs. Like the I/O Controller, the SAS Expander has independent channels for better performance. The Power Supply Units of the Controller Enclosure are different from those of the Entry S2 models. Battery Backup Units are needed to supply power to the Cache Protection circuitry during a mains power failure; there are three of them per CM. The Drive Enclosure is available in either a 2.5" or 3.5" form factor and is the same for both Entry and Midrange models. For redundancy and performance purposes each DX410 CM has four SAS 2.0 ports for connecting the expansion Drive Enclosures. From an architecture point of view, the DX410 S2 more or less doubles the performance of the DX80 and DX90 S2.

Slide 40 Let's now see how the DX440 S2 differs from the DX410 S2. Again, double the number of Channel Adapters. The BBU and BUD are identical to those of the DX410 S2. The Xeon CPU has a higher clock rate and a total of 8 threads. The internal PCI Express, memory, I/O and Expander architectures as well as the PSUs are identical to those of the DX410 S2. As already mentioned, the Drive Enclosures are identical for all Entry and Midrange S2 models. To cater for better redundancy, expandability and performance, each CE is equipped with a total of 8 SAS out connectors for connecting the expansion Drive Enclosures. Please note the redundant cabling between the Controller and Drive Enclosures.

Slide 41 Next we will get familiar with what the systems look like from the front and the back, starting with the 3.5 inch Controller Enclosure of the DX80 S2 and DX90 S2. On the left hand side of the front panel there are status LEDs and the power button. A total of 12 SAS drives can be fitted in horizontal position. The disks are numbered from bottom left towards top right, starting with disk number zero. In the rear there are two Controller Modules, with CM number zero on the left hand side. Below the CMs there are the two PSUs, with PSU zero on the left hand side. Now the same for the 2.5 inch Controller and Drive Enclosure, focusing on the only difference between them, which is naturally the disk drives: they are fitted vertically and numbered from left to right starting with disk number zero. A total of 24 disks can be fitted per Enclosure.

Slide 42 These are the main modules of the DX440 S2 Controller Enclosure, starting with the front view: there are three Battery Backup Units, numbered zero to two from left to right. Each BBU consists of a Battery Unit and a Charger Unit. On the left hand side there are the status LEDs and the power switch. A switch block is located towards the middle of the unit. In the rear of the unit there are two Controller Modules, module zero at the bottom and module one at the top. On the left hand side there are two Power Supply Units, numbered zero and one. Each CM has four Channel Adapter modules that are numbered from bottom left to top right, zero through three. The DX400 S2 models are always delivered with a minimum of two Controller Modules; a single CM configuration is not available. Since the DX400 Controller Module doesn't hold any disks, the minimum configuration is one CE and one DE.
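The slot numbering narrated on slide 41 above (and on slide 43 below for the Drive Enclosures) maps naturally to grid coordinates. A small sketch; note that the training states only the drive counts and numbering directions, so the 4-column by 3-row grid for the 3.5 inch enclosure is our assumption:

```python
# Disk slot numbering as narrated on slides 41 and 43. The 3.5" enclosure
# holds 12 drives numbered from bottom left towards top right; we assume
# a 4-column by 3-row grid (the grid dimensions are not stated in the
# training, only the count and numbering direction). The 2.5" enclosure
# holds 24 vertical drives numbered 0-23 from left to right.
def slot_position(disk_no: int, form_factor: str) -> tuple[int, int]:
    """Return (row, column) with row 0 at the bottom, column 0 at the left."""
    if form_factor == "2.5":          # single row of 24 vertical drives
        return (0, disk_no)
    return divmod(disk_no, 4)         # 3.5": bottom-left to top-right, 4 wide

print(slot_position(0, "3.5"))   # (0, 0) - bottom left
print(slot_position(11, "3.5"))  # (2, 3) - top right
print(slot_position(23, "2.5"))  # (0, 23) - rightmost slot
```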

Slide 43 Next we'll have a look at the 3.5 inch Drive Enclosure, starting with the front view. On the left hand side there are status LEDs and a two-digit display showing the Enclosure number, which is automatically identified via the backend cabling. The disk drives are numbered the same way as in the 3.5 inch DX80 and DX90 Controller Enclosure: from bottom left to top right, zero through 11. On the rear side there are the I/O Modules at the top and the PSUs at the bottom, both numbered from left to right. The 2.5 inch Drive Enclosure has identical modules except, naturally, the disks, which are fitted vertically and numbered zero through 23 from left to right. The same Drive Enclosures are used for both the Entry and Midrange models.

Slide 44 That brings us to the end of this Web Based Training session module. Please refer to the other Web Based Training modules for additional information on the ETERNUS DX models.