Slide 0 Welcome to this Web Based Training session introducing the ETERNUS DX80 S2, DX90 S2, DX410 S2 and DX440 S2 storage systems from Fujitsu. The ETERNUS DX is a seamless product family, providing data center class features from Entry to Enterprise models. This ETERNUS DX Web Based Training module provides a technical introduction to the features that are common to the DX80 S2, DX90 S2, DX410 S2 and DX440 S2 models.

Slide 1 This training module is divided into six main chapters; here is an overview of them. Reliability is extremely important for a storage system; this is one of the areas where the ETERNUS DX has its strengths. The next chapter introduces the redundant cache mechanism of the ETERNUS DX and explains further functional details of the Controller Modules and how they communicate with the host servers. RAID 5+0 is a new, additional RAID level of the DX S2 Midrange models; its chapter compares this new RAID level with the similar RAID levels 5 and 6 to show which purposes each of these RAID levels suits best. The chapter RAID and Hard Disk Features introduces the numerous technical features ensuring that data stored in the ETERNUS DX is always safe. With the dynamic LUN configuration features it is possible to move data within the ETERNUS DX - without interrupting normal operations - so that all data always resides on the disk space that best serves the purpose. Data Encryption is an important security feature of the ETERNUS DX for environments where data must not leave the storage system in readable format.

Slide 2 Environmental values have always been very important design aspects for the ETERNUS DX. Proof of that are the Eco-mode and the newly introduced small form factor disks. Combined with power consumption monitoring, users can easily keep an overview of and control over the system's power consumption. The Thin Provisioning feature makes it easy for the customer to prepare for future increases in storage capacity demand while avoiding high initial investments in surplus hardware. The last chapter, "Miscellaneous", introduces a number of functionalities that are partly new for the S2 models.

Slide 3 The basic architecture of the ETERNUS DX aims at the highest possible reliability through redundancy of virtually all components. The following slide shows, as an example, the redundancy functionality of the Controller Modules. There are also a number of technical features aimed at increasing system reliability; one of them, called Data Block Guard, is introduced in one of the following slides.

Slide 4 To ensure maximal disk access performance even when one Controller Module is no longer fully functional, the ETERNUS DX internal architecture provides redundancy for Drive Enclosure access. That is achieved by having a CM Expander in each CM and by providing an internal cross connection between the CMs and their CM Expanders. Should one CM lose its full functionality, its CM Expander may remain functional; in that case the Expander is internally patched to the fully functional CM, so that all disks in all Drive Enclosures can still be accessed with minimal performance loss. In this situation both CM Expanders are driven by the fully functional Controller Module.

Slide 5 Data Block Guard is an ETERNUS DX data protection feature that prevents corrupted data from being sent to the host. The functionality is based on adding a check code to the user data before writing it to the disks and subsequently verifying the user data against the check code before sending the data to the host. There are three phases of operation. In the first phase the check code is calculated and attached to the data: for each 512 bytes of data the ETERNUS calculates an 8-byte check code; in our example that is done for each of the three blocks. In the next phase the user data is written to the disk together with the check code. As the data is read from the disk, it is verified against data corruption, and before the data is passed on to the host the check code is removed. That is naturally done in the reverse order compared to the beginning, so the host receives only the original data. Should corrupted data be detected, it will not be passed on to the host. Data Block Guard protects, for example, database applications from having corrupted data in their databases, which would typically have devastating consequences.
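The append-and-verify idea can be illustrated with a minimal sketch. The actual algorithm behind the 8-byte code is not described in this training, so a truncated SHA-256 digest serves as a hypothetical stand-in; the function names and block handling are illustrative only and assume data lengths that are multiples of 512 bytes.

```python
import hashlib

BLOCK = 512      # user data block size, as in the slide
CODE_LEN = 8     # 8-byte check code per block

def _check_code(block: bytes) -> bytes:
    # Illustrative stand-in for the ETERNUS check code: first 8 bytes of SHA-256.
    return hashlib.sha256(block).digest()[:CODE_LEN]

def protect(data: bytes) -> bytes:
    """Phase 1-2: attach a check code to every 512-byte block before writing."""
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        out += block + _check_code(block)
    return bytes(out)

def verify_and_strip(stored: bytes) -> bytes:
    """Phase 3: verify each block on read; corrupted blocks never reach the host."""
    out = bytearray()
    step = BLOCK + CODE_LEN
    for i in range(0, len(stored), step):
        block, code = stored[i:i + BLOCK], stored[i + BLOCK:i + step]
        if _check_code(block) != code:
            raise IOError(f"Data Block Guard: corruption detected in block {i // step}")
        out += block
    return bytes(out)

data = bytes(1536)                  # three 512-byte blocks, as in the slide's example
assert verify_and_strip(protect(data)) == data
```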

Slide 6 In this next chapter we will have a look at a couple of features and functionalities that relate to the Controller Modules: cache memory redundancy, cache memory configuration, and functionality related to multiple paths between the host and the ETERNUS.

Slide 7 The Controller Modules in the ETERNUS DX improve host I/O transaction performance by using cache memory. To ensure that data in the cache memory is not lost when one CM fails, in dual-CM configurations the cache memory is always mirrored between the two CMs. For this purpose the memory is divided into two areas: local and mirror. The Entry models can optionally be configured with two CMs; the Midrange models always have two CMs. Should a CM fail in a dual-CM configuration, system operation continues using the mirrored cache data on the surviving CM. After the defective CM is replaced, the cache memory contents are reconstructed to the original layout. Another potential cause of losing the cache content is a mains power blackout. All ETERNUS DX systems utilize a mechanism called Cache Guard to back up the cache content in the case of a mains power failure; the DX80 S2 and DX90 S2 implement this slightly differently from the Midrange models. In the Entry models Cache Guard is powered by a large-capacity capacitor and the data is backed up to NAND flash memory. In the Midrange models Cache Guard gets its power from rechargeable batteries and the data is stored on a Solid State Disk. Both mechanisms deliver the same functionality: keep the cache memory in each Controller Module powered for as long as it takes to copy the cached data to non-volatile memory, from where it can be copied back to the cache memory as soon as mains power is restored.

Slide 8 Cache memory configuration of the Entry models is very straightforward; they have only one memory slot per Controller Module. In the DX80 S2 this slot is always occupied by a two-Gigabyte memory module, whereas in the DX90 S2 there is always a four-Gigabyte module. The Midrange models offer the possibility to configure the cache memory, from eight to 16 Gigabytes across the two Controllers. The memory modules have a capacity of two Gigabytes, and the slots are populated in pairs: first slots zero and two, then slots one and three. With the DX440 S2 the available DIMM capacities are four and eight Gigabytes instead, and the slots are populated three at a time in the shown order.

Slide 9 All ETERNUS DX Entry and Midrange systems support assigned access paths: each RAID Group and its Volumes are assigned to a particular CM. For setting up the access path, the Host Response setting can be switched with the ETERNUS Web GUI between Active/Active and Active-Active Preferred Path. With this, the operating system of the host is able to decide how the Volumes should be addressed. With Active-Active Preferred Path, only the CM that is assigned the Volumes of a particular RAID Group is seen as the preferred path. That means a particular LUN is only accessed through the CM that the Volume has been assigned to, while the other CM is used only if the preferred CM is no longer available. With Active/Active, all LUNs are accessed alternating over both CMs; the host sees both CMs of the ETERNUS as preferred paths. Please note that when the host is connected to the ETERNUS DX over two paths, a Multipath driver is always needed to allow the operating system to differentiate between the preferred path and the failover path; under Windows these paths are called the optimized path and the non-optimized path. Failing to use a Multipath driver would make the ETERNUS LUNs appear twice in Windows. When a RAID Group is created it is assigned to one of the existing CMs. This means that under normal circumstances all I/O to the Volumes that reside in this particular RAID Group is carried out by this particular CM. When an I/O request arrives at the CM that is not assigned to the LUN, the request is passed on to the other CM internally in the ETERNUS DX; this has a slight impact on performance. RAID Groups - and consequently their Volumes - are always assigned to a particular CM as the RAID Group is created using the ETERNUS Web GUI, either manually or automatically. It is possible to change this assignment at any time later.
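The ownership-and-forwarding behaviour just described can be modelled in a few lines. This is a sketch under assumed names (the LUN-to-CM assignment mirrors the example on the following slides), not the ETERNUS firmware logic.

```python
# Hypothetical model: each LUN is owned by one CM; an I/O request arriving
# at the other CM is still served, but forwarded internally at a small cost.
OWNER = {"LUN1": "CM0", "LUN2": "CM1"}   # illustrative assignment

def route_io(lun: str, arriving_cm: str) -> str:
    owner = OWNER[lun]
    if arriving_cm == owner:
        return f"{arriving_cm} serves {lun} directly (preferred path)"
    # Cross-CM forwarding: the request is passed on internally.
    return f"{arriving_cm} forwards {lun} internally to {owner} (slight overhead)"

print(route_io("LUN1", "CM0"))   # direct access via the assigned CM
print(route_io("LUN1", "CM1"))   # internal forwarding via the non-assigned CM
```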

Slide 10 Always when connecting a host server to the ETERNUS DX over two paths - meaning two HBAs, two cables, two CMs - it is recommended to use a Multipath driver. If not, regardless of the ETERNUS Host Response setting, the operating system of the host server detects the ETERNUS LUNs once per host interface. A Multipath driver is installed in the host server, and consequently the server is able to address the Volumes using either of the physical access paths. A Multipath driver does not change the ETERNUS internal assignment of the LUNs; LUN1 remains assigned to CM0 and LUN2 to CM1. The Multipath driver can use both access paths in parallel to access a particular LUN, but typically the driver uses only the assigned path, except when the assigned path is not available due to a hardware failure; in that case all LUNs are accessed through the remaining access path. From the server point of view, using both access paths in parallel is called Multipath for load balancing. Using only the assigned path is called Multipath failover, meaning that the alternative path is only used when the primary path is no longer available.

Slide 11 This slide shows the principle of Multipath driver functionality when it is used for failover. Within the ETERNUS, the green LUN1 is assigned to CM0, and thus the CA port of CM0 provides the active path and the respective CA port of CM1 the standby path. The same is valid for the blue LUN2, but this time CM1 provides the active path and CM0 the standby path. When both CMs are fully functional, the I/O is sent by the host only over the active path, meaning directly to the CM that owns the RAID Group where the addressed LUN resides. This configuration is redundant against HBA, Fibre Channel cable, CA or even complete CM failure, meaning that both LUNs are still available should one or more components of either physical path fail. Just for clarification: each HBA is connected to a CM with one Fibre Channel cable.

Slide 12 This slide explains what happens internally in the ETERNUS DX when one of the two CMs fails. Should a complete Controller Module fail, the ETERNUS internal connection to the RAID Groups of this CM is lost but immediately taken over by the surviving CM. Consequently, all RAID Groups are accessed by one CM.
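A minimal sketch of the failover behaviour, assuming the active/standby model of the slide; port names and the failure set are illustrative, not an actual driver API.

```python
# Each LUN has an active (optimized) and a standby (non-optimized) path.
paths = {
    "LUN1": {"active": "CM0-CA", "standby": "CM1-CA"},
    "LUN2": {"active": "CM1-CA", "standby": "CM0-CA"},
}
failed_ports = set()

def select_path(lun: str) -> str:
    p = paths[lun]
    if p["active"] not in failed_ports:
        return p["active"]       # normal case: the assigned path
    if p["standby"] not in failed_ports:
        return p["standby"]      # failover: the alternative path
    raise IOError(f"no remaining path to {lun}")

print(select_path("LUN1"))       # CM0-CA
failed_ports.add("CM0-CA")       # e.g. HBA, cable, CA or complete CM failure
print(select_path("LUN1"))       # CM1-CA: LUN1 stays available
```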

Slide 13 This chapter focuses on explaining the differences between three similar RAID levels: RAID 5, RAID 5+0 and RAID 6. RAID 5+0 has been introduced to the Midrange models for the first time with the S2 generation. In addition to explaining the operating principle of these RAID levels, this chapter also looks at the advantages of each of them as advice for choosing the RAID level for a particular usage.

Slide 14 Here is an overview of all RAID levels that are supported by the ETERNUS DX S2 models. On the far left is how the host server sees the RAID Group; on the right-hand side is the layout of the physical disks. In a RAID 5 array the capacity equaling one physical disk is used for parity data. That means that if there are five physical disks, the capacity equaling four physical disks is used for user data and the capacity of one disk for the parity data. The illustrated disk layout is therefore commonly referred to as four plus one: four data disks and one parity disk. RAID 5+0 is built up similarly to RAID 1+0; two parallel RAID 5 arrays are treated as one array. RAID 6 adds a second parity disk, but unlike RAID 5+0 it uses double parity for calculating the parity data. In the following slides we will have a closer look at the practical differences between these three RAID types.

Slide 15 This slide compares RAID 5+0 with RAID 5. RAID 5+0 has two advantages over RAID 5 - performance and capacity - so let us have a look at where they come from. This is an example of a RAID 5 array with three data disks and one parity disk. Here is a RAID 5+0 that is in effect two RAID 5 arrays, each with three data disks and one parity disk. Larger capacity per RAID Group is achieved by the possibility of combining two RAID 5 arrays while still allowing the same maximal number of disks per array as with RAID 5. Higher performance is achieved by the increased number of striped disks, meaning that a greater number of disks can be addressed in parallel.
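The capacity rules stated above reduce to simple arithmetic. The sketch below assumes equal-size disks and a 5+0 layout of exactly two sets; it is an illustration, not ETERNUS sizing logic.

```python
# Usable capacity (in disks' worth of user data) for the three RAID levels.
def usable_disks(level: str, disks: int) -> int:
    if level == "RAID5":      # one disk's worth of parity
        return disks - 1
    if level == "RAID5+0":    # two RAID 5 sets -> two disks' worth of parity
        return disks - 2
    if level == "RAID6":      # double parity -> two disks' worth of parity
        return disks - 2
    raise ValueError(level)

for level, disks in [("RAID5", 5), ("RAID5+0", 8), ("RAID6", 8)]:
    print(f"{level}: {disks} disks -> {usable_disks(level, disks)} disks of user data")
```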

Slide 16 This slide explains how RAID 5+0 provides a reliability advantage over RAID 5. Both illustrated RAID arrays provide the same storage capacity: RAID 5 with seven drives and RAID 5+0 with eight drives. After a single disk failure in RAID 5+0, the rebuild time is shorter because the number of disks that need to be read to re-generate the data is smaller. The advantage of the shorter rebuild time is improved reliability through quicker re-establishment of the array's full redundancy. A RAID 5 array of equal capacity uses double the number of disks to re-calculate the lost data and therefore remains vulnerable for a longer time to losing the whole array should a second disk fail. In a RAID 6 array, any two disks can fail without causing the whole array to fail.

Slide 17 Next we compare RAID 5+0 with RAID 6 from a performance point of view. Writing to a RAID 6 array has an overhead compared to RAID 5 due to the more complex algorithm used to calculate the double parity data. Considering reliability, RAID 5+0 is comparable with RAID 6 in the respect that 5+0 can also lose a total of two disks, but only if the two failing disks reside in different RAID 5 sets.

Slide 18 RAID 6 provides further improved reliability compared to RAID 5+0 by allowing any two concurrent disk failures without losing the data of the whole array. As already mentioned, RAID 5+0 can also handle two lost disks, but not if they fail in the same set. It is rather unlikely to lose two disks in one go or within a short space of time, but the chances of that tend to increase during a rebuild, when the disks are under extensive load caused by the heavy I/O needed to re-generate the data. Especially with large-capacity Nearline SAS disks, the rebuild can easily take longer than a day if the array is made up of a large number of disks. During all that time RAID 6 remains redundant, meaning that losing any other disk in the array does not mean losing the data in the array.

Slide 19 This slide provides a comparison between RAID 5, RAID 5+0 and RAID 6. The table and the diagrams at the bottom of the page focus on three parameters: reliability, data efficiency and write performance. We start with RAID 6, which provides the best reliability at the cost of write performance. RAID 5 uses fewer physical hard disks for parity than the other two RAID levels and is therefore best in data efficiency. If write performance is the most critical parameter, then RAID 5+0 is the best choice. The information on this slide helps to choose the right RAID level for each usage, bearing in mind that write performance and reliability are trade-offs; one cannot achieve the optimum of both in one RAID array. The following slide provides an overview of each of these RAID levels to show which technical implementations make them better in one or the other area.

Slide 20 Here is a summary of the three areas of interest from the previous slide. RAID 6 provides the best data reliability because its double parity functionality allows any two disks to fail without losing the whole RAID Group. Data efficiency refers to the amount of physical disk space that is not available for user data in the RAID Group but is instead used for parity information; RAID 5 has the lowest parity overhead and is therefore the best choice when pure available capacity is concerned. When write performance matters, the best choice among these three is RAID 5+0, because it has less parity calculation to do than RAID 6 but can have more parallel disks than RAID 5.
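The two-disk failure rule that separates RAID 5+0 from RAID 6 is easy to check mechanically. A sketch under assumed disk numbering (eight disks, two 3+1 sets):

```python
# RAID 5+0 survives two failures only if they hit different RAID 5 sets;
# RAID 6 survives any two failures.
SET_OF = {d: (0 if d < 4 else 1) for d in range(8)}   # disks 0-3: set 0, 4-7: set 1

def raid50_survives(failed: set) -> bool:
    per_set = [sum(1 for d in failed if SET_OF[d] == s) for s in (0, 1)]
    return max(per_set) <= 1      # at most one failed disk per RAID 5 set

def raid6_survives(failed: set) -> bool:
    return len(failed) <= 2       # any two disks may fail

print(raid50_survives({0, 5}))    # True: different sets
print(raid50_survives({0, 1}))    # False: same set -> array lost
print(raid6_survives({0, 1}))     # True
```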

Slide 21 This next chapter focuses on a number of ETERNUS DX functionalities that relate to the hard disks.

Slide 22 This slide demonstrates the functionalities related to a disk failure; the actions that follow depend on whether a Hot Spare is available or not. The first example shows a disk configuration with a Hot Spare. When a disk fails, a rebuild is started automatically. During the rebuild, the remaining disks are used as a source to re-generate the data of the lost disk, which is substituted by the Hot Spare. It is worth mentioning that, depending on the RAID level, losing a second disk could result in losing the whole array. In other words, the array is jeopardized until the rebuild is completed, which with a configuration of many large-capacity Nearline SAS disks can take days. If there is no Hot Spare configured, the rebuild starts automatically only after the failed disk has been replaced. Again, please note that from the moment the disk fails until the rebuild is completed, the array is at risk. After the rebuild is completed the array is as good as it was before losing the disk, meaning it has regained full redundancy. Copy Back is the functionality that takes place after the failed disk is replaced in a configuration that had a Hot Spare. Of course, if only one Hot Spare was configured, at this moment in time there is no Hot Spare available for the array. Replacing the disk automatically starts the Copy Back process, which means that the data from the original Hot Spare is copied over to the replacement disk. After the Copy Back is finished, the original Hot Spare is again available to jump in when needed.

Slide 23 There are two types of Hot Spares: Dedicated and Global. A Dedicated Hot Spare is available to substitute a disk in a particular RAID Group and only in this particular RAID Group. Each RAID Group can naturally have its own Dedicated Hot Spare that is only used when this particular array loses a disk. A Global Hot Spare is available for all RAID Groups in the whole ETERNUS system, including the RAID Groups that are configured with a Dedicated Hot Spare. Should a disk fail in an array that has no Dedicated Hot Spare, the Global Hot Spare will jump in. Also, if a RAID Group has already used its Dedicated Hot Spare, the Global Hot Spare can substitute the next failing disk in that RAID Group.

Slide 24 It is always recommended that a failed disk is replaced with a physically identical disk. When configuring Hot Spares, fulfilling that requirement can require careful planning, but as far as the ETERNUS is concerned, the most important objective is to ensure data security; therefore, if a matching disk is not available as a Hot Spare, the second best is better than nothing. For finding the best fit for a replacement disk, ETERNUS uses three search algorithms in the following order. The first disk criteria to be checked are the capacity and the rotation speed: naturally, a lower capacity is not an option at all, whereas choosing a lower rotation speed with matching capacity would only reduce the performance of the RAID Group. If the first search is not successful, ETERNUS next looks for a Hot Spare disk that has a matching rotation speed and a capacity as close to the lost disk as possible. When the first two searches do not bring a result, the next search goes for finding a disk with the highest available rotation speed. The purpose of all this is to ensure that in the case of a disk failure, the degraded RAID Group re-establishes full redundancy with minimal impact on performance.
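The three-stage search can be sketched as follows. The data model (capacity in GB, rotation speed in rpm) and the exact ordering details are simplifying assumptions for illustration; the real selection logic is described only at the level of the slide text.

```python
# Staged Hot Spare selection, loosely following the order described above.
def pick_hot_spare(failed: dict, spares: list):
    # 1) exact match on capacity and rotation speed
    exact = [s for s in spares if s["gb"] == failed["gb"] and s["rpm"] == failed["rpm"]]
    if exact:
        return exact[0]
    # 2) matching rotation speed, capacity as close as possible (never smaller)
    same_rpm = [s for s in spares if s["rpm"] == failed["rpm"] and s["gb"] >= failed["gb"]]
    if same_rpm:
        return min(same_rpm, key=lambda s: s["gb"])
    # 3) otherwise the highest available rotation speed with sufficient capacity
    big_enough = [s for s in spares if s["gb"] >= failed["gb"]]
    return max(big_enough, key=lambda s: s["rpm"]) if big_enough else None

spares = [{"gb": 450, "rpm": 10000}, {"gb": 600, "rpm": 10000}, {"gb": 600, "rpm": 15000}]
print(pick_hot_spare({"gb": 300, "rpm": 10000}, spares))  # stage 2: the 450 GB disk
```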

Slide 25 Let's have a look at a practical example. A RAID Group consists of four 2.5-inch 300-Gigabyte disks with a rotation speed of ten thousand rpm. The system has a total of five Global Hot Spare disks of various types and capacities. When a disk in the RAID Group fails, the ETERNUS starts looking for the best-suited replacement disk. In our example the first search criteria would find a perfect match, but let's see which disk would be chosen if the first search did not provide a result. The next best match would be a disk with the same rotation speed but slightly bigger capacity. The 450-Gigabyte disk is a better choice than the 600-Gigabyte disk, because the extra capacity is lost anyway and a 600-Gigabyte Hot Spare could be needed later to replace a disk of that same capacity; only after that would the disk with double the capacity be chosen. The next search criterion is a disk of a different type but with better performance, and the last choice would be a disk with a lower rotation speed, which would naturally compromise the performance of the array. Losing performance is still better than risking the loss of the whole array, but this emphasizes that especially in such cases a failed disk should be replaced as soon as possible to restore the performance back to normal.

Slide 26 All disks used in ETERNUS DX systems have built-in SMART functionality. SMART is a self-monitoring mechanism that tracks disk behavior that could indicate the disk is nearing its end of life. This tracking data is stored internally on the disk and is read by the ETERNUS to provide the system administrator with information that can be used as a basis for proactive disk replacement. ETERNUS also uses the SMART data to automatically trigger Redundant Copy. The big advantage of the Redundant Copy functionality is that the array never loses its redundancy.

Slide 27 Here is an example of Redundant Copy triggered by SMART. The idea is to predict a coming disk failure on the basis of the SMART data and to start the rebuild of data to the Hot Spare disk before the suspected disk has failed completely. Please note that in the case of a RAID array with parity information, reading data from the suspected disk is considered unreliable; therefore the data is reconstructed through a rebuild, not by directly copying the disk. With mirrored arrays the healthy disk is used as the source for the copy process. During the rebuild the suspected disk remains a member of the array, which is important because the rebuild can take days for a big array and the array would otherwise no longer be redundant. After the rebuild is completed, the suspected disk is marked faulty and removed from the array. Redundant Copy triggered by SMART is only a preemptive measure: if the Redundant Copy uses a Global Hot Spare as the target and at the same time another RAID Group loses a disk completely, the Redundant Copy process is interrupted and the Hot Spare is used for the RAID Group with the failed disk.
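As a rough illustration of the trigger decision, consider the sketch below. The attribute name and threshold are entirely hypothetical; the training does not describe which SMART values the ETERNUS evaluates.

```python
# Hypothetical SMART-based trigger for Redundant Copy: once the disk's
# self-monitoring data predicts a failure, rebuild to a Hot Spare begins
# while the suspect disk is still a member of the array.
def check_disk(smart: dict, threshold: int = 10) -> str:
    if smart["predicted_failure_score"] >= threshold:
        # Parity arrays: data is re-generated via rebuild, not read from the
        # suspect disk; mirrored arrays copy from the healthy mirror instead.
        return "start Redundant Copy to Hot Spare"
    return "keep monitoring"

print(check_disk({"predicted_failure_score": 12}))   # triggers the preemptive copy
```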

Slide 28 A Volume created in the ETERNUS DX is visible and accessible to the host almost immediately after it has been created. This is possible because of an ETERNUS DX feature called Quick Format. In this example we'll have a look at what the host sees and what actually happens inside the ETERNUS. As the automatic formatting is started, the ETERNUS first creates a format control table to keep track of the format process; during this short time the Volume is not accessible to the host. After the table is created, the ETERNUS starts the actual physical formatting. During this time the Volume appears fully accessible to the host; internally, however, the ETERNUS has only started formatting the data blocks in sequential order. If the host sends I/O requests to the Volume being formatted and addresses data blocks that are not yet formatted, the ETERNUS initiates the One Point Format process, which means that the sequential formatting is interrupted and the addressed blocks are formatted out of turn. After all I/O requests have been served, the formatting returns to the sequential mode. If the formatting is interrupted because the ETERNUS is powered off, it continues automatically when the system is restarted.
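A minimal sketch of the Quick Format idea just described: a format control table tracks which blocks are done, background formatting proceeds sequentially, and host I/O to an unformatted block triggers One Point Format on demand. Class and method names are illustrative only.

```python
class QuickFormatVolume:
    def __init__(self, blocks: int):
        self.formatted = [False] * blocks   # the format control table
        self.cursor = 0                     # position of the sequential format

    def format_step(self) -> None:
        """One step of the background sequential format; skips blocks already
        handled by One Point Format."""
        while self.cursor < len(self.formatted) and self.formatted[self.cursor]:
            self.cursor += 1
        if self.cursor < len(self.formatted):
            self.formatted[self.cursor] = True

    def host_io(self, block: int) -> str:
        if not self.formatted[block]:
            self.formatted[block] = True    # One Point Format, out of sequence
        return f"I/O served on block {block}"

vol = QuickFormatVolume(blocks=1000)
vol.format_step()                           # background formatting runs...
print(vol.host_io(512))                     # ...while the host already sees the Volume
```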

Slide 29 As the last of the hard disk related functionalities, we will have a look at a feature called Drive Patrol. This is an ETERNUS internal process that aims at finding hard disk data areas that are unreliable or not functional at all. A disk in an existing RAID Group is tested by reading data from it and comparing the data with the parity information. In the case of a data mismatch, the data is first re-created and then written back to a different data block on the same drive; the defective data area is flagged and no longer used. The system administrator can decide which disks are to be scanned; it is good practice, for example, to let Drive Patrol scan all newly installed disks before they are taken into use. With disks that do not yet belong to a RAID Group, the testing is done by writing to the disk and then comparing the read data with the original data.

Slide 30 ETERNUS DX offers many advanced features aimed at keeping the system optimal regarding disk usage and disk performance. Dynamic LUN Expansion enables expanding the capacity of a LUN by first adding disks with Logical Device Expansion and subsequently increasing the size of the LUN with the LUN Concatenation feature. With RAID Migration it is possible to move data between RAID Groups in order to optimize disk usage and/or disk performance. All these functionalities can be used without stopping or interrupting normal operations; for the user they are completely transparent.

Slide 31 Sometimes LUNs run out of space; for this case the ETERNUS offers the possibility to expand LUNs by using the LUN expansion feature. The capacity of an existing LUN can be expanded by creating a new LUN and concatenating it to the existing LUN. An existing four-plus-one RAID 5 is running out of available capacity, and therefore a disk with identical characteristics is added to the system. The RAID Group is expanded to contain six disks, which means that the RAID Group now has some unallocated capacity. With LUN Concatenation it is possible to add the unused disk space to the existing LUN. The LUN expansion and LUN Concatenation features enable the establishment of a capacity-on-demand system.

Slide 32 Let us now look in more detail at the two steps of LUN expansion. Besides just adding capacity to an existing LUN, the other objective could be to change the RAID level to provide better performance or increased reliability, as explained earlier when comparing the characteristics of RAID 5, RAID 5+0 and RAID 6. An existing RAID 5 with a three-plus-one configuration is to be expanded by adding two existing unused disks to it. As a result, the RAID Group now has a five-plus-one configuration. If the RAID Group contains a LUN or LUNs, as the next step the additional disk capacity needs to be added to the existing LUN with LUN Concatenation.

Slide 33 LUN Concatenation is a functionality that can be used to consolidate unused space in the same RAID Group or across different RAID Groups. Please note that the resulting bigger capacity may not be automatically recognized by the operating system and/or application; it may be necessary to re-map the Volume and/or re-configure and re-start the application. To circumvent this manual process, which also interrupts normal operation, ETERNUS provides an automated capacity-on-demand functionality called Thin Provisioning; more about that later in this Web Based Training. The User's Guides provide complete instructions for using LUN Concatenation, but here are a couple of rules regarding its usage, namely the applicable maximum and minimum sizes of the LUN: up to 16 LUNs can be concatenated, and they can reside on any type of RAID. For example: RAID Group one is a RAID 5 array with two LUNs. RAID Group two is also a RAID 5 with one LUN on it and a large unused area. Next we want to add the unused area of RAID Group two to LUN two of RAID Group one. After the concatenation is finished, LUN two has been expanded to 1.8 Terabytes.

Slide 34 RAID Migration is a functionality for moving data between existing RAID Groups, completely transparently to the users. To better suit, for example, changed performance requirements, or to move existing data to a more reliable RAID Group, a single LUN or several LUNs can be moved system-internally. The S2 generation systems also allow migration of concatenated LUNs. The next example shows how a RAID Group can be migrated to larger-capacity disks in order to increase its capacity: as a result, the existing data resides in a RAID Group with double the capacity, and the smaller RAID Group is left unused. In the second example the data is moved to a RAID Group providing higher reliability, for example from RAID 5 to RAID 1+0, which in this example both provide about the same capacity.
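The concatenation example above is, at its core, capacity bookkeeping. In the sketch below the 16-LUN limit comes from the text, while the individual sizes (1.0 TB plus 0.8 TB of reclaimed space) are assumed values chosen to reproduce the 1.8-Terabyte result; the slide does not state the original sizes.

```python
MAX_CONCAT = 16   # up to 16 LUNs can be concatenated, per the User's Guides

def concatenate(lun_sizes_tb: list) -> float:
    assert len(lun_sizes_tb) <= MAX_CONCAT, "concatenation limit exceeded"
    return sum(lun_sizes_tb)   # the host sees one LUN with the combined capacity

# Hypothetical sizes: existing LUN two plus the unused area of RAID Group two.
print(concatenate([1.0, 0.8]), "TB")   # 1.8 TB, as in the slide's example
```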

Slide 35 Data Encryption is the subject of the following three slides, which show the different options the ETERNUS DX provides for data encryption.

Slide 36 Both types of encryption provided by the ETERNUS aim at the same objective: as regulations for financial institutions are tightened and information security requirements increase, data on disk drives must be protected from access by unauthorized persons. It is possible to ensure data confidentiality within the company by setting up encrypted LUNs that can only be read by the host that is authorized for them. The other typical security issue is that old, sidelined IT equipment may be sold on or discarded as scrap, while data that must not go outside the company may still reside on the disks. Disks with encrypted data on them are not readable outside the ETERNUS system that created them, not even in another ETERNUS system. Unauthorized access to confidential data is thus properly prevented.

Slide 37 Self Encrypting Disks become available for the ETERNUS DX S2 models by the end of the year. This slide introduces the operating principle of the SE Disks. The control logic for the SED resides in the Controller Module; for the encryption it is necessary to have an authentication key, which is also stored as a hash value on the disk itself. Before transmitting data, the CM compares the hash value on the disk with the authentication key to verify its authenticity. Within the disk the data is encrypted using a 128-bit AES algorithm. Data resides on the disk only in encrypted format, and the encryption engine is designed to enable data I/O at the full disk interface bandwidth. The ETERNUS cache module sends and receives plain data. The ETERNUS Web GUI can be used to enable the encryption functionality of SEDs; as a prerequisite, all disks of the RAID Group must be SEDs. When configuring a system with Self Encrypting Disks, please note that the Hot Spare must also be a SED.

Slide 38 The other option for data encryption in the ETERNUS DX is to use the built-in encryption feature. There are two algorithms to choose from: Fujitsu's own encryption and the standard 128-bit AES. The control logic for encryption resides in the CM; the data cache receives from and sends data to the host in plain format. Plain data is encrypted by the Controller Module and stored in the encryption buffer before it is written to the disks. When data is read from the disk drive, it is decrypted inside the CM, written as plain data into the cache, and then transferred to the host. The data always resides encrypted on the disks, meaning that the data on a disk remains secure also when the disk is removed from the system. One difference between SED encryption and built-in encryption is that built-in encryption can be set up per LUN, so the same RAID Group can contain both encrypted and unencrypted LUNs. It is also possible to encrypt an existing Volume, and the encryption can be enabled with all available disk types.
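To make the data path concrete - plain data in the cache, ciphertext on disk - here is a conceptual sketch using 128-bit AES from the Python cryptography library. This is not the ETERNUS firmware implementation: the cipher mode (CTR) and the key handling are assumptions made purely for the illustration.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(16), os.urandom(16)     # 128-bit AES key, per-volume nonce

def write_to_disk(plain: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(plain) + enc.finalize()   # only ciphertext reaches the disk

def read_from_disk(cipher: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    return dec.update(cipher) + dec.finalize()  # decrypted inside the CM, plain in cache

stored = write_to_disk(b"confidential record")
assert stored != b"confidential record"         # unreadable outside the system
assert read_from_disk(stored) == b"confidential record"
```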

Slide 39 The next few slides focus on the ETERNUS DX features that deal with environmental green values.

Slide 40 Eco-mode enables a reduction of the system's power consumption through optimized disk usage. When a particular RAID Group is known to be used only during certain hours - a typical example being backups - it is possible to spin the disks down for the time outside of the usage window. Each RAID Group can be scheduled to spin up only during a certain time of the day; for the rest of the day - and also if not accessed during the scheduled time - the drives are spun down to reduce the power consumption to a minimum. Spun-down drives are accessible at all times: when a Volume is accessed by the host outside the scheduled time, the drives of the RAID Group are spun up so that the host can access the Volume with minimal time overhead. Eco-mode is enabled per RAID Group by using the Web GUI or the CLI. For example, if you use Nearline SAS disks for backup, the RAID Group with the target disks can be powered off during the daytime and only powered on when used for backup. If backup jobs are running during the night from midnight to 5 am, then the target RAID Group is only powered on during that time and for the rest of the day the disks remain powered down. A reduction in power consumption of up to 15% is possible by using Eco-mode.

Slide 41 By the way, applications - for example backup tools - can also control drive rotation over the Command Line Interface. Here are screenshots showing how Eco-mode can be set up using the ETERNUS Web GUI. Host I/O Monitoring Interval refers to the time window for observing whether the host is accessing the disks; if the time elapses without host I/O, the drives are automatically spun down. The Spin-down Limit Count defines the maximum number of times per day the drives may be spun down; after the value has been reached, the drives stay powered on to minimize mechanical wear. This is how the Eco-mode schedule is set up: the given times represent the start time and the end time of the period when the drives are powered on; for the rest of the day they remain spun down, if not accessed by the host. To ensure normal operating conditions for the drives, they are spun up 30 minutes before the scheduled start time and stay on for 30 minutes afterwards to settle down after having been potentially extensively accessed during, for example, the backup window.
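The interplay of the schedule, the monitoring interval and the limit count can be sketched as a single decision function. Parameter names mirror the Web GUI labels quoted above; the decision logic itself is an illustration under assumed values, not the firmware's algorithm.

```python
from datetime import time

SCHEDULE = (time(0, 0), time(5, 0))    # power-on window: midnight to 5 am
MONITORING_INTERVAL_MIN = 30           # spin down after this long without host I/O
SPIN_DOWN_LIMIT_COUNT = 3              # maximum spin-downs per day

def may_spin_down(now: time, idle_minutes: int, spin_downs_today: int) -> bool:
    if SCHEDULE[0] <= now < SCHEDULE[1]:
        return False                                  # inside the scheduled on-time
    if spin_downs_today >= SPIN_DOWN_LIMIT_COUNT:
        return False                                  # limit reached: stay powered on
    return idle_minutes >= MONITORING_INTERVAL_MIN    # idle long enough to spin down

print(may_spin_down(time(14, 0), idle_minutes=45, spin_downs_today=1))   # True
print(may_spin_down(time(2, 0),  idle_minutes=45, spin_downs_today=1))   # False
```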

Slide 42 One frequently asked question regarding Eco-mode is how it affects the Mean Time Between Failures of the hard disks. One cannot assume that Eco-mode has no impact on the MTBF, but it is fair to assume that the impact is very small; let us have a look at what makes us say that. Assume that a disk is spun down and up three times a day. In five years that makes about five and a half thousand cycles, which is far fewer than the number of spin-up/spin-down cycles a typical disk is rated for. The other concern you may have is how long it takes to access the disks once they are spun down. SAS disks typically spin up in 15 seconds, whereas Nearline SAS disks, due to their different mechanical structure, typically need some five seconds longer. Disks in large arrays are started slightly staggered to avoid a power surge; however, all disks are guaranteed to be spun up before the host would consider the array offline.
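The cycle estimate above is easy to reproduce:

```python
# Three spin-down/spin-up cycles a day, over five years:
cycles = 3 * 365 * 5
print(cycles)   # 5475 -> "about five and a half thousand cycles"
```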

Slide 43 When considering reducing the power consumption of a storage system - not only to save on running costs but also to preserve nature - one should bear in mind what the latest disk technologies can offer in this area. While disk capacity has increased in big steps, power consumption has at the same time decreased in equally big steps, due to the introduction of 2.5-inch disks and especially due to Solid State Disks. In about two years the power consumption per Gigabyte has been reduced to about half. As disk capacities continue to increase, the disk - and therewith system - footprint per storage capacity continues to decrease. This is also an important contributor to decreasing both power consumption and environmental burden, because a smaller footprint also lowers the cooling requirements.

Slide 44 With the ETERNUS DX one does not need to settle for relying on new technologies to automatically reduce power consumption; with ETERNUS SF Storage Cruiser it is also possible to monitor the power consumption and system temperatures. With Storage Cruiser it is possible to monitor the ETERNUS DX in real time and to view logs providing information from a longer time span. Existing ETERNUS systems can also be grouped for monitoring purposes in order to get a consolidated overview. Here is an example of a power consumption graph for a single system. ETERNUS SF Storage Cruiser also has other monitoring features for the whole SAN environment that help, for example, in pinpointing potential performance bottlenecks. For a full introduction and practical experience with the Storage Cruiser, you may want to consider joining one of the scheduled classroom trainings for it.

Slide 45 The next two slides are spent learning the functional principle of Thin Provisioning.

Slide 46 This slide provides an introduction to Thin Provisioning, which is available in the DX80 S2, DX90 S2, DX410 S2 and DX440 S2 models. Thin Provisioning is a technology that saves both initial and ongoing costs, thus providing a better Return on Investment, or ROI. Cost savings are achieved both by lower initial hardware costs and by savings on the running costs through lower energy consumption. Running costs are also lowered in the respect that the user applications can be set up at first-time installation to perceive the available storage capacity to be more than it actually is. As the storage capacity needs increase in the future, it is not necessary to stop the applications for reconfiguration; instead, disks are simply added to the system to provide additional capacity. The functionality is based on Thin Provisioning Pools that function as a storage reserve for the applications. As soon as the available storage space for a particular application goes below a defined threshold, more capacity can be allocated from the TP Pools. Thin Provisioning is a licensed feature that can be enabled with a purchasable license key. Let's have a look at the operating principle using an example. An application server hosts three applications that each have their own dedicated storage space. For application one, the ETERNUS reports the total storage space to be much larger than the currently used and needed space; this is to cater for increasing future needs. Application two is set up similarly, as is application three. All applications can allocate more storage space from the TP Pool; so that the TP Pool never runs out of space, the administrator gets a warning to add more physical storage capacity to the system before the Pool is completely allocated.

Slide 47 Thin Provisioning provides virtual storage capacity that is only partially backed by real physical storage space. That allows for efficient storage usage and minimizes the initial financial investment. Storage capacity can be added on an on-demand basis, but instead of adding it per application, the available capacity in the Pool is available equally to all applications. The basic idea of Thin Provisioning is to allocate storage capacities according to future needs, without having the physical capacity available today. So the administrator can set up Virtual Volumes - in our case totalling ten Terabytes - while actually having only a total physical capacity of two Terabytes. Even the two Terabytes are not needed on day one, but one day the warning threshold will be reached, indicating that the available physical capacity is about to run out. This prompts the administrator to add physical disk capacity for the foreseeable future needs, and a new threshold is set up to warn the administrator again before the added capacity is used up.
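The pool-plus-threshold mechanism can be sketched in a few lines. The capacities (10 TB virtual over 2 TB physical) come from the example above; the 90% warning threshold and all names are illustrative assumptions.

```python
# Sketch of a Thin Provisioning Pool with a capacity warning threshold.
class ThinPool:
    def __init__(self, physical_tb: float, threshold: float = 0.9):
        self.physical_tb = physical_tb     # real disk capacity behind the pool
        self.allocated_tb = 0.0            # capacity actually consumed so far
        self.threshold = threshold

    def allocate(self, tb: float) -> None:
        self.allocated_tb += tb
        if self.allocated_tb >= self.threshold * self.physical_tb:
            print("warning: add physical capacity before the pool is used up")

pool = ThinPool(physical_tb=2.0)   # Virtual Volumes may still total 10 TB
pool.allocate(1.9)                 # crossing the threshold triggers the warning
```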

Slide 48 The last chapter of this Web Based Training module focuses on a selection of miscellaneous ETERNUS DX features.

Slide 49 E-mail notification can be enabled using the ETERNUS Web GUI. The advantage of e-mail notification as a built-in functionality of the ETERNUS DX is that it eliminates the need for a management application to poll and receive error messages or traps from the system and to generate an e-mail that is then sent out via a mail server to the administrator and/or service engineer. With the ETERNUS DX it is possible to achieve the same with less infrastructure overhead: the e-mail is created by the storage system itself, triggered by a selected event or error that has taken place in the system, and is sent out by the mail server to the administrator and/or engineer. Please note that the built-in e-mail notification feature naturally doesn't prevent the use of the conventional mechanism should it already be available.

Slide 50 For monitoring of system events and notifications, the ETERNUS DX supports three mechanisms: Simple Network Management Protocol, SMI-S [s-m-i-s] - short for Storage Management Initiative Specification - and syslog. The first two are common industry standards that are supported by many management applications, for example Fujitsu ServerView. One option for utilizing the syslog functionality is to set up a central syslog server that collects the event logs from all systems in a larger data center.

Slide 51 In order to log in to the ETERNUS Web GUI or CLI for administration or maintenance purposes, a user name and a password must be used. The ETERNUS DX has two default user accounts that are pre-set at the factory: user name root with the password root, and user name f.ce with a default password that is a string combining a two-digit check code - found on the back of the system - and the system serial number. These two user accounts cannot be deleted, but the passwords can be changed, which is also recommended. To avoid login problems caused by forgotten passwords, it is also recommended to create one user account with administrator privileges and a secure password and to store the information in a safe place. This way there is always a way to log in and reset the passwords should they be forgotten. Needless to say, all such activities can be done on a customer system only after customer approval. Each user account is associated with a user profile that defines which functionalities are available to the particular user. This feature is called Role Based Access Control, and its purpose is to allow each user to carry out only those activities that are typical for that user. There are six different user profiles to choose from when the user account is set up; for example, a user account with the Monitor role can only see status information and is not able to change anything, whereas a user with the Maintainer role has access to all functionalities. The difference can be seen clearly in the ETERNUS Web GUI by certain menu options being grayed out to indicate that the user's privileges do not allow such functions. Please note that some menu options may also be grayed out because the system is not in the right state to carry out the particular function.

Slide 52 Wake-on-LAN is an Ethernet computer networking standard that allows a computer system to be turned on or woken up by a network message. In ETERNUS DX S2 systems the WOL functionality can be enabled for the Ethernet ports, thus allowing the system to be switched on remotely. As per the WOL standard, a Magic Packet - containing 16 repetitions of the destination system's MAC address - is broadcast on the network to switch on the target system. If the system is already powered on, the Magic Packet has no effect. A Magic Packet cannot be used to switch the system off again, but that can be done with the Web GUI or CLI.
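For reference, a minimal sender for such a Magic Packet is sketched below. Per the WOL standard, the 16 repetitions of the MAC address are preceded by a synchronization stream of six 0xFF bytes (the slide mentions only the repetitions). The MAC address and UDP port here are placeholders, not values from this training.

```python
import socket

def send_magic_packet(mac: str, port: int = 9) -> None:
    """Broadcast a WOL Magic Packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    payload = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, ("255.255.255.255", port))

send_magic_packet("00:11:22:33:44:55")   # hypothetical MAC of the target's port
```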

Slide 53 Redundant IP functionality allows setting up different IP addresses for the management ports of the ETERNUS DX. In a system that has two Controller Modules, one of the CMs always has the master role and the other one the slave role. Only the Ethernet port on the CM with the master role can be used to administer the system. Well, to be exact, there are two functions available via the slave CM: viewing the device status and swapping the master and slave roles of the CMs. Should the CM with the master role fail, the surviving CM becomes the master CM and continues using the IP address that the master CM had before failing. A maximum of two addresses in two subnets can be set up per CM. A switch or a hub needs to be used to enable failover connectivity to both CMs. In our example we connect the MNT ports of both CMs to one switch and the RMT ports to the other switch. With this configuration we can always guarantee a connection between the management system and the ETERNUS DX, with full redundancy both for the ETERNUS and for the switches. Please note that the Field Service Terminal port is only available on the Midrange models.

Slide 54 This slide demonstrates what happens to the CM connectivity when the CM with the master role fails. But first we'll have a look at the possible ways to connect the management computer - referred to as FST, or Field Service Terminal, in this illustration - with the ETERNUS DX. A direct connection is possible, but naturally without any support for redundancy: should CM0 fail, we would lose the connection to the ETERNUS completely. Therefore it is always recommended to use a switch so that the management computer can be connected with both CMs. Here is an example configuration with the given IP addresses of CM0 and CM1. If the LAN link of the master CM's Ethernet port goes down, the surviving CM takes over the master role, including the IP address of the master CM.

Slide 55 Host Affinity refers to the possibility of building relationships between the host servers and the LUNs of the ETERNUS DX. There are two ways of doing that. With LUN Mapping, a particular LUN is mapped to a particular Channel Adapter or a group of CAs. This mapping method allows all hosts that have a physical connection to the designated CA or CAs to access the LUN; on the other hand, it is not possible to prevent a particular host from accessing the LUN. With HBA Mapping it is possible to define which hosts have access to a particular LUN or LUNs: Volume Groups containing the respective LUNs are mapped to HBAs, and consequently only these HBAs have access to the LUNs. Please note that a particular CA port can only be configured for either LUN Mapping or HBA Mapping.

Slide 56 We have now come to the end of this Web Based Training module. Thank you for your attention. For additional information on the ETERNUS DX, please refer to the other available Web Based Training modules and classroom training offerings.


FUJITSU Storage ETERNUS AF series and ETERNUS DX S4/S3 series Non-Stop Storage Reference Architecture Configuration Guide FUJITSU Storage ETERNUS AF series and ETERNUS DX S4/S3 series Non-Stop Storage Reference Architecture Configuration Guide Non-stop storage is a high-availability solution that combines ETERNUS SF products

More information

HP P6000 Enterprise Virtual Array

HP P6000 Enterprise Virtual Array HP P6000 Enterprise Virtual Array The HP P6000 Enterprise Virtual Array (P6000 EVA) is an enterprise class virtual storage array family for midsized organizations at an affordable price. With built in

More information

SSD Architecture Considerations for a Spectrum of Enterprise Applications. Alan Fitzgerald, VP and CTO SMART Modular Technologies

SSD Architecture Considerations for a Spectrum of Enterprise Applications. Alan Fitzgerald, VP and CTO SMART Modular Technologies SSD Architecture Considerations for a Spectrum of Enterprise Applications Alan Fitzgerald, VP and CTO SMART Modular Technologies Introduction Today s SSD delivers form-fit-function compatible solid-state

More information

VERITAS Foundation Suite for HP-UX

VERITAS Foundation Suite for HP-UX VERITAS Foundation Suite for HP-UX Enhancing HP-UX Performance and Availability with VERITAS Foundation Products V E R I T A S W H I T E P A P E R Table of Contents Introduction.................................................................................1

More information

I/O CANNOT BE IGNORED

I/O CANNOT BE IGNORED LECTURE 13 I/O I/O CANNOT BE IGNORED Assume a program requires 100 seconds, 90 seconds for main memory, 10 seconds for I/O. Assume main memory access improves by ~10% per year and I/O remains the same.

More information

RocketU 1144BM Host Controller

RocketU 1144BM Host Controller RocketU 1144BM Host Controller USB 3.0 Host Adapters for Mac User s Guide Revision: 1.0 Oct. 22, 2012 HighPoint Technologies, Inc. 1 Copyright Copyright 2012 HighPoint Technologies, Inc. This document

More information

RocketU 1144CM Host Controller

RocketU 1144CM Host Controller RocketU 1144CM Host Controller 4-Port USB 3.0 PCI-Express 2.0 x4 RAID HBA for Mac User s Guide Revision: 1.0 Dec. 13, 2012 HighPoint Technologies, Inc. 1 Copyright Copyright 2013 HighPoint Technologies,

More information

IBM System Storage DS5020 Express

IBM System Storage DS5020 Express IBM DS5020 Express Manage growth, complexity, and risk with scalable, high-performance storage Highlights Mixed host interfaces support (FC/iSCSI) enables SAN tiering Balanced performance well-suited for

More information

Hitachi Adaptable Modular Storage and Hitachi Workgroup Modular Storage

Hitachi Adaptable Modular Storage and Hitachi Workgroup Modular Storage O V E R V I E W Hitachi Adaptable Modular Storage and Hitachi Workgroup Modular Storage Modular Hitachi Storage Delivers Enterprise-level Benefits Hitachi Adaptable Modular Storage and Hitachi Workgroup

More information

Data Sheet Fujitsu ETERNUS DX400 S2 Series Disk Storage Systems

Data Sheet Fujitsu ETERNUS DX400 S2 Series Disk Storage Systems Data Sheet Fujitsu ETERNUS DX400 S2 Series Disk Storage Systems The Flexible Data Safe for Dynamic Infrastructures ETERNUS DX S2 Disk Storage Systems The second generation of ETERNUS DX disk storage systems

More information

3 Here is a short overview of the specifications. A link to a data sheet with full specification details is given later in this web based training

3 Here is a short overview of the specifications. A link to a data sheet with full specification details is given later in this web based training 1 Welcome to this web based training session. This presentation provides you an introduction to the Fujitsu ETERNUS JX40. The JX40 is a passive direct server attached drive extension offering extension

More information

Cybernetics Virtual Tape Libraries Media Migration Manager Streamlines Flow of D2D2T Backup. April 2009

Cybernetics Virtual Tape Libraries Media Migration Manager Streamlines Flow of D2D2T Backup. April 2009 Cybernetics Virtual Tape Libraries Media Migration Manager Streamlines Flow of D2D2T Backup April 2009 Cybernetics has been in the business of data protection for over thirty years. Our data storage and

More information

Datasheet Fujitsu ETERNUS DX90 S2 Disk Storage System

Datasheet Fujitsu ETERNUS DX90 S2 Disk Storage System Datasheet Fujitsu ETERNUS DX90 S2 Disk Storage System The Flexible Data Safe for Dynamic Infrastructures. ETERNUS DX S2 DISK STORAGE SYSTEMS Fujitsu s second generation of ETERNUS DX disk storage systems,

More information

3.1. Storage. Direct Attached Storage (DAS)

3.1. Storage. Direct Attached Storage (DAS) 3.1. Storage Data storage and access is a primary function of a network and selection of the right storage strategy is critical. The following table describes the options for server and network storage.

More information

SolidFire and Pure Storage Architectural Comparison

SolidFire and Pure Storage Architectural Comparison The All-Flash Array Built for the Next Generation Data Center SolidFire and Pure Storage Architectural Comparison June 2014 This document includes general information about Pure Storage architecture as

More information

White paper ETERNUS Extreme Cache Performance and Use

White paper ETERNUS Extreme Cache Performance and Use White paper ETERNUS Extreme Cache Performance and Use The Extreme Cache feature provides the ETERNUS DX500 S3 and DX600 S3 Storage Arrays with an effective flash based performance accelerator for regions

More information

HP AutoRAID (Lecture 5, cs262a)

HP AutoRAID (Lecture 5, cs262a) HP AutoRAID (Lecture 5, cs262a) Ion Stoica, UC Berkeley September 13, 2016 (based on presentation from John Kubiatowicz, UC Berkeley) Array Reliability Reliability of N disks = Reliability of 1 Disk N

More information

Managing Storage Adapters

Managing Storage Adapters This chapter includes the following sections: Self Encrypting Drives (Full Disk Encryption), page 2 Create Virtual Drive from Unused Physical Drives, page 3 Create Virtual Drive from an Existing Drive

More information

RAID EzAssist Configuration Utility User Reference Guide

RAID EzAssist Configuration Utility User Reference Guide RAID EzAssist Configuration Utility User Reference Guide DB13-000047-00 First Edition 08P5519 Proprietary Rights Notice This document contains proprietary information of LSI Logic Corporation. The information

More information

The term "physical drive" refers to a single hard disk module. Figure 1. Physical Drive

The term physical drive refers to a single hard disk module. Figure 1. Physical Drive HP NetRAID Tutorial RAID Overview HP NetRAID Series adapters let you link multiple hard disk drives together and write data across them as if they were one large drive. With the HP NetRAID Series adapter,

More information

ETERNUS DX Advanced Copy Functions

ETERNUS DX Advanced Copy Functions ETERNUS DX Advanced Copy Functions 0 Content Equivalent Copy (EC) Session in General Equivalent Copy (EC) Concept EC Concept Diagram EC Cancel, Suspend and Resume Equivalent Copy - Process Mirroring Mechanisms

More information

Raid: Who What Where When and Why. 3/23 Draft V1 David Arts 4/3 Draft V2 David Arts 4/10 First Release David Arts

Raid: Who What Where When and Why. 3/23 Draft V1 David Arts 4/3 Draft V2 David Arts 4/10 First Release David Arts Raid: Who What Where When and Why 3/23 Draft V1 David Arts 4/3 Draft V2 David Arts 4/10 First Release David Arts 1 Table of Contents General Concepts and Definitions... 3 What is Raid... 3 Origins of RAID...

More information

Datasheet. FUJITSU Storage ETERNUS SF Storage Cruiser V16.1 ETERNUS SF AdvancedCopy Manager V16.1 ETERNUS SF Express V16.1

Datasheet. FUJITSU Storage ETERNUS SF Storage Cruiser V16.1 ETERNUS SF AdvancedCopy Manager V16.1 ETERNUS SF Express V16.1 Datasheet FUJITSU Storage ETERNUS SF Storage Cruiser V16.1 ETERNUS SF AdvancedCopy Manager V16.1 ETERNUS SF Express V16.1 Central console and advanced management functions for ETERNUS DX storage environments..

More information

Data Sheet FUJITSU Storage ETERNUS DX500 S4 Disk System

Data Sheet FUJITSU Storage ETERNUS DX500 S4 Disk System Data Sheet FUJITSU Storage ETERNUS DX500 S4 Disk System Data Sheet FUJITSU Storage ETERNUS DX500 S4 Disk System Leading storage performance, automated quality of service ETERNUS DX - Business-centric Storage

More information

Veritas Storage Foundation for Windows by Symantec

Veritas Storage Foundation for Windows by Symantec Veritas Storage Foundation for Windows by Symantec Advanced online storage management Data Sheet: Storage Management Overview Veritas Storage Foundation 6.0 for Windows brings advanced online storage management

More information

HP Supporting the HP ProLiant Storage Server Product Family.

HP Supporting the HP ProLiant Storage Server Product Family. HP HP0-698 Supporting the HP ProLiant Storage Server Product Family https://killexams.com/pass4sure/exam-detail/hp0-698 QUESTION: 1 What does Volume Shadow Copy provide?. A. backup to disks B. LUN duplication

More information

White Paper. EonStor GS Family Best Practices Guide. Version: 1.1 Updated: Apr., 2018

White Paper. EonStor GS Family Best Practices Guide. Version: 1.1 Updated: Apr., 2018 EonStor GS Family Best Practices Guide White Paper Version: 1.1 Updated: Apr., 2018 Abstract: This guide provides recommendations of best practices for installation and configuration to meet customer performance

More information

White Paper. A System for Archiving, Recovery, and Storage Optimization. Mimosa NearPoint for Microsoft

White Paper. A System for  Archiving, Recovery, and Storage Optimization. Mimosa NearPoint for Microsoft White Paper Mimosa Systems, Inc. November 2007 A System for Email Archiving, Recovery, and Storage Optimization Mimosa NearPoint for Microsoft Exchange Server and EqualLogic PS Series Storage Arrays CONTENTS

More information

4.1 Introduction to Media and Devices

4.1 Introduction to Media and Devices Chapter 4 Network Hardware 4.1 Introduction to Media and Devices Many of the issues discussed in this course, such as topology, scalability, and speed, depend on hardware. Unlike many of your computer

More information

Storage Profiles. Storage Profiles. Storage Profiles, page 12

Storage Profiles. Storage Profiles. Storage Profiles, page 12 , page 1 Disk Groups and Disk Group Configuration Policies, page 2 RAID Levels, page 6 Automatic Disk Selection, page 7 Supported LUN Modifications, page 8 Unsupported LUN Modifications, page 8 Disk Insertion

More information

The Data-Protection Playbook for All-flash Storage KEY CONSIDERATIONS FOR FLASH-OPTIMIZED DATA PROTECTION

The Data-Protection Playbook for All-flash Storage KEY CONSIDERATIONS FOR FLASH-OPTIMIZED DATA PROTECTION The Data-Protection Playbook for All-flash Storage KEY CONSIDERATIONS FOR FLASH-OPTIMIZED DATA PROTECTION The future of storage is flash The all-flash datacenter is a viable alternative You ve heard it

More information

Network and storage settings of ES NAS high-availability network storage services

Network and storage settings of ES NAS high-availability network storage services User Guide Jan 2018 Network and storage settings of ES NAS high-availability network storage services 2018 QNAP Systems, Inc. All Rights Reserved. 1 Table of Content Before the Setup... 3 Purpose... 3

More information

Configuration Tool and Utilities v3.25 Operation Manual. for Fusion RAID Storage Systems

Configuration Tool and Utilities v3.25 Operation Manual. for Fusion RAID Storage Systems Configuration Tool and Utilities v3.25 Operation Manual for Fusion RAID Storage Systems Contents 1.0 ATTO Configuration Tool Overview... 1 About the Configuration Tool Configuration Tool Launch Configuration

More information

Technical Note P/N REV A01 March 29, 2007

Technical Note P/N REV A01 March 29, 2007 EMC Symmetrix DMX-3 Best Practices Technical Note P/N 300-004-800 REV A01 March 29, 2007 This technical note contains information on these topics: Executive summary... 2 Introduction... 2 Tiered storage...

More information

BUSINESS CONTINUITY: THE PROFIT SCENARIO

BUSINESS CONTINUITY: THE PROFIT SCENARIO WHITE PAPER BUSINESS CONTINUITY: THE PROFIT SCENARIO THE BENEFITS OF A COMPREHENSIVE BUSINESS CONTINUITY STRATEGY FOR INCREASED OPPORTUNITY Organizational data is the DNA of a business it makes your operation

More information

VERITAS Volume Manager for Windows 2000

VERITAS Volume Manager for Windows 2000 VERITAS Volume Manager for Windows 2000 Advanced Storage Management Technology for the Windows 2000 Platform In distributed client/server environments, users demand that databases, mission-critical applications

More information

Intelligent Drive Recovery (IDR): helping prevent media errors and disk failures with smart media scan

Intelligent Drive Recovery (IDR): helping prevent media errors and disk failures with smart media scan Intelligent Drive Recovery (IDR): helping prevent media errors and disk failures with smart media scan White paper Version: 1.1 Updated: Sep., 2017 Abstract: This white paper introduces Infortrend Intelligent

More information

5.11 Parallelism and Memory Hierarchy: Redundant Arrays of Inexpensive Disks 485.e1

5.11 Parallelism and Memory Hierarchy: Redundant Arrays of Inexpensive Disks 485.e1 5.11 Parallelism and Memory Hierarchy: Redundant Arrays of Inexpensive Disks 485.e1 5.11 Parallelism and Memory Hierarchy: Redundant Arrays of Inexpensive Disks Amdahl s law in Chapter 1 reminds us that

More information

Configuring Storage Profiles

Configuring Storage Profiles This part contains the following chapters: Storage Profiles, page 1 Disk Groups and Disk Group Configuration Policies, page 2 RAID Levels, page 3 Automatic Disk Selection, page 4 Supported LUN Modifications,

More information

Data Sheet FUJITSU Storage ETERNUS AF650 S2 All-Flash Array

Data Sheet FUJITSU Storage ETERNUS AF650 S2 All-Flash Array Data Sheet FUJITSU Storage ETERNUS AF650 S2 All-Flash Array Data Sheet FUJITSU Storage ETERNUS AF650 S2 All-Flash Array Storage Unchained! ETERNUS AF Storage ETERNUS AF650 S2 FUJITSU Storage ETERNUS AF

More information

Hitachi Adaptable Modular Storage 2000 Family

Hitachi Adaptable Modular Storage 2000 Family O V E R V I E W Hitachi Adaptable Modular Storage 2000 Family Highly Reliable, Cost Effective Modular Storage for Medium and Large Businesses, and Enterprise Organizations Hitachi Data Systems Hitachi

More information

WHITE PAPER Cloud FastPath: A Highly Secure Data Transfer Solution

WHITE PAPER Cloud FastPath: A Highly Secure Data Transfer Solution WHITE PAPER Cloud FastPath: A Highly Secure Data Transfer Solution Tervela helps companies move large volumes of sensitive data safely and securely over network distances great and small. We have been

More information

Nimble Storage Adaptive Flash

Nimble Storage Adaptive Flash Nimble Storage Adaptive Flash Read more Nimble solutions Contact Us 800-544-8877 solutions@microage.com MicroAge.com TECHNOLOGY OVERVIEW Nimble Storage Adaptive Flash Nimble Storage s Adaptive Flash platform

More information

Method to Establish a High Availability and High Performance Storage Array in a Green Environment

Method to Establish a High Availability and High Performance Storage Array in a Green Environment Method to Establish a High Availability and High Performance Storage Array in a Green Environment Dr. M. K. Jibbe Director of Quality Architect Team, NetApp APG mahmoudj@netapp.com Marlin Gwaltney Quality

More information

Chapter 10: Mass-Storage Systems

Chapter 10: Mass-Storage Systems Chapter 10: Mass-Storage Systems Silberschatz, Galvin and Gagne 2013 Chapter 10: Mass-Storage Systems Overview of Mass Storage Structure Disk Structure Disk Attachment Disk Scheduling Disk Management Swap-Space

More information

HP AutoRAID (Lecture 5, cs262a)

HP AutoRAID (Lecture 5, cs262a) HP AutoRAID (Lecture 5, cs262a) Ali Ghodsi and Ion Stoica, UC Berkeley January 31, 2018 (based on slide from John Kubiatowicz, UC Berkeley) Array Reliability Reliability of N disks = Reliability of 1 Disk

More information

I/O CANNOT BE IGNORED

I/O CANNOT BE IGNORED LECTURE 13 I/O I/O CANNOT BE IGNORED Assume a program requires 100 seconds, 90 seconds for main memory, 10 seconds for I/O. Assume main memory access improves by ~10% per year and I/O remains the same.

More information

Chapter 10: Mass-Storage Systems. Operating System Concepts 9 th Edition

Chapter 10: Mass-Storage Systems. Operating System Concepts 9 th Edition Chapter 10: Mass-Storage Systems Silberschatz, Galvin and Gagne 2013 Chapter 10: Mass-Storage Systems Overview of Mass Storage Structure Disk Structure Disk Attachment Disk Scheduling Disk Management Swap-Space

More information

Configuring Storage Profiles

Configuring Storage Profiles This part contains the following chapters: Storage Profiles, page 1 Disk Groups and Disk Group Configuration Policies, page 2 RAID Levels, page 3 Automatic Disk Selection, page 4 Supported LUN Modifications,

More information

QuickSpecs. Models. Overview

QuickSpecs. Models. Overview Overview The HP Smart Array P400 is HP's first PCI-Express (PCIe) serial attached SCSI (SAS) RAID controller and provides new levels of performance and reliability for HP servers, through its support of

More information

Data Sheet FUJITSU Storage ETERNUS DX500 S4 Disk System

Data Sheet FUJITSU Storage ETERNUS DX500 S4 Disk System Data Sheet FUJITSU Storage ETERNUS DX500 S4 Disk System Data Sheet FUJITSU Storage ETERNUS DX500 S4 Disk System Leading storage performance, automated quality of service ETERNUS DX - Business-centric Storage

More information

NEC M100 Frequently Asked Questions September, 2011

NEC M100 Frequently Asked Questions September, 2011 What RAID levels are supported in the M100? 1,5,6,10,50,60,Triple Mirror What is the power consumption of M100 vs. D4? The M100 consumes 26% less energy. The D4-30 Base Unit (w/ 3.5" SAS15K x 12) consumes

More information

Application-Oriented Storage Resource Management

Application-Oriented Storage Resource Management Application-Oriented Storage Resource Management V Sawao Iwatani (Manuscript received November 28, 2003) Storage Area Networks (SANs) have spread rapidly, and they help customers make use of large-capacity

More information

Chapter 2 CommVault Data Management Concepts

Chapter 2 CommVault Data Management Concepts Chapter 2 CommVault Data Management Concepts 10 - CommVault Data Management Concepts The Simpana product suite offers a wide range of features and options to provide great flexibility in configuring and

More information

Models Smart Array 6402/128 Controller B21 Smart Array 6404/256 Controller B21

Models Smart Array 6402/128 Controller B21 Smart Array 6404/256 Controller B21 Overview The Smart Array 6400 high performance Ultra320, PCI-X controller family provides maximum performance, flexibility, and reliable data protection for HP ProLiant servers, through its unique modular

More information

Managing Switch Stacks

Managing Switch Stacks Finding Feature Information, page 1 Prerequisites for Switch Stacks, page 1 Restrictions for Switch Stacks, page 2 Information About Switch Stacks, page 2 How to Configure a Switch Stack, page 14 Troubleshooting

More information

All-Flash Storage Solution for SAP HANA:

All-Flash Storage Solution for SAP HANA: All-Flash Storage Solution for SAP HANA: Storage Considerations using SanDisk Solid State Devices WHITE PAPER Western Digital Technologies, Inc. 951 SanDisk Drive, Milpitas, CA 95035 www.sandisk.com Table

More information

Hitachi Adaptable Modular Storage and Workgroup Modular Storage

Hitachi Adaptable Modular Storage and Workgroup Modular Storage O V E R V I E W Hitachi Adaptable Modular Storage and Workgroup Modular Storage Modular Hitachi Storage Delivers Enterprise-level Benefits Hitachi Data Systems Hitachi Adaptable Modular Storage and Workgroup

More information

ServeRAID M5000 Series Battery Kit can provide enterprise-grade reliability businesses seek without compromising performance and reliability.

ServeRAID M5000 Series Battery Kit can provide enterprise-grade reliability businesses seek without compromising performance and reliability. ServeRAID M5014 and M5015 SAS/SATA Controllers - Two x4 port internal 6 Gbps SAS/SATA solutions for high-density servers with the flexibility to use both SAS and SATA hard drives At a glance ServeRAID

More information

Datasheet Fujitsu ETERNUS DX8700 S2 Disk Storage System

Datasheet Fujitsu ETERNUS DX8700 S2 Disk Storage System Datasheet Fujitsu ETERNUS DX8700 S2 Disk Storage System The Flexible Data Safe for Dynamic Infrastructures. ETERNUS DX S2 DISK STORAGE SYSTEMS The Fujitsu second generation of ETERNUS DX disk storage systems,

More information

DELL EMC UNITY: HIGH AVAILABILITY

DELL EMC UNITY: HIGH AVAILABILITY DELL EMC UNITY: HIGH AVAILABILITY A Detailed Review ABSTRACT This white paper discusses the high availability features on Dell EMC Unity purposebuilt solution. October, 2017 1 WHITE PAPER The information

More information

Technical White Paper FUJITSU Storage ETERNUS AF and ETERNUS DX Feature Set

Technical White Paper FUJITSU Storage ETERNUS AF and ETERNUS DX Feature Set Technical White Paper FUJITSU Storage ETERNUS AF and ETERNUS DX Feature Set This white paper provides an overview of the main features supported by the FUJITSU Storage ETERNUS AF all-flash and ETERNUS

More information

Uncovering the Full Potential of Avid Unity MediaNetworks

Uncovering the Full Potential of Avid Unity MediaNetworks Uncovering the Full Potential of Avid Unity MediaNetworks REALIZING GREATER REWARDS WITHOUT THE TRADITIONAL RISKS Archion Technologies 700 S. Victory Blvd Burbank, CA. 91502 818.840.0777 www.archion.com

More information