vSphere Flash Device Support
October 31, 2017
Norman Garrett
Table of Contents

1. vSphere Flash Device Support - Main
   1.1 Target Audience
   1.2 Resolution
   1.3 SSD and Flash Device Use Cases
   1.4 SSD Endurance Criteria
   1.5 SSD Selection Requirements
   1.6 ESXi Coredump Device Usage Model
   1.7 VMware Support Policy
2. vSphere Flash Device Support - Appendices
   2.1 Appendix 1: ESXi Logging Sequential Requirement
   2.2 Appendix 2: OEM Pre-Install Coredump Size
   2.3 Appendix 3: Resizing Coredump Partitions
   2.4 Appendix 4: Creating a Logging Partition
1. vSphere Flash Device Support - Main

This article provides guidance for SSD and flash device usage with vSphere: basic requirements as well as recommendations for specific use cases.
1.1 Target Audience

Customer: Ensure that vSphere hosts are populated with flash and SSD devices that meet the required size and endurance criteria as set forth in Table 1 for the various use cases. For devices used for the coredump use case on hosts with a large amount of system memory, or when vSAN is in use, also ensure that the device size matches the guidance in Table 2 and that the actual size of the coredump partition is adequate.

System Vendor: Ensure that servers certified for vSphere use supported flash and SSD devices that meet the required size and endurance criteria as set forth in Table 1 for the various use cases. For systems with large memory, as well as for vSAN Ready Nodes, vendors should take care to size flash or SSD devices used for the coredump use case as specified in Table 2, to ensure adequate operation in the event of a system crash and to ensure that the coredump partition is correctly sized, as default settings may need to be overridden. For USB factory pre-installs of ESXi on these systems, consult Appendix 2.

Flash Device Vendor: Ensure that your SSD and flash drive types meet the required endurance criteria as set forth in Table 1 for the various use cases. For the logging partition use case, ensure that recommended low cost flash devices have sufficient sequential endurance to meet the alternative requirement outlined in Appendix 1.

1.2 Resolution

VMware vSphere ESXi can use locally attached SSDs (Solid State Disks) and flash devices in multiple ways. Since SSDs offer much higher throughput and much lower latency than traditional magnetic hard disks, the benefits are clear. While offering lower throughput and higher latency, flash devices such as USB or SATADOM can also be appropriate for some use cases.
The potential drawback to using SSD and flash device storage is that endurance can be significantly less than traditional magnetic disks, and it can vary based on the workload type as well as factors such as drive capacity, underlying flash technology, etc. This white paper outlines the minimum SSD and flash device recommendations based on different technologies and use case scenarios.

1.3 SSD and Flash Device Use Cases

A non-exhaustive survey of various usage models in a vSphere environment is listed below.

Host swap cache: This usage model has been supported since vSphere 5.1 for SATA and SCSI connected SSDs. USB and low end SATA or SCSI flash devices are not supported. The workload is heavily influenced by the degree of host memory overcommitment.

Regular datastore: A (local) SSD is used instead of a hard disk drive. This usage model has been supported since vSphere 6.0 for SATA and SCSI connected SSDs. There is currently no support for USB connected SSDs or for low end flash devices regardless of connection type.

vSphere Flash Read Cache (aka Virtual Flash): This usage model has been supported since vSphere 5.5 for SATA and SCSI connected SSDs. There is no support for USB connected SSDs or for low end flash devices.

vSAN: This usage model has been supported since vSphere 5.5 for SATA and SCSI SSDs. The vSAN Hardware Quick Reference Guide should be consulted for detailed requirements.

vSphere ESXi Boot Disk: A USB flash drive, SATADOM, or local SSD can be chosen as the install target for ESXi, the vSphere hypervisor, which then boots from the flash device. This usage model has been supported since vSphere 3.5 for USB flash devices and vSphere 4.0 for SCSI/SATA connected devices. Installation to SATA and SCSI connected SSDs, SATADOMs, and flash devices creates a full install image which includes a logging partition (see below), whereas installation to a USB device creates a boot disk image without a logging partition.

vSphere ESXi Coredump device: The default size for the large coredump partition is 2.5 GiB, which is about 2.7 GB, and the installer creates a large coredump partition on the boot device for vSphere 5.5 and above. After installation the partition can be resized if necessary; consult Appendix 3 for detailed remediation steps. Any SATADOM or SATA/SCSI SSD may be configured with a coredump partition. In a coming release of vSphere, non-boot USB flash devices may also be supported. This usage model has been supported since vSphere 3.5 for boot USB flash devices and since vSphere 4.0 for any SATA or SCSI connected SSD that is local. This usage model also applies to Auto Deploy hosts, which have no boot disk.

vSphere ESXi Logging device: A SATADOM or local SATA/SCSI SSD is chosen as the location for the vSphere logging partition (aka the /scratch partition). This partition may be, but need not be, on the boot disk; this also applies to Auto Deploy hosts, which lack a boot disk. This usage model has been supported since vSphere 6.0 for any SATA or SCSI connected SSD that is local. SATADOMs that meet the requirement set forth in Table 1 are also supported.
This usage model may be supported in a future release of vSphere for USB flash devices that meet the requirement set forth in Table 1.

1.4 SSD Endurance Criteria

The flash industry often uses Terabytes Written (TBW) as a benchmark for SSD endurance. TBW is the number of terabytes that can be written to the device over its useful life. Most devices have distinct TBW ratings for sequential and random IO workloads, with the latter being much lower due to WAF (defined below). Other measures of endurance commonly used are DWPD (Drive Writes Per Day) and P/E (Program/Erase) cycles. Conversion formulas are provided here for the reader's convenience:

Converting DWPD (Drive Writes Per Day) to TBW (Terabytes Written):
TBW = DWPD * Warranty (in Years) * 365 * Capacity (in GB) / 1,000 (GB per TB)

Converting Flash P/E Cycles per Cell to TBW (Terabytes Written):
TBW = Capacity (in GB) * (P/E Cycles per Cell) / (1,000 (GB per TB) * WAF)

WAF (Write Amplification Factor) is a measure of the induced writes caused by inherent properties of flash technology. Due to the difference between the storage block size (512 bytes), the flash cell size (typically 4 KiB or 8 KiB), and the minimum flash erase size of many cells, one write can force a number of induced writes due to copies, garbage collection, etc. For sequential workloads typical WAFs fall in the range of single digits, while for random workloads WAFs can approach or even exceed 100.

Table 1 contains workload characterizations for the various workloads, excepting the Datastore and vSphere Flash Read Cache workloads, which depend on the characteristics of the Virtual Machine workloads being run and thus cannot be characterized here. A WAF from the table can be used with the above P/E -> TBW formula.

1.5 SSD Selection Requirements
Performance and endurance are critical factors when selecting SSDs. For each of the above use cases, the amount and frequency of data written to the SSD or flash device determines the minimum performance and endurance required by ESXi. In general, SSDs can be deployed in all of the above use cases, but (low end) flash devices, including SATADOM, can only be deployed in some. In the table below, ESXi write endurance requirements are stated in terms of Terabytes Written (TBW) for a JEDEC random workload. There are no specific ESXi performance requirements, but products built on top of ESXi, such as vSAN, may have their own requirements.

Table 1: SSD/Flash Endurance Requirements

1. For SSD sizes over 1 TB the endurance should grow proportionally (e.g., 7300 TBW for a 2 TB drive).
2. Endurance requirement normalized to JEDEC random for an inherently sequential workload.
3. Only 4 GB of the device is used, so a 16 GB device need only support 25% as many P/E cycles.
4. Default coredump partition size is 2.7 GB. See Table 2 for detailed size requirements. When boot and coredump devices are colocated(1) the boot device endurance requirement will suffice.
5. Failure of the ESXi boot and/or coredump devices is catastrophic for vSphere, hence the higher requirement as an extra margin of safety when the logging device is colocated(1) with one or both.
6. A future release of vSphere may require higher TBW for its boot device. It is highly recommended that forward-looking systems provide 2 TBW endurance for the vSphere boot device.

IMPORTANT: ALL of the TBW requirements in Table 1 are stated in terms of the JEDEC Enterprise Random Workload(2) because vendors commonly publish only a single endurance number, the random TBW. Vendors may provide a sequential number if asked, and such a number, together with a measured or worst case WAF, can be used to calculate an alternative sequential TBW if the total workload writes in 5 years are known.
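The conversion formulas from section 1.4 can be checked numerically. A minimal sketch; the drive parameters below (capacity, DWPD, P/E cycles, WAF) are invented example values, not requirements from this document:

```shell
# DWPD -> TBW: TBW = DWPD * warranty_years * 365 * capacity_GB / 1000
DWPD=1; YEARS=5; CAP_GB=400
TBW_DWPD=$(( DWPD * YEARS * 365 * CAP_GB / 1000 ))   # 730 TBW
echo "DWPD-based endurance: ${TBW_DWPD} TBW"

# P/E cycles -> TBW: TBW = capacity_GB * PE_cycles / (1000 * WAF)
PE=3000; WAF=4
TBW_PE=$(( CAP_GB * PE / (1000 * WAF) ))             # 300 TBW
echo "P/E-based endurance: ${TBW_PE} TBW"
```

Note how strongly the P/E conversion depends on WAF: the same cell endurance yields a much lower TBW once random-workload amplification is factored in.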
Failure of the boot or coredump device is catastrophic for vSphere, so VMware requires use of the random TBW requirement for the boot and coredump use cases. Appendix 1 describes in detail how to do the calculation for the logging device use case.

(1) Colocated refers to the case where 2 use cases are partitions on the same device, thereby sharing flash cells.
(2) See JESD218A and JESD219 for the Endurance Test Method and Enterprise Workload definitions, respectively.

1.6 ESXi Coredump Device Usage Model

The size requirement for the ESXi coredump device scales with the size of host DRAM and also with usage of vSAN. vSphere ESXi installations with an available local datastore are advised to use dump to file, which automatically reserves the needed space on the local datastore, but flash media in general, and installations using vSAN in particular, will often lack a local datastore and thus require a coredump device. While the default size of 2560 MiB suffices for a host with 1 TiB of DRAM not running vSAN, if vSAN is in use the default size is very often insufficient. Table 2 gives the recommended partition size in units of MiB and the corresponding flash drive size recommendation. If these recommendations are ignored and ESXi crashes, the coredump may be truncated. The footnotes explain the calculation; note that if using vSAN the values from the right side of the table must be added to those from the left side of the table. To override the default or to change the coredump partition size after installing, consult the appendices.

Table 2: Coredump Partition Size Parameter and Size Requirement(4) as a Function of both Host DRAM Size and (if applicable) vSAN Caching Tier Size

1. 2560 MiB is the default, so no parameter is required for systems without vSAN with up to 1 TiB of DRAM, or with vSAN with up to 512 GiB of DRAM and 250 GB of SSDs in the caching tier.
2. Due to GiB to GB conversion, 6 and 12 TiB DRAM sizes require the next larger flash device to accommodate the coredump partition. The provided sizes will also accommodate colocating the boot device and the coredump device on the same physical flash drive.
3. Sizes in these columns must be added to sizes from the left hand side of the table.
For example, a host with 4 TiB of DRAM and 4 TB of SSD in the vSAN Caching Tier requires a flash device size of at least 24 GB (16 GB + 8 GB) and a coredump partition sized as the sum of the corresponding MiB values in Table 2.

4. Coredump device usage is very infrequent, so the TBW requirement is unchanged from Table 1.

1.7 VMware Support Policy

In general, if the SSD's host controller interface is supported by a certified IOVP driver, then the SSD drive is supported for ESXi, provided that the media meets the endurance requirements above. Therefore, there are no specific vSphere restrictions against SATADOM and M.2, provided, again, that they adhere to the endurance requirements set forth in Table 1 above. For USB storage devices (such as flash drives, SD cards plus readers, and external disks of any kind) the drive vendor must work directly with system manufacturers to ensure that the drives are supported for these systems. USB flash devices and SD cards plus readers are qualified pairwise with USB host controllers, and it is possible for a device to fail certification with one host controller but pass with another. VMware strongly recommends that customers who do not have a preinstalled system either obtain a USB flash drive directly from their OEM vendor or purchase a model that has been certified for use with their server.
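As noted in the coredump usage model (section 1.6), hosts with a local datastore can dump to a file instead of a partition. A sketch of the relevant ESXi shell commands; option names and availability vary by release, so verify against your version's esxcli reference before use:

```shell
# create a coredump file on a local datastore, sized automatically,
# and make it the active dump target
esxcli system coredump file add --auto --enable

# confirm which dump file, if any, is active and configured
esxcli system coredump file get
esxcli system coredump file list
```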
2. vSphere Flash Device Support - Appendices

Supplemental information.
2.1 Appendix 1: ESXi Logging Sequential Requirement

As noted previously, a logging partition is created automatically when ESXi is installed on any non-USB media. No distinction is made as to whether this logging partition is located on a magnetic disk, an SSD, or flash, but when the partition is on flash, care must be taken to ensure that device endurance is sufficient. Thus either the sequential workload endurance requirement derived below or the random workload requirement from Table 1 must be met. It cannot be stressed enough that this procedure can only be applied to sequential workloads, such as the logging device workload, where the worst case WAF is < 100 (block mode) and < 10 (page mode). WAF values for the JEDEC random workload are much larger, so a device's random TBW capability is a conservative indicator, but it is one to which the raw device writes for a sequential workload can be compared directly without considering WAF. To apply this method, you must obtain a theoretical sequential TBW and WAF from the flash vendor.

The raw workload has been measured to be 1.5 GB per hour, or just over 64 TBW in 5 years, without any write amplification. This is not a worst case number, and when the logging device is colocated with the boot and/or coredump device use cases, VMware requires that a value of 128 TBW in 5 years be used as a worst case value, again without any WAF, to provide a margin of safety given the catastrophic nature of device failure. These values are provided in Table 1 for comparison directly against the vendor published JEDEC random TBW, but such a comparison is conservative due to the greater WAF of the JEDEC random workload. A more aggressive comparison can be done if the workload WAF for a proposed device has been measured under realistic load on an active ESXi cluster.
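The endurance arithmetic used in this appendix can be spot-checked numerically. The figures below (a measured WAF of 8, a worst-case block-mode WAF of 24, and a vendor theoretical sequential WAF of 1.1) are the example values used in the surrounding text:

```shell
# dedicated vs colocated logging requirement at a measured WAF of 8
DEDICATED=$(( 64 * 8 ))        # 512 TBW
COLOCATED=$(( 128 * 8 ))       # 1024 TBW
echo "dedicated: ${DEDICATED} TBW, colocated: ${COLOCATED} TBW"

# scaling factor: worst-case block-mode WAF of 24 over the 1.1 sequential WAF
SCALE=$(awk 'BEGIN { printf "%.0f", 24 / 1.1 }')    # rounds to 22
echo "scaling factor: ${SCALE}"

# adjusted theoretical sequential TBW for the colocated case
ADJUSTED=$(( 128 * SCALE ))    # 2816 TBW, rounded up to 2820 in the text
echo "adjusted: ${ADJUSTED} TBW"

# sanity check from the text: 2820 * 1.1 must exceed 128 * 24
awk 'BEGIN { exit !(2820 * 1.1 >= 128 * 24) }' && echo "check passed"
```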
For a flash device where the logging device WAF under load is measured to be <= 8, the sequential requirement for the logging device workload on a dedicated device is 64 TBW * 8 WAF = 512 TBW. When colocated with boot and/or coredump devices the requirement is 128 TBW * 8 WAF = 1024 TBW. Consult your flash vendor for assistance with measuring WAF, or use the worst case WAF figures below.

The logging partition is formatted with FAT16, which is similar enough to FAT32 that most flash devices should handle it equally well. ESXi does not issue TRIM or SCSI UNMAP commands with FAT16. By volume of IO the workload is 75% small writes and 25% large writes(3), with 90% of IO write operations being small writes, which have a higher WAF, so we focus here on small writes. Once a log file reaches 1 MB in size it is compressed and deleted. The 8 most recent compressed log files of each type, of which there are at most 16, are retained, for a total of 16 uncompressed and 128 compressed log files. The small writes have an average size close to 128 bytes and are strictly sequential, as they append consecutive lines of text to a log file. They write the same disk block (512 bytes) and flash cell repeatedly as lines are appended, and they overwrite entire flash cells and likely erase blocks, which is significantly different from the JEDEC random workload. If vSAN is in use then the vSAN trace files will also be written to the logging partition; the trace file write load is included in the above TBW figure, but as noted, the bulk of the writes are small, and thus this discussion focuses on the small repeated writes of single disk blocks.

For a flash device operating in block mode, each write of a single disk block will consume an entire flash cell of 4K bytes.
For a flash device operating in page mode, different writes of single disk blocks can share a flash cell, and each block will be written to a different disk block offset in a flash cell that has been previously erased and not subsequently reused.(4)

For a block mode flash device to fill a flash cell of size 4K bytes (8 disk blocks), the workload will write 32 disk blocks, so in block mode 32 flash cells will be written in the worst case. Similarly, with a flash cell size of 8K bytes, 64 writes will be required. Since 75% of the writes are small writes, the worst case WAF is 24 for block mode flash devices with a flash cell size of 4K bytes and 48 with a flash cell size of 8K bytes, but this is only the WAF directly due to the workload. For page mode flash devices, each flash cell is filled densely, so 32 disk blocks will fit in 4 flash cells of size 4K bytes and 64 disk blocks will fit in 4 flash cells of size 8K bytes. A 5th write will likely be needed as part of garbage collection to copy over the final versions of the disk blocks, since the final disk blocks will be interleaved with overwritten ones. Again, 75% of 5 is about 4, so for a page mode flash device the worst case WAF directly due to the workload is 4.

IMPORTANT: If a block mode flash device has an internal cache and can support up to 16 simultaneous IO streams then the direct WAF may be greatly reduced. Consult your flash vendor.

Once the workload specific WAF for a proposed device has been determined, either by direct measurement or using the appropriate worst case value from above, the actual TBW for 5 years will be n * (WAF for the proposed device with the logging workload)(5), where n is 128 TBW if the logging device is colocated with the boot and/or coredump devices and 64 TBW if not. To find the scaling factor, divide the (WAF for the proposed device with the logging workload) by either 1.1 or the vendor quoted theoretical sequential WAF. For example, 24 (the worst case WAF for block mode with a 4K cell size) divided by 1.1 is 22. Multiply the requirement (128 TBW if colocated, 64 TBW if not) by the scaling factor of 22. Since 128 * 22 = 2816 TBW, 2820 TBW is the adjusted theoretical sequential TBW for the colocated logging workload on this device. Verify that the requirement times the theoretical sequential WAF (2820 TBW * 1.1 WAF = 3102) exceeds the raw workload writes times the applicable WAF (128 TBW * 24 WAF = 3072). Compare this requirement to the vendor's theoretical sequential TBW value for the proposed device.

VMware does not currently support post-installation configuration of a logging partition on USB media but may in a future release (see Appendix 4).

(3) With vSAN in use the split is roughly 3 to 1 by volume of IOs. When vSAN is not in use there are fewer large IOs.

(4) Newer technologies may have additional factors not considered here. Please consult your flash vendor.
(5) By (WAF for the proposed device with the logging workload) is meant, in the absence of a measurement, either 24 or 48 for a block mode flash device with a cell size of 4K bytes or 8K bytes respectively, or 4 for a page mode flash device. WAF values from actual measurement under realistic workload conditions are preferred.

2.2 Appendix 2: OEM Pre-Install Coredump Size

For USB factory pre-installs (also known as a "dd-image") and also in a vSphere Auto Deploy environment, the coredump partition size can be provided on first boot with this syntax: autopartitiondiskdumppartitionsize=5120, where 5120 is twice the default in MiB units. When preparing the dd-image for systems with large memory, or intended for customers who will use vSAN, OEMs and partners should loopback mount the dd-image, edit the boot.cfg file in the bootbank (partition 5), and add the autopartitiondiskdumppartitionsize option with a value to the ESXi boot line so that it will be parsed at first boot. No value need be specified for diskdumpslotsize, since the autopartitiondiskdumppartitionsize value will default the slot size to the same size, resulting in a larger coredump partition with a single slot.

Although no further action is needed from the OEM, several points should be emphasized to customers:

Customers should choose to upgrade rather than install when moving to a newer version of vSphere ESXi, as installing will unconditionally replace the larger-than-default large coredump partition with a large coredump partition of the default size, 2560 MiB, undoing any first-boot work.

On subsequent boots the autopartitiondiskdumppartitionsize option has no effect, and thus will not work if, for example, a customer deletes an existing coredump partition. Customers can remediate this situation using the steps in Appendix 3.

Customers who accidentally choose to install, or who otherwise need to manually resize their coredump partition, can remediate the situation using the steps in Appendix 3.
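The boot.cfg edit described above might look like the following sketch. The boot.cfg contents here are illustrative stand-ins for the real bootbank copy (which would be edited in place on the loopback-mounted partition 5); only the option name and its placement on the kernel options line come from this document:

```shell
# sample boot.cfg standing in for the bootbank copy (contents illustrative)
cat > /tmp/boot.cfg <<'EOF'
bootstate=0
title=Loading VMware ESXi
kernel=b.b00
kernelopt=
modules=jumpstrt.gz --- useropts.gz
EOF

# append the first-boot coredump sizing option to the kernel options line
sed -i 's|^kernelopt=.*|& autopartitiondiskdumppartitionsize=5120|' /tmp/boot.cfg

grep '^kernelopt=' /tmp/boot.cfg
# kernelopt= autopartitiondiskdumppartitionsize=5120
```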
IMPORTANT: In current releases of vSphere, upgrade is NOT the default option when using the ISO installer. This issue will be addressed in a forthcoming release of vSphere.
IMPORTANT: In vSphere 5.5 and 6.0 an upgrade of USB factory pre-installs (aka the dd-image) may not be possible in the presence of a large coredump partition, regardless of size (i.e., this issue exists even with a default sized large coredump partition). The failure manifests in the ISO upgrade GUI with a message that vmkfstools has failed with an error. This issue is being resolved in patches to both the 5.5 and 6.0 releases of vSphere. Customers who encounter this issue and wish to upgrade should retry with the most up to date patch for their version of vSphere.

2.3 Appendix 3: Resizing Coredump Partitions

An ESXi instance can have multiple coredump partitions, but at any time only one can be active. By default an ESXi installation has two coredump partitions: a legacy partition of size 110 MiB, which is adequate for operation in Maintenance Mode, and a large partition, which by default is of size 2.5 GiB and which is required when not in Maintenance Mode. While increasing the size of the large coredump partition we make the legacy one active for safety. A coredump partition may hold multiple coredumps, with each coredump occupying a slot, but coredump partitions on local flash media are best configured with a single slot. After resizing, the slot size can be specified on the ESXi boot line with this syntax: diskdumpslotsize=5120, where 5120 MiB is twice the default; the value should match your enlargement.

IMPORTANT: To manually resize the coredump partition, place the host in Maintenance Mode.

IMPORTANT: After resizing the coredump partition, the ESXi host must be rebooted with the appropriate diskdumpslotsize specified to finish the resizing operation.

Is the boot media coredump partition active?

Starting with vSphere 5.5, ESXi images have a large coredump partition of size 2.5 GiB (2.7 GB), but on USB media in particular it may not be present, and if present it may not be active.
Here is an example showing a host with no active coredump partition on the USB flash boot media: in this case, with a pre-install dd-image on a USB flash drive, a coredump partition on a local SCSI disk was available and used, so no coredump partition was created on first boot. If the local SCSI disk is later used for vSAN or removed altogether, a coredump partition will not be created. The next section gives a procedure for creating a large coredump partition when none exists.

Creating a coredump partition on USB boot media

IMPORTANT: This procedure is only applicable to vSphere ESXi 5.5 and later on USB boot media with the MBR partition table type. This is the default for OEM pre-install dd-images. To check whether the partition table type of the boot device is MBR (aka "msdos"), use this command:
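A sketch of both checks (active coredump partition, and boot device partition table type) from the ESXi shell. The device name is an example only, and output formats vary by release:

```shell
# show the active and configured coredump partitions; empty values mean none is active
esxcli system coredump partition get
# list all partitions usable as a coredump target
esxcli system coredump partition list

# the /bootbank symlink identifies the boot volume (device name below is an example)
ls -l /bootbank

# print the partition table; the first line reports "msdos" (MBR) or "gpt"
partedUtil getptbl /dev/disks/mpx.vmhba32:C0:T0:L0
```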
Note that the previous section gives commands to determine the boot device. Customers who, for whatever reason (e.g., upgrade from an install image of a previous version of vSphere ESXi), have a GPT partition table type on their boot device and lack a large coredump partition should contact VMware support for assistance. As noted above, customers with OEM pre-install dd-images may not have a large coredump partition.

If the boot device partition table type is MBR, then fdisk can be used to create a coredump partition. The default units of fdisk on ESXi are cylinders, so use the u command to switch units to sectors before using the n command to create a primary partition with number 2, a default start sector, and an end sector one less than twice the default size in KiB, since a disk sector of 512 bytes is half of 1 KiB (2560 MiB == 2,621,440 KiB). Next use the t command to set the type to fc, and then write out the partition table with the w command. Verify partition creation and activate the new coredump partition with these commands:

IMPORTANT: After creation, reboot and specify diskdumpslotsize to ensure the correct slot size.

Resizing a coredump partition on USB boot media

As noted above, the default coredump partition size may not be sufficient; Table 2 provides the needed size for various system configurations. partedUtil can be used on both MBR and GPT partition table types, so this procedure can be used on all ESXi install types, but for OEM pre-install dd-images the coredump partition will be partition number 2 due to limitations of MBR. Here we increase the coredump partition size from 2560 MiB to the Table 2 recommended size for a host with 4 TiB of DRAM and 4 TB of SSDs in the vSAN Caching Tier and a 32 GB USB flash (or SD card) boot device. This example is for an install image; for a dd-image substitute partition number 2 for 9.
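A sketch of the verification, activation, and resize steps. The device name, partition numbers, and sector values are placeholders (actual sizing comes from Table 2); verify the esxcli and partedUtil syntax against your ESXi release:

```shell
# after the fdisk steps (u, n, p, 2, default start, end sector, t, fc, w):
# verify the new partition appears in the partition table
partedUtil getptbl /dev/disks/mpx.vmhba32:C0:T0:L0

# activate the new coredump partition
esxcli system coredump partition set --partition=mpx.vmhba32:C0:T0:L0:2
esxcli system coredump partition set --enable=true

# to resize an existing partition instead (works on MBR and GPT):
# deactivate it first (the text recommends making the small legacy
# partition active for safety), then grow it with partedUtil;
# start/end sectors below are placeholders for the Table 2 sizing
esxcli system coredump partition set --enable=false
partedUtil resize /dev/disks/mpx.vmhba32:C0:T0:L0 9 8224 58720255
```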
IMPORTANT: After resizing, reboot and specify diskdumpslotsize to ensure the correct slot size.

Resizing a coredump partition on other boot media

Excepting installations to USB devices, a vSphere ESXi install device will have a logging partition and, unless the flash device is small, a datastore partition as well. If the latter is present it may be necessary to delete or resize the datastore partition to free up space. The logging partition may be relocated or deleted, depending upon requirements. Here we delete the datastore partition and relocate the logging partition for an ESXi installation on a local SCSI disk (naa abcdef). We begin by repointing /scratch at /tmp, since we will be relocating the underlying partition.
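Repointing /scratch is done through the ScratchConfig advanced option. A sketch based on VMware's persistent-scratch KB procedure; verify the command and option path for your release:

```shell
# point the configured scratch location at /tmp
# (takes effect only after the host is rebooted)
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /tmp

# confirm the configured value
vim-cmd hostsvc/advopt/view ScratchConfig.ConfiguredScratchLocation
```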
Now that space is available, the coredump partition can be grown using the technique described in the previous section ("Creating a coredump partition on USB boot media"). Reconfiguration and a reboot are required to use the relocated logging partition; for details see the VMware KB article "Creating a persistent scratch location for ESXi 4.x/5.x/6.x". A reboot is also required to use a resized coredump partition, and the reboots can be combined into one at the end of all the partition manipulation described here.

2.4 Appendix 4: Creating a Logging Partition

IMPORTANT: This procedure is only applicable to vSphere ESXi 5.5 and later on USB boot media with the MBR partition table type. To check the partition table type on an ESXi installation, see the section "Creating a coredump partition on USB boot media" in Appendix 3. It will be useful to study that section in detail before continuing here.

IMPORTANT: The USB flash device must meet the endurance requirement set forth in Table 1. VMware strongly recommends that the USB flash device be provided by the host system OEM and certified by the OEM for the logging device usage model. See the VMware Support Policy above.

The default units of fdisk on ESXi are cylinders, so use the u command to switch units to sectors before using the n command to create a primary partition with number 3, a default start sector, and an end sector one less than the maximum number of 512-byte sectors supported in a FAT16 volume. Next use the t command to set the partition type to 6, and then write out the partition table with the w command. Verify and format the new partition with these commands:
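A sketch of the verification step only; the device name is an example, and the FAT16 formatting tool available on the ESXi shell varies by build, so consult VMware support for the formatting command appropriate to your release:

```shell
# after the fdisk steps (u, n, p, 3, default start, FAT16-max end, t, 6, w):
# confirm partition 3 is present with type 6 (FAT16); device name is an example
partedUtil getptbl /dev/disks/mpx.vmhba32:C0:T0:L0

# confirm the new partition device node exists
ls /dev/disks/ | grep 'mpx.vmhba32:C0:T0:L0'
```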
Reconfiguration and a reboot are required to use the newly created partition; for details see the VMware Knowledge Base article "Creating a persistent scratch location for ESXi 4.x/5.x/6.x".
More information3MG2-P Series. Customer Approver. Approver. Customer: Customer Part Number: Innodisk Part Number: Model Name: Date:
3MG2-P Series Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date: Innodisk Approver Customer Approver Table of Contents 1.8 SATA SSD 3MG2-P LIST OF FIGURES... 6 1. PRODUCT
More information3MS4 Series. Customer Approver. Innodisk Approver. Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date:
3MS4 Series Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date: Innodisk Approver Customer Approver Table of contents SATADOM-ML 3MS4 1. PRODUCT OVERVIEW... 7 1.1 INTRODUCTION
More information3ME4 Series. Customer Approver. Innodisk Approver. Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date:
3ME4 Series Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date: Innodisk Approver Customer Approver Table of contents LIST OF FIGURES... 6 1. PRODUCT OVERVIEW... 7 1.1 INTRODUCTION
More informationSFS: Random Write Considered Harmful in Solid State Drives
SFS: Random Write Considered Harmful in Solid State Drives Changwoo Min 1, 2, Kangnyeon Kim 1, Hyunjin Cho 2, Sang-Won Lee 1, Young Ik Eom 1 1 Sungkyunkwan University, Korea 2 Samsung Electronics, Korea
More informationFree up rack space by replacing old servers and storage
A Principled Technologies report: Hands-on testing. Real-world results. Free up rack space by replacing old servers and storage A 2U Dell PowerEdge FX2s and all-flash VMware vsan solution powered by Intel
More informationPerformance Testing December 16, 2017
December 16, 2017 1 1. vsan Performance Testing 1.1.Performance Testing Overview Table of Contents 2 1. vsan Performance Testing Performance Testing 3 1.1 Performance Testing Overview Performance Testing
More informationVMWARE VSAN LICENSING GUIDE - MARCH 2018 VMWARE VSAN 6.6. Licensing Guide
- MARCH 2018 VMWARE VSAN 6.6 Licensing Guide Table of Contents Introduction 3 License Editions 4 Virtual Desktop Infrastructure... 5 Upgrades... 5 Remote Office / Branch Office... 5 Stretched Cluster with
More informationvsphere Installation and Setup Update 1 Modified on 04 DEC 2017 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5
vsphere Installation and Setup Update 1 Modified on 04 DEC 2017 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 You can find the most up-to-date technical documentation on the VMware website at:
More informationEnabling vsan on the Cluster First Published On: Last Updated On:
First Published On: 11-04-2016 Last Updated On: 11-07-2016 1 1. Enabling vsan 1.1.Enabling vsan 1.2.Check Your Network Thoroughly Table of Contents 2 1. Enabling vsan Steps to enable vsan 3 1.1 Enabling
More informationPowerVault MD3 SSD Cache Overview
PowerVault MD3 SSD Cache Overview A Dell Technical White Paper Dell Storage Engineering October 2015 A Dell Technical White Paper TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS
More informationvsphere Installation and Setup Update 2 Modified on 10 JULY 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5
vsphere Installation and Setup Update 2 Modified on 10 JULY 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 You can find the most up-to-date technical documentation on the VMware website at:
More informationWhite Paper: Increase ROI by Measuring the SSD Lifespan in Your Workload
White Paper: Using SMART Attributes to Estimate Drive Lifetime Increase ROI by Measuring the SSD Lifespan in Your Workload Using SMART Attributes to Estimate Drive Endurance The lifespan of storage has
More informationvsan 6.6 Performance Improvements First Published On: Last Updated On:
vsan 6.6 Performance Improvements First Published On: 07-24-2017 Last Updated On: 07-28-2017 1 Table of Contents 1. Overview 1.1.Executive Summary 1.2.Introduction 2. vsan Testing Configuration and Conditions
More informationVMware vsan Design and Sizing Guide First Published On: February 21, 2017 Last Updated On: April 04, 2018
VMware vsan Design and Sizing Guide First Published On: February 21, 2017 Last Updated On: April 04, 2018 1 Table of Contents 1. Introduction 1.1.Overview 2. vsan Design Overview 2.1.Adhere to the VMware
More informationWhat's New in vsan 6.2 First Published On: Last Updated On:
First Published On: 07-07-2016 Last Updated On: 08-23-2017 1 1. Introduction 1.1.Preface 1.2.Architecture Overview 2. Space Efficiency 2.1.Deduplication and Compression 2.2.RAID - 5/6 (Erasure Coding)
More informationPage Mapping Scheme to Support Secure File Deletion for NANDbased Block Devices
Page Mapping Scheme to Support Secure File Deletion for NANDbased Block Devices Ilhoon Shin Seoul National University of Science & Technology ilhoon.shin@snut.ac.kr Abstract As the amount of digitized
More informationThe What, Why and How of the Pure Storage Enterprise Flash Array. Ethan L. Miller (and a cast of dozens at Pure Storage)
The What, Why and How of the Pure Storage Enterprise Flash Array Ethan L. Miller (and a cast of dozens at Pure Storage) Enterprise storage: $30B market built on disk Key players: EMC, NetApp, HP, etc.
More informationFILE SYSTEMS, PART 2. CS124 Operating Systems Fall , Lecture 24
FILE SYSTEMS, PART 2 CS124 Operating Systems Fall 2017-2018, Lecture 24 2 Last Time: File Systems Introduced the concept of file systems Explored several ways of managing the contents of files Contiguous
More informationIOmark- VDI. IBM IBM FlashSystem V9000 Test Report: VDI a Test Report Date: 5, December
IOmark- VDI IBM IBM FlashSystem V9000 Test Report: VDI- 151205- a Test Report Date: 5, December 2015 Copyright 2010-2015 Evaluator Group, Inc. All rights reserved. IOmark- VDI, IOmark- VM, VDI- IOmark,
More informationDisclaimer This presentation may contain product features that are currently under development. This overview of new technology represents no commitme
SER1143BU A Deep Dive into vsphere 6.5 Core Storage Features and Functionality Cormac Hogan Cody Hosterman VMworld 2017 Content: Not for publication #VMworld #SER1143BU Disclaimer This presentation may
More informationStorMagic SvSAN 6.1. Product Announcement Webinar and Live Demonstration. Mark Christie Senior Systems Engineer
StorMagic SvSAN 6.1 Product Announcement Webinar and Live Demonstration Mark Christie Senior Systems Engineer Introducing StorMagic What do we do? StorMagic SvSAN eliminates the need for physical SANs
More informationConfiguration Maximums. Update 1 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5
Configuration s Update 1 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 Configuration s You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/
More informationaslc Mini SATA III Flash Module PHANES-HR Series Product Specification APRO aslc MINI SATA III FLASH MODULE
aslc Mini SATA III Flash Module PHANES-HR Series Product Specification APRO aslc MINI SATA III FLASH MODULE Version 01V1 Document No. 100-xBMSR-PHCTMBAS June 2016 APRO CO., LTD. Phone: +88628226-1539 Fax:
More informationEPTDM Features SATA III 6Gb/s msata SSD
EPTDM Features SATA III 6Gb/s msata SSD Transcend EPTDM series are msata Solid State Drives (SSDs) with high performance and quality Flash Memory assembled on a printed circuit board. These devices feature
More informationIOmark- VM. HP HP ConvergedSystem 242- HC StoreVirtual Test Report: VM- HC b Test Report Date: 27, April
IOmark- VM HP HP ConvergedSystem 242- HC StoreVirtual Test Report: VM- HC- 150427- b Test Report Date: 27, April 2015 Copyright 2010-2015 Evaluator Group, Inc. All rights reserved. IOmark- VM, IOmark-
More information3MG2-P Series. Customer Approver. Innodisk Approver. Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date:
3MG2-P Series Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date: Innodisk Approver Customer Approver Table of Contents 1.8 SATA SSD 3MG2-P LIST OF FIGURES... 6 1. PRODUCT
More informationMass-Storage Structure
Operating Systems (Fall/Winter 2018) Mass-Storage Structure Yajin Zhou (http://yajin.org) Zhejiang University Acknowledgement: some pages are based on the slides from Zhi Wang(fsu). Review On-disk structure
More information3SE4 Series. Customer Approver. Innodisk Approver. Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date:
3SE4 Series Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date: Innodisk Approver Customer Approver Table of contents msata 3SE4 LIST OF FIGURES... 6 1. PRODUCT OVERVIEW...
More information3ME4 Series. Customer Approver. Innodisk Approver. Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date:
3ME4 Series Customer: Customer Part Number: Innodisk Part Number: Innodisk Model me: Date: Innodisk Approver Customer Approver Table of contents 2.5 SATA SSD 3ME4 LIST OF FIGURES... 6 1. PRODUCT OVERVIEW...
More informationSurveillance Dell EMC Storage with Bosch Video Recording Manager
Surveillance Dell EMC Storage with Bosch Video Recording Manager Sizing and Configuration Guide H13970 REV 2.1 Copyright 2015-2017 Dell Inc. or its subsidiaries. All rights reserved. Published December
More information3ME4 Series. Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date: Innodisk Approver. Customer Approver
3ME4 Series Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date: Innodisk Approver Customer Approver Table of Contents Slim SSD 3ME4 LIST OF FIGURES... 6 1. PRODUCT OVERVIEW...
More informationMLC. Mini SATA III Flash Module. HERMES-JI Series. Product Specification APRO MLC MINI SATA III FLASH MODULE
MLC Mini SATA III Module HERMES-JI Series Product Specification APRO MLC MINI SATA III FLASH MODULE Version 01V2 Document No. 100-xBMSM-JJICTMB March 2016 APRO CO., LTD. Phone: +88628226-1539 Fax: +88628226-1389
More informationFILE SYSTEMS. CS124 Operating Systems Winter , Lecture 23
FILE SYSTEMS CS124 Operating Systems Winter 2015-2016, Lecture 23 2 Persistent Storage All programs require some form of persistent storage that lasts beyond the lifetime of an individual process Most
More informationBlock alignment for best performance on Nimble Storage. (Version A)
Block alignment for best performance on Nimble Storage (Version 20130326-A) The purpose of this KB article is to describe various types of block alignment pathologies which may cause degradations in performance
More informationA+ Guide to Hardware, 4e. Chapter 7 Hard Drives
A+ Guide to Hardware, 4e Chapter 7 Hard Drives Objectives Learn how the organization of data on floppy drives and hard drives is similar Learn about hard drive technologies Learn how a computer communicates
More informationVMware Virtual SAN. Technical Walkthrough. Massimiliano Moschini Brand Specialist VCI - vexpert VMware Inc. All rights reserved.
VMware Virtual SAN Technical Walkthrough Massimiliano Moschini Brand Specialist VCI - vexpert 2014 VMware Inc. All rights reserved. VMware Storage Innovations VI 3.x VMFS Snapshots Storage vmotion NAS
More informationvsan Health Check Improvements First Published On: Last Updated On:
vsan Health Check Improvements First Published On: 03-29-2017 Last Updated On: 05-02-2018 1 Table of Contents 1. Health Check and Performance Improvements 1.1.Online Health Check 1.2.Performance Diagnostics
More information3ME4 Series. Customer Approver. Innodisk Approver. Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date:
3ME4 Series Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date: Innodisk Approver Customer Approver Table of contents msata mini 3ME4 LIST OF FIGURES... 6 1. PRODUCT OVERVIEW...
More informationI/O Devices & SSD. Dongkun Shin, SKKU
I/O Devices & SSD 1 System Architecture Hierarchical approach Memory bus CPU and memory Fastest I/O bus e.g., PCI Graphics and higherperformance I/O devices Peripheral bus SCSI, SATA, or USB Connect many
More informationData rate - The data rate is the number of bytes per second that the drive can deliver to the CPU.
A+ Guide to Hardware, 4e Chapter 7 Hard Drives Learning from Floppy Drives Floppy drives are an obsolescent technology Replacements: CD drives and USB flash memory Good reasons for studying floppy drive
More informationIOmark-VM. VMware VSAN Intel Servers + VMware VSAN Storage SW Test Report: VM-HC a Test Report Date: 16, August
IOmark-VM VMware VSAN Intel Servers + VMware VSAN Storage SW Test Report: VM-HC-160816-a Test Report Date: 16, August 2016 Copyright 2010-2016 Evaluator Group, Inc. All rights reserved. IOmark-VM, IOmark-VDI,
More informationSurveillance Dell EMC Storage with Milestone XProtect Corporate
Surveillance Dell EMC Storage with Milestone XProtect Corporate Sizing Guide H14502 REV 1.5 Copyright 2014-2018 Dell Inc. or its subsidiaries. All rights reserved. Published January 2018 Dell believes
More informationSATA 1.8-inch and 2.5-inch MLC Enterprise SSDs for System x Product Guide
SATA 1.8-inch and 2.5-inch MLC Enterprise SSDs for System x Product Guide The SATA 1.8-inch and 2.5-inch MLC Enterprise solid-state drives (SSDs) for System x employ enterprise MLC NAND technology to bring
More informationWhite Paper Features and Benefits of Fujitsu All-Flash Arrays for Virtualization and Consolidation ETERNUS AF S2 series
White Paper Features and Benefits of Fujitsu All-Flash Arrays for Virtualization and Consolidation Fujitsu All-Flash Arrays are extremely effective tools when virtualization is used for server consolidation.
More information3ME4 Series. Customer Approver. Innodisk Approver. Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date:
3ME4 Series Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date: Innodisk Approver Customer Approver Table of contents msata 3ME4 LIST OF FIGURES... 6 1. PRODUCT OVERVIEW...
More informationThe TR200 SATA Series: All-new Retail SSDs from Toshiba
The TR200 SATA Series: All-new Retail SSDs from Toshiba UNDER EMBARGO UNTIL JULY 27, 2017 9AM EST Introducing the New Retail From 2017 onwards, retail SSDs will ship under the Toshiba brand name, while
More informationPartitioning and Formatting Guide
Partitioning and Formatting Guide Version 1.2 Date 05-15-2006 Partitioning and Formatting Guide This guide is designed to explain how to setup your drive with the correct partition and format for your
More informationCLOUD PROVIDER POD RELEASE NOTES
VMware Cloud Provider Pod 1.0.1 20 November 2018 Check for additions and updates to these release notes Release Notes Version 1.0.1 This Release Notes document includes release details about VMware Cloud
More informationA+ Guide to Managing and Maintaining your PC, 6e. Chapter 8 Hard Drives
A+ Guide to Managing and Maintaining your PC, 6e Chapter 8 Hard Drives Introduction Hard drive: most important secondary storage device Hard drive technologies have evolved rapidly Hard drive capacities
More informationvsphere Upgrade Update 1 Modified on 4 OCT 2017 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5
Update 1 Modified on 4 OCT 2017 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/ If you
More informationConfiguration Maximums
Configuration s vsphere 6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions
More informationDell EMC SAN Storage with Video Management Systems
Dell EMC SAN Storage with Video Management Systems Surveillance October 2018 H14824.3 Configuration Best Practices Guide Abstract The purpose of this guide is to provide configuration instructions for
More informationCISC 7310X. C11: Mass Storage. Hui Chen Department of Computer & Information Science CUNY Brooklyn College. 4/19/2018 CUNY Brooklyn College
CISC 7310X C11: Mass Storage Hui Chen Department of Computer & Information Science CUNY Brooklyn College 4/19/2018 CUNY Brooklyn College 1 Outline Review of memory hierarchy Mass storage devices Reliability
More information2.5-Inch SATA SSD -7.0mm PSSDS27xxx3
2.5-Inch SATA SSD -7.0mm PSSDS27xxx3 Features: Ultra-efficient Block Management & Wear Leveling Advanced Read Disturb Management Intelligent Recycling for advanced free space management RoHS-compliant
More informationOptimizing SSD Operation for Linux
ACTINEON, INC. Optimizing SSD Operation for Linux 1.00 Davidson Hom 3/27/2013 This document contains recommendations to configure SSDs for reliable operation and extend write lifetime in the Linux environment.
More informationTechnical Notes. Considerations for Choosing SLC versus MLC Flash P/N REV A01. January 27, 2012
Considerations for Choosing SLC versus MLC Flash Technical Notes P/N 300-013-740 REV A01 January 27, 2012 This technical notes document contains information on these topics:...2 Appendix A: MLC vs SLC...6
More information2.5-Inch SATA SSD PSSDS27Txxx6
DMS Celerity 2.5 SSD Datasheet 2.5-Inch SATA SSD PSSDS27Txxx6 Features: SATA 3.1 Compliant, SATA 6.0Gb/s with 3Gb/s and 1.5Gb/s support ATA modes supported PIO modes 3 and 4 Multiword DMA modes 0, 1, 2
More informationData Protection for Cisco HyperFlex with Veeam Availability Suite. Solution Overview Cisco Public
Data Protection for Cisco HyperFlex with Veeam Availability Suite 1 2017 2017 Cisco Cisco and/or and/or its affiliates. its affiliates. All rights All rights reserved. reserved. Highlights Is Cisco compatible
More informationBest Practices for SSD Performance Measurement
Best Practices for SSD Performance Measurement Overview Fast Facts - SSDs require unique performance measurement techniques - SSD performance can change as the drive is written - Accurate, consistent and
More informationCisco HyperFlex HX220c M4 and HX220c M4 All Flash Nodes
Data Sheet Cisco HyperFlex HX220c M4 and HX220c M4 All Flash Nodes Fast and Flexible Hyperconverged Systems You need systems that can adapt to match the speed of your business. Cisco HyperFlex Systems
More informationLenovo Enterprise Capacity Solid State Drives Product Guide
Lenovo Enterprise Capacity Solid State Drives Product Guide Enterprise Capacity solid-state drives (SSDs) from Lenovo provide high-performance, reliable storage solutions for high-capacity enterprise applications.
More informationCS24: INTRODUCTION TO COMPUTING SYSTEMS. Spring 2017 Lecture 13
CS24: INTRODUCTION TO COMPUTING SYSTEMS Spring 2017 Lecture 13 COMPUTER MEMORY So far, have viewed computer memory in a very simple way Two memory areas in our computer: The register file Small number
More informationVMware vsphere 5.0 STORAGE-CENTRIC FEATURES AND INTEGRATION WITH EMC VNX PLATFORMS
VMware vsphere 5.0 STORAGE-CENTRIC FEATURES AND INTEGRATION WITH EMC VNX PLATFORMS A detailed overview of integration points and new storage features of vsphere 5.0 with EMC VNX platforms EMC Solutions
More informationMaximizing VMware ESX Performance Through Defragmentation of Guest Systems
Maximizing VMware ESX Performance Through Defragmentation of Guest Systems This paper details the results of testing performed to determine if there was any measurable performance benefit to be derived
More informationvstart 50 VMware vsphere Solution Specification
vstart 50 VMware vsphere Solution Specification Release 1.3 for 12 th Generation Servers Dell Virtualization Solutions Engineering Revision: A00 March 2012 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES
More informationConfiguring Storage Profiles
This part contains the following chapters: Storage Profiles, page 1 Disk Groups and Disk Group Configuration Policies, page 2 RAID Levels, page 3 Automatic Disk Selection, page 4 Supported LUN Modifications,
More informationDell EMC BOSS-S1 (Boot Optimized Server Storage) User's Guide
Dell EMC BOSS-S1 (Boot Optimized Server Storage) User's Guide Notes, cautions, and warnings NOTE: A NOTE indicates important information that helps you make better use of your product. CAUTION: A CAUTION
More informationCHAPTER 11: IMPLEMENTING FILE SYSTEMS (COMPACT) By I-Chen Lin Textbook: Operating System Concepts 9th Ed.
CHAPTER 11: IMPLEMENTING FILE SYSTEMS (COMPACT) By I-Chen Lin Textbook: Operating System Concepts 9th Ed. File-System Structure File structure Logical storage unit Collection of related information File
More informationStorage Strategies for vsphere 5.5 users
Storage Strategies for vsphere 5.5 users Silverton Consulting, Inc. StorInt Briefing 2 Introduction VMware vsphere is the number one hypervisor solution in the world with more than 500,000 customers worldwide.
More informationBenchmarking Enterprise SSDs
Whitepaper March 2013 Benchmarking Enterprise SSDs When properly structured, benchmark tests enable IT professionals to compare solid-state drives (SSDs) under test with conventional hard disk drives (HDDs)
More informationUsing IBM Flex System Manager for efficient VMware vsphere 5.1 resource deployment
Using IBM Flex System Manager for efficient VMware vsphere 5.1 resource deployment Jeremy Canady IBM Systems and Technology Group ISV Enablement March 2013 Copyright IBM Corporation, 2013 Table of contents
More informationThe HP 3PAR Get Virtual Guarantee Program
Get Virtual Guarantee Internal White Paper The HP 3PAR Get Virtual Guarantee Program Help your customers increase server virtualization efficiency with HP 3PAR Storage HP Restricted. For HP and Channel
More informationCLOUD PROVIDER POD RELEASE NOTES
VMware Cloud Provider Pod 1.0 18 October 2018 Check for additions and updates to these release notes Release Notes Version 1.0 This Release Notes document includes details about VMware Cloud Provider Pod
More informationVMware vsan 6.6. Licensing Guide. Revised May 2017
VMware 6.6 Licensing Guide Revised May 2017 Contents Introduction... 3 License Editions... 4 Virtual Desktop Infrastructure... 5 Upgrades... 5 Remote Office / Branch Office... 5 Stretched Cluster... 7
More informationSolid State Drive (SSD) Cache:
Solid State Drive (SSD) Cache: Enhancing Storage System Performance Application Notes Version: 1.2 Abstract: This application note introduces Storageflex HA3969 s Solid State Drive (SSD) Cache technology
More informationEmulex LPe16000B 16Gb Fibre Channel HBA Evaluation
Demartek Emulex LPe16000B 16Gb Fibre Channel HBA Evaluation Evaluation report prepared under contract with Emulex Executive Summary The computing industry is experiencing an increasing demand for storage
More informationAssessing performance in HP LeftHand SANs
Assessing performance in HP LeftHand SANs HP LeftHand Starter, Virtualization, and Multi-Site SANs deliver reliable, scalable, and predictable performance White paper Introduction... 2 The advantages of
More information3MG2-P Series. Customer Approver. Innodisk Approver. Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date:
3MG2-P Series Customer: Customer Part Number: Innodisk Part Number: Innodisk Model Name: Date: Innodisk Approver Customer Approver Table of contents 2.5 SATA SSD 3MG2-P LIST OF FIGURES... 6 1. PRODUCT
More informationSurveillance Dell EMC Storage with Cisco Video Surveillance Manager
Surveillance Dell EMC Storage with Cisco Video Surveillance Manager Configuration Guide H14001 REV 1.1 Copyright 2015-2017 Dell Inc. or its subsidiaries. All rights reserved. Published May 2015 Dell believes
More informationCisco HyperFlex HX220c M4 Node
Data Sheet Cisco HyperFlex HX220c M4 Node A New Generation of Hyperconverged Systems To keep pace with the market, you need systems that support rapid, agile development processes. Cisco HyperFlex Systems
More informationLenovo PM963 NVMe Enterprise Value PCIe SSDs Product Guide
Lenovo PM963 NVMe Enterprise Value PCIe SSDs Product Guide The Lenovo PM963 NVMe Enterprise Value PCIe solid-state drives (SSDs) in capacities of 1.92 TB and 3.84 TB are general-purpose yet high-performance
More informationHyperscaler Storage. September 12, 2016
Storage Networking Industry Association Technical White Paper Hyperscaler Storage Abstract: Hyperscaler storage customers typically build their own storage systems from commodity components. They have
More information