BEST PRACTICES FOR RUNNING ORACLE ON DELL EMC XTREMIO X2


WHITE PAPER

Abstract

This white paper describes the best practices and recommendations for deploying an Oracle Database Management System 12c on Oracle Linux 7.x running on top of Dell EMC's XtremIO X2 enterprise all-flash storage array.

August 2018

Contents

Abstract
Executive Summary
Introduction
Test Setup
Test Performance Results
Storage Array: Dell EMC XtremIO X2 All-Flash Array
XtremIO X2 Overview
Architecture and Scalability
XIOS and the I/O Flow
XtremIO Write I/O Flow
XtremIO Read I/O Flow
System Features
Inline Data Reduction
Thin Provisioning
Integrated Copy Data Management
XtremIO Data Protection
Data at Rest Encryption
Write Boost
XtremIO Management Server
Solution's Software Layer
Oracle Physical Linux 7.4 Configuration
General Guidelines
Configuring IO Elevator and Queue Depth using UDEV
Installing and Configuring the DM-MPIO
Ensuring LUN Accessibility
Oracle ASM
ASM Features in Oracle
ASM General Recommendations
Database Files Location in ASM Disk Groups
Number of LUNs per Disk Group
Creating a Linux Partition as Required by ASMLib
Example for using fdisk utility
Enabling Load Balancing when Using ASMLib
512 versus 4K Advanced Format Considerations
Multiblock I/O Request Sizes
Redo Log Block Size
Grid Infrastructure Files OCR/Voting
Implementing Oracle Quality of Service (QoS)
Simplicity of Operation
Provision Capacity Without Complexity
Utilities for Thin Provisioning Space Reclamation
Snapshots Used for Backup-to-Disk
Snapshots Used for Manual Continuous Data Protection (CDP)
Crash-Consistent Image
Recoverable Image
Snapshots for Cloning Primary Databases
Recovery Manager Image Copies for Backup to Disk
References
Appendix A: XtremIO Monitoring
WebUI
XMCLI
Appendix B: ASM Disk Group Sector Size
ASMLib Ramifications
How to Learn More

Executive Summary

Dell EMC XtremIO is a market-leading, purpose-built, all-flash array that offers consistently high performance with low latency, unmatched storage efficiency with inline, all-the-time data services, rich application-integrated copy services, and unprecedented management simplicity. It is designed from the ground up to unlock flash technology's instant performance potential by uniquely leveraging the characteristics of SSDs, and it uses advanced inline data reduction methods to reduce the physical data that must be stored. XtremIO's storage system uses industry-standard components and custom-designed intelligent software to deliver unparalleled levels of performance, achieving consistent low latency for up to millions of IOPS.

XtremIO has always provided simple, easy-to-use management. The XtremIO Management Server (XMS) delivers an HTML5 user interface that is simple and easy to use for storage administrators, allowing them to provision storage with very little setup and planning.

XtremIO is designed and optimized for databases and for DBAs, providing the following benefits.

Predictable Performance. XtremIO provides predictable, consistent low-latency performance. With XtremIO's scale-up and scale-out architecture, year-to-year growth is easy: the initial investment is preserved and application performance improves. Performance is predictable, with sub-millisecond response times regardless of the workload and environment, be it production, QA, test or development.

Incredible Simplicity. With XtremIO, there is no need to plan and tune the location and number of database files. XtremIO is a true scale-up/scale-out, N-way active-active storage controller architecture in which all the array volumes are served by all array resources. All DBA tasks are fast and simple, taking one to three steps.

Agility. Typical enterprise applications require multiple copies for test/development, reporting or online analytics. DBAs and test/dev engineers often spend hours managing database creation and refreshing environments, while being limited by capacity, performance and the number of copies. XtremIO's Integrated Copy Data Management (iCDM) allows instant XtremIO Virtual Copies (XVCs) to be created from production with no performance impact. These copies can be repurposed for near-real-time analytics, test/dev and any other use case, all with complete space efficiency.

Protection. Protecting the database is easy with XtremIO. There is no need for any design covering RAID type, data file capacity, load balancing or tuning. The data is protected with a proprietary flash-optimized algorithm called XtremIO Data Protection (XDP). XDP is very different from RAID in several ways; since XDP always operates within an all-flash storage array, several criteria were important in the design of this protection scheme. XDP benefits include ultra-low capacity overhead, high levels of data protection in case of double SSD failure, rapid rebuild times, flash endurance and, of course, extreme performance. With XtremIO Virtual Copies, it is also easy to protect against and recover from any operational or logical corruption: XVCs allow the creation of frequent point-in-time copies (according to RPO intervals of seconds, minutes or hours) and their use to recover from any data corruption. An XVC can be kept in the system for as long as needed. Recovery using an XtremIO Virtual Copy is instantaneous and does not impact system performance.

Introduction

Oracle's Database Management System (DBMS) operates at peak performance on the XtremIO Storage Array, regardless of the workload it encounters, including diverse workloads such as online transaction processing (OLTP), data warehousing and hybrid workloads. The XtremIO Storage Array delivers predictable high performance and consistent low latency. The recommendations and best practices described in this white paper are geared to assist storage and database administrators in maximizing the performance and data capacity utilization of the XtremIO X2 Storage Array when deploying an Oracle DBMS on Oracle Linux 7.x.

Test Setup

All the tests described in this white paper were conducted with the following equipment:

- XtremIO Array: XtremIO X2-R, single X-Brick, 18 x 1.92TB, XtremApp
- Server: Intel-based server, 755 GB RAM, 2 x Intel(R) Xeon(R) CPUs (28 cores)
- Operating System: Oracle Linux 7 (x86_64), UEK Release 4
- Multipath: Device Mapper Multipath software + Oracle ASMLib 2.x
- Volume Manager: Oracle ASM for Grid Infrastructure and Database
- Oracle: Oracle Database 12c R2 Grid and Database software

Test Performance Results

In this section, we take a deeper look at performance statistics from our XtremIO X2 array while running SLOB. SLOB is an Oracle I/O workload generation toolkit with the following characteristics:

- SLOB supports testing Oracle logical read (SGA buffer gets) scaling.
- SLOB supports testing physical random single-block reads (db file sequential read).
- SLOB supports testing random single-block writes (DBWR flushing capacity).
- SLOB supports testing extreme REDO logging I/O.
- SLOB consists of simple PL/SQL.
- SLOB is entirely free of all application contention.
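For orientation, a minimal slob.conf sketch of the kind that drives such a test is shown below. The parameter names are standard SLOB 2.3 settings, but the values here are illustrative assumptions rather than the exact configuration used for the results in this paper.

    # slob.conf -- illustrative values only, not this paper's exact settings
    UPDATE_PCT=25          # percentage of work that is writes (exercises DBWR/redo)
    RUN_TIME=300           # seconds per test run
    SCALE=80G              # active data set size per schema
    WORK_UNIT=64           # blocks touched per work-loop iteration
    THREADS_PER_SCHEMA=1

    # Run the workload with, for example, 128 sessions:
    $ ./runit.sh 128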

Software Configuration:

- Red Hat Enterprise Linux Server release 6.7
- Oracle 12c
- Grid Control 12c
- SLOB 2.3

Database storage configuration:

- ASM disk group DATA of 4 volumes of 512 GB
- ASM disk group REDO of 4 volumes of 512 GB

Figure 1 shows SLOB performance test results on XtremIO X2. The storage array was prefilled up to 90% in order to simulate an environment as close as possible to a customer's production environment.

Figure 1. SLOB Performance Test Results on XtremIO X2

Figure 2 shows the performance metrics from the perspective of the array. As we can see, XtremIO X2 handles storage bandwidth as high as ~2.24GB/s with over 290K IOPS during the SLOB performance test.

Figure 2. XtremIO X2 IOPS and I/O Bandwidth During SLOB Performance Test

Figure 3 shows the block size distribution during the SLOB performance test. We can see that most of the bandwidth is used by 8KB blocks, as this was the block size configured at the test level against our storage array.

Figure 3. XtremIO X2 Block Size Distribution During SLOB Performance Test

Figure 4 shows the CPU utilization of the Storage Controllers during the SLOB performance test. We can see that the CPUs are well utilized during this process, with utilization close to 67%. We can also see the excellent synergy across the X2 cluster, with all the Active-Active Storage Controllers' CPUs sharing the load and effort, and the CPU utilization virtually equal across all controllers for the entire process.

Figure 4. XtremIO X2 Storage Controller CPU Utilization During SLOB Performance Test

In Figure 5, we can see the IOPS and latency statistics from the SLOB performance test. The graph shows again that IOPS are well over 290K while the latency for all I/O operations remains below 0.9 msec, underpinning the excellent performance of the Oracle Database.

Figure 5. XtremIO X2 IOPS and Latency During SLOB Performance Test

Storage Array: Dell EMC XtremIO X2 All-Flash Array

Dell EMC's XtremIO is an enterprise-class scalable all-flash storage array that provides rich data services with high performance. It is designed from the ground up to unlock flash technology's instant performance potential by uniquely leveraging the characteristics of SSDs. It also uses advanced inline data reduction methods to reduce the physical data that must be stored on the disks. XtremIO's storage system uses industry-standard components and proprietary intelligent software to deliver unparalleled levels of performance, achieving consistent low latency for up to millions of IOPS. It comes with a simple, easy-to-use interface for storage administrators and requires very little planning to set up before provisioning. It fits a wide variety of use cases for customers needing a fast and efficient storage system for their datacenters.

XtremIO X2 Overview

XtremIO X2 is the new generation of Dell EMC's all-flash array storage system. It adds enhancements and flexibility to the previous generation storage array, which already provided proficiency and high performance. It provides the extra value and advancements required in the evolving world of computer infrastructure by supporting the following features:

- Scale-up for a more flexible system.
- Write Boost for a more responsive, higher-performing storage array.
- NVRAM for improved data availability.
- A new web-based UI for managing the storage array and monitoring its alerts and performance statistics.

The XtremIO X2 Storage Array uses building blocks called X-Bricks. Each X-Brick has its own compute, bandwidth and storage resources, and can be clustered together with additional X-Bricks to grow in both performance and capacity (scale-out). Each X-Brick can also grow individually in capacity, with an option to add up to 72 SSDs in each brick.

XtremIO's architecture is based on a metadata-centric, content-aware system, which streamlines data operations efficiently without requiring any movement of data post-write for any maintenance reason (e.g. data protection and data reduction are done inline). The system lays out the data uniformly across all SSDs in all X-Bricks in the system, using unique fingerprints of the incoming data and controlling access with metadata tables. This provides an extremely balanced system across all X-Bricks in terms of compute power, storage bandwidth and capacity.

Using the same unique fingerprints, XtremIO is equipped with exceptional always-on inline data deduplication abilities, which particularly benefit virtualized environments. Together with its data compression and thin provisioning capabilities (both also inline and always-on), it achieves incomparable data reduction rates.

System operation is controlled by storage administrators via a stand-alone dedicated Linux-based server called the XtremIO Management Server (XMS). An intuitive user interface is used to manage and monitor the storage cluster and its performance. The XMS can be either a physical or a virtual server and can manage multiple XtremIO clusters.

With its intelligent architecture, XtremIO provides a storage system that is easy to set up, needs zero tuning by the client, and does not require complex capacity or data protection planning, as these are handled autonomously by the system.

Architecture and Scalability

An XtremIO X2 Storage System is comprised of a set of X-Bricks that together form a cluster. This is the basic building block of an XtremIO array. There are two types of X2 X-Bricks available: X2-S and X2-R. X2-S is for environments whose workloads are more I/O-intensive than capacity-intensive; these clusters typically use smaller SSDs and less RAM. A suitable use of the X2-S is for environments that have high data reduction ratios (i.e. a high compression ratio or a great deal of duplicated data), which significantly lower the capacity footprint of the data. X2-R X-Brick clusters are made for capacity-intensive environments, with bigger disks, more RAM and a higher potential for expansion in future releases. The two X-Brick types cannot be mixed in a single system, so the decision of which type suits your environment must be made in advance.

Each X-Brick is comprised of:

1. Two 1U Storage Controllers (SCs), each with:
   - Two dual-socket Haswell CPUs
   - 346GB RAM (for X2-S) or 1TB RAM (for X2-R)
   - Two 1/10GbE iSCSI ports
   - Two interchangeable user-interface ports (either 4/8/16Gb FC or 1/10GbE iSCSI)
   - Two 56Gb/s InfiniBand ports
   - One 100/1000/10000 Mb/s management port
   - One 1Gb/s IPMI port
   - Two redundant power supply units (PSUs)

2. One 2U Disk Array Enclosure (DAE) containing:
   - Up to 72 SSDs of size 400GB (for X2-S) or 1.92TB (for X2-R)
   - Two redundant SAS interconnect modules
   - Two redundant power supply units (PSUs)

Figure 6. An XtremIO X2 X-Brick (4U total: two 1U Storage Controllers and one 2U DAE)

The Storage Controllers on each X-Brick are connected to their DAE via redundant SAS interconnects. An XtremIO storage array can have one or multiple X-Bricks. Multiple X-Bricks are clustered together into an XtremIO array using an InfiniBand switch and the Storage Controllers' InfiniBand ports for back-end connectivity between Storage Controllers and DAEs across all X-Bricks in the cluster. The system uses the Remote Direct Memory Access (RDMA) protocol for this back-end connectivity, ensuring a highly available, ultra-low-latency network for communication between all components of the cluster. The InfiniBand switches are the same size (1U) for both X2-S and X2-R cluster types, but include 12 ports for X2-S and 36 ports for X2-R. By leveraging RDMA, an XtremIO system is essentially a single shared-memory space spanning all of its Storage Controllers.

The management port is configured with an IPv4 address. The XMS, which is the cluster's management software, communicates with the Storage Controllers via the management interface. Through this interface, the XMS sends storage management requests, such as creating an XtremIO Volume or mapping a Volume to an Initiator Group. The 1Gb/s IPMI port interconnects the X-Brick's two Storage Controllers. IPMI connectivity is strictly within the bounds of an X-Brick and is never connected to an IPMI port of a Storage Controller in another X-Brick in the cluster.

With X2, an XtremIO cluster has both scale-out and scale-up capabilities. Scale-out is implemented by adding X-Bricks to an existing cluster. The addition of an X-Brick to an existing cluster linearly increases its compute power, bandwidth and capacity. Each X-Brick added to the cluster includes two Storage Controllers, each with its own CPU power, RAM and FC/iSCSI ports to service the clients of the environment. It also adds a DAE with SSDs to increase the capacity provided by the cluster. Adding an X-Brick to scale out an XtremIO cluster is intended for environments that grow in both capacity and performance needs, as in the case of an increase in active users and their data, or of a database that grows in data and complexity. An XtremIO cluster can start with any number of X-Bricks as per the environment's initial needs and can grow to up to 4 X-Bricks (for both X2-S and X2-R). Future code upgrades of XtremIO X2 will support growth up to 8 X-Bricks for X2-R arrays.

Figure 7. Scale-Out Capabilities: Single to Multiple X2 X-Brick Clusters

Scale-up of an XtremIO cluster is implemented by adding SSDs to existing DAEs in the cluster. Scale-up is intended for environments that grow in capacity needs without needing extra performance. This can occur when the data increases for the same number of users, or when growth is such that the storage capacity limits are reached on the current infrastructure before the performance limits.

Each DAE can hold up to 72 SSDs and is divided into up to two groups of SSDs called Data Protection Groups (DPGs). Each DPG can hold a minimum of 18 SSDs and can grow in increments of 6 SSDs up to a maximum of 36 SSDs. In other words, a DPG supports configurations of 18, 24, 30 or 36 SSDs, with up to 2 DPGs in a DAE. SSDs are 400GB per drive for X2-S clusters and 1.92TB per drive for X2-R clusters. Future releases will allow customers to populate their X2-R clusters with 3.84TB drives, doubling the physical capacity available in their clusters.

Figure 8. Scale-Up Capabilities: Up to 2 DPGs and 72 SSDs per DAE

For more details on XtremIO X2, see the XtremIO X2 Specifications [3] and the XtremIO X2 Datasheet [4].

XIOS and the I/O Flow

Each Storage Controller within the XtremIO cluster runs a specially customized, lightweight Linux-based operating system as the base platform of the array. The XtremIO Operating System (XIOS) handles all activities within a Storage Controller and runs on top of the Linux-based operating system. XIOS is optimized for handling high I/O rates and manages the system's functional modules, RDMA communication, monitoring and all other management functions.

Figure 9. X-Brick Components

XIOS has a proprietary process scheduling-and-handling algorithm designed to meet the specific requirements of a content-aware, low-latency, high-performing storage system. It provides efficient scheduling and data access, instant exploitation of CPU resources, optimized inter-sub-process communication, and minimized dependency between sub-processes running on different sockets.

The XtremIO Operating System collects a variety of metadata tables on incoming data, including data fingerprints, locations, mappings and reference counts. The metadata is used as the source of information for performing system operations, such as uniformly laying out incoming data, implementing inline data reduction services, and accessing the data on read requests. The metadata is also involved in communication with external applications (such as VMware XCOPY and Microsoft ODX) to optimize integration with the storage system.

Regardless of which Storage Controller receives an I/O request from a host, multiple Storage Controllers on multiple X-Bricks cooperate to process the request. The data layout in the XtremIO system ensures that all components share the load and participate equally in processing I/O operations.

An important function of XIOS is data reduction, achieved using inline data deduplication and compression. Data deduplication and data compression complement each other: deduplication removes redundancies, whereas compression compresses the already deduplicated data before it is written to the flash media. XtremIO is an always-on, thin-provisioned storage system, which further enhances storage savings; a block of zeros is never written to the disks.

To service hosts' I/O requests, XtremIO connects with existing SANs through 16Gb/s Fibre Channel or 10Gb/s Ethernet iSCSI. Details of the XIOS architecture and its data reduction capabilities are available in the Introduction to Dell EMC XtremIO X2 Storage Array document [2].

XtremIO Write I/O Flow

In a write operation to the storage array, the incoming data stream reaches any one of the Active-Active Storage Controllers and is split into data blocks. For every data block, the array fingerprints the data with a unique identifier and stores it in the cluster's mapping table. The mapping table maps the host's Logical Block Addresses (LBAs) to the blocks' fingerprints, and maps the blocks' fingerprints to their physical locations in the array (the DAE, SSD and block location offset). The block fingerprint has two objectives: to determine whether the block is a duplicate of a block that already exists in the array, and to uniformly distribute blocks across the cluster. The distribution is done by dividing the list of potential fingerprints among the Storage Controllers. The mathematical process that calculates the fingerprints results in a uniform distribution of fingerprint values, ensuring that fingerprints, and therefore blocks, are uniformly distributed between the Storage Controllers.

A write operation works as follows:

1. A new write request reaches the cluster.
2. The write is broken into data blocks.
3. For each data block:
   A. A fingerprint is calculated for the block.
   B. An LBA-to-fingerprint mapping is created for this write request.
   C. The fingerprint is checked to see if it already exists in the array.
      - If it exists, the reference count for this fingerprint is incremented by one.
      - If it does not exist:
        i. A location is chosen on the array where the block will be written (distributed uniformly across the array according to fingerprint value).
        ii. A fingerprint-to-physical location mapping is created.
        iii. The data is compressed.
        iv. The data is written.
        v. The reference count for the fingerprint is set to one.

Deduplicated writes are naturally much faster than original writes. Once the array identifies a write as a duplicate, it updates the LBA-to-fingerprint mapping for the write and updates the reference count for the fingerprint. No data is written to the array and the operation completes quickly, adding an extra benefit of inline deduplication. Figure 10 shows an example of an incoming data stream containing duplicate blocks with identical fingerprints.

Figure 10. Incoming Data Stream Example with Duplicate Blocks

As mentioned, fingerprints also determine where each block is written in the array. Figure 11 shows the incoming stream from Figure 10 being written to the array after duplicates were removed. The blocks are divided between the Storage Controllers according to their fingerprint values, giving a uniform distribution of data across the cluster. The blocks are transferred to their destinations in the array using Remote Direct Memory Access (RDMA) via the low-latency InfiniBand network.

Figure 11. Incoming Deduplicated Data Stream Written to the Storage Controllers

The actual write of the data blocks to the SSDs is carried out asynchronously. At the time of the application write, the system places the data blocks in the in-memory write buffer and protects them using journaling to local and remote NVRAMs. Once the data is written to the local NVRAM and replicated to a remote one, the Storage Controller returns an acknowledgment to the host. This guarantees a quick response to the host, ensures low latency of I/O traffic, and preserves the data in case of system failure (e.g. power loss or any other reason). When enough blocks are collected in the buffer for a full stripe, the system writes them to the SSDs on the DAE. Figure 12 shows the data write to the DAEs after a full stripe of data blocks is collected in each Storage Controller.

Figure 12. Full Stripe of Blocks Written to the DAEs

XtremIO Read I/O Flow

In a read operation, the system first performs a lookup of the logical address in the LBA-to-fingerprint mapping. The fingerprint found is then looked up in the fingerprint-to-physical mapping, and the data is retrieved from the correct physical location. As with writes, the read load is evenly shared across the cluster, as blocks are evenly distributed and all volumes are accessible across all X-Bricks. If the requested block size is larger than the data block size, the system performs parallel data block reads across the cluster and assembles them into bigger blocks before returning them to the application. A compressed data block is decompressed before it is delivered to the host.

XtremIO has a memory-based read cache in each Storage Controller, organized by content fingerprint. Blocks whose contents are more likely to be read are placed in the read cache for fast retrieval.

A read operation works as follows:

1. A new read request reaches the cluster.
2. The read request is analyzed to determine the LBAs for all data blocks, and a buffer is created to hold the data.
3. For each LBA:
   A. The LBA-to-fingerprint mapping is checked to find the fingerprint of each data block to be read.
   B. The fingerprint-to-physical location mapping is checked to find the physical location of each of the data blocks.
   C. The requested data block is read from its physical location (read cache or disk) and transmitted to the buffer (created in step 2) in the Storage Controller that processes the request, via RDMA over InfiniBand.
4. The system reassembles the data blocks as per the requested read, and transmits them from the buffer back to the host.

System Features

The XtremIO X2 Storage Array provides a wide range of built-in features that require no special license. The architecture and implementation of these features are unique to XtremIO and are designed around the capabilities and limitations of flash media. The following sections describe some key features included in the system.

Inline Data Reduction

XtremIO's unique inline data reduction is achieved by two mechanisms: inline data deduplication and inline data compression.

Data Deduplication

Inline data deduplication is the removal of duplicate I/O blocks from a stream of data before it is written to the flash media. XtremIO inline deduplication is always on, meaning no configuration is needed for this important feature. The deduplication is global, meaning no duplicate blocks are written anywhere on the entire array. As an inline and global process, no resource-consuming background processes or additional reads and writes are necessary (in contrast to post-processing deduplication schemes, which do require such background processes). This increases SSD endurance and eliminates performance degradation.

As mentioned earlier, deduplication on XtremIO is performed using the content's fingerprints (see XtremIO Write I/O Flow above). The fingerprints are also used for uniform distribution of data blocks across the array, which provides inherent load balancing to increase performance and enhances flash wear-level efficiency, since the data never needs to be rewritten or rebalanced.

XtremIO uses a content-aware, globally deduplicated Unified Data Cache for highly efficient data deduplication. The system's unique content-aware storage architecture achieves a substantially larger cache size with a small DRAM allocation. This makes XtremIO an ideal solution for difficult data access patterns, such as the "boot storms" common in VDI environments.

XtremIO achieves excellent data deduplication ratios, especially for virtualized environments. With it, SSD usage is smarter, flash longevity is maximized, the logical storage capacity is multiplied, and the total cost of ownership is reduced.

Data Compression

Inline data compression is the compression of data before it is written to the flash media. XtremIO automatically compresses data after all duplicates are removed, ensuring that compression is performed only on unique data blocks. The compression is performed in real time and not as a post-processing operation. This way, it does not overuse the SSDs or impact performance. Compressibility rates depend on the type of data written.

Data compression complements data deduplication and saves storage capacity by storing only unique data blocks in the most efficient manner. The benefits and capacity savings of the deduplication-compression combination are demonstrated in Figure 13, which shows data written by the host reduced 3:1 by deduplication and a further 2:1 by compression, for a 6:1 total data reduction; only this reduced data is written to the flash media. Real ratios are shown in the Test Performance Results section.

Figure 13. Data Deduplication and Data Compression Demonstrated

Thin Provisioning

XtremIO storage uses a small internal block size, and all volumes are natively thin provisioned. This means that the system consumes capacity only when it is actually needed; no storage space is ever pre-allocated before writing. Because of XtremIO's content-aware architecture, blocks can be stored at any location in the system (with the metadata providing a reference to their location), and the data is written only when unique blocks are received. Therefore, unlike disk-oriented solutions, space creeping or garbage collection is not needed, volume fragmentation does not occur, and defragmentation utilities are not needed with XtremIO. This enables consistent performance and data management across the entire life cycle of a volume, regardless of the system's capacity utilization or the clients' write patterns.

Integrated Copy Data Management

XtremIO pioneered the concept of integrated Copy Data Management (iCDM): the ability to consolidate both primary data and its associated copies on the same scale-out all-flash array for unprecedented agility and efficiency. XtremIO is one of a kind in its capabilities to consolidate multiple workloads and entire business processes safely and efficiently, providing organizations with a new level of agility and self-service for on-demand procedures. XtremIO provides consolidation, supporting on-demand copy operations at scale, while still delivering all performance SLAs in a consistent and predictable way.

Consolidating primary data and its copies in the same array has numerous benefits:

1. It can make development and testing activities up to 50% faster: copies of production code can be created quickly for development and testing purposes, and the output refreshed back into production, completing the full cycle of code upgrades on the same array. This dramatically reduces complexity, infrastructure needs and development risks, and increases the quality of the product.
2. Production data can be extracted and pushed to all downstream analytics applications on demand as a simple in-memory operation. Copies of the data perform at high speed and obtain the same SLA as production copies, without compromising production SLAs. XtremIO offers this on demand, as both self-service and automated workflows, for both application and infrastructure teams.
3. Operations such as patches, upgrades and tuning tests can be performed quickly on copies of production data. Problems in applications and databases can be diagnosed on these copies, and the changes applied back to production by refreshing the copies. The same applies to testing new technologies and introducing them into production environments.

iCDM can also be used for data protection purposes, creating many point-in-time copies at short intervals for recovery. Application integration and orchestration policies can be set to auto-manage data protection using different SLAs.

XtremIO Virtual Copies

For all iCDM purposes, XtremIO uses its own implementation of snapshots, called XtremIO Virtual Copies (XVCs). XVCs are created by capturing the state of data in volumes at a particular point in time and allowing users to access that data when needed, regardless of the state of the source volume (i.e. even if the source was deleted). XVCs allow any access type and can be taken either from a source volume or from another Virtual Copy.

XtremIO's Virtual Copy technology is implemented by leveraging the content-aware capabilities of the system, with a unique metadata tree structure that directs I/O to the data with the right timestamp. This allows efficient copy creation that sustains high performance while maximizing media endurance.

Figure 14. A Metadata Tree Structure Example of XVCs

When creating a Virtual Copy, the system only generates a pointer to the ancestor metadata of the actual data in the system, making the operation very quick. The operation has no impact on the system and consumes no capacity at the point of creation, unlike traditional snapshots, which may need to reserve space or copy the metadata for each snapshot. Virtual Copy capacity consumption occurs only when changes are made to a copy of the data. The system then updates the metadata of the changed volume to reflect the new write, and stores the new blocks in the system using the standard write flow process.

The system supports the creation of Virtual Copies on a single volume or on a set of volumes. All Virtual Copies of the volumes in a set are cross-consistent and contain the exact same point in time. This can be done manually, by selecting a set of volumes for copying, or by placing volumes in a Consistency Group and making copies of that Consistency Group.

Virtual Copy deletions are lightweight and proportional to the amount of changed blocks between the entities. The system uses its content-aware capabilities to handle copy deletions. Each data block has a counter that indicates the number of instances of that block in the system. A block referenced by any copy of the data is not deleted. Any block whose counter value reaches zero is marked as deleted and is overwritten when new unique data enters the system.

With XVCs, XtremIO's iCDM offers the following tools and workflows to provide consolidation capabilities:

1. Consistency Groups (CGs): Grouping of volumes, allowing Virtual Copies to be taken on a group of volumes as a single entity.
2. Snapshot Sets: A group of Virtual Copy volumes taken together using CGs or a group of manually chosen volumes.
3. Protection Copies: Immutable read-only copies created for data protection and recovery purposes.
4. Protection Scheduler: Used for local protection of a volume or a CG. It can be defined using intervals of seconds/minutes/hours, or set to a specific time of day or week. It has a retention policy based on the number of copies needed or the allowable age of the oldest snapshot.
5. Restore from Protection: Restore a production volume or CG from one of its descendant Snapshot Sets.
6. Repurposing Copies: Virtual Copies configured with changing access types (read-write / read-only / no-access) for changing purposes.
7. Refresh a Repurposing Copy: Refresh a Virtual Copy of a volume or a CG from the parent object or other related copies with relevant updated data. The refresh does not require volume provisioning changes; only host-side logical volume management operations are needed to discover the changes.

XtremIO Data Protection

XtremIO Data Protection (XDP) provides the storage system with highly efficient, "self-healing", double-parity data protection. It requires very little capacity overhead and metadata space, and does not require dedicated spare drives for rebuilds. Instead, XDP leverages the "hot space" concept, whereby any free space available in the array can be utilized for failed drive reconstruction. The system always reserves sufficient distributed capacity for performing at least a single drive rebuild. In the rare case of a double SSD failure, the second drive is rebuilt only if there is enough space to rebuild it as well, or when one of the failed SSDs is replaced.

The XDP algorithm provides:

1. N+2 drive protection.
2. Capacity overhead of only 5.5%-11% (depending on the number of disks in the protection group).
3. 60% higher write efficiency than RAID 1.
4. Flash endurance superior to any RAID algorithm, due to the smaller number of writes and the even distribution of data.
5. Automatic rebuilds that are faster than traditional RAID algorithms.

As shown in Figure 15, XDP uses a variation of N+2 row and diagonal parity, providing protection from two simultaneous SSD failures. An X-Brick DAE may contain up to 72 SSDs, organized in two Data Protection Groups (DPGs). XDP is managed independently at the DPG level. A DPG of 36 SSDs results in a capacity overhead of only 5.5% for its data protection needs.

Figure 15. N+2 Row and Diagonal Parity (data columns D0 through D4 with row parity P and diagonal parity Q; k = 5, prime)

Data at Rest Encryption

Data at Rest Encryption (DARE) provides a solution for securing critical data even when the media is removed from the array, for customers who need such security. XtremIO arrays utilize a high-performance inline encryption technique to ensure that all data stored on the array is unusable if the SSD media is removed. This prevents unauthorized access in the event of theft or loss during transport, and makes it possible to return or replace failed components containing sensitive data. DARE has been established as a mandatory requirement in several industries, such as health care, banking and government institutions.

At the heart of XtremIO's DARE solution is Self-Encrypting Drive (SED) technology. An SED has dedicated hardware that encrypts and decrypts data as it is written to or read from the drive. Offloading the encryption task to the SSDs enables XtremIO to maintain the same software architecture whether encryption is enabled or disabled on the array. All of XtremIO's features and services (including Inline Data Reduction, XtremIO Data Protection, Thin Provisioning, XtremIO Virtual Copies, etc.) are available on encrypted clusters as well as on non-encrypted clusters, and performance is not impacted when encryption is used.

A unique Data Encryption Key (DEK) is created during the drive manufacturing process and never leaves the drive. The DEK can be erased or changed, rendering its current data unreadable forever. To ensure that only authorized hosts can access the data on the SED, the DEK is protected by an Authentication Key (AK) that resides on the Storage Controller. Without the AK, the DEK is encrypted and cannot be used to encrypt or decrypt data on the SSD.

Figure 16. Data at Rest Encryption in XtremIO

Write Boost

In the new X2 storage array, the write flow algorithm was improved significantly to increase array performance, accommodating the rise in compute power and disk speeds. In addition, it was designed to account for common applications' I/O patterns and block sizes. As described for the write I/O flow, the commit to the host is now asynchronous to the actual writing of the blocks to disk. The commit is sent after the changes are written to local and remote NVRAMs (for protection), and the blocks are committed to disk later, at a time that optimizes system performance.

In addition to shortening the write-to-commit cycle, the new algorithm addresses an issue important to many applications and clients: a high percentage of small I/Os creating load on the storage system and degrading latency, especially for larger I/O blocks. An examination of customers' applications and I/O patterns found that many I/Os from common applications arrive in small blocks of less than 16KB, creating high loads on the storage array. Figure 17 shows the block size histogram from the entire XtremIO install base; the percentage of blocks smaller than 16KB is clearly evident. The new algorithm addresses this issue by aggregating small writes into bigger blocks in the array before writing them to disk, reducing the demand on the system, which can handle larger blocks faster. The test results for the improved algorithm are impressive: the latency improvement in several cases is around 400%, allowing XtremIO X2 to address application latency requirements of 0.5 msec or less.

Figure 17. XtremIO Install Base Block Size Histogram

XtremIO Management Server

The XtremIO Management Server (XMS) is the component that manages XtremIO clusters (up to 8 clusters per XMS). It comes pre-installed with CLI, GUI and RESTful API interfaces, and can be installed on a dedicated physical server or a VMware virtual machine. The XMS manages the cluster via the management ports on both Storage Controllers of the first X-Brick in the cluster, using a standard TCP/IP connection to communicate with them. It is not part of the XtremIO data path, and can therefore be disconnected from an XtremIO cluster without jeopardizing data I/O tasks. A failure on the XMS affects only monitoring and configuration activities, such as creating and attaching volumes. A virtual XMS is naturally less vulnerable to such failures.

The GUI is based on a new Web User Interface (WebUI), which is accessible from any browser, and provides easy-to-use tools for performing most system operations (certain management operations must be performed using the CLI). Some of the most useful features of the new WebUI are described in the following sections.
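As a quick illustration of the RESTful API interface mentioned above, a request of the following form lists the clusters known to an XMS. This is a sketch only: the hostname and credentials are placeholders, and the exact endpoint paths should be verified against the XtremIO RESTful API guide for your XMS version.

    # List clusters managed by the XMS (placeholder host and credentials;
    # -k skips certificate validation for a self-signed XMS certificate)
    $ curl -s -k -u admin:password \
        https://xms.example.com/api/json/v2/types/clusters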

Dashboard

The Dashboard window presents an overview of the cluster. It has three panels:

1. Health: Provides an overview of the system's health status and alerts.
2. Performance (shown in Figure 18): Provides an overview of the system's overall performance and top used Volumes and Initiator Groups.
3. Capacity (shown in Figure 19): Provides an overview of the system's physical capacity and data savings.

Note that these figures represent views available in the dashboard, not the test results shown in earlier figures.

Figure 18. XtremIO WebUI Dashboard Performance Panel

Figure 19. XtremIO WebUI Dashboard Capacity Panel

The main navigation menu bar is located on the left side of the UI. Users can select one of the navigation menu options related to XtremIO's management actions. The main menus contain options for the Dashboard, Notifications, Configuration, Reports, Hardware and Inventory.

Notifications

In the Notifications menu, users can navigate to the Events window (shown in Figure 20) and the Alerts window, which show major and minor issues related to the cluster's health and operations.

Figure 20. XtremIO WebUI Notifications Events Window

Configuration

The Configuration window displays the cluster's logical components: Volumes (shown in Figure 21), Consistency Groups, Snapshot Sets, Initiator Groups, Initiators and Protection Schedulers. From this window, users can create and modify these entities using the action panel at the top right.

Figure 21. XtremIO WebUI Configuration

Reports

In the Reports menu, users can navigate to different windows showing graphs and data on different aspects of the system's activities, mainly those related to the system's performance and resource utilization. Menu options include: Overview, Performance, Blocks, Latency, CPU Utilization, Capacity, Savings, Endurance, SSD Balance, Usage and User Defined reports. Reports can be viewed at different resolutions of time and components. Entities to be viewed are selected with the "Select Entity" option in the Report menu (shown in Figure 22). In addition, pre-defined or custom time intervals can be selected for the report, as shown in Figure 23. The test result graphs shown earlier in this document were generated with these menu options.

Figure 22. XtremIO WebUI Reports Selecting Specific Entities to View

Figure 23. XtremIO WebUI Reports Selecting Specific Times to View

The Overview window shows basic reports on the system, including performance, weekly I/O patterns and storage capacity information. The Performance window shows extensive performance reports, mainly Bandwidth, IOPS and Latency information. The Blocks window shows the block distribution and statistics of I/Os going through the system. The Latency window (shown in Figure 24) shows latency reports per block size and IOPS metrics. The CPU Utilization window shows the CPU utilization of all Storage Controllers in the system.

Figure 24. XtremIO WebUI Reports Latency Window

The Capacity window (Figure 25) shows capacity statistics and the change in storage capacity over time. The Savings window shows data reduction statistics and their change over time. The Endurance window shows SSD endurance status and statistics. The SSD Balance window shows data balance and variance between the SSDs. The Usage window shows Bandwidth and IOPS usage, both overall and separately for reads and writes. The User Defined window allows users to define their own reports.

Figure 25. XtremIO WebUI Reports Capacity Window

Hardware

The Hardware menu provides a picture of the physical cluster and the installed X-Bricks. When viewing the FRONT panel, users can select and highlight any component of the X-Brick and view related detailed hardware information in the panel on the right. Figure 26 shows a hardware view of Storage Controller #1 in X-Brick #1, including installed disks and status LEDs. Users can further click the "OPEN DAE" button to see a visual illustration of the X-Brick's DAE and SSDs, and to view additional information on each SSD and Row Controller.

Figure 26. XtremIO WebUI Hardware Front Panel

Figure 27 shows the back panel view, including physical connections to and within the X-Brick: FC, Power, iSCSI, SAS, Management, IPMI and InfiniBand. Connections can be filtered using the "Show Connections" list at the top right.

Figure 27. XtremIO WebUI Hardware Back Panel Show Connections

Inventory

The Inventory menu shows all components in the environment, together with related information. This includes: XMS, Clusters, X-Bricks, Storage Controllers, Local Disks, Storage Controller PSUs, XEnvs, Data Protection Groups, SSDs, DAEs, DAE Controllers, DAE PSUs, DAE Row Controllers, InfiniBand Switches and NVRAMs.

XMS Menus

The XMS menus are global system menus that can be accessed from the tools at the top right of the interface. They allow a user to search components in the system, view the health status of managed components, view major alerts, view and configure system settings (shown in Figure 28), and use the User menu to view login information and support options.

Figure 28. XtremIO WebUI XMS Menus System Settings

As mentioned, other interfaces are also available to monitor and manage an XtremIO cluster with the XMS server. The system's Command Line Interface (CLI) can be used for everything the GUI provides and more. A RESTful API is another pre-installed interface in the system, allowing HTTP-based commands to manage clusters. For Windows PowerShell console users, a PowerShell API Module is also available for XtremIO management.

Solution's Software Layer

Oracle Physical Linux 7.4 Configuration

Note: Please refer to the latest Host Configuration Guide for up-to-date procedures and best practices.

General Guidelines

1. It is recommended to use eight paths from each host server to the cluster.
2. Keep a consistent, duplex link speed on all paths between the host and the XtremIO cluster.
3. To ensure continuous access to XtremIO storage during cluster software upgrades, verify that a minimum I/O timeout of 30 seconds is set on the HBAs of all hosts connected to the affected XtremIO cluster. Similarly, verify that a minimum timeout of 30 seconds is set for all applications that use storage from the XtremIO cluster.
4. The HBA queue depth (also referred to as execution throttle) controls the number of outstanding I/O requests per HBA port. The HBA queue depth should be set to the maximum value (see the sketch following this list).
5. The LUN queue depth controls the number of outstanding I/O requests per single path. These settings are controlled in the driver module for the card at the OS level. When connecting a Linux host to XtremIO, the LUN queue depth setting should retain its default value.
6. I/O scheduling controls how I/O operations are submitted to storage. Linux offers various I/O algorithms (also known as "elevators") to accommodate different workloads. When connecting a Linux host to XtremIO storage, set the I/O elevator to either noop or deadline. It is not recommended to use the cfq I/O elevator setting, as it is less optimal for XtremIO storage.
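As an illustration of guideline 4, the HBA queue depth for common FC drivers can be raised via module options. This is a sketch only: the parameter names below are the standard ones for the QLogic qla2xxx and Emulex lpfc drivers, but the values are illustrative assumptions; consult the Host Configuration Guide and your HBA vendor documentation for the correct maximums.

    # /etc/modprobe.d/hba-queue-depth.conf -- illustrative values only
    # QLogic qla2xxx: per-LUN queue depth
    options qla2xxx ql2xmaxqdepth=256
    # Emulex lpfc: per-LUN and per-HBA queue depths
    options lpfc lpfc_lun_queue_depth=128 lpfc_hba_queue_depth=8192

A driver reload (or an initramfs rebuild and reboot) is required for these options to take effect.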

Configuring IO Elevator and Queue Depth using UDEV

1. For instructions on using the CLI rather than UDEV, please refer to the Host Configuration Guide.
2. As a general rule of thumb, do not change the default queue depth setting. If Oracle is the only application attached to XtremIO, consider increasing the queue depth to a value of 128 or more.
3. Create or edit the following file:

   $ vim /etc/udev/rules.d/99-xtremio.rules

4. Append the following rules to the file:

   # Increase queue depth on the volume
   ACTION=="add|change", SUBSYSTEM=="scsi", ATTR{vendor}=="XtremIO", ATTR{model}=="XtremApp        ", ATTR{queue_depth}="32"
   # Use noop scheduler for added performance
   ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_VENDOR}=="XtremIO", ENV{ID_MODEL}=="XtremApp", ATTR{queue/scheduler}="noop"
   # Use noop on the multipath devices as well
   ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="dm*", ENV{DM_NAME}=="??14f0c5*", ATTR{queue/scheduler}="noop"

5. Save the changes made to the file.
6. Run the following command to apply the changes:

   $ udevadm trigger

Note: Some Linux operating systems may benefit from using the deadline elevator configuration.

Note: In the first rule shown in step 4, there are eight (8) spaces between 'XtremApp' and the closing quotation mark.
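To verify that the rules have taken effect, the scheduler and queue depth can be read back from sysfs. The device names below are placeholders for one XtremIO SCSI device (sdX) and one multipath node (dm-Y); the bracketed entry in the scheduler output marks the active elevator, and output similar to the following is expected:

    $ cat /sys/block/sdX/device/queue_depth
    32
    $ cat /sys/block/sdX/queue/scheduler
    [noop] deadline cfq
    $ cat /sys/block/dm-Y/queue/scheduler
    [noop] deadline cfq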

Installing and Configuring the DM-MPIO

The device mapper multipath (also known as DM-MPIO) is Linux multipathing software that is suitable for balancing I/O across paths to XtremIO Storage Arrays. For more background on DM-MPIO, refer to My Oracle Support.

To install and configure the device mapper, follow these steps:

1. Via YUM, install the device-mapper-multipath package, then use chkconfig to enable the multipathd daemon:

   [root@ucs3 ~]# yum install device-mapper-multipath
   [root@ucs3 ~]# chkconfig multipathd on

2. Configure the XtremIO disk device by modifying the /etc/multipath.conf file with the following parameters:

   devices {
       device {
           vendor                XtremIO
           product               XtremApp
           path_selector         "queue-length 0"
           rr_min_io_rq          1
           path_grouping_policy  multibus
           path_checker          tur
           failback              immediate
           fast_io_fail_tmo      15
           user_friendly_names   no
       }
   }

3. Restart the multipathd daemon. As the root user, at the shell prompt, enter:

   [root@ucs3 ~]# service multipathd restart

4. To ensure that the device mapper's nodes are updated and exposed, run the following commands:

   [root@ucs3 ~]# multipath -F; multipath -v2

Note: Setting the user_friendly_names parameter to no sets the unique WWID as the multipath device name (/dev/mapper/<naa>).
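On systemd-based distributions such as Oracle Linux 7, the equivalents of the chkconfig/service commands above are sketched below. This assumes the device-mapper-multipath package is installed; mpathconf creates a baseline /etc/multipath.conf if one does not already exist.

    # Enable multipathing and start the daemon on a systemd host (e.g. OL7)
    [root@ucs3 ~]# mpathconf --enable            # writes a default /etc/multipath.conf if missing
    [root@ucs3 ~]# systemctl enable multipathd   # equivalent of: chkconfig multipathd on
    [root@ucs3 ~]# systemctl restart multipathd  # equivalent of: service multipathd restart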

Ensuring LUN Accessibility

1. To ensure that XtremIO devices are properly exposed and remain readily accessible from the host without requiring a host reboot, it is recommended to install the sg3_utils package:

   [root@ucs3 ~]# yum install sg3_utils

2. To probe the SCSI bus for new LUNs on all channels, execute the following as root:

   [root@ucs3 ~]# rescan-scsi-bus.sh

3. At the shell prompt, enter the following command:

   [root@ucs3 ~]# multipath -ll
   3514f0c5c83a001b1 dm-44 XtremIO,XtremApp
   size=2.0T features='0' hwhandler='0' wp=rw
   `-+- policy='queue-length 0' prio=1 status=active
     |- 0:0:4:39 sddb 70:144 active ready running
     |- 0:0:5:39 sddv 71:208 active ready running
     |- 1:0:4:39 sdep 129:16 active ready running
     `- 1:0:5:39 sdfj 130:80 active ready running

4. Use the following command to correlate between the volume NAA and the volume name on the attached XtremIO cluster:

   [root@lgsup22 ~]# for i in `multipath -ll | grep XtremIO | awk '{print "/dev/mapper/"$1}'`; do lu_name=$(sg_inq --page=0x83 $i | grep "vendor specific:" | sed -n 1p | awk '{print $NF}'); br_name=$(sg_inq --page=0x83 $i | grep "vendor specific:" | sed -n 2p | awk '{print $NF}'); echo $i $lu_name $br_name; done
   /dev/mapper/3514f0c553b oracle-db1 xbrick10

Oracle ASM

Oracle Automatic Storage Management (ASM) is Oracle's recommended software for supporting Oracle database files.

ASM Features in Oracle

- Disk groups: 10,000 Oracle ASM disks; 1 million files for each disk group.
- Without any Oracle Exadata storage, Oracle ASM has the following storage limits if the COMPATIBLE.ASM or COMPATIBLE.RDBMS disk group attribute is set to less than 12.1:
  - 2 TB maximum storage for each Oracle ASM disk
  - 20 PB maximum for the storage system
- Without any Oracle Exadata storage, Oracle ASM has the following storage limits if the COMPATIBLE.ASM and COMPATIBLE.RDBMS disk group attributes are set to 12.1 or greater:
  - 4 PB maximum storage for each Oracle ASM disk with an AU size of 1MB
  - 8 PB maximum storage for each Oracle ASM disk with an AU size of 2MB
  - 16 PB maximum storage for each Oracle ASM disk with an AU size of 4MB
  - 32 PB maximum storage for each Oracle ASM disk with an AU size of 8MB
  - 320 exabytes (EB) maximum for the storage system

For more information, refer to the Oracle ASM documentation.
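To check where existing disk groups stand relative to these compatibility-driven limits, the attributes can be queried from a running ASM instance. A minimal sketch is shown below; it assumes OS authentication to a local ASM instance, and V$ASM_DISKGROUP is a standard dynamic view.

    # Query disk group compatibility settings (assumes OS authentication
    # to a local +ASM instance)
    $ sqlplus -s / as sysasm <<'EOF'
    SET LINESIZE 120
    SELECT name, compatibility, database_compatibility
    FROM v$asm_diskgroup;
    EOF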

ASM General Recommendations

External redundancy is generally recommended for XtremIO, as the XtremIO Storage Array natively provides flash-optimized data protection.

Database Files Location in ASM Disk Groups

The best practices for storing Oracle DBMS file types in ASM disk groups are outlined below:

Single-Instance:
- Grid DG: N/A
- Data DG: Control file, SPFILE, data files, temp, undo
- Redo 1st DG: Redo logs
- Redo 2nd DG: Multiplexed redo logs (if applicable)
- FRA DG: Archive logs, flashback logs, backup components

RAC:
- Grid DG: OCR, voting file, SPFILE
- Data DG: Control file, data files, temp, undo
- Redo 1st DG: Redo logs
- Redo 2nd DG: Multiplexed redo logs (if applicable)
- FRA DG: Archive logs, flashback logs, backup components

Note: The second redo disk group (DG) is applicable only if redo logs are multiplexed.

Number of LUNs per Disk Group

Excellent cluster performance can be achieved using an XtremIO Storage Array with just a single LUN in a single disk group. However, in order to maximize performance from a single host, parallelism and adequate utilization of device queues are required. The best practice to achieve this is to use a minimum of four LUNs for the data disk group per array. Doing so enables the hosts, or applications, to use parallelism at various queuing points, ensuring optimal performance from the XtremIO Storage Array. The best practices for disk group configuration and data placement are outlined below (see the disk group creation sketch after this section):

Single-Instance:
- Grid DG: N/A
- Data DG: 4 LUNs per array
- Redo 1st DG: 1 LUN
- Redo 2nd DG: 1 LUN
- FRA DG: 1 LUN per component (Archive, Flashback, Backup, etc.)

RAC:
- Grid DG: 1 LUN
- Data DG: 4 LUNs per array
- Redo 1st DG: 1 LUN
- Redo 2nd DG: 1 LUN
- FRA DG: 1 LUN per component (Archive, Flashback, Backup, etc.)

For Oracle 12.x GRID installations with ASM, do not set the physical-block-size for either X1 or X2 arrays (leave the setting at the default of 512). An example of configuring ssd.conf on Solaris SPARC is shown below:

   ssd-config-list = "XtremIO XtremApp","throttle-max:64,delay-busy: ,retries-busy:90,retries-timeout:30,retries-notready:30,cache-nonvolatile:true,disksort:false";
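As a sketch of the four-LUN data disk group recommendation above, the statement below creates an external-redundancy DATA disk group from an ASM instance. The multipath device paths and partition names are hypothetical placeholders, and the ASM_DISKSTRING parameter must already include /dev/mapper/* for these disks to be discovered.

    # Hypothetical device paths; external redundancy is used because
    # XDP already protects the data inside the array
    $ sqlplus -s / as sysasm <<'EOF'
    CREATE DISKGROUP data EXTERNAL REDUNDANCY
      DISK '/dev/mapper/3514f0c5xxxxxxx01p1',
           '/dev/mapper/3514f0c5xxxxxxx02p1',
           '/dev/mapper/3514f0c5xxxxxxx03p1',
           '/dev/mapper/3514f0c5xxxxxxx04p1'
      ATTRIBUTE 'compatible.asm' = '12.1', 'compatible.rdbms' = '12.1';
    EOF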

Creating a Linux Partition as Required by ASMLib

Oracle ASMLib is optional host software that offers another method for handling persistent device naming, among other features generally included in later releases of Linux. Although many DBAs prefer Linux UDEV(8) for device naming, some may still prefer using ASMLib. The below URL covers the differences between Oracle ASMLib and Oracle ASM Filter Driver (ASMFD): 00E57B34D604.htm#OSTMG95908

For DBAs who wish to transition away from ASMLib, a My Oracle Support note provides a step-by-step guide for converting from ASMLib to UDEV(8). If ASMLib is required for specific business needs, the following information should be considered:

- When working with ASMLib, some customers may create partitions. In such a case, the system administrator must decide which utility to use for partitioning. An example with FDISK(8) is provided below.
- The first addressable sector for each device is sector 0, and each sector is 512 bytes in size. As a general rule, the best practice when partitioning the device is to explicitly assign the starting offset, such as one megabyte. This one megabyte of extra room is reserved by defining the partition to start at sector 2048. The extra room is available for storing the ASMLib header, which serves to minimize the occurrence of ASMLib header corruption.

Note: As recommended, partitioning drives also guarantees that I/O requests will be aligned properly for XtremIO.

Example for using the fdisk utility:

1. At the shell prompt, enter:

   # fdisk -u /dev/mapper/<naa>

2. Enter the following values:
   - n for new
   - p for primary partition
   - 1 for the partition number
   - 2048 for the starting sector
   - Enter to accept the default last sector
   - w to save

3. To access the recently created partition on the block device:

   # kpartx -av /dev/mapper/<naa>

   The addressable block device partition becomes: /dev/mapper/<naa>p1

4. If /dev/mapper/<naa>p1 is not displayed, it is necessary to restart multipathd via service(8):

   # service multipathd restart

5. The following is an example of initializing a LUN for Oracle ASMLib:

   # oracleasm createdisk DATAOFF4 /dev/mapper/3514f0c5c83a001b9p1
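As an alternative to the interactive fdisk session above, the same 1MiB-aligned partition can be created non-interactively. The following is a minimal sketch with parted(8), reusing the <naa> placeholder; verify the resulting alignment before labeling disks:

   # parted -s /dev/mapper/<naa> mklabel msdos
   # parted -s /dev/mapper/<naa> mkpart primary 2048s 100%
   # kpartx -av /dev/mapper/<naa>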

In Linux clustering, it is common for hosts to assign different "friendly names" (e.g. mpathX) to shared LUNs when the hosts boot up. This is often referred to as "device slip". Device slips can be prevented with UDEV(8). However, since the topic at hand is ASMLib, it should be noted that the oracleasm-support package labels disks with cluster-wide unique headers on each device.

Enabling Load Balancing when Using ASMLib

To ensure that DM-MPIO nodes are suitably utilized for load balancing, it is recommended to explicitly modify the ASMLib configuration file. The best practice is to perform the modifications while the existing ASM disk groups are unmounted.

1. Modify the /etc/sysconfig/oracleasm file as below:

   ORACLEASM_ENABLED=true
   # ORACLEASM_UID: The default user owning the /dev/oracleasm mount point
   ORACLEASM_UID=oracle
   # ORACLEASM_GID: The default group owning the /dev/oracleasm mount point
   ORACLEASM_GID=dba
   # ORACLEASM_SCANBOOT: When set to true, the system scans for ASM disks upon boot
   ORACLEASM_SCANBOOT=true
   # ORACLEASM_SCANORDER: Matching patterns to order disk scanning
   ORACLEASM_SCANORDER="dm"
   # ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
   ORACLEASM_SCANEXCLUDE="sd"

2. Restart the ASMLib daemon for the changes to take effect:

   # /etc/init.d/oracleasm stop
   # /etc/init.d/oracleasm start

512 versus 4K Advanced Format Considerations

The default logical sector size for XtremIO volumes is 512 bytes. It is recommended to keep the default setting and not use the 4K Advanced Format for Oracle Database deployments. There are no performance ramifications when using 512B volumes in conjunction with an Oracle database. On the contrary, the 4K Advanced Format is rejected by many elements of the Oracle and Linux operating system stack; many software components in both the Oracle and Linux operating system layers do not function properly with 4K logical sector sizes. One example of Linux operating system functionality that does not work with the 4K Advanced Format is direct I/O (O_DIRECT) support on both EXT4 and XFS file systems. My Oracle Support Doc ID and Doc ID provide more information on the 4K Advanced Format.
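To confirm that a mapped XtremIO volume presents the recommended 512B logical sector size, the values can be read directly from the block layer. A minimal sketch, reusing the NAA from the multipath example earlier; the printed values are the expected ones, not captured output:

   # Prints logical sector size and physical block size, respectively
   # blockdev --getss --getpbsz /dev/mapper/3514f0c5c83a001b1
   512
   512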

Multiblock I/O Request Sizes

Oracle Database performs I/O on data files in multiples of the database block size (db_block_size), which is 8KB by default. The default Oracle Database block size is optimal on XtremIO; XtremIO supports larger block sizes as well. In the case of multiblock I/O (e.g., table/index scans with access method full), one should tune the Oracle Database initialization parameter db_file_multiblock_read_count to limit the requests to 128KB, derived with the following formula:

   db_file_multiblock_read_count = 128KB / db_block_size

With the default 8KB block size, this yields a value of 16.

Historically, Oracle Database was optimized to perform very large transfers to mitigate the seek cost incurred by multiblock reads on mechanical drives. In a seek-free environment, such as XtremIO, there is no need for such mitigation. Also, most modern Fibre Channel host bus adapters require Linux to segment large requests into multiple requests. For example, an application I/O request of one megabyte is fragmented by the Linux block I/O layer into two 512KB transfers in order to suit the HBA maximum transfer size.

Redo Log Block Size

The default block size for redo logs is 512 bytes. I/O requests sent to the redo log files are in increments of the redo block size. This is the blocking factor Oracle uses within redo log files and has nothing to do with the on-disk format of the XtremIO LUN. Our recommendation for XtremIO X1 and X2 arrays is to create redo log files with a 4K block size. For more details, check the relevant Oracle Support notes.

Notes:
- On older Oracle versions, you should set the parameter _disk_sector_size_override to TRUE when creating a redo log with a 4K block size in the database instance. This issue is fixed in later Oracle versions. For more information, see Oracle Doc ID#
- Do not set the parameter _disk_sector_size_override in the ASM instance.

Once the instance is running, simply add more redo logs with the BLOCKSIZE option set to 4KB and then drop any redo logs that have the default 512B block size (see the sketch below).

Grid Infrastructure Files OCR/Voting

The block size for both the Oracle Cluster Registry (OCR) and Cluster Synchronization Services (CSS) voting files is 512 bytes. I/O operations to these file objects are therefore sized as a multiple of 512 bytes. This is consistent with the best practice for XtremIO volume creation, which also uses 512-byte formatting.
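The following is a minimal sqlplus sketch applying both of the recommendations above; the group numbers and sizes are assumptions, and omitting the file specification assumes Oracle Managed Files points at the redo disk group:

   $ sqlplus -S / as sysdba <<'EOF'
   -- 128KB / 8KB (default db_block_size) = 16; SCOPE=BOTH assumes an spfile
   ALTER SYSTEM SET db_file_multiblock_read_count = 16 SCOPE=BOTH;
   -- Add 4K-block-size redo log groups, then drop the remaining
   -- 512B groups once they become INACTIVE
   ALTER DATABASE ADD LOGFILE GROUP 11 SIZE 4G BLOCKSIZE 4096;
   ALTER DATABASE ADD LOGFILE GROUP 12 SIZE 4G BLOCKSIZE 4096;
   ALTER DATABASE ADD LOGFILE GROUP 13 SIZE 4G BLOCKSIZE 4096;
   EOF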

Implementing Oracle Quality of Service (QoS)

The XtremIO Storage Array is an "equal-opportunity" array, servicing all I/O requests from all hosts with simple first-in-first-out fairness. Non-mission-critical applications may therefore utilize a larger share of the array's performance capacity than the administrator desires. However, host I/O on Linux platforms can easily be managed with Linux Control Groups.

The following procedure is an example of implementing host QoS to limit the performance of a DEV server (a fully worked form of the throttle commands follows step 4).

1. At the shell prompt, enter:

   mkdir /cgroup/blkio
   mount -t cgroup -o blkio none /cgroup/blkio
   cgcreate -t oracle:dba -a oracle:dba -g blkio:/iothrottle

2. Identify the device major:minor numbers:

   # ls -l /dev/oracleasm/disks/data*
   brw-rw---- 1 oracle oinstall 253, 6 Nov 20 05:57 /dev/oracleasm/disks/data1
   brw-rw---- 1 oracle oinstall 253, 7 Nov 20 05:57 /dev/oracleasm/disks/data2
   brw-rw---- 1 oracle oinstall 253, 8 Nov 20 05:57 /dev/oracleasm/disks/data3
   brw-rw---- 1 oracle oinstall 253, 9 Nov 20 05:57 /dev/oracleasm/disks/data4

3. Limit the DEV server maximum read IOPS to 10K:

   [root@oel63-1 ~]# echo 253: > /cgroup/blkio/blkio.throttle.read_iops_device

4. Optionally, limit the source database server maximum read IOPS to 100K:

   [root@oel63-1 ~]# echo 253: > /cgroup/blkio/blkio.throttle.read_iops_device
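The blkio throttle files expect entries of the form "major:minor IOPS-limit", written into the cgroup that should be throttled. A minimal sketch, assuming device 253:6 (data1 above) belongs to the DEV server, a 10,000-IOPS read cap, and an instance named DEV; all three are assumptions:

   # Cap reads on device 253:6 at 10,000 IOPS within the iothrottle group
   echo "253:6 10000" > /cgroup/blkio/iothrottle/blkio.throttle.read_iops_device

   # Attach the DEV instance's processes to the throttled group
   for pid in $(pgrep -f 'ora_.*_DEV'); do
       echo $pid > /cgroup/blkio/iothrottle/tasks
   done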

Simplicity of Operation

Provision Capacity Without Complexity

Capacity is the main concern when provisioning XtremIO storage for Oracle Databases. XtremIO LUN provisioning and presentation is very simple and can be performed via the XtremIO WebUI or the XMCLI. To provision host storage from XtremIO, follow the procedure below:

1. On the XtremIO Storage Array:
   a. Create volumes.
   b. Create an initiator group.
   c. Map the volumes to the initiator group.

2. On the host:
   a. Perform a host LUN discovery.

Utilities for Thin Provisioning Space Reclamation

Oracle Automatic Storage Management does not trim the space potentially made available by files that were deleted in the disk group. Instead of trimming the space, ASM marks it as "available for overwrite", causing inaccurate reporting of the logical space used on XtremIO. The ASM Storage Reclamation Utility (ASRU) 1 writes zero-filled files into the space released by deleted files in an ASM disk group. Executing ASRU in an ASM disk group that has had many deleted files adjusts the accounting of logical used capacity on XtremIO; if the deleted files are not referenced anywhere else, ASRU also corrects the reported physical capacity used.

On XFS or ext4 2 file systems, deleted files are automatically trimmed by specifying the "discard" mount option, and the trim propagates to the array. Alternatively, one can forego the "discard" mount option and perform trim operations out-of-band with the fstrim(8) command, as shown below.
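A minimal sketch of both approaches; the /u02 mount point and the partition name are assumptions. To mount with automatic TRIM of deleted space:

   # mount -o discard /dev/mapper/<naa>p1 /u02

To trim an already-mounted file system out-of-band:

   # fstrim -v /u02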

Snapshots Used for Backup-to-Disk

XtremIO Storage Array snapshots are precise point-in-time copies of source volumes which are, essentially, collections of pointers referencing the source volume blocks. Snapshots therefore consume no physical capacity when created. Executing snapshots is an extremely rapid and efficient backup-to-disk methodology, because snapshots are based completely on metadata operations. Snapshots enjoy the same benefits that are attributed to the source volumes, including high performance, XtremIO Data Protection (XDP), automatic data distribution, global deduplication and thin provisioning.

The space that is saved by creating snapshots is not reflected in the deduplication ratio (as is the case with RMAN image copies), because snapshots are pointer-based rather than actual duplicated blocks. The space saved by snapshots is tremendous, especially at the time the snapshot is created. Over time, as source volumes are updated and snapshots are mounted and accessed for writes as needed, the net physical capacity consumed by both source volumes and snapshots grows.

For backup purposes, it is imperative to invoke snapshots while the database is in "Backup" mode. This is done in order to create valid image copies on snapshots and to enable rolling the database forward utilizing logs (offline logs and/or online redo logs) up to the desired System Change Number (SCN). This establishes a consistent point in time, or "latest SCN", which is the most recent consistent state from a database perspective. As a precaution, the recommended best practice is to create a backup control file prior to initiating a backup-to-disk process.

For recovery purposes, the recommended best practice is to separate data files from logs (both offline and online), hence enabling recovery from various points in time.

XtremIO backup-to-disk image creation (snapshots) is a seamless and fast process, and results in no perceived performance degradation on the source volumes. Freeze and thaw of the source volumes are implicitly performed internally on XtremIO, via SCST, during snapshot operations. The Snapshot Groups 3 feature is supported to ensure that headers within the database files (such as control files, data files, log files and optional application volumes) remain consistent.

Multiple snapshots or Snapshot Groups of the source volumes, as well as snapshots of snapshots, are fully supported. This support enables best-practice precautionary steps to be taken before attempting actual restores and recoveries, such as performing a mock restore and recovery. Unlike the RMAN "Restore" process, the snapshot process for restoring is very fast. The best practice is to back up the image copies held on snapshots to separate storage or tape.

The Oracle documentation provides comprehensive details on RMAN backup concepts and on backing up existing image copy backups with RMAN.

Snapshots Used for Manual Continuous Data Protection (CDP)

As the implementation of snapshots is so efficient on the XtremIO Storage Array, the snapshots feature may be used as part of a business continuance strategy or for continuous data protection (CDP). Two options can be used for this strategy:

- A crash-consistent, or "restartable", image
- A recoverable image

1 ASRU is a trademark or registered trademark of the Oracle Corporation.
2 The ext4 or "fourth extended filesystem" is a journaling file system for Linux, developed as the successor to ext3.
3 Snapshot Group refers to any snapshot action that is performed on a folder, or on a manually-selected list of volumes.

Crash-Consistent Image

A crash-consistent or restartable image is a point-in-time image of the primary database on disk, i.e. a snapshot. This option entails taking snapshots and/or Snapshot Groups of the primary database while it is up and operational. The image that is captured is similar to the state of the primary database after a shutdown abort command is issued against it. During the database restart on the snapshots and/or Snapshot Groups, the database automatically performs a crash recovery using the online logs. All committed transactions are included, and all uncommitted transactions are rolled back.

The Recovery Point Objective (RPO) is defined per interval. The interval is the scheduled time for snapshot or Snapshot Group creation, which can be set as daily, hourly, or defined in minutes (for example, every 30 minutes).

To perform a restore operation, unmount the disk groups or file systems (if applicable), and unmap all of the source volumes comprising the database (data files plus control file, online logs, archived log destination). Once these actions have been successfully performed, map the corresponding snapshot or Snapshot Group. To perform a recover operation using SQLPLUS, enter startup at the prompt for the primary database snapshot.

Recoverable Image

A recoverable image is an image of the primary database on disk, i.e. a snapshot. This option entails taking snapshots and/or Snapshot Groups of the primary database while the database is in "Backup" mode. The image should be captured after the alter database begin backup command is issued. To avoid excessive logging, the alter database end backup command should be executed shortly thereafter (see the sketch at the end of this section). It is also highly recommended to keep a backup file of the control file both prior to commencing the backup process and after completion of the backup process.

The recovery point objective (RPO) is defined per interval, once snapshots and/or Snapshot Groups are created. The interval can be set as daily, hourly, or defined in minutes. Unlike the crash-consistent image, data files on the replica can be rolled forward through time. This is performed by applying logs up to a consistent point in time, either to the desired SCN or up to the latest SCN (captured in the control file). This means that the RPO has a much higher time resolution than is available with scheduled intervals alone: not only can an image be recovered at the scheduled intervals, but points in time in-between intervals can also be recovered. This works in conjunction with an Oracle media recovery.

Snapshots for Cloning Primary Databases

Clones of the primary database may be deployed using the methodologies described above. Oracle provides a utility called "NID" (or "NEWDBID") 4 which facilitates renaming the database ID and database name properties automatically, as opposed to having to recreate the control file.

4 NID and NEWDBID are trademarks or registered trademarks of the Oracle Corporation.
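The following is a minimal sketch of wrapping a Snapshot Group creation in Backup mode, followed by the NID rename for a clone. The control file path, the snapshot step, and the DEVCLONE name are assumptions:

   $ sqlplus -S / as sysdba <<< "ALTER DATABASE BEGIN BACKUP;"
   # ... create the XtremIO snapshot or Snapshot Group here (WebUI or XMCLI) ...
   $ sqlplus -S / as sysdba <<< "ALTER DATABASE END BACKUP;"
   $ sqlplus -S / as sysdba <<< "ALTER DATABASE BACKUP CONTROLFILE TO '/backup/ctl_post_snap.ctl';"

   # On a mounted clone of the snapshot, rename the database with NID
   # (NID prompts for the SYS password)
   $ nid TARGET=SYS DBNAME=DEVCLONE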

Recovery Manager Image Copies for Backup to Disk

Oracle Recovery Manager (RMAN) is an Oracle-native tool for backing up, restoring and recovering an Oracle database. The tool is an integral part of the Maximum Availability Architecture (MAA) employed by Oracle for making database deployments robust.

When backing up to disk, RMAN creates backup sets by default, rather than image copies. A backup set consists of physical files which can be written to either disk or tape, but the format is native to RMAN only (as opposed to image copies). Image copies, created via the BACKUP AS COPY command, are bit-by-bit copies of database files. Image copies may be directed to disk just like a backup set. The copies are then recorded in the RMAN repository, either via an RMAN catalog or via the control file of the target database (in cases where a catalog does not exist or was not used).

Image copy is the recommended format for backup-to-disk, as it provides the highest level of space savings on the XtremIO Storage Array, with up to a 2:1 deduplication ratio between the source database and the RMAN backup to disk (a sketch follows the use-case list below). Using RMAN to clone the primary database also benefits from XtremIO's deduplication feature: in an environment where the primary database, the RMAN backup-to-disk (image copies), and a primary database clone all reside on the XtremIO Storage Array, the data reduction ratio (DRR) can reach 3:1.

Use cases for RMAN image copies:

- Image copies may be used to restore control files, data files and logs when the primary files are corrupted or inadvertently deleted.
- Image copies on disk may also be used as point-in-time copies of the actual database files, avoiding the time-consuming restore from the backup location to the actual primary volumes. Regardless of whether the image copies reside on ASM or on a file system, RMAN automatically re-directs the pointers to the image copies, updating the control files accordingly.
- Image copies on disk may also be used to create a clone of the primary database on the same host or on another host.
- Image copies may be used to create secondary backup copies, either to tape media or to another storage device.
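The following is a minimal RMAN sketch of an image-copy backup to disk; the '+FRA' destination assumes an ASM FRA disk group, and the control file is assumed to serve as the RMAN repository:

   $ rman target / <<'EOF'
   # Bit-by-bit image copies of all data files, written to the FRA disk group
   BACKUP AS COPY DATABASE FORMAT '+FRA';
   EOF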

References

1. Dell EMC XtremIO Main Page
2. Introduction to EMC XtremIO
3. Dell EMC XtremIO X2 Specifications
4. Dell EMC XtremIO X2 Datasheet
5. XtremIO CTO Blog (with product announcements and technology deep dives)
6. EMC Host Connectivity
7. Oracle Database Online Documentation 12c Release 1 (12.1)
8. SLOB Overview

Appendix A XtremIO Monitoring

XtremIO X2 offers various options for verifying the proper configuration of the array and troubleshooting performance issues. The sections below describe several methods for troubleshooting performance-related issues.

WebUI

1. Open the WebUI management page: https://<XMS IP>
2. Verify the health of the array.
3. Verify that the array capacity does not exceed 90%.
4. Access the Reports tab and check the performance CPU utilization: if CPU utilization is constantly above 70%, the environment should be fully analyzed.

XMCLI

1. Use the following command to check the overall health of the array:

   xmcli (tech)> show-alerts

2. The following command shows each initiator's connectivity to the FC target ports.
   Note: Verify that each initiator is zoned according to XtremIO best practices.

   xmcli (tech)> show-initiators-connectivity target-details
   Cluster-Name Initiator-Name Index Port-Type Port-Address            Num-Of-Conn-Targets Target-List
   xbrick745    UCS3-fc-1      19    fc        20:00:00:25:b5:a0:00:4f 2                   X1-SC1-target3 [3]; X1-SC2-target3 [7]
   xbrick745    UCS3-fc-2      20    fc        20:00:00:25:b5:b0:00:4f 2                   X1-SC1-target4 [4]; X1-SC2-target4 [8]

3. Use the following command to verify that IOPS/bandwidth are balanced across the target ports:

   xmcli (tech)> show-targets-performance filter=port-type:eq:fc
   Name           Index Cluster-Name Write-BW(MB/s) Write-IOPS Read-BW(MB/s) Read-IOPS BW(MB/s) IOPS Total-Write-IOs Total-Read-IOs
   X1-SC1-target3 3     xbrick
   X1-SC1-target4 4     xbrick
   X1-SC2-target3 7     xbrick
   X1-SC2-target4 8     xbrick

4. Use the following command to verify that IOPS/bandwidth are balanced between the various initiators of the host:

   xmcli (tech)> show-initiators-performance filter=initiator-name:like:ucs3
   Initiator-Name Index Cluster-Name Write-BW(MB/s) Write-IOPS Read-BW(MB/s) Read-IOPS BW(MB/s) IOPS Total-Write-IOs Total-Read-IOs
   UCS3-fc-1      19    xbrick
   UCS3-fc-2      20    xbrick

5. The following command shows the comparative performance utilization of a group of application volumes on the XtremIO Storage Array:

   xmcli (tech)> show-volumes-performance filter=volume-name:like:data512
   Volume-Name Index Cluster-Name Write-BW(MB/s) Write-IOPS Read-BW(MB/s) Read-IOPS BW(MB/s) IOPS Total-Write-IOs Total-Read-IOs
   data        xbrick
   data        xbrick
   data        xbrick
   data        xbrick

6. Use the following command to check the internal XtremIO module utilization.
   Note: If XENV utilization is constantly above 70%, the environment should be fully analyzed.

   xmcli (tech)> show-xenvs frequency=30
   XEnv-Name Index Cluster-Name Index CPU-Usage(%) CSID State  Storage-Controller-Name Index Brick-Name Index
   X1-SC1-E1 1     xbrick                          active X1-SC1                  1     X1         1
   X1-SC1-E2 2     xbrick                          active X1-SC1                  1     X1         1
   X1-SC2-E1 3     xbrick                          active X1-SC2                  2     X1         1
   X1-SC2-E2 4     xbrick                          active X1-SC2                  2     X1         1

Appendix B ASM Disk Group Sector Size ASMLib Ramifications

512B is the best-practice sector size for XtremIO used with Oracle Database. This section is provided for informational purposes only, as it pertains to the 4K Advanced Format, which is not recommended for XtremIO used with Oracle Database.

The minimum I/O transfer size for files in an ASM disk group is determined by the sector size of the underlying physical drive. Oracle ASM queries devices for the logical sector size of the drive and assigns this value to the sector_size disk group attribute (see My Oracle Support note). This is the expected behavior for ASM disks that are not accessed with ASMLib. However, an exception to this behavior was exhibited in early versions of Linux 6.x with native multipathing software (e.g. Device Mapper): in these older Linux versions, the physical sector size was adopted by ASMLib for the ASM disk group instead of the logical sector size. When using EMC PowerPath instead of Device Mapper, Oracle queries the device and verifies that the logical sector size of the LUN is the same as the physical sector size; therefore, no work-around is required with ASMLib.

If your business requirements specifically demand the combination of 4K Advanced Format, ASMLib with DM-MPIO, and neither udev(8) control nor EMC PowerPath on XIOS 6.x, refer to My Oracle Support for more detailed information.
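To check which sector size ASM actually adopted for each disk group, the attribute can be read from the standard V$ view. A minimal sketch, assuming SYSASM access:

   $ sqlplus -S / as sysasm <<'EOF'
   -- 512 is expected on XtremIO with the recommended defaults
   SELECT name, sector_size FROM v$asm_diskgroup;
   EOF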

How to Learn More

For a detailed presentation explaining the XtremIO X2 Storage Array's capabilities and how XtremIO X2 substantially improves performance, operational efficiency, ease-of-use and total cost of ownership, please contact XtremIO at XtremIO@emc.com. We will schedule a private briefing in person or via a web meeting. XtremIO X2 provides benefits in many environments and mixed-workload consolidations, including virtual server, cloud, virtual desktop, database, analytics and business applications.

- Learn more about Dell EMC XtremIO
- Contact a Dell EMC Expert
- View more resources
- Join the conversation with #XtremIO
- Introduction to the Dell EMC XtremIO X2 Storage Array

All Rights Reserved. Dell, EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners. Reference Number H
