DEPLOYMENT BEST PRACTICES FOR ORACLE DATABASE WITH EMC VMAX3 FAST SERVICE LEVELS AND HINTS

DEPLOYMENT BEST PRACTICES FOR ORACLE DATABASE WITH EMC VMAX3 FAST SERVICE LEVELS AND HINTS
EMC VMAX Engineering White Paper

ABSTRACT
With the introduction of the third-generation EMC VMAX3 disk arrays, Oracle database administrators have a new way to deploy a wide range of applications in a single, high-performance, high-capacity, self-tuning storage environment that can dynamically manage each application's performance requirements with minimal effort.

March 2016
EMC WHITE PAPER

To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local representative or authorized reseller, or explore and compare products in the EMC Store. Copyright 2016 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. Part Number H

TABLE OF CONTENTS
EXECUTIVE SUMMARY
AUDIENCE
VMAX3 PRODUCT OVERVIEW
    VMAX3 Overview
    VMAX3 and Service Level Objective (SLO) based provisioning
    Database Storage Analyzer Product Overview
    DSA and FAST Hinting
STORAGE DESIGN PRINCIPLES FOR ORACLE ON VMAX3
    Storage connectivity considerations
    Host connectivity considerations
    Number and size of host devices considerations
    Virtual Provisioning and thin devices considerations
    Partition alignment considerations for x86-based platforms
    ASM and database striping considerations
    Oracle data types and the choice of SLO
    Host I/O Limits and multi-tenancy
    Using cascaded storage groups
ORACLE DATABASE PROVISIONING
    Storage provisioning tasks with VMAX3
    Provisioning Oracle database storage with Unisphere
    Provisioning Oracle database storage with Solutions Enabler CLI
ORACLE SLO MANAGEMENT TEST USE CASES
    Test configuration
    Test overview
    Test case 1 - Single database run with gradual change of SLO
    Test case 2 - Diamond SLO for Oracle DATA and REDO
WORKING WITH DATABASE STORAGE ANALYZER (DSA)
    Database metrics collection and retention
    Creating DSA user with hinting privilege
    Mapping files
    DSA Hint wizard
CONCLUSION
REFERENCES

5 EXECUTIVE SUMMARY The EMC VMAX3 family of storage arrays is the next major step in evolving VMAX hardware and software targeted to meet new industry challenges of scale, performance, and availability. With VMAX3, EMC has made advances in making complex operations of storage management, provisioning, and setting performance goals simple to run and manage. The VMAX3 family of storage arrays is pre-configured from the factory to simplify deployment at customer sites and minimize time to first I/O. Each array uses Virtual Provisioning to allow the user easy and quick storage provisioning. The VMAX3 is offered as an All- Flash Array, combining solid-state drives (SSDs) and a large cache that accelerates both writes and reads beyond the range of conventional SSDs. VMAX3 is also offered as a hybrid-array with both SSDs and traditional hard-disk drives (HDDs) leveraging VMAX Fully Automated Storage Tiering (FAST) to automatically distribute the load across the storage tiers as the system workload changes over time. The VMAX3 hardware architecture includes more CPU power, larger persistent cache, and a new Dynamic Virtual Matrix dual InfiniBand fabric interconnect that creates an extremely fast internal memory-to-memory and data-copy fabric. Many enhancements were introduced to VMAX3 replication software to support new capabilities. EMC TimeFinder SnapVX local replication allows for hundreds of snapshots that can be incrementally refreshed or restored and can cascade any number of times. EMC Symmetrix Remote Data Facility (SRDF) remote replication software adds new features and capabilities that provide more robust remote replication support such as SRDF/Metro. VMAX3 with EMC ProtectPoint adds the ability to connect directly to the EMC Data Domain system, which allows Oracle database backups to be sent directly from the primary storage to the Data Domain system without having to go through the host first. In addition to traditional block storage, VMAX3 also offers embedded file support known as Embedded NAS (enas) via a new hypervisor layer. This white paper explains the basic VMAX3 design changes regarding storage provisioning and performance management and how they simplify the management of storage and affect Oracle database layout decisions. It explains the FAST architecture for managing Oracle database performance using Service Level Objectives (SLOs) and provides guidelines and best practices for its use. This paper also describes the FAST Hinting feature that gives database administrators (DBAs) the ability to temporarily move database objects, such as tables and indexes, to SSDs for reasons such as month-end processing or holiday sales. AUDIENCE This white paper is intended for database and system administrators, storage administrators, and system architects who are responsible for implementing, managing, and maintaining Oracle databases and VMAX3 storage systems. It is assumed that readers have some familiarity with Oracle and the VMAX3 family of storage arrays and are interested in achieving higher database availability, performance, and ease of storage management. VMAX3 PRODUCT OVERVIEW VMAX3 OVERVIEW The VMAX3 family of storage arrays is built on the strategy of simple, intelligent, modular storage, and incorporates a Dynamic Virtual Matrix interface that connects and shares resources across all VMAX3 engines. This interface allows the storage array to seamlessly grow from an entry-level configuration into the world s largest storage array. 
It provides the highest levels of performance and availability featuring new hardware and software capabilities. The VMAX3 family, which includes VMAX 100K, 200K and 400K, delivers the latest in Tier-1, scale-out, multi-controller architecture with consolidation and efficiency for the enterprise. With enhanced hardware and software, the VMAX3 array provides unprecedented performance and scale. It offers dramatic increases in floor tile density (GB/ft²) with engines and high-capacity disk enclosures, for both 2.5" and 3.5" drives, consolidated in the same system bay. Figure 1 shows the possible VMAX3 components. Refer to EMC documentation and release notes to find the supported components.

The VMAX3 family also includes VMAX 450F and 850F All-Flash arrays, which are not covered in this paper. All VMAX3 models are pre-configured to significantly shorten the time from installation to first I/O.

1 to 8 redundant VMAX3 engines
Up to 4 PB usable capacity
Up to 256 FC host ports
Up to 16 TB global memory (mirrored)
Up to 384 cores, 2.7 GHz Intel Xeon E v2
Up to 5,760 drives: SSD, 7.2k rpm, 10k rpm, or 15k rpm

Figure 1 VMAX3 Hybrid storage array (1)

VMAX3 engines provide the foundation of the storage array. Each fully redundant engine contains two VMAX3 directors and redundant interfaces to the new Dynamic Virtual Matrix dual InfiniBand fabric interconnect. Each director consolidates front-end, global memory, and back-end functions, enabling direct memory access to data for optimized I/O operations. Depending on the array chosen, up to eight VMAX3 engines can be interconnected via a set of active fabrics that provide scalable performance and high availability. New to the VMAX3 design is a shared multi-core architecture: host ports are no longer mapped directly to CPU resources. CPU resources are allocated as needed using pools (front-end, back-end, or data services pools) of CPU cores, which can service all activity in the VMAX3 array.

VMAX3 arrays introduce the industry's first open storage and hypervisor-converged operating system, HYPERMAX OS. It combines industry-leading high availability, I/O management, data integrity validation, quality of service, storage tiering, and data security with an open application platform. HYPERMAX OS features a real-time, non-disruptive storage hypervisor that manages and protects embedded data services (running in virtual machines) by extending VMAX high availability to these data services that traditionally have run external to the array (such as EMC Unisphere). HYPERMAX OS runs on top of the Dynamic Virtual Matrix, using its scale-out flexibility of cores, cache, and host interfaces. The embedded storage hypervisor reduces external hardware and networking requirements, delivers the highest levels of availability, and dramatically lowers latency.

All storage in the VMAX3 array is virtually provisioned. VMAX Virtual Provisioning enables users to simplify storage management and increase capacity utilization by sharing storage among multiple applications and only allocating storage as needed from a shared pool of physical disks known as a Storage Resource Pool (SRP). The array uses the dynamic and intelligent capabilities of FAST to meet specified SLOs throughout the lifecycle of each application. VMAX3 SLOs and SLO provisioning are new to the VMAX3 family and are tightly integrated with FAST to optimize agility and array performance across all drive types in the system. While VMAX3 can ship as an All-Flash array, when purchased with hybrid drive types as a combination of flash and hard drives, FAST technology can improve application performance and simultaneously reduce costs by using a combination of high-performance flash drives with cost-effective, high-capacity hard disk drives.

1 Additional drive types and capacities may be available. Contact your EMC representative for more details.

For local replication, VMAX3 adds a new feature to TimeFinder software called SnapVX, which supports a greater number of snapshots. Unlike previous VMAX snapshots, SnapVX snapshots do not require the use of dedicated target devices. SnapVX allows for up to 256 snapshots per individual source. These snapshots can copy (referred to as "link copy") their data to new target devices and re-link to update just the incremental data changes of previously linked devices. For remote replication, SRDF adds new capabilities and features to provide protection for Oracle databases and applications. All user data entering VMAX3 is T10 Data Integrity Field (DIF)-protected, including replicated data and data on disks. T10 DIF protection can be expanded all the way to the host and application to provide full end-to-end data protection for Oracle databases using either Oracle ASMlib with UEK or Automatic Storage Management (ASM) Filter Driver on a variety of supported Linux operating systems 2.

VMAX3 AND SERVICE LEVEL OBJECTIVE (SLO) BASED PROVISIONING

Introduction to FAST in VMAX3
With VMAX3, FAST is enhanced to include both intelligent storage provisioning and performance management, using SLOs. SLOs automate the allocation and distribution of application data to the correct data pool (and therefore storage tier) without manual intervention. Simply choose the SLO (for example, Platinum, Gold, or Silver) that best suits the application requirement. SLOs are tied to expected average I/O latency for both reads and writes; therefore, both the initial provisioning and the application's ongoing performance are automatically measured and managed based on compliance to storage tiers and performance goals. Every 10 minutes, FAST samples the storage activity, and when necessary, moves data at FAST's sub-LUN granularity, which is 5.25 MB (42 extents of 128 KB). SLOs can be dynamically changed at any time, and FAST continuously monitors and adjusts the data location at the sub-LUN granularity across the available storage tiers to match the performance goals provided. This is all done automatically within the VMAX3 storage array without deploying a complex application Information Lifecycle Management (ILM) 3 strategy or using host resources for migrating data due to performance needs.

VMAX3 FAST Components
Figure 2 shows the elements of FAST that form the basis for SLO-based management, as described below.

The Physical disk group provides grouping of physical storage (flash or hard disk drives) based on drive types. All drives in a disk group have the same technology, capacity, form factor, and speed. The disk groups are pre-configured based on the specified configuration requirements at the time of purchase.

The Data Pool is a collection of RAID-protected internal devices (also known as TDATs or thin data devices) that are carved out of a single physical disk group. Each data pool can belong to a single SRP (see definition below) and provides a tier of storage based on its drive technology and RAID protection. Data pools can allocate capacity for host devices or replications. Data pools are also pre-configured at the factory to provide optimal RAID protection and performance.

The Storage Resource Pool (SRP) is a collection of data pools that provides FAST a domain for capacity and performance management. By default, a single default SRP is factory pre-configured. Additional SRPs can be created with an EMC service engagement.
The data movements performed by FAST are done within the boundaries of the SRP and are covered in detail later in this paper. The Storage Group (SG) is a collection of host devices (LUNs) that consume storage capacity from the underlying SRP. Because both FAST and storage provisioning operations are managed at a storage group level, storage groups can be cascaded (hierarchical) to allow different levels of granularity required for each operation (as described in the cascaded storage group section of this paper). The host devices (LUNs) are the components of a storage group. In VMAX3, all host devices are virtual, and at the time of creation, can be fully allocated or thin. Virtual means that they are a set of pointers to data in the data pools, which allows FAST to manage the data location across data pools seamlessly. Fully allocated means that the device s full capacity is reserved in the data pools even before the host has access to the device. Thin means that although the host sees the LUN with its full reported capacity, in reality, no capacity is allocated from the data pools until explicitly written to by the host. All host devices are natively striped across the data pools where they are allocated with granularity of a single VMAX3 track size, which is 128 KB. 2 Refer to EMC Simple Support Matrix for more information on T10 DIF supported HBA and operating systems. 3 Information Lifecycle Management (ILM) refers to a strategy of managing application data based on policies. It usually involves complex data analysis, mapping, and tracking practices. 7

The Service Level Objectives (SLO) provide a pre-defined set of service levels (such as Platinum, Gold, or Silver) that can be supported by the underlying SRP. Each SLO has a specific performance goal that FAST will work to satisfy. An SLO defines an expected average response time target for a storage group. By default, all host devices and all storage groups are attached to the Optimized SLO (which will assure that I/Os are serviced from the most appropriate data pool for their workload), but in cases where more deterministic performance goals are needed, specific SLOs can be specified.

Figure 2 VMAX3 architecture and service level provisioning

Service Level Objectives (SLO) and Workload Types Overview
Each SRP contains a set of known storage resources, as seen in Figure 2. Based on the available resources in the SRP, HYPERMAX OS will offer a list of available SLOs that can be met using this particular SRP, as shown in Table 1. This assures that SLOs can be met, and that SRPs are not provisioned beyond their ability to meet application requirements.

Note: Since SLOs are tied to the available drive types, it is important to plan the requirements for a new VMAX3 system carefully. EMC works with customers using a new Sizer tool to assist with this task.

Table 1 Service Level Objectives

SLO         Minimum required drive combinations to list SLO    Performance expectation
Diamond     SSD                                                Emulating SSD performance
Platinum    SSD and (15K or 10K)                               Emulating performance between 15K drive and SSD
Gold        SSD and (15K or 10K or 7.2K)                       Emulating 15K drive performance
Silver      SSD and (15K or 10K or 7.2K)                       Emulating 10K drive performance
Bronze      7.2K and (15K or 10K)                              Emulating 7.2K drive performance
Optimized   Any                                                System optimized performance

A specific SLO does not need to be selected; by default, all data in the VMAX3 storage array receives the Optimized SLO. The system Optimized SLO meets the performance and compliance requirements by dynamically placing the most active data in the highest-performing tier and the less active data in low-performance, high-capacity tiers. FAST will place the most active data on higher-performing storage, and the least active data will tend to stay on 10k rpm type drives. However, when specific storage groups (database LUNs) require a more deterministic SLO, one of the other available SLOs can be selected. For example, a storage group holding critical Oracle data files can receive a Diamond SLO while the Oracle log files can receive a Platinum SLO. A less critical application can be fully contained in a Gold or Silver SLO. See Oracle Data Types and the Choice of SLO for more information.

Once an SLO is selected (other than an Optimized SLO), it can be further qualified by a workload type: online transaction processing (OLTP) or decision-support system (DSS). The OLTP workload is focused on optimizing performance for small-block I/Os, and the DSS workload is focused on optimizing performance for large-block I/Os. The workload type can also specify whether to account for any overhead associated with replication (local or remote). The workload type qualifiers for replication overhead are OLTP_Rep and DSS_Rep, where Rep denotes replicated.

Understanding SLO Definitions and Workload Types
Each SLO is effectively a reference to an expected response-time range (minimum and maximum allowed latencies) for host I/Os, where a particular Expected Average Response Time is attached to each SLO and workload combination. The Solutions Enabler CLI or Unisphere for VMAX can list the available service levels and workload combinations, as seen in Figure 3. These applications only list the expected average latency, not the range of values. Without a workload type, the latency range is the widest for its SLO type. When a workload type is added, the range is reduced, due to the added information. When the Optimized SLO is selected (which is also the default SLO for all storage groups, unless the user assigns another), the latency range is in fact the full latency spread that the SRP can satisfy, based on its known and available components.

Figure 3 Unisphere shows available SLOs

Important SLO considerations:

Because an SLO references a range of target host I/O latencies, the smaller the spread, the more predictable the result. It is therefore recommended to select both an SLO and a workload type. For example, a Platinum SLO with OLTP workload and no replications.

Because an SLO references an Expected Average Response Time, it is possible for two applications running a similar workload and set with the same SLO to perform slightly differently. This can happen if the host I/O latency still falls within the allowed range. For that reason, use a workload type with an SLO when a smaller range of latencies is desirable.

Note: SLOs can be easily changed using Solutions Enabler or Unisphere for VMAX. Also, when it is necessary to add additional layers of SLOs, the SG can be easily changed into a cascaded SG so that each child or the parent can receive its appropriate SLO.

DATABASE STORAGE ANALYZER PRODUCT OVERVIEW
Database Storage Analyzer (DSA) is an application that ships with Unisphere 8 for VMAX and provides database and storage administrators a performance monitoring and troubleshooting solution for Oracle databases running on EMC Symmetrix and VMAX storage systems. DSA correlates database and storage level activities. It presents I/O metrics such as IOPS, throughput, and response time from both the database and the VMAX storage system, which helps to immediately identify gaps between the database I/O performance and the storage I/O performance. DSA also provides information about what storage tier is being used and its allocations by the database. This information helps set performance expectations. Figure 4 shows a sample screenshot of the DSA Performance dashboard, showing the I/O Wait Time for the selected database. For example, when the I/O Wait Time is high, it is possible that allowing more capacity on the SSD tier will improve performance (providing enough connectivity is in place).

Note the following:
DSA is packaged together with Unisphere for VMAX at no additional cost.
DSA currently supports VMAX systems running EMC Enginuity 5671 or higher, and HYPERMAX OS 5977 or higher.
Unisphere and DSA store their data in the same database, so there is no need to allocate additional resources for DSA, other than what is specified in the Unisphere for VMAX documentation.
With embedded management, VMAX provides access to both Unisphere and DSA running inside the array in a virtual machine. Both Unisphere and DSA can be accessed directly via their own URL, yet contain an optional link to launch each other, providing the user has access rights and login information to the other application.

Figure 4 DSA Performance dashboard

DSA AND FAST HINTING
With Version 8.1, DSA introduces the new FAST Hinting feature. This feature provides a way to improve application performance by sending database-related hints to the FAST engine in the storage array for data that is likely to be accessed during a specified period of time. FAST Hinting allows the DBA to promote just the important parts of a database to the SSD tier for a specific time without having to change the SLO for the entire database. For example, if in a 2 TB database only 256 GB are accessed for month-end processing, only those specific database objects can be hinted on and promoted to the SSD tier. Currently, there is a limit imposed by FAST that the maximum amount of data per hint cannot exceed 20% of the SG capacity 4. When a hint is processed by DSA, if its capacity is larger than allowed, an error message will appear, and the hint status will show as failed. A DSA user can create hints from the Analytics tab in the DSA interface by going through a simple process of selecting the relevant database objects and then setting the priority and the time when the hint should be active.

Offering the capability for users to provide FAST Hints is critical for enhancing FAST to be application-aware, as well as ensuring SLO compliance. FAST identifies changes in the workload and adjusts the application data extents in the storage tiers to achieve the SLO performance goal. Even with infrequent events such as month-end closing, holiday sales, or critical business reports, FAST will move the data to the SSD tier while the report is running. However, with a hint, FAST can place the database objects in the SSD tier before the activity starts, allowing for the best application performance right from the start of the run. Hinting is not supported for databases running on virtual environments other than VMware with Raw Device Mapping (RDM) configuration.

4 A future release may change this limit to a higher value, such as 50%.
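Before creating a hint, it can be useful to estimate how much capacity the candidate objects occupy, since a hint that exceeds the limit described above will fail. The query below is a minimal sketch against the standard DBA_SEGMENTS view; the SALES schema and object names are illustrative assumptions, and the total should be compared with 20% of the storage group capacity before the hint is submitted in the DSA Hint wizard.

SQL> SELECT owner, segment_name, ROUND(SUM(bytes)/1024/1024/1024, 2) AS size_gb
     FROM dba_segments
     WHERE owner = 'SALES'
       AND segment_name IN ('ORDERS', 'ORDERS_PK')
     GROUP BY owner, segment_name;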

12 STORAGE DESIGN PRINCIPLES FOR ORACLE ON VMAX3 VMAX3 storage provisioning has become much simpler than before. Since VMAX3 physical disk groups, data pools, and even the default SRP come pre-configured, based on inputs to the Sizer tool that helps size them correctly, the only requirement is to configure connectivity between your hosts and the VMAX3 and then start provisioning host devices. The following sections discuss the principles and considerations for storage connectivity and provisioning for Oracle. STORAGE CONNECTIVITY CONSIDERATIONS When planning storage connectivity for performance and availability, it is recommended to connect to storage ports on different engines and directors 5, increasing high-availability in the unlikely event of a component failure. Dynamic core allocation is a new feature to VMAX3. Each VMAX3 director provides services such as front-end connectivity, back-end connectivity, or data management. Each such service has its own set of cores on each director and these cores are pooled together to provide CPU resources that can be allocated as necessary. For example, even if host I/Os arrive via a few front-end ports, all the cores in that pool will service these ports. As I/Os arriving to other directors will have their own core pools, again, for best performance and availability, it is best to connect each host to ports across directors before using ports on the same director. HOST CONNECTIVITY CONSIDERATIONS Host connectivity considerations include two aspects. The first is the number and speed of the host bus adapter (HBA) ports (initiators), and the second is the number and size of host devices. HBA ports considerations Each HBA port (initiator) creates a path for I/Os between the host and the SAN switch, which then continues to the VMAX3 storage. If a host only uses a single HBA port, it will have a single I/O path that must serve all I/Os. This design is not advisable, as a single path does not provide high availability, and also risks a potential bottleneck during high I/O activity due to the lack of additional ports for load balancing. A better design provides each database server at least two HBA ports, preferably on two separate HBAs. The additional ports provide more connectivity and also allow multipathing software like EMC PowerPath or Linux Device-Mapper to load balance and fail over across HBA paths. Each path between the host and storage device creates an SCSI device representation on the host. For example, two HBA ports going to two VMAX front-end adapter ports with a 1:1 relationship create three presentations for each host device: one for each path and another that the multipathing software creates as a pseudo device (such as /dev/emcpowera, or /dev/dm-1, etc.). If each HBA port was zoned and masked to both front-end adapter (FA) ports, there would be five SCSI device representations for each host device (one for each path combination + pseudo device). This second method provides more path combinations for availability, but does not enhance performance. While modern operating systems can manage hundreds of devices, it is not advisable or necessary, and it burdens the user with complex tracking and storage provisioning management overhead. It is therefore usually sufficient for each HBA initiator to connect to one or two frontend VMAX targets, one on each director (preferably on different engines if more than one is available), and not have each HBA port zoned and masked to all VMAX front-end ports. 
This approach provides enough connectivity, availability, and concurrency, yet reduces the complexity of the host registering many SCSI devices unnecessarily.

5 Each VMAX3 engine has two redundant directors.
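To confirm that each host device ends up with the intended number of paths, the path count per pseudo device can be checked from the host. The commands below are standard multipathing and monitoring tools; the device alias shown is an illustrative assumption.

# multipath -ll ora_data1 (DM-Multipath: lists every active path grouped under the pseudo device)
# powermt display dev=all (PowerPath: shows the paths and their state for each emcpower device)
# iostat -xm 5 (monitor per-device queue size and service times while the workload runs)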

13 NUMBER AND SIZE OF HOST DEVICES CONSIDERATIONS VMAX3 introduces the ability to create host devices with a capacity from a few megabytes to multiple terabytes. With the native striping across data pools that VMAX3 provides, the user may be tempted to create only a few very large host devices. Consider the following example: a 10 TB Oracle database can reside on a 1 x 10 TB host device, or perhaps on 10 x 1 TB host devices. While either option satisfies the capacity requirement, it is recommended to use a reasonable number of host devices and size. In this example, if the database capacity was to rise above 10 TB, it is likely that the DBA will want to add another device of the same capacity (which is an Oracle ASM best practice), even if they do not need 20 TB in total. Therefore, large host devices create very large building blocks when additional storage is needed. Also, each host device creates its own host I/O queue at the operating system. Each such queue can service a tunable, but limited, number of I/Os simultaneously. If, for example, the host had only four HBA ports, and only a single 10 TB LUN (using the previous example again), with multipathing software, it will have only four paths available to queue I/Os. A high level of database activity will generate more I/Os than the queues can service, resulting in artificially elongated latencies. In this example, two additional host devices are advisable to alleviate such an artificial bottleneck. Host software such as PowerPath or iostat can help in monitoring host I/O queues to ensure the number of devices and paths is adequate for the workload. Another benefit of using multiple host devices is that internally the storage array can use more parallelism when operations such as FAST data movement or local and remote replications take place. By performing more copy operations simultaneously, the overall operation takes less time. Finally, if using the Diamond SLO, the allocated capacity of the devices must fit in the capacity of the SSD tier. Note: While there is no recommendation for the size and number of host devices, we recommend finding a reasonable, low number that offers enough concurrency, provides an adequate building block for capacity increments when additional storage is needed, and does not become too large to manage. For example, a good starting point for a moderate-performance database with storage replication is 8-16 devices for ASM +DATA disk group and 4-8 LUNs for the ASM +REDO. These numbers will change if more concurrency and I/O queues are required. VIRTUAL PROVISIONING AND THIN DEVICES CONSIDERATIONS All VMAX3 host devices are Virtually Provisioned (also known as Thin Provisioning), meaning they are merely a set of pointers to capacity allocated at 128 KB extent granularity in the storage data pools. However, to the host, they look and respond just like normal LUNs. Using pointers allows FAST to move the application data between the VMAX3 data pools without affecting the host. It also allows better capacity efficiency for TimeFinder snapshots by sharing of extents when data does not change between snapshots. Virtual provisioning offers a choice of whether to fully allocate the host device capacity, or allow it to do allocation on-demand. A fully allocated device consumes all of its capacity in the data pool on creation, and therefore, there is no risk that future writes may fail if the SRP has no capacity left 6. 
On the other hand, allocation on-demand allows over-provisioning, meaning that although the storage devices are created and look to the host as available with their full capacity, actual capacity is only allocated in the data pools when host writes occur. This is a common cost-saving practice, but it requires the storage administrator to monitor the available capacity in the SRP to prevent write failure due to the pool being completely full.

Allocation on-demand is suitable in situations when:
The application's capacity growth rate is unknown, and
The user prefers not to commit large amounts of storage ahead of time, as it may never get used, and
The user prefers not to disrupt host operations at a later time by adding more devices.

Therefore, if allocation on-demand is leveraged, capacity will only be physically assigned as needed to meet application requirements.

Note: Allocation on-demand works well with Oracle ASM in general, as ASM tends to re-use deleted space efficiently. When ASM Filter Driver is used, deleted capacity can be easily reclaimed in the SRP. This is done by adding a thin attribute to the ASM disk group and performing a manual ASM rebalance, as shown in the example below.

6 FAST allocates capacity in the appropriate data pools based on the workload and SLO. However, when a data pool is full, FAST may use other pools in the SRP to prevent host I/O failure.
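A minimal sketch of the reclamation procedure just described, assuming Oracle 12c with ASM Filter Driver and a disk group named DATA (the disk group name and rebalance power are illustrative):

SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'thin_provisioned'='TRUE';  -- enable thin support on the disk group
SQL> ALTER DISKGROUP DATA REBALANCE POWER 4 WAIT;                   -- the compaction phase of the rebalance returns freed space to the SRP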

Since Oracle pre-allocates capacity in the storage when database files are created, when allocation on-demand is used, it is best to deploy a strategy where database capacity is grown over time based on actual need. For example, if ASM was provisioned with a thin device of 2 TB, rather than immediately creating datafiles of 2 TB and consuming all its space, the DBA should create datafiles that consume only the capacity necessary for the next few months, adding more datafiles at a later time, or increasing their size, based on need.

PARTITION ALIGNMENT CONSIDERATIONS FOR X86-BASED PLATFORMS
ASM requires at least one partition on each host LUN. Some operating systems (such as Solaris) also require at least one partition for user data. Due to legacy BIOS architecture, by default, x86-based operating systems 7 tend to create partitions with an offset of 63 blocks, or 63*512 bytes = 31.5 KB. This offset is not aligned with the VMAX track boundary (128 KB for VMAX3). As a result, I/Os crossing track boundaries may be serviced in two operations, causing unnecessary overhead and a potential for performance problems.

Note: It is strongly recommended to align the host partition of VMAX devices to an offset such as 1 MB (2048 blocks). Use the Linux parted command or the expert mode in the fdisk command to move the partition offset.

Example of using the parted Linux command with DM-Multipath or PowerPath:

# DM-Multipath:
for i in {1..32}
do
  parted -s /dev/mapper/ora_data$i mklabel msdos
  parted -s /dev/mapper/ora_data$i mkpart primary 2048s 100%
done

# PowerPath:
for i in ct cu cv cw cx cy cz da db dc dd de
do
  parted -s /dev/emcpower$i mklabel msdos
  parted -s /dev/emcpower$i mkpart primary 2048s 100%
done

Example of using the fdisk command:

[root@dsib0063 scripts]# fdisk /dev/mapper/ora_data1
...
Command (m for help): n (create a new partition)
Command action
  e   extended
  p   primary partition (1-4)
p (this will be a primary partition)
Partition number (1-4): 1 (create the first partition)
First cylinder ( , default 1): [ENTER] (use default)
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} ( , default 13054): [ENTER] (use full LUN capacity)
Using default value
Command (m for help): x (change to expert command mode)
Expert command (m for help): p (print partition table)
Disk /dev/mapper/ora_data1: 255 heads, 63 sectors, cylinders
Nr AF  Hd Sec  Cyl  Hd Sec Cyl      Start       Size ID
Expert command (m for help): b (move partition offset)
Partition number (1-4): 1 (move partition 1 offset)
New beginning of data ( , default 63): 2048 (offset of 1 MB)
Expert command (m for help): p (print partition table again)
Disk /dev/mapper/ora_data1: 255 heads, 63 sectors, cylinders
Nr AF  Hd Sec  Cyl  Hd Sec Cyl      Start       Size ID
Expert command (m for help): w (write updated partition table)

7 Starting with Red Hat or Oracle Linux 7.x, the first partition is aligned by default at a 1 MB offset. Prior releases require manual partition alignment.

ASM AND DATABASE STRIPING CONSIDERATIONS
Host striping occurs when a host allocates capacity to a file and the storage allocations do not all take place as contiguous space on a single host device. Instead, the file's storage allocation is spread (striped) across multiple host devices to provide more concurrency, although to anyone trying to read or write to the file, it appears contiguous. When the Oracle database issues reads and writes randomly across the datafiles, striping is not of great importance, since the access pattern is random. However, when a file is read or written to sequentially, striping can be of great benefit as it spreads the workload across multiple storage devices, creating more parallelism of execution and, often, higher performance. Without striping, the workload is directed to a single host device, with limited ability for parallelism.

Oracle ASM natively stripes its content across the ASM members (storage devices). ASM uses two types of striping: the first, which is the default for most Oracle data types, is called coarse-grained striping, and it allocates capacity across ASM disk group 8 members round-robin, with a 1 MB default allocation unit (AU), or stripe-depth. The ASM AU can be sized from 1 MB (default) up to 64 MB. The second type of ASM striping is called fine-grained striping, which is used by default only for the control files. Fine-grained striping divides the ASM members into groups of eight, allocates an AU on each, and stripes the newly created data at 128 KB across the eight members, until the AU on each of the members is full. Then, it selects another eight members and repeats the process until all user data is written. This process usually takes place during Oracle file initialization, when the DBA creates datafiles, tablespaces, or a database.

The type of striping for each Oracle data type is kept in ASM templates, which are associated with the ASM disk groups. Existing ASM extents are not affected by template changes, and therefore, it is best to set the ASM templates correctly as soon as the ASM disk group is created. To inspect the ASM templates, type the following command:

SQL> select name, stripe from V$ASM_TEMPLATE;

ASM default behavior is typically adequate for most workloads. However, when Oracle databases expect a high update rate, which generates a high volume of log writes, EMC recommends setting the redo logs' ASM template 9 to fine-grained instead of coarse-grained to create a smaller stripe size (256 KB instead of 1 MB). To change the database redo logs template, type the following command on the ASM disk group holding the logs:

SQL> ALTER DISKGROUP <REDO_DG> ALTER TEMPLATE onlinelog ATTRIBUTES (FINE);

Changing the AU size or template of the datafiles is usually not necessary and will not provide a performance enhancement. If the DBA still wanted to test with a different AU size, the following examples show how to change it and how to change the datafile template from its default of coarse-grained to fine-grained.
The following example shows how to change the AU size of a disk group during creation:

SQL> CREATE DISKGROUP <DSS_DG> EXTERNAL REDUNDANCY
     DISK 'AFD:ORA_DEV1' SIZE 10G, 'AFD:ORA_DEV2' SIZE 10G
     ATTRIBUTE 'compatible.asm'=' ', 'au_size'='8m';

8 EMC recommends no ASM mirroring (i.e., external redundancy), which creates a single ASM failure group. However, when ASM mirroring is used, or similarly, when multiple ASM failure groups are manually created, the striping will occur within a failure group rather than at the disk group level.
9 Since each ASM disk group has its own template settings, modifications such as the redo log template change should only take place in the appropriate disk group where the logs reside.
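The resulting allocation unit size and stripe settings can be verified with the standard ASM dictionary views. The query below is a small sketch; the two template names listed are simply the ones discussed in this section.

SQL> SELECT dg.name AS diskgroup, dg.allocation_unit_size/1024/1024 AS au_mb,
            t.name AS template, t.stripe
     FROM v$asm_diskgroup dg
     JOIN v$asm_template t ON t.group_number = dg.group_number
     WHERE t.name IN ('DATAFILE', 'ONLINELOG');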

The following example shows how to change the stripe type of the DSS_DG disk group to fine-grained:

SQL> ALTER DISKGROUP <DSS_DG> ALTER TEMPLATE datafile ATTRIBUTES (FINE);

In a similar fashion, tempfile templates can be modified to use fine-grained striping for applications where many temp files are generated.

ORACLE DATA TYPES AND THE CHOICE OF SLO
The following sections describe considerations for various Oracle data types and selection of SLOs to achieve the desired performance.

Planning SLO for Oracle databases
VMAX3 storage arrays can support many enterprise applications, together with all their replication needs and auxiliary systems (such as test, development, reporting, patch-testing, and others). With FAST and SLO management, it is easy to provide the right amount of resources to each such environment and modify it as business priorities or performance needs change over time. This section discusses some of the considerations regarding different Oracle data types and SLO assignment for them.

When choosing the SLO for the Oracle database, consider the following:

While FAST operates at a sub-LUN granularity to satisfy SLO and workload demands, the SLO is set at a storage group granularity (a group of devices). It is therefore important to match the storage group to sets of devices of equal application and business priority (for example, a storage group can contain one or more ASM disk groups, but a single ASM disk group should never be divided across multiple storage groups with more than a single SLO, since Oracle stripes the data across it).

Consider that with VMAX3 all writes go to the cache, which is persistent, and are lazily written to the back-end storage. Therefore, unless other reasons are in play (such as synchronous remote replications, long I/O queues, or a system that is over-utilized), write latency should always be very low (cache-hit), regardless of the SLO or disk technology storing the data. On a well-balanced system, the SLO's primary effect is on read latency and IOPS.

In general, EMC recommends, for mission-critical databases, separating the following data types to distinct sets of devices (using an ASM example; a creation sketch follows this list):

+GRID (when Oracle Real Application Clusters (RAC) is configured): when RAC is installed, it keeps the cluster configuration file and quorum devices inside the initial ASM disk group. When RAC is used, EMC recommends using normal or high ASM redundancy (double or triple ASM mirroring) only for this disk group. The reason is that it is small, so mirroring hardly makes a difference; however, it tells Oracle to create more quorum devices. All other disk groups should normally use external redundancy, leveraging capacity savings and VMAX3 RAID protection.
Note: Do not mix database data with +GRID if storage replication is used, as cluster information is unique to its location. If a replica is to be mounted on another host, a different +GRID can be pre-created there with the correct cluster information for that location.

+DATA: a minimum of one disk group for data and control files. Large databases may use more disk groups for datafiles, based on business needs, retention policy, etc. Each such disk group can have its own SLO, using a VMAX3 storage group or a cascaded storage group to set it.

+REDO: online redo logs. A single ASM disk group, or sometimes two (when logs are multiplexed). It is recommended to separate data from logs for performance reasons, but also when TimeFinder is used for backup/recovery.
This ensures that a restore of the datafile devices will not overwrite the redo logs.

+TEMP (optional): temp files can typically reside with datafiles; however, when TEMP is very active and large, the DBA may decide to separate it into its own ASM disk group, and thus allow a different SLO and performance management. The DBA may also decide to separate TEMP to its own devices when storage replications are used, since temp files do not need to be replicated (they can be easily re-created if needed), which saves bandwidth for remote replications.

+FRA: typically used for archive and/or flashback logs. If flashback logs consume a lot of space, the DBA may decide to separate the archive logs from the flashback logs.
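As a concrete illustration of the separation described above, the statements below sketch one possible set of ASM disk groups, each backed by its own VMAX3 (child) storage group so that it can receive its own SLO. The AFD disk labels are illustrative assumptions; substitute the devices actually provisioned from the array.

SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
     DISK 'AFD:ORA_DATA1', 'AFD:ORA_DATA2', 'AFD:ORA_DATA3', 'AFD:ORA_DATA4';  -- data and control files
SQL> CREATE DISKGROUP REDO EXTERNAL REDUNDANCY
     DISK 'AFD:ORA_REDO1', 'AFD:ORA_REDO2';                                    -- online redo logs
SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
     DISK 'AFD:ORA_FRA1', 'AFD:ORA_FRA2';                                      -- archive and/or flashback logs
SQL> CREATE DISKGROUP GRID NORMAL REDUNDANCY
     DISK 'AFD:ORA_GRID1', 'AFD:ORA_GRID2', 'AFD:ORA_GRID3';                   -- RAC cluster files, ASM-mirrored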

The following section will address SLO considerations for these data types.

SLO considerations for Oracle datafiles
A key part of performance planning for the Oracle database is understanding the business priority of the application it serves. With large databases, it can also be important to understand the structure of the schemas, tablespaces, partitions, and the associated datafiles of the database. A default SLO can be used for the whole database for simplicity, but when more control over database performance is necessary, a distinct SLO should be used, together with a workload type. The choice of workload type is rather simple: for databases focused on sequential reads/writes, a DSS type should be used. For databases that either focus on transactional applications (OLTP), or mixed workloads such as both transactional and reporting, an OLTP type should be used. If storage remote replication is used (SRDF), add "with Replications" to the workload type. Use the following guidelines when considering which SLO to select.

When to use Diamond SLO: the Diamond SLO is only available when SSDs are available in the SRP. It tells FAST to move all of the allocated storage extents in that storage group to SSDs, regardless of the I/O activity to them. Diamond provides the best read I/O latency, as flash technology is best for random reads. Diamond is also popular for mission-critical databases servicing many users, where the system is always busy, or even when each group of users starts their workload intermittently and expects high performance with low latency. By having the whole storage group use SSDs, it does not matter when a user becomes active in order to provide them with the best performance.

When to use Bronze SLO: the Bronze SLO is a good choice for databases that do not have a specific performance requirement. It allows more critical applications to utilize capacity on SSDs. For example, databases can use the Bronze SLO when their focus is development, testing, and reporting. Another use for the Bronze SLO is for gold copies of the database.

When to use Optimized SLO: the Optimized SLO is a good default when FAST should make the best decisions based on actual workload and for the storage array as a whole. Because the Optimized SLO uses the widest range of allowed I/O latencies, FAST will attempt to give the active extents in the storage group the best performance, including SSDs if possible. However, if there are competing workloads with an explicit SLO, they may get priority for the faster storage tiers, based on the smaller latency range other SLOs have.

When to use Silver, Gold, or Platinum SLO: as explained earlier, each SLO provides a range of allowed I/O latency that FAST will work to maintain. Use the SLO that best fits the application based on business and performance needs. Refer to Service Level Objectives (SLO) and Workload Types Overview for more details.

SLO considerations for log files
An active redo log file exhibits sequential write I/Os by the log writer, and once the log is switched, an archiver process will typically start sequential read I/Os from that file. Since all writes in VMAX3 go to cache, the SLO has limited effect on log performance. Archiver reads are not latency critical, so there is no need to dedicate high-performance storage for archive logs. Considering this, Oracle logs can use any SLO, since they are write-latency critical, and the write latency has only to do with the VMAX3 cache, not the back-end storage technology.
Therefore, Oracle log files can normally use the Optimized (default) SLO or the same SLO that is used for the datafiles. In special cases, where the DBA wants the logs on the best storage tiers, the Platinum or Diamond SLO can be used instead.

SLO considerations for TEMP and ARCHIVE Logs
In all VMAX systems, sequential read profiles use intelligent pre-fetch algorithms to optimize read activity over the back end of the array, and all writes, including sequential writes, are buffered in VMAX cache and destaged to disk asynchronously. Temp files use sequential read and sequential write I/O profiles, and archive logs use sequential write I/O profiles. In both cases, any SLO will suffice, and low-latency SLOs (such as Diamond or Platinum) should likely be kept for other Oracle file types that focus on smaller I/Os and are more random-read in nature. Unless there are specific performance needs for these file types, the Optimized SLO can be used for simplicity.

SLO considerations for INDEXES
Often, index access is performed in memory. Index access is also often mixed with the datafiles and shares their SLO. However, when indexes are large, they may incur a lot of storage I/Os. In that case, it may be useful to separate them to their own LUNs (or

ASM disk group) and use a low-latency SLO (such as Gold, Platinum, or even Diamond), as index access is typically random and latency critical.

SLO considerations for All-Flash workloads
When a workload requires predictable low-latency/high-IOPS performance, or when many users with intermittent workload peaks use a consolidated environment, each requiring high performance during their respective activity time, an All-Flash configuration is suitable. An All-Flash deployment is also suitable when data center power and floor space are limited, and a high-performance, consolidated environment is desirable.

Note: VMAX3 offers a choice of a single SSD tier or multiple tiers. Since most databases require additional capacity for replicas, test/dev environments, and other copies of the production data, consider a hybrid array for these replicas, and simply assign the production data to the Diamond SLO.

SLO considerations for noisy neighbor and competing workloads
In highly consolidated environments, many databases and applications compete for storage resources. FAST can provide each with the appropriate performance when specific SLO and workload types are specified. By using different SLOs for each such application (or group of applications), it is easy to manage such a consolidated environment and modify the SLOs when business requirements change. Refer to the next section for additional ways of controlling performance in a consolidated environment.

HOST I/O LIMITS AND MULTI-TENANCY
The host I/O limits quality of service (QoS) feature was introduced in the previous generation of VMAX arrays, but it continues to offer VMAX3 customers the option to place specific IOPS or bandwidth limits on any storage group, regardless of the SLO assigned to that group. Assigning a specific host I/O limit for IOPS, for example, to a storage group with low performance requirements can ensure that a spike in I/O demand will not saturate its storage, cause FAST to inadvertently migrate extents to higher tiers, or overload the storage, affecting performance of more critical applications. Placing a specific IOPS limit on a storage group will limit the total IOPS for the storage group, but it does not prevent FAST from moving data based on the SLO for that group. For example, a storage group with a Gold SLO may have data in both SSD and hard disk drive (HDD) tiers to satisfy the I/O latency goals, yet be limited to the IOPS provided by the host I/O limit.

USING CASCADED STORAGE GROUPS
VMAX3 offers cascaded storage groups, wherein multiple child storage groups can be associated with a single parent storage group for ease of manageability and for storage provisioning. This provides flexibility by associating different SLOs to individual child storage groups to manage service levels for various application objects and using the cascaded storage group for storage provisioning. Figure 5 shows an Oracle server using a cascaded storage group. The Oracle +DATA ASM disk group is set to use the Gold SLO, whereas the +REDO ASM disk group is set to use the Silver SLO. Both storage groups are part of a cascaded storage group, Oracle_DB_SG, which can be used to provision all database devices to the host, or to multiple hosts if there is a cluster.

Figure 5 Cascaded storage group

ORACLE DATABASE PROVISIONING

STORAGE PROVISIONING TASKS WITH VMAX3
Since VMAX3 comes pre-configured with data pools and an SRP, the next step is to create the host devices and make them visible to the hosts using device masking.

Note: Remember that zoning at the switch establishes the physical connectivity, which device masking then defines more narrowly. Zoning needs to be set ahead of time between the host initiators and the storage ports that will be used for device masking tasks.

Device creation is an easy task and can be performed in a number of ways:
1) Using the Unisphere for VMAX3 UI
2) Using the Solutions Enabler CLI

Device masking is also an easy task and includes the following steps:
1) Creation of an Initiator Group (IG). The IG is the list of host HBA port worldwide names (WWNs) to which the devices will be visible.
2) Creation of a Storage Group (SG). Since SGs are used for both FAST SLO management and storage provisioning, review the information on Using cascaded storage groups.
3) Creation of a Port Group (PG). The PG is the group of VMAX3 front-end ports where the host devices will be mapped and visible.
4) Creation of a Masking View (MV). The MV creates a combination of the SG, PG, and IG.

Device masking helps control access to storage. For example, storage ports can be shared across many servers, but only the masking view determines which of the servers will have access to the appropriate devices and storage ports.

PROVISIONING ORACLE DATABASE STORAGE WITH UNISPHERE
This section covers storage provisioning for Oracle databases using Unisphere for VMAX.

Creation of a host Initiator Group (IG)
Provisioning storage requires creation of host initiator groups by specifying the host HBA WWN ports. To create a host IG:
1. Select the appropriate VMAX storage array.
2. Select the Hosts tab.
3. Select from the list of initiator WWNs, as shown in Figure 6.

Figure 6 Create Initiator Group

Creation of Storage Group (SG)
An SG defines a group of one or more host devices. Using the SG creation screen, an SG name is specified, and new storage devices can be created and placed into the SG with their initial SLO. If more than one group of devices is requested, each group creates a child SG and can take its own unique SLO. The SG creation screen is shown in Figure 7.

Figure 7 Create Storage Group

22 Select hosts In this step, the hosts to which the new storage will be provisioned are selected. This is done by selecting an IG (host HBA ports), as shown in Figure 8. Figure 8 Create Initiator Group 22

23 Creation of Port Group (PG) A PG defines which of the VMAX front-end ports will map and mask the new devices. A new PG can be created, or an existing one can be selected, as shown in Figure 9. Figure 9 Create Port Group 23

24 Creation of Masking View (MV) At this point, Unisphere is now ready to create a MV. The SG, IG, and PG are presented, and a MV name is entered. VMAX automatically maps and masks the devices in the SG to the Oracle servers, as shown in Figure 10. Figure 10 Create Masking View 24

PROVISIONING ORACLE DATABASE STORAGE WITH SOLUTIONS ENABLER CLI
The following is a provisioning example using the VMAX3 Solutions Enabler CLI to create storage devices and mask them to the host.

Create devices for the ASM disk group
Create 4 x 1 TB thin devices for the Oracle ASM data disk group. The output of the command includes the new device IDs. The full capacity of the devices can be pre-allocated, as shown below.

Note: If "preallocate size=all" is omitted, capacity for the new devices will not be pre-allocated in the data pools, and the devices will be thin. See Virtual Provisioning and thin devices considerations for more information.

Use one of the two methods to create new devices:

# symdev -sid 115 create -tdev -cap 1024 -captype gb -N 4 -v

Or:

# symconfigure -sid 115 -cmd "create dev count=4, size=1024 GB, preallocate size=all, emulation=fba, config=tdev;" commit
...
New symdevs: 06F:072

Mapping and masking devices to the host

<Create child storage groups for the ASM disk groups>
# symaccess -sid 115 create -name DATA_SG -type storage devs 06F:072
# symaccess -sid 115 create -name REDO_SG -type storage devs 073:075

<Create the parent storage group and add the children>
# symaccess -sid 115 create -name ORA1_SG -type storage sg DATA_SG,REDO_SG

<Create the host initiator group using a text file containing the WWNs of the HBA ports>
# symaccess -sid 115 create -name ORA1_IG -type initiator -file wwn.txt

<Create the port group specifying the VMAX3 FA ports>
# symaccess -sid 115 create -name ORA1_PG -type port -dirport 1E:4,2E:4,1E:8,2E:8

<Create the masking view to complete the mapping and masking>
# symaccess -sid 115 create view -name ORA1_MV -sg ORA1_SG -pg ORA1_PG -ig ORA1_IG
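Once the masking view exists, the result can be verified, and an initial SLO can be assigned to each child storage group. The show command below is standard symaccess usage; the symsg SLO assignment is only an illustrative sketch, since option names can vary between Solutions Enabler releases, so verify them against the symsg documentation for your version.

# symaccess -sid 115 show view ORA1_MV (verify the SG/PG/IG combination and the mapped devices)
# symsg -sid 115 -sg DATA_SG set -slo Gold -wl OLTP (illustrative: assign an SLO and workload type to the DATA child SG)
# symsg -sid 115 -sg REDO_SG set -slo Optimized (illustrative: REDO can remain on the default Optimized SLO)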

ORACLE SLO MANAGEMENT TEST USE CASES

TEST CONFIGURATION
This section provides examples of using Oracle databases with SLO management.

Test overview
The following test cases are covered:
Single database performance using different SLOs for the Oracle datafiles
A Diamond SLO (flash-only) configuration with both Oracle +DATA and +LOG on SSDs

Databases configuration details
The following tables show the use cases test environment. Table 2 shows the VMAX3 storage environment. Table 3 shows the host environment. Table 4 shows the databases storage configuration.

Table 2 Test storage environment
Storage array:                 VMAX 400K with 2 engines, HYPERMAX OS
Drive mix (including spares):  64 x SSD - RAID5 (3+1); 246 x 10K HDD - RAID1; 102 x 1 TB 7K HDD - RAID6 (6+2)

Table 3 Test host environment
Oracle:          Oracle Grid and Database release 12.1
Linux:           Oracle Enterprise Linux 6
Multipathing:    Linux DM-Multipath
Hosts:           2 x Cisco C240, 96 GB memory
Volume Manager:  Oracle ASM

Table 4 Test database configuration (database name: FINDB, size: 1.5 TB)
Thin devices (LUNs) / ASM DG assignment    SRP       Start SLO
+DATA: 4 x 1 TB thin LUNs                  Default   Bronze
+REDO: 4 x 100 GB thin LUNs                Default   Bronze

TEST OVERVIEW
General test notes
FINDB was configured to run an industry-standard OLTP workload with a 70/30 read/write ratio and an 8 KB block size, using Oracle Database 12c and ASM. No special database tuning was done, as the focus of the test was not on achieving maximum performance but rather on the comparative differences of a standard database workload. The DATA and REDO storage groups (and ASM disk groups) were cascaded into a parent storage group for ease of provisioning and performance management. Data collection included storage performance metrics using Solutions Enabler CLI and Unisphere, host performance metrics using iostat, and database performance metrics using Oracle Automatic Workload Repository (AWR).

High level test cases
1. In the first test use case of a single database workload, both DATA and REDO devices were set to the Bronze SLO before the test was conducted, so that no data extents remained on SSDs in the storage array and a baseline could be established. During the test, only the DATA SLO was changed from Bronze to Platinum. The REDO storage group was left at Bronze since, as explained earlier, the redo log workload is focused on writes, which are always handled by the VMAX3 cache and are therefore affected to a lesser degree by the SLO.
2. In the second test use case, both DATA and REDO devices were set to a Diamond SLO. The outcome is that, regardless of their workload, all their extents migrated to SSD for best performance.

TEST CASE 1 SINGLE DATABASE RUN WITH GRADUAL CHANGE OF SLO
Test scenario
Testing started with all storage groups using a Bronze SLO to create a baseline. The DATA SG was then configured with progressively faster SLOs (Silver, then Gold, then Platinum), and performance statistics were gathered to analyze the effect of SLO changes on database transaction rates. During all tests, the REDO SG remained on a Bronze SLO.

Objectives
The purpose of this test case is to understand how database performance can be controlled by changing the DATA SLO.

Test execution steps
1. Run an OLTP workload on FINDB with the DATA and REDO storage groups on a Bronze SLO.
2. Gradually apply Silver, Gold, and Platinum SLOs to the DATA SG and gather performance statistics (see the sketch following Table 5).

Test results
Table 5 shows the test results of Test Case 1, including the database average transaction rate (AVG TPM), Oracle AWR average random read response time (db file sequential read), and storage front-end (FA) response time.

Table 5 Use case 1 results
Database   DATA SLO   REDO SLO   AVG TPM   AWR db file sequential read (ms)   FA avg response time (ms)
FINDB      Bronze     Bronze     26,377    ...                                ...
           Silver     Bronze     32,...    ...                                ...
           Gold       Bronze     72,...    ...                                ...
           Platinum   Bronze     146,...   ...                                ...
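The SLO changes in step 2 were applied at the storage-group level. The following is a minimal sketch of the corresponding Solutions Enabler commands, assuming the DATA_SG name from the provisioning example; the workload was left running between changes so that FAST could promote extents before statistics were sampled, and the exact symsg syntax should be checked against the installed Solutions Enabler version.
<Apply progressively faster SLOs to the DATA storage group between workload samples>
# symsg -sid 115 -sg DATA_SG set -slo Silver
# symsg -sid 115 -sg DATA_SG set -slo Gold
# symsg -sid 115 -sg DATA_SG set -slo Platinum
<Confirm the SLO currently assigned to the storage group>
# symsg -sid 115 show DATA_SG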

Figure 11 shows how the database average transaction rate (TPM) changed as a direct effect of changes to the DATA SLO.
Figure 11 Use case 1 TPM changes

The overall change between the Bronze and Platinum SLOs was a 5x improvement in transaction rate. VMAX3 promoted active data extents to increase performance as the SLO changed from Bronze to Silver, Gold, and Platinum. Not only did the transaction rate increase, but I/O latencies were also reduced as more data was allocated on SSDs. With a Bronze SLO, the Oracle database experienced a latency of 10 ms, which improved to 2 ms with the Platinum SLO. The corresponding transaction rate jumped from 26,377 with the Bronze SLO to almost 146,000 with the Platinum SLO.

TEST CASE 2 DIAMOND SLO FOR ORACLE DATA AND REDO
Test scenario
This test used an SSD-only configuration by placing both the DATA and REDO storage groups on a Diamond SLO (SSD only).

Objectives
The purpose of this test case is to provide a flash-only configuration for low latency and high performance, using a Diamond SLO for the DATA and REDO storage groups.

Test execution steps
1. Set the DATA and REDO storage groups SLO to Diamond.
2. Run the OLTP workload and gather performance statistics.

Results
Table 6 shows the test results of Test Case 2, including the database average transaction rate (AVG TPM), Oracle AWR average random read response time (db file sequential read), and storage front-end (FA) response time.

Table 6 Test case 2 results
Database   DATA SLO   REDO SLO   AVG TPM   AWR db file sequential read (ms)   FA avg response time (ms)
FINDB      Diamond    Diamond    183,...   ...                                ...

Figure 12 shows the database average transaction rate (TPM) when using a Diamond SLO for both the DATA and REDO storage groups (equivalent to an All-Flash configuration). The Diamond SLO provided a predictable, high transaction rate at low latency for the OLTP workload.
Figure 12 All-Flash configuration transaction rate
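As a quick host-side cross-check of the random-read latencies reported in Tables 5 and 6, the cumulative average wait for the db file sequential read event can be queried directly from the instance. This is a minimal sketch run as SYSDBA; AWR snapshot comparisons, as used in the tests, give more accurate per-interval figures.
# sqlplus -s / as sysdba <<'EOF'
-- Average wait (ms) for single-block random reads since instance startup
SELECT event,
       total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_wait_ms
FROM   v$system_event
WHERE  event = 'db file sequential read';
EOF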

WORKING WITH DATABASE STORAGE ANALYZER (DSA)
DATABASE METRICS COLLECTION AND RETENTION
DSA collects information by connecting directly to the monitored database using a dedicated, read-only database user. This user has select permissions on a fixed list of Oracle dictionary tables to collect performance and mapping information. DSA fetches performance data every 5 minutes and sends it back to the Unisphere repository database, where the data is aggregated into hourly and daily aggregations. By default, DSA saves the fetched data for 15 days; however, you can extend this period to 30 days. DSA saves the hourly aggregations for 15 months and the daily aggregations for 2 years; however, you can extend both periods up to 3 years. Figure 13 shows an example of how to change the database retention parameters. In addition to the 5-minute collection, there is also a nightly process that accesses the database once a day, at around 12:00 AM, to update the dictionary information about objects and extents.
Figure 13 DSA Retention Configuration

CREATING DSA USER WITH HINTING PRIVILEGE
DSA collects information by connecting directly to the monitored database as a normal database user with limited permissions for its performance collection and mapping tasks. DSA then fetches data dictionary and activity information directly from the database tables. The database user can be created during the installation process, or DBAs can create it manually by running the Add database option. The user requires the privileges needed to collect object and extent data from the Oracle data dictionary; this collection process runs once a day.
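The exact account definition is generated by the DSA Add database workflow, but conceptually it is a read-only dictionary user. The sketch below is illustrative only: the user name, password, and the SELECT_CATALOG_ROLE grant are assumptions standing in for the product-defined privilege list.
# sqlplus -s / as sysdba <<'EOF'
-- Illustrative read-only monitoring account (name, password, and grants are assumptions)
CREATE USER dsa_monitor IDENTIFIED BY "Change_Me_1";
GRANT CREATE SESSION TO dsa_monitor;
-- Read access to the DBA_* and V$ dictionary views used for performance and extent mapping
GRANT SELECT_CATALOG_ROLE TO dsa_monitor;
EOF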

Before you can perform hinting, you must create a user with DSA Hinting privileges. This can be done in Unisphere by selecting Home > All Symmetrix > Administration > Security and creating a DSA user, providing login credentials, and selecting DSA Hinting. Figure 14 shows how to create a DSA user with Hinting privileges.
Figure 14 Creating DSA User

MAPPING FILES
The mapping process is responsible for mapping the Oracle files to the storage system devices. By default, the process runs once a week; however, you can configure it to run at different times. During device mapping, the list of database files is copied using SSH to the monitored database host. A process running on the monitored database host identifies the host physical devices associated with the Oracle files, and then sends the list back to be loaded into the DSA repository (a manual sketch of this file-to-device mapping appears at the end of this section). Figure 15 shows a sample of the Mapping Wizard under the Administration tab; you can change the mapping interval using the Configure button.
Figure 15 Mapping Wizard

DSA HINT WIZARD
Hinting example
Creating a hint in DSA is done by selecting the database that you want to hint from the DSA Dashboard and then selecting Analytics. You can specify a time range for analysis, and DSA will show a list of objects that can be hinted in a window, as shown in Figure 16. Simply mark the objects for hinting and then click Add to Hint to start the Hint Wizard.
Figure 16 Select Objects for Hinting

Use the hint wizard to create a four-object hint
After clicking Add to Hint, you will be asked to provide a hint name, the priority of the hint, and the frequency of the hint, as shown in Figure 17. The priority levels are described below.
Priority 1 Raises the hinted objects to a Diamond SLO. The entire hinted area will be marked as active in FAST. This means that once the hint is active on the array, the hinted data will be eligible to be promoted to SSD to meet the Diamond SLO.
Priority 2 Raises the hinted objects to a Platinum SLO. The entire hinted area will be marked as active in FAST. This means that once the hint is active on the array, the hinted data will be eligible to be promoted to a higher tier to meet the requirements of the Platinum SLO.

Priority 3 Raises the hinted objects to a Platinum SLO, but there is no change to the active/inactive state of the data in FAST. This means that the data will need to be accessed before it can be promoted to a higher tier.
Note: Only a Priority 1 hint will explicitly try to promote the database objects fully to SSD.
Figure 17 Hint Wizard

Use DSA performance reports to show the hint in action
The DSA performance window, shown in Figure 18, shows a number of performance statistics, including IOPS, at the database level. This snapshot shows a sample database load running on a VMAX array, with the workload starting at 08:30 AM. This particular database is about 2 TB and is attached to the Silver SLO in FAST. The graph shows the I/O load on this database for each disk group in the SRP. In this case, all the database workload runs on the 10K disks for the first hour of the test, and only the 10K disks are active. This is because the 10K disks can satisfy the performance requirements of the Silver SLO, so FAST does not need to promote any of the data to SSD to improve performance. To show the effect of hinting on performance, we issued a one-time, Priority 1 hint telling FAST to promote four database tables to SSD at 10:00 AM. You can see in the graph that as the FAST hint starts promoting data to SSD, the number of IOPS going to the SSD devices increases and the number of IOPS going to the 10K disks is reduced. Running these objects under the Silver SLO produced under 2,500 IOPS in the 10K tier, but using FAST Hinting we temporarily improved the performance of those objects to over 10,000 IOPS, without having to change the SLO for the entire database.
Figure 18 DSA Performance Results
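As noted in the Mapping files section, DSA maintains the file-to-device mapping automatically, and hinting relies on that same relationship to promote only the extents backing the selected objects. The rough sketch below shows how the relationship can be traced manually on the database host, assuming ASM over Linux DM Multipath devices; the views queried are standard Oracle dictionary views, and the multipath listing is used only to match the ASM disk paths to the underlying LUNs.
<List the ASM disks and the disk groups they belong to>
# sqlplus -s / as sysdba <<'EOF'
SELECT g.name AS disk_group, d.path AS asm_disk_path
FROM   v$asm_disk d
JOIN   v$asm_diskgroup g ON g.group_number = d.group_number;
EOF
<List the multipath devices so the ASM disk paths can be matched to the underlying LUNs>
# multipath -ll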
