VMAX enas DEPLOYMENT FOR MICROSOFT WINDOWS AND SQL SERVER ENVIRONMENTS


EMC VMAX Engineering White Paper

ABSTRACT
This document provides guidelines and best practices for deploying enas for Microsoft environments using SMB 3.0 file shares. It also covers specific application use cases of deploying and migrating Microsoft SQL Server on enas file storage, and of using enas File Auto Recovery for replication.

February 2017

The information in this publication is provided "as is". Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any software described in this publication requires an applicable software license. Copyright 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA 02/17, White Paper, Part Number H. Dell EMC believes the information in this document is accurate as of its publication date. The information is subject to change without notice.

TABLE OF CONTENTS

EXECUTIVE SUMMARY
    Audience
    Terminology
VMAX PRODUCT OVERVIEW
    VMAX3 and FAST Service Level Objective (SLO)
    VMAX Guest Container Infrastructure Overview
VMAX ENAS DEPLOYMENT CONSIDERATIONS
    VMAX enas configuration options
    Storage provisioning tasks for enas
    VMAX enas volume and enas file system creation considerations
    VMAX enas to host connectivity best practices
    Number and size of enas devices for file systems
    Microsoft SMB 3.0 support and Continuous Availability
    Microsoft Offloaded Data Transfer (ODX)
FILE AUTO RECOVERY WITH SRDF/S
    Overview of File Auto Recovery
    FAR management using FARM
MICROSOFT APPLICATION DEPLOYMENT USE CASES WITH ENAS
    Test Overview
    Test Configuration
    Use case 1: SQL database run with change in FAST SLO
    Use case 2: Performance scalability with Data Movers
ENAS FAR USE CASES
    Test Overview
    Use case 3: VDM migration to another system for load balancing
CONCLUSION
REFERENCES
APPENDIX I: STEP-BY-STEP STORAGE PROVISIONING USING UNISPHERE
APPENDIX II: VMAX AND ENAS CLI
APPENDIX III: DISCOVERING ENAS SMI-S PROVIDER WITH SCVMM
APPENDIX IV: FILE AUTO RECOVERY CONFIGURATION AND MANAGEMENT


EXECUTIVE SUMMARY

The VMAX family of storage arrays (VMAX All Flash and VMAX3) exemplifies the next major step in evolving VMAX hardware and software to meet changing industry requirements for scalability, performance, and availability. VMAX3 represents a great advancement in making complex operations, such as storage management, provisioning, and setting performance goals, simple to execute and manage. In 2016, EMC released three newly engineered and purpose-built VMAX All Flash products: VMAX 250, VMAX 450, and VMAX 850, available with F and FX software packages. The new VMAX architecture uses the latest, most cost-efficient 3D NAND flash drive technology, combined with multi-dimensional scale, large write-cache buffering, and back-end write aggregation, to deliver high IOPS, high bandwidth, and low latency.

In addition to traditional block storage, VMAX now offers file support using embedded network attached storage (enas) through a new hypervisor layer, which positions VMAX3 as a converged solution for both file and block storage. VMAX3 enas offers consolidated file storage across the datacenter and reduces file deployment costs by eliminating the need for separate hardware. Because VMAX3 enas runs directly on VMAX3 directors, it offers the highest level of reliability and availability. enas benefits from VMAX3 data services, including Service Level Objective (SLO)-based provisioning. Dynamic host I/O limits for IOPS and bandwidth are offered on VMAX enas as well, making it easy to manage performance and throughput for both block and file storage. VMAX enas uses VNX file storage management features such as Automatic Volume Management (AVM), different types of network virtual devices, and multi-protocol file access including NFS 3, NFS 4, SMB 2.0, and SMB 3.0 for Microsoft Windows using IPv4 and IPv6.
The advanced features of VMAX enas for Microsoft environments include offloaded data transfer (ODX), multipath I/O (MPIO), and jumbo frame support, which allow users to make optimal use of resources for best performance. VMAX enas supports data protection for files using easy-to-schedule periodic snapshots as well as local and remote file system replication. Common enas use cases include running Oracle on NFS, VMware on NFS, Microsoft SQL Server on SMB 3.0, home directories, file shares, and consolidating Windows servers. File Auto Recovery (FAR) integrates enas with industry-standard VMAX remote replication using SRDF. FAR, with its manual and automatic failover capabilities, offers load balancing and migration of file-based applications between local and remote storage arrays. An enhanced version of File Auto Recovery Manager (FARM), a GUI-based application to manage FAR, was also introduced. This white paper explains the basic VMAX design and operations with regard to storage provisioning, performance management, and deployment best practices for file-based storage using VMAX enas. It also covers how VMAX enas simplifies the management of file storage in a Microsoft SMB 3.0 environment, using Microsoft SQL Server examples. The paper also covers VMAX enas FAR use cases for Microsoft SQL Server databases.

Note: Unless otherwise specified, this document pertains to both the VMAX All Flash and VMAX3 families of storage systems.

AUDIENCE

This white paper is intended for database and system administrators, storage administrators, and system architects who are responsible for implementing, managing, and maintaining Microsoft applications in SMB environments with VMAX storage systems. Readers should have some familiarity with the EMC family of storage arrays, including EMC VMAX and VNX.

TERMINOLOGY

The following table explains important terms used in this paper.
Term: Description

AVM: Automatic Volume Management. Used in enas to manage volumes and file systems.

CIFS: Common Internet File System. An access protocol that allows access to files and folders from Windows hosts located on a network. It is based on Microsoft's SMB protocol.

enas Disk Volume: enas volume that equates to VMAX block devices presented to enas by using the appropriate masking view. enas uses the disk volume as a basic building block to create other types of volumes.

enas Metavolume: A logical volume on which an enas file system must be created. The metavolume provides expandable storage capacity that might be needed to dynamically expand a file system, and a means to form a logical volume that is larger than a single disk. A metavolume can include disk volumes, slice volumes, stripe volumes, or other metavolumes.

enas Slice Volume: Volume carved out of an enas Disk Volume to create a smaller volume for manageability.

enas Stripe Volume: Volume organized into a set of interlaced stripes on Disk or Slice Volumes to improve volume performance.

FAR: File Auto Recovery. Feature that performs synchronous replication of enas-based file systems.

FARM: File Auto Recovery Manager. Windows-based utility that allows automated and manual failover of enas replicated file systems.

FAST: Fully Automated Storage Tiering. Automatically moves active data to high-performance storage tiers and inactive data to low-cost, high-capacity storage tiers.

Host Initiator Group (IG): A collection of host bus adapter (HBA) ports for storage accessibility.

HYPERMAX OS: An open, converged storage hypervisor and operating system. It enables VMAX to embed storage infrastructure services like data mobility and data protection directly in the array. This delivers new levels of data center efficiency and consolidation by reducing footprint and energy requirements. In addition, HYPERMAX OS delivers the ability to perform real-time and non-disruptive data services.

Hypervisor: A software capability that virtualizes hardware, creating and running virtual machines and hosting guests. For example, HYPERMAX OS acts as a hypervisor to create and run containers.

Masking View (MV): A construct that binds IG, PG, and SG together and allows automatic mapping and masking of storage devices to hosts for ease of storage provisioning.

Port Group (PG): A collection of VMAX front-end (FA) ports used for storage provisioning for hosts.

SMB: The Server Message Block (SMB) protocol is a network file sharing protocol. As implemented in Microsoft Windows, it is known as the Microsoft SMB Protocol. The set of message packets that defines a particular version of the protocol is called a dialect. The Common Internet File System (CIFS) protocol is a dialect of SMB. The latest version of SMB is 3.0.

Storage Group (SG): A collection of VMAX devices that are host addressable. A Storage Group can be used to (a) present devices to hosts (LUN masking), (b) specify FAST Service Levels (SLOs) for a group of devices, and (c) manage grouping of devices for replication software such as SnapVX and SRDF.

VDM: Virtual Data Mover. Instance of an enas Data Mover that is portable and can be replicated.

VMAX Container: The virtual machine created and provided by HYPERMAX OS.

VMAX CTD: Cut-Through Driver. A proprietary driver that allows the VMAX hypervisor layer to access VMAX storage devices directly.

VMAX PRODUCT OVERVIEW

The EMC VMAX family of storage arrays is built on the strategy of simple, intelligent, modular storage. The VMAX incorporates a Dynamic Virtual Matrix interface that connects and shares resources across all VMAX engines, allowing the storage array to seamlessly grow from an entry-level configuration into the world's largest storage array. It provides the highest levels of performance and availability, featuring new hardware and software capabilities. The newest additions to the VMAX family (VMAX 250, 450, and 850) deliver the latest in Tier-1 scale-out multi-controller architecture with consolidation and efficiency for the enterprise. They offer dramatic increases in floor tile density, high-capacity flash, and hard disk drives in dense enclosures for both 2.5" and 3.5" drives, and support both block and file (enas) storage. The VMAX family of storage arrays comes pre-configured from the factory to simplify deployment at customer sites and minimize time to first I/O. Each array uses virtual provisioning to allow easy and quick storage provisioning. VMAX can ship as an all-flash array with a combination of EFDs (Enterprise Flash Drives) and a large persistent cache that accelerates both writes and reads even further. It can also ship as hybrid, multi-tier storage that excels in providing performance management based on SLOs. The new VMAX hardware architecture comes with more CPU power, a larger persistent cache, and a new Dynamic Virtual Matrix dual InfiniBand fabric interconnect that creates an extremely fast internal memory-to-memory and data-copy fabric. Figure 1 shows possible VMAX components. Refer to EMC documentation and release notes to find the most up-to-date supported components.
Figure 1. VMAX All Flash storage array:
- 1 to 8 redundant VMAX3 engines
- Up to 4 PB usable capacity
- Up to 192 FC host ports
- Up to 16 TB global memory (mirrored)
- Up to 384 cores, 2.7 GHz Intel Xeon E v2

VMAX3 AND FAST SERVICE LEVEL OBJECTIVE (SLO)

With VMAX3, FAST is enhanced to include both intelligent storage provisioning and performance management, using SLOs. SLOs automate the allocation and distribution of application data to the correct data pool (and therefore storage tier) without manual intervention. Simply choose the SLO (for example, Platinum, Gold, or Silver) that best suits the application requirements. SLOs are tied to the expected average I/O latency for both reads and writes. Therefore, both the initial provisioning and the application's ongoing performance are automatically measured and managed based on compliance to storage tiers and performance goals. FAST continuously samples the storage activity and every 10 minutes, if necessary, moves data at FAST's sub-LUN granularity of 5.25 MB (42 extents of 128 KB). SLOs can be dynamically changed at any time. FAST continuously monitors and adjusts data location at the sub-LUN granularity across the available storage tiers to match the performance goals provided. All this is done automatically, within the VMAX3 storage array, without having to deploy complex application ILM (Information Lifecycle Management) strategies or use host resources for migrating data due to performance needs.

VMAX GUEST CONTAINER INFRASTRUCTURE OVERVIEW

HYPERMAX OS has incorporated a lightweight hypervisor that allows Virtual Machines (VMs) to run within VMAX3. It combines industry-leading high availability, I/O management, data integrity validation, quality of service, data security, and storage tiering with an open application platform. HYPERMAX OS features a real-time, non-disruptive storage hypervisor that manages and protects embedded data services (running on virtual machines) by extending VMAX3 high availability to data services that traditionally have run outside of the array. HYPERMAX OS provides the needed infrastructure to run guest virtual machines. enas uses the hypervisor layer provided by HYPERMAX OS to create and run a set of virtual machines (containers) on VMAX3 controllers.
This embedded storage hypervisor reduces external hardware and networking requirements, and delivers the highest levels of availability with lower latency. Each VMAX3 engine has two directors. Each director can support multiple emulations, each emulation providing a different functionality to the storage array. The Front End (FA) emulation, for example, supports host access to the storage. enas components run as virtual machines within the FA emulation, using allocated director resources including assigned CPU cores and memory. These virtual machines host two elements of enas, software Data Movers (DM) and Control Stations (CS), and are distributed based on the mirrored-pair architecture of VMAX3 to evenly consume VMAX3 resources for both performance and capacity. The VMAX3 proprietary Cut-Through Driver (CTD) allows the Guest Operating System (GOS) of the VM to access VMAX3 storage for its use. The GOS can be assigned Ethernet and FC I/O modules for its exclusive usage during the configuration and installation process. The Control Stations and Data Movers use an internal network to communicate with each other. Figure 2 shows the components of enas and their interconnections on a single-engine system.

Figure 2. enas architecture for single-engine VMAX3

VMAX ENAS DEPLOYMENT CONSIDERATIONS

Embedded NAS (enas) extends the value of VMAX3 to file storage by including vital enterprise features such as FAST Service Level Objective-based provisioning and performance management, and host I/O limits. VMAX3 with enas is a multi-controller NAS solution, designed for customers requiring consolidation of block and file storage in mission-critical environments. enas supports

equivalent VNX2 NAS capabilities, features, and functionality as found in the VNX2 File operating environment. Refer to the VNX2 documentation on support.emc.com for details.

VMAX ENAS CONFIGURATION OPTIONS

The default minimum configuration for enas on VMAX 100K includes two Control Station VMs and two Data Mover VMs. A maximum of eight Data Movers, seven active and one standby, can be configured for VMAX 200K and VMAX 400K models. Logical cores, memory, and the number of I/O modules for the VMs come pre-configured from the factory. For host connectivity, the following I/O modules are supported: 4-port 1 GbE BaseT, 2-port 10 GbE BaseT, and 2-port 10 GbE optical. Refer to the enas support matrix for supported configurations. An enas Data Mover on VMAX 200K and 400K can have up to six Ethernet I/O modules, while on VMAX 100K it can have up to four Ethernet I/O modules. Note that each I/O module occupies a slot that could otherwise be used by an FC module for block connectivity to the host. It is important to find a balance between file and block usage on the VMAX3 system when determining the enas configuration. Table 1 shows enas configurations for various VMAX3 models.

Table 1. enas configurations for the VMAX family. For each model (VMAX 250F/FX, 450F/FX, 850F/FX, 100K, 200K, 400K), the table lists the Data Mover (DM) logical cores, memory (GB), and I/O modules; the Control Station (CS) logical cores and memory (no CS I/O modules are required on any model); and the maximum number of DMs supported.

Note: Check the EMC VMAX enas support matrix for the latest information regarding supported configurations.

enas comes preconfigured with its boot and control volumes in their own storage group (SG). It also has a preconfigured port group (PG) and an initiator group (IG), so that all the administrator has to do is create volumes for user data in a storage group and mask the storage group to the enas Data Movers. For load balancing and high availability, CS and DM instances are distributed evenly across director boards based on the system configuration.
Configuration details discussed in this section are for information purposes only.

STORAGE PROVISIONING TASKS FOR ENAS

VMAX3 comes pre-configured with data pools and a Storage Resource Pool (SRP). With enas, even the boot and control volumes are pre-configured. During configuration, you need to create user devices for enas, create file systems on those devices, and export the file systems to the host using the CIFS/SMB protocol. You can create file systems in a number of ways:

1. Use the Unisphere for VMAX3 and Unisphere for VNX provisioning wizards.
   1.1 In Unisphere for VMAX, use the File Dashboard to provision storage for enas. Provide an appropriate storage group name, select an SLO for the storage group, choose the number of devices, and select a size for each device.
   1.2 enas will discover the storage group created in the step above as a mapped storage pool. Launch Unisphere for VNX, create file systems from the pool, and export them.
2. Use the CLI (refer to Appendix II for steps).
   2.1 Use the Solutions Enabler CLI to create devices and a masking view on VMAX.
   2.2 Use the enas Control Station CLI to create a file system and export it over CIFS. See Appendix II for instructions on using the CLI to create file systems.
3. Use EMC SMI-S and Microsoft SCVMM. Refer to Appendix III for more details about discovering the enas provider.
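To make the CLI path in option 2 concrete, the sketch below walks through both halves: creating devices and a storage group on the VMAX side, then building and exporting a file system from the enas Control Station. The SID, names, sizes, and device counts are hypothetical, and exact flags vary by Solutions Enabler and enas release; Appendix II contains the authoritative steps.

```shell
# --- VMAX side (Solutions Enabler; SID and names are hypothetical) ---
# Create a storage group with an SLO for enas user data.
symsg -sid 001 create enas_sql_sg -slo Gold

# Create eight thin devices with the file (CELERRA) emulation and
# add them to the storage group in one configuration session.
symconfigure -sid 001 -cmd \
  "create dev count=8, size=500 GB, emulation=CELERRA_FBA, config=TDEV, sg=enas_sql_sg;" commit

# Associate the storage group with the pre-configured enas masking view
# (the enas initiator group and port group ship pre-created).

# --- enas side (Control Station CLI; pool and names are hypothetical) ---
# The storage group appears as a mapped storage pool; list pools to find it.
nas_pool -list

# Create a file system from the mapped pool and mount it on a Data Mover.
nas_fs -name sqlfs01 -create size=2T pool=enas_sql_sg
server_mount server_2 sqlfs01 /sqlfs01

# Export the file system as an SMB/CIFS share.
server_export server_2 -P cifs -name sqlshare /sqlfs01
```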

VMAX ENAS VOLUME AND ENAS FILE SYSTEM CREATION CONSIDERATIONS

enas uses VMAX3 thin devices and implements an optimized volume management layout for the storage group created for enas use. The enas volume management layer provides two options, Automatic Volume Management (AVM) and Manual Volume Management (MVM), to define an optimal volume layout for file systems to meet different application workload profiles.

Using Automatic Volume Management (AVM)

Unisphere for VNX supports AVM to simplify the selection of striping, concatenation, slicing, and volume creation, optimized by workload, for ease of storage management for enas. These are the elements of AVM:

Mapped storage pools: VMAX3 storage groups with different SLOs and workloads based on their definition at the VMAX3 block level.

Auto extend: A file system created with AVM can be configured to automatically extend when it reaches a certain predefined threshold.

Striping: When the storage administrator requests a file system of a certain size, the enas system creates a striped volume of the required size across a set of devices in the mapped storage pool, or creates an enas metavolume as necessary from the enas storage pool. The default stripe size for system-defined storage pools is 256 KB. The algorithm that AVM uses looks for a set of eight enas disk volumes. If a set of eight disk volumes is not found, the algorithm looks for a set of four, two, or one disk volumes, based on availability. AVM stripes the disk volumes together if they are all the same size. If the disk volumes are not the same size, AVM creates a metavolume on top of the disk volumes. AVM then adds the stripe or the metavolume to the storage pool.

Note: For best performance results, create the enas storage group with a multiple of eight devices. AVM, by default, stripes file systems across eight devices if eight or more volumes are present in the storage group.
Using fewer than eight devices per file system will degrade performance. For efficiency, create file systems that will be approximately 85% full with user data. If a file system needs to be extended at a later time, new devices can be added to the storage pool.

Using Manual Volume Management (MVM)

Although AVM is a simple and preferred way to create volumes and file systems, automation can limit control over the location of the storage allocated to a file system. Manual Volume Management allows the administrator to create and aggregate different volume types into usable file system storage that meets specific configuration needs.

Note: When using MVM, for best performance results, stripe the file system across a multiple of eight devices.

VMAX ENAS TO HOST CONNECTIVITY BEST PRACTICES

When planning host to enas connectivity for performance and availability, connect at least two physical ports from each Data Mover to the network. Similarly, connect at least two ports from each host to the network as well. In this way, even in the case of a component failure, enas can continue to service host I/O. For best performance and availability, use multiple file systems and spread them across all the Data Movers, serviced by different Ethernet interfaces. Each share created on a Data Mover is accessible from all ports on that Data Mover; therefore, it is essential that the host has connectivity to all the ports of the Data Mover. With SMB 3.0, the host can take advantage of load balancing and fault tolerance if multiple Ethernet ports are available on the host. For non-SMB 3.0 environments requiring load balancing or high availability, before creating an IP interface, create virtual network devices available to the selected Data Movers, selecting a type of Ethernet channel, link aggregation, or Fail-Safe Network (FSN). The VNX documentation provides information about configuring virtual Ethernet devices.
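For the non-SMB 3.0 case above, the Control Station commands below sketch creating a link aggregation device, layering a Fail-Safe Network on top of it, and assigning an IP interface. The device and interface names (cge0, lacp0, fsn0, if_prod) and addresses are hypothetical, and the option strings vary by release; see the VNX networking documentation for the exact syntax.

```shell
# Aggregate two physical ports into one LACP trunk (hypothetical names).
server_sysconfig server_2 -virtual -name lacp0 -create trk \
    -option "device=cge0,cge1 protocol=lacp"

# Layer a Fail-Safe Network device over the trunk, with a standby port.
server_sysconfig server_2 -virtual -name fsn0 -create fsn \
    -option "primary=lacp0 device=lacp0,cge2"

# Create the IP interface on the virtual device and bring it up.
server_ifconfig server_2 -create -Device fsn0 -name if_prod \
    -protocol IP 10.10.10.20 255.255.255.0 10.10.10.255
server_ifconfig server_2 if_prod up
```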
NUMBER AND SIZE OF ENAS DEVICES FOR FILE SYSTEMS

VMAX lets you create thin devices with a capacity ranging from a few megabytes to multiple terabytes. With the wide striping that the VMAX storage resource pool provides, you might be tempted to create only a few very large host devices. However, you should use a reasonable number of enas devices and sizes, preferably in multiples of eight in each storage group for enas consumption. The reason is that each enas device creates its own I/O queue at the Data Mover, which can service a limited number of I/O operations simultaneously. A high level of database activity will generate more I/O than the queues can service, resulting in artificially long latencies if only a few large devices are used. Another benefit of using multiple devices is that, internally, VMAX can use more parallelism when operations such as FAST data movement and local and remote replication take place. By performing copy operations in parallel, the overall activity takes less time. Figure 3 shows SQL Server performance with different numbers of devices in each file system.

Figure 3. Application performance with different numbers of devices per enas file system

HOST I/O LIMITS AND ENAS

The Host I/O Limits quality of service (QoS) feature was introduced in the previous generation of VMAX arrays. It offers VMAX customers the option to place specific IOPS or bandwidth limits on any storage group, regardless of the SLO assigned to that group. An I/O limit set on a storage group provisioned for enas is applied cumulatively to all file systems carved out of that storage group. If the Host I/O Limits set at the storage group level need to be transparent to the corresponding enas file system, there must be a one-to-one correlation between them. Assigning a specific Host I/O Limit for IOPS, for example, to a storage group (file system) with low performance requirements can ensure that a spike in I/O demand will not saturate its storage, cause FAST to inadvertently migrate extents to higher tiers, or overload the storage, affecting the performance of more critical applications. Placing a specific IOPS limit on a storage group limits the total IOPS for the storage group, but it does not prevent FAST from moving data based on the SLO for that group. For example, a storage group with a Gold SLO may have data in both EFD and HDD tiers to satisfy SLO compliance, yet be limited to the IOPS set by Host I/O Limits.

USING VIRTUAL DATA MOVERS (VDM)

enas supports Virtual Data Movers (VDMs). VDMs are used for isolating Data Mover instances within a secure logical partition. VDMs are file system containers that give a Virtual Data Mover independence from other VDMs in the same Data Mover container. The VDM is a security mechanism as well as an enabling technology that simplifies the DR failover process.
It maintains file system context information (metadata) to avoid rebuilding these structures on failover. File systems can be mounted beneath VDMs that are logically isolated from each other. VDMs can be used to support multiple LDAP domains within a customer environment. VDMs can also be used to rebalance file loads across physical Data Movers, by moving VDMs and their underlying file systems between Data Movers. VDMs are important for in-company multi-tenancy, as well as for ease of use when deploying replication solutions.

DATA PROTECTION OF ENAS FILE SYSTEMS

File System Snapshots using SnapSure

enas provides additional levels of protection at the file system level using SnapSure, which provides point-in-time, logical images of a Production File System (PFS) called checkpoints. Checkpoints can be read-only or read-write. With SnapSure, you can restore a PFS to a point in time from a read-only or writeable checkpoint. Create checkpoints using the Data Protection tab in Unisphere for VNX. In Unisphere, you select the file system, the checkpoint name, and the pool to be used for storing the checkpoints. Use Unisphere for VNX to schedule automated snapshots, allowing at least 15 minutes between snapshots. Figure 4 shows the Unisphere screen used to create a snapshot (checkpoint).
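The same checkpoint operations are available from the Control Station CLI, which is convenient for scripting snapshot creation around, for example, a SQL Server backup window. The file system and checkpoint names below are hypothetical, and flags may differ by release:

```shell
# Create a read-only checkpoint of the production file system.
fs_ckpt sqlfs01 -name sqlfs01_ckpt1 -Create

# List the existing checkpoints for the file system.
fs_ckpt sqlfs01 -list

# Restore the production file system from a checkpoint
# (the PFS is rolled back to the checkpoint's point in time).
fs_ckpt sqlfs01_ckpt1 -Restore
```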

Figure 4. File system snapshots using SnapSure

File System Replication using Replicator

Use enas Replicator to replicate data at the file system level. Asynchronous local and remote replication is supported over the IP network. enas Replicator can perform continuous file system replication, a one-time copy of a file system, and VDM replication. The destination for a local replica can be the same Data Mover (loopback replication), or another Data Mover on the same enas system. A replication session creates a point-in-time copy of the source object and periodically transfers it to the destination to make sure that the source and destination are consistent. You can create up to four replication sessions for each file system. Configure replication using Unisphere for VNX (preferred) or the enas Control Station CLI. To set up a replication session in Unisphere, specify the replication name, destination system, destination pool, and synchronization interval, as shown in Figure 5.

Figure 5. File system replication using Replicator
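From the Control Station CLI, a replication session can be created in one command once a Data Mover interconnect exists between the two sides. The session, file system, pool, and interconnect names below are hypothetical, and the flag set varies by release; Unisphere for VNX remains the preferred path:

```shell
# Replicate sqlfs01 over an existing interconnect, keeping the
# destination no more than 10 minutes out of sync.
nas_replicate -create sqlfs01_rep -source -fs sqlfs01 \
    -destination -pool dst_pool -interconnect cs0_to_cs1 \
    -max_time_out_of_sync 10

# Check the session status.
nas_replicate -info sqlfs01_rep
```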

MICROSOFT SMB 3.0 SUPPORT AND CONTINUOUS AVAILABILITY

enas supports SMB 3.0. One of the most important and relevant components of Windows Server 2012 for storage is the new CIFS capability released via SMB 3.0, particularly the Continuous Availability (CA) feature. The CA feature of SMB 3.0 allows Windows hosts to persistently access SMB shares without losing the session state during Data Mover failover. The CA feature is disabled by default on enas. It can be enabled and configured only through the enas CLI (see Appendix II for detailed steps). When CA is enabled on a share, the persistent handles option lets a CIFS server save specific metadata associated with an open file handle on disk. When a Data Mover failover occurs, the new primary Data Mover reads the metadata from disk before starting the CIFS service. The host (CIFS client) re-establishes its session to the Data Mover and attempts to re-open its files, and the Data Mover returns the persistent handle to the client. The end result is that there is no impact to the application accessing the open files, as long as the Data Mover failover time does not exceed the application timeout. This capability allows CIFS connections to endure client and file server failover processes. SMB 3.0 supports Multipath I/O (MPIO), in which multiple TCP connections can be associated with a given SMB session. If one TCP connection is broken due to network failure, the user session can continue using the remaining active TCP connections. MPIO provides transparent network failover and load balancing without any additional configuration. CIFS can be used as a robust connectivity method for SQL Server, SharePoint, and Hyper-V. Windows hosts can take advantage of multipathing and high availability by configuring multiple Ethernet ports on the hosts and the enas Data Movers.
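As a sketch of the CLI step referenced above, enabling CA amounts to exporting the share with the CA type option from the Control Station. The server, share, and path names are hypothetical; confirm the exact option syntax in Appendix II or the enas CLI reference.

```shell
# Export an SMB share with SMB 3.0 Continuous Availability enabled.
server_export server_2 -P cifs -name sqlshare -option type=CA /sqlfs01

# List the Data Mover's exports to verify the share and its options.
server_export server_2
```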
MICROSOFT OFFLOADED DATA TRANSFER (ODX)

enas supports Microsoft's Offloaded Data Transfer (ODX) feature on Windows Server 2012 and later. Instead of using buffered read and buffered write operations, Windows ODX starts the copy operation with an offload read and retrieves a token representing the data from the storage device. It then uses an offload write command with the token to request data movement from the source disk to the destination disk. The copy manager of enas performs the data movement according to the token. Use the Windows ODX feature to move large files or data through the high-speed storage network without any load on the IP network or host resources. Windows ODX significantly reduces client-server network traffic and CPU time usage during large data transfers, because all the data movement happens on the back-end storage network, as seen in Figure 6. ODX can be used in virtual machine deployment, massive data migration, and tiered storage device support. It can lower the cost of physical hardware deployment through the ODX and thin provisioning storage features.

Figure 6. Microsoft Offloaded Data Transfer with enas

JUMBO FRAME CONFIGURATION

For best performance, set the MTU to 9000 (jumbo frames) on the Data Mover Ethernet interfaces as well as on the Windows host Ethernet interface. Ensure that all intermediate Ethernet switches support jumbo frames. Figure 7 shows the jumbo frame settings at the Windows host Ethernet interface and at the enas Data Mover.
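The MTU settings shown in Figure 7 can also be applied from the command line. The interface and adapter names below are hypothetical, and every switch in the path must carry jumbo frames for the setting to take effect end to end:

```shell
# enas side: set MTU 9000 on the Data Mover interface, then verify.
server_ifconfig server_2 if_prod mtu=9000
server_ifconfig server_2 if_prod

# Windows side (run in an elevated PowerShell; adapter name hypothetical):
#   Set-NetAdapterAdvancedProperty -Name "Ethernet 2" `
#       -DisplayName "Jumbo Packet" -RegistryValue 9014

# Validate end to end from the Windows host with a non-fragmenting ping
# (8972 = 9000 bytes minus 28 bytes of IP/ICMP headers).
ping -f -l 8972 <data-mover-ip>
```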

Figure 7. Jumbo frame configuration using MTU settings

FILE AUTO RECOVERY WITH SRDF/S

This section gives a File Auto Recovery architectural overview and deployment best practices. It also provides use cases for File Auto Recovery on enas storage for Microsoft applications.

OVERVIEW OF FILE AUTO RECOVERY

File Auto Recovery (FAR) allows manual failover or migration of a Virtual Data Mover (VDM) from a source enas system to a destination enas system. This failover or move leverages block-level Symmetrix Remote Data Facility (SRDF) synchronous replication for zero data loss in the event of an unplanned outage. The feature consolidates VDMs, file systems, file system checkpoint schedules, CIFS servers, networking, and VDM configurations into one storage pool per VDM, which is synchronously replicated to the secondary site. It allows recovery of file servers at the secondary site when the source is unavailable. An option is also provided to recover and clean up the source system and make it ready as a future destination for failback operations. A VDM-level DR solution does not require a dedicated standby Data Mover. Two sites can act as standby sites for each other; in case of failover, the operational Data Mover takes on the additional load of the failed site. Figure 8 shows the FAR configuration.

Automated and manually initiated failover operations can be performed using EMC File Auto Recovery Manager (FARM). FARM allows monitoring of sync-replicated VDMs and triggers automatic failover based on Data Mover, file system, Control Station, or IP network unavailability at the source site. FARM also allows manually initiated failover and recovery of sync-replicated VDMs in the event of planned maintenance at the primary site. FARM must be installed on a Windows system with network access to the enas Control Station (CS) and Data Mover (DM) network interfaces to be monitored.

Figure 8. File Auto Recovery configuration setup

FAR DEPLOYMENT
FAR configuration requires the following steps:
1. Install and configure the source and destination enas systems.
2. Configure, map, and mask the additional enas control LUNs required for FAR.
3. Configure Control Station-to-Control Station communication.
4. Enable FAR, which also creates the NAS_DB mirror between the source and destination enas systems.
5. Configure a FAR-replicable VDM.
For more details regarding setup and configuration of enas File Auto Recovery, refer to the document EMC VMAX3 Family Embedded NAS File Auto Recovery with SRDF/S.
Considerations for FAR deployment:
1. Interfaces attached to a VDM are for the exclusive use of that VDM and cannot be used by the Data Mover. The Data Mover should have its own interfaces configured through which it can reach DNS, NTP, and domain controller servers. This is especially important if CIFS shares are configured.
2. For faster failover and cleanup, keep Data Movers in a healthy state. If a Data Mover has failed over to its local standby, it should be manually restored to the normal state as soon as possible.
3. Because FARM operates in an active/passive mode, FARM no longer actively monitors a VDM that failed over or was reversed to the secondary site. After the VDM Restore operation, choose Configure > VDM Configurations > Storage Settings > VDM, and select the VDM from the list. This action ensures that the VDM is monitored again by FARM.
4. VDMs are failed over sequentially, and each VDM takes at least three minutes to fail over. Consider this when estimating failover time and total outage in case of an unplanned outage.
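Consideration 4 above lends itself to a quick planning estimate: because VDMs fail over sequentially at roughly three minutes each, the total outage grows linearly with the VDM count. A simple sketch, using the three-minute lower bound stated above:

```python
def estimate_failover_minutes(vdm_count, minutes_per_vdm=3):
    """Lower-bound total failover time when VDMs fail over one after another."""
    if vdm_count < 0:
        raise ValueError("vdm_count must be non-negative")
    return vdm_count * minutes_per_vdm

print(estimate_failover_minutes(1))  # 3
print(estimate_failover_minutes(7))  # 21
```

At seven VDMs the sequential failover already approaches half an hour of elapsed time, which is one practical reason for the cap on sync sessions per system recommended in the best practices section.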

Figure 10. Setting VDM priorities for FARM automatic failover operation
FAR BEST PRACTICES
A VDM provides an abstraction that consolidates multiple file systems and the relevant NAS components. Because FAR operates on VDMs, the number of components and the size of the file systems behind a VDM determine the amount of data replicated by SRDF and the time it takes to recover all the file systems. When deploying FAR, configure VDMs with the importance of the applications and their recovery needs in mind. When using automatic failover, assign high priority to critical file systems to minimize RTO for mission-critical applications.
For a reasonable failover time, it is recommended not to have more than 6 or 7 VDM sync sessions per system. Each VDM being replicated should not use more than 8 VMAX devices in its storage pool; this helps keep the SRDF group at a manageable size. As with any other DR or load balancing deployment, periodic testing of the overall infrastructure and of file recovery ensures that the secondary site has enough resources to take on the additional load. The enas pool (the VMAX storage group used for enas) should have sufficient space to hold both the file systems and their checkpoints (snaps), since snaps must reside in the same pool (storage group) as their file systems.
FAR uses SRDF/S and leverages the industry-standard reliability and scalability available on VMAX3. FAR requires an initial sync between the source and destination sites, which leverages the block copy efficiency of SRDF. Once the SRDF groups are synced, there is minimal performance impact from this zero-data-loss file auto recovery solution. Figure 11 shows that FAR causes no performance impact after the initial sync is done.
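The sizing limits above (no more than 6 or 7 VDM sync sessions per system, and no more than 8 VMAX devices in a replicated VDM's storage pool) can be encoded as a pre-deployment check. An illustrative sketch, not an EMC tool:

```python
def check_far_sizing(sync_sessions, devices_per_vdm, max_sessions=7, max_devices=8):
    """Return a list of warnings against the FAR sizing guidelines.

    devices_per_vdm maps VDM name -> number of VMAX devices in its storage pool.
    """
    warnings = []
    if sync_sessions > max_sessions:
        warnings.append(
            f"{sync_sessions} VDM sync sessions exceeds the recommended {max_sessions}")
    for vdm, devices in devices_per_vdm.items():
        if devices > max_devices:
            warnings.append(
                f"VDM {vdm} uses {devices} devices; keep to {max_devices} or fewer")
    return warnings

print(check_far_sizing(6, {"VDM_1": 8, "VDM_2": 8}))  # [] -> within guidelines
print(check_far_sizing(9, {"VDM_1": 12}))             # two warnings
```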
Figure 11. File auto recovery performance impact (SQL Server batch req/sec over time: no FS replication, initial sync-up, then FS synchronous replication)
MICROSOFT APPLICATION DEPLOYMENT USE CASES WITH ENAS
This section covers examples of using Microsoft SQL Server on enas storage with SLO management. It also covers use cases for file recovery using VMAX3 enas FAR.
TEST OVERVIEW
Test use cases
These use cases are described in this section:

1. Single database performance using different VMAX3 FAST SLOs for the SQL Server data files.
2. Single database performance using different numbers of enas Data Movers for the SQL Server data files.
General test notes:
- OLTP1 was configured to run an OLTP workload, derived from an industry standard, with a 90/10% read/write ratio.
- No special database tuning was done, as the focus of the test was not on achieving maximum performance but on the comparative differences of a standard database workload.
- DATA and LOG file systems were created from a single VMAX3 storage group for ease of provisioning and performance management. A single storage group of eight 200GB devices was used for the data and log file systems.
- Data collection: storage performance metrics were gathered using Solutions Enabler and Unisphere; host performance statistics were collected using Windows Perfmon.
Figure 12. Test bed configuration details for OLTP SQL application with enas CIFS export
TEST CONFIGURATION
Database configuration details
The following tables show the environment that was deployed for all use cases. Table 2 shows the VMAX3 storage and enas environment, Table 3 shows the host environment, Table 4 shows the database's storage configuration, and Table 5 shows the SQL Server database layout details. Refer to Figure 12 for test bed configuration details.
Table 2. VMAX3 environment
- Storage array: VMAX 400K
- HYPERMAX OS
- Drive mix (excluding spares): 60 x 200GB EFD, RAID5 (3+1); 240 x 300GB 15K HDD, RAID1; 96 x 1TB 7K HDD, RAID6 (6+2)
- enas version
- enas H/W configuration: CS (2), GbE network

- enas H/W configuration (continued): DM (2), GbE x 2
Table 3. Host environment
- Microsoft SQL Server: SQL Server 2014 Enterprise Edition 64-bit
- Windows: Windows Server 2012 R2 64-bit
- Multipathing: EMC PowerPath 5.7 SP4 64-bit
- Host: 1 x Cisco C240, 96 GB memory
Table 4. Database configuration
- Database: OLTP1, size 1.2 TB
- DATA: 3 x 2 TB thin LUNs, default SRP, starting SLO Gold
- LOG: 1 x 2 TB thin LUNs, default SRP, starting SLO Gold
Table 5. SQL database layout details (database OLTP1)
- \\cifs1\oltp1_data1: MSSQL_OLTP_root.mdf, Fixed_1.ndf, Growing_1.ndf, Scaling_1.ndf (378GB)
- \\cifs2\oltp1_data2: Fixed_2.ndf, Growing_2.ndf, Scaling_2.ndf (370GB)
- \\cifs1\oltp1_data3: Fixed_3.ndf, Growing_3.ndf, Scaling_3.ndf (370GB)
- \\cifs2\oltp1_logs: OLTP1_log.ldf (200GB)
- SQL Server file groups: FIXED_FG, GROWING_FG, SCALING_FG
USE CASE 1 SQL DATABASE RUN WITH CHANGE IN FAST SLO
Objective: The purpose of this test case is to demonstrate how database performance can be controlled by changing the SLO on a storage group used for SQL Server data files residing on CIFS file systems.
Test case execution steps: Run an OLTP workload on the OLTP1 SQL Server database with the SQL Server data files and transaction log storage group on the Gold SLO. Run the test for four hours. At the end of the test, note the SQL Server database response time and SQL batch requests/sec. Change the SLO for the SQL Server storage group to Platinum and gather performance statistics. Repeat the test for the Diamond SLO.
Test results: The chart in Figure 13 shows the test results of Use Case 1, including the database transaction rate as measured in SQL batch requests per second and the SQL Server database response time (in milliseconds). Response time and batch requests per second both show incremental improvement as the SLO is changed from Gold to Diamond.
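The SLO change in the steps above is a Solutions Enabler operation (the symsg syntax appears in Appendix II). A small sketch that assembles the command line for each run of this use case; actually executing it requires a live array, so only command construction is shown:

```python
def build_slo_change(sid, storage_group, slo, workload=None):
    """Assemble the Solutions Enabler symsg command that re-assigns an SLO."""
    cmd = ["symsg", "-sid", str(sid), "-sg", storage_group, "set", "-slo", slo]
    if workload:
        cmd += ["-wl", workload]
    return cmd

# The three runs in this use case: Gold, then Platinum, then Diamond.
for slo in ("gold", "platinum", "diamond"):
    print(" ".join(build_slo_change(115, "SQL_SG", slo, workload="oltp")))
```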

Figure 13. SQL performance statistics as a direct effect of changes in SLO for the SQL storage group used by enas (average response time of 11 ms with Gold, 3 ms with Platinum, and 2 ms with Diamond)
VMAX3 promoted active data extents to higher-performance storage tiers, including more EFD capacity, as the SLO changed from Gold to Platinum; therefore, the transaction rate increased. I/O latencies were reduced with the larger EFD allocations. With the Gold SLO, the SQL Server database experienced an average latency of 11 ms, which improved to 3 ms with the Platinum SLO and to 2 ms with Diamond; these figures include enas latency overhead. The corresponding transaction rate increased from 415 with the Gold SLO to 1,378 with the Platinum SLO, and to 1,546 with the Diamond SLO.
USE CASE 2 PERFORMANCE SCALABILITY WITH DATA MOVERS
Objective: This test demonstrates near-linear performance scalability as the number of Data Movers is increased on an enas system.
Test case execution steps:
1. On VMAX3, set the SLO of the SQL Server data files storage group and the SQL Server transaction log storage group to Diamond.
2. Create file systems for data and logs and mount them from a single Data Mover.
3. Run the OLTP workload and gather performance statistics.
4. Repeat steps 2 and 3 with two and three Data Movers. Ensure that the file systems are evenly distributed across the Data Movers for each run.
Test results: Figure 14 shows SQL Server batch requests/sec and average SQL response time for the same database with one, two, and three Data Movers. As the chart shows, enas provides almost linear performance scaling while maintaining a fairly constant average response time. Since the backend storage and the amount of VMAX3 cache remained constant across all three configurations, they become the limiting factor in scaling.
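The near-linear claim can be quantified from the two Figure 14 chart values that are reported (947 batch req/sec with one Data Mover, 1,615 with two). A quick verification sketch; the mapping of those values to one and two Data Movers follows the scaling trend described in the text:

```python
def scaling_efficiency(baseline, scaled, factor):
    """Measured throughput as a fraction of perfect linear scaling."""
    return scaled / (baseline * factor)

eff = scaling_efficiency(947, 1615, 2)
print(f"{eff:.0%}")  # 85%
```

About 85% of ideal 2x scaling, which is consistent with "almost linear" given that backend storage and VMAX3 cache were held constant across the runs.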

Figure 14. SQL Server performance and scaling with number of Data Movers (SQL batch requests/sec of 947 with one Data Mover and 1,615 with two, at a near-constant average response time)
ENAS FAR USE CASES
This section covers VMAX3 enas FAR use cases.
TEST OVERVIEW
Test use cases
These use cases are described in this section:
1. Planned maintenance at the primary site.
2. Unplanned VDM failover from the primary to the secondary site.
3. VDM migration to another system for load balancing.
General test notes:
- Primary and secondary enas sites were set up, and SRDF groups were configured for synchronous replication for FAR.
- Applications were configured on enas SMB shares, and VDMs were configured to manage the FAR use cases.

Figure 15. Microsoft SQL Server FAR configuration
TEST CONFIGURATION
Database configuration details
The following tables show the environment that was deployed for all use cases. Table 6 shows the storage configuration with the enas FAR configuration, Table 7 shows the host environment, and Table 8 shows the enas VDM setup. Refer to Figure 15 for test bed configuration details. The FARM GUI is used wherever possible for FAR management in this section; Appendix IV describes the enas CLI that can also be used for FAR management.
Table 6. VMAX3 environment
- Storage array: VMAX 400K (R1 and R2)
- HYPERMAX OS: 5977 Q SR
- enas version
- enas H/W configuration: CS (2), GbE; DM (2), GbE x 2
- FARM version
Table 7. Host environment
- Microsoft SQL Server: SQL Server 2014 Enterprise Edition 64-bit
- Windows: Windows Server 2012 R2 64-bit
- Multipathing: EMC PowerPath 5.7 SP4 64-bit
- Host: 1 x Cisco C240, 96 GB memory

Table 8. enas environment (VDM 1)
- Storage aspect: NAS storage group (NAS_Data) with 8 devices per storage group; enas pool: SQL Pool (NAS_Data1); RDF group 101 (only one RDF group for all VDMs)
- NAS aspect: VDM 1 on DM 2; SQL FS1: Vdm1_fs1 (data), Vdm1_fs2 (logs)
- Application aspect: MS SQL Server data and logs
USE CASE 1 PLANNED MAINTENANCE AT THE PRIMARY SITE
Objective: The purpose of this test case is to understand how FAR can be used in the event of planned maintenance at the primary site.
Test case execution steps:
1. Gracefully shut down the application running on enas at the primary site.
2. On the application host, unmount/disconnect the SMB shares mounted from enas.
3. Shut down the AFM service in FARM if it is running.
4. Use the FARM GUI to fail over the VDM to the secondary site using the Reverse operation.
Detailed execution steps:
1. Detach or gracefully shut down the SQL Server databases running on enas at the primary site prior to planned maintenance of the site.
2. Disconnect the SMB shares mounted from enas.
3. As shown in Figure 16, launch the FARM application and shut down the FARM service if it is running. The service state should appear as Stopped at the end of this step. Once the FARM service has been shut down, select one or more desired VDM sessions and execute the Reverse operation.
4. As shown in Figure 17, confirm the execution of the Reverse operation.
5. Monitor the completion of the Reverse operation.
6. Once the Reverse operation has succeeded, mount the SMB shares back on the application host using the original share name and SMB server IP address.
7. Restart the application.
Figure 16. AFM Reverse operation for planned failover
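The planned-maintenance sequence above follows a strict order: quiesce the application, disconnect the shares, stop the FARM service, run the Reverse operation, then remount and restart. An illustrative runbook-ordering sketch; the step names are ours, and this is not FARM code:

```python
PLANNED_FAILOVER_ORDER = [
    "shutdown_application",
    "unmount_smb_shares",
    "stop_farm_service",
    "reverse_vdm",
    "remount_smb_shares",
    "restart_application",
]

def next_step(completed):
    """Return the next planned-failover step to run, or None when the runbook is done."""
    for step in PLANNED_FAILOVER_ORDER:
        if step not in completed:
            return step
    return None

print(next_step([]))                                               # shutdown_application
print(next_step(["shutdown_application", "unmount_smb_shares"]))   # stop_farm_service
print(next_step(PLANNED_FAILOVER_ORDER))                           # None
```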

Figure 17. Confirm Reverse operation
:50: "Reverse SyncRep Session session1" is running
:50: Detected Active Control Station(Primary):
:50: Detected Active Control Station(Secondary):
:50: VDM Prepared Reverse VDM_1 from to
:51: Now doing precondition check... done: 18 s
:51: Now doing health check... done: 12 s
:51: Now cleaning local... done: 2 s
:51: Service outage start
:51: INFO: In case the 'turning down remote network interface(s)' fail, refer to the CCMD to access the file systems and/or ckpt file systems from the client
:51: Now turning down remote network interface(s)... done: 10 s
:51: INFO: In case the SRDF switch failure, refer to the CCMD for remounting R1's file systems, checkpoint file systems
:51: Now switching the session (may take several minutes)... done: 18 s
:52: Now importing sync replica of NAS database... done: 22 s
:52: Now creating VDM... done: 5 s
:52: Now importing VDM settings... done: 0 s
:52: Now mounting exported FS(s)/checkpoint(s)... done: 2 s
:52: Now loading VDM... done: 1 s
:52: Now turning up local network interface(s)... done: 1 s
:52: Service outage end: 59 s
:52:
:52: Now mounting unexported FS(s)/checkpoint(s)... done: 2 s
:52: Now importing schedule(s)... done: 0 s
:52: Now unloading remote VDM/FS(s)/checkpoint(s)... done: 19 s
:52: Now cleaning remote... done: 9 s
:52: Elapsed time: 121s
:52: done
:52: VDM VDM_1 Reverse OK
:53: Configuration Updated
:53: "Reverse SyncRep Session session1" completed.
Figure 18. Monitor reverse operation log on FARM
Test results: AFM with FAR on VMAX3 enas allows seamless maintenance of the primary site with minimal impact on the application. As soon as the VDM is migrated to the secondary site, the application can be restarted without any need for SMB share IP address changes or further recovery.
USE CASE 2 UNPLANNED VDM FAILOVER FROM PRIMARY TO SECONDARY
Objective: The purpose of this test case is to understand how FAR can be used in the event of an unplanned failover from the primary to the secondary site.
Test case execution steps:
1. When the primary site is not reachable, AFM initiates automatic failover to the secondary site. Ensure that the failover from primary to secondary is successful.
2. Mount the file shares from enas on the secondary site if needed and ensure that they are accessible.
3. Restore and restart the application on the secondary site.
4. The VDM can be failed back to the primary site once the primary site is fully restored.
5. In the event of an unplanned failover, the primary site is not cleaned up as part of the failover operation. Therefore, the primary site needs to be cleaned up first using the NAS CLI, which then resumes the reverse replication from the secondary to the primary site.

:18: Check DataMover server_2 of the primary site. Result: FAILED
:18: /home/nasadmin/.vmsm/fo.sh
:18: Now doing precondition check... done: 53 s
:18: Now doing health check... done: 2 s
:18: Now cleaning local... done: 3 s
:18:
:18: INFO: In case the SRDF switch failure, refer to the CCMD for remounting R1's file systems, checkpoint file systems
:18: Now switching the session (may take several minutes)... done: 8 s
:18: Now importing sync replica of NAS database
:18: started R1 configuration import
:18: applying R1 configuration to local site
:18: applying R1 Filesystem configuration to local site
:18: Updating R2 device configuration on local site
:18: Updated R2 device configuration on local site
:18: importing volume table
:18: imported volume table
:18: Updating R2 device configuration on local site
:18: Updated R2 device configuration on local site
:18: importing volume table
:18: imported volume table
:18: applied R1 Filesystem configuration to local site
:18: Marking devices for server in progress
:18: Updated the disk type
:18: started check disk reachability for R2 devices
:18: started check fs id and name conflict during config merge
:18: id =
:18: name = root_fs_vdm_vdm_
:18: id =
:18: name = vdm_2_fs
:18:
:18: importing sync replica of NAS database... done: 49 s
:18: Now creating VDM... done: 5 s
:18: Now importing VDM settings... done: 0 s
:18: Now mounting exported FS(s)/checkpoint(s)... done: 2 s
:18: Now loading VDM... done: 2 s
:18: Now turning up local network interface(s)... done: 1 s
:18: Service outage end: 125s
:18:
:18: Now mounting unexported FS(s)/checkpoint(s)... done: 0 s
:18: Now importing schedule(s)... done: 0 s
:18: Elapsed time: 127s
:18: done
Figure 19. Monitoring unplanned failover log
Detailed execution steps:
1. If AFM detects that the VDM on the primary site is not reachable, it initiates a failover to the secondary site. Using the nas_syncrep CLI, verify that the replication has stopped.
Use the FARM GUI to check the current state of the failover process in the logs window. Ensure that it shows the failover has completed successfully.
2. Mount the SMB shares on the application host from the enas on the secondary site. Since the network configuration moved from the primary to the secondary site as part of the failover operation, the host continues to use the same IP addresses for the SMB server. Thus the SMB shares can be mounted using the original share name and SMB server IP address.
3. Restore and restart the application as needed.
Note: The Data Mover hosting the VDM will be rebooted as part of the cleanup process, which will affect other VDMs hosted by the same Data Mover.
Resuming primary site operations
1. Once the primary site comes back up, restore the VDM to the primary site. After an unplanned failover, first clean up the primary site using the enas CLI. After the primary site cleanup completes, replication resumes from the secondary site to the primary site. Issue the command shown below on the primary site enas Control Station for proper cleanup.
2. Once the cleanup operation is initiated, verify that the reverse replication from the secondary to the primary site has started.
3. To resume the VDM on the primary site, use the FARM Restore option on the desired VDM sessions, as shown in Figure 20.
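Verifying that replication has stopped or restarted means reading nas_syncrep session listings like the one in the cleanup example. A minimal parsing sketch for output shaped like that sample listing; the column layout is assumed from the sample, and real output may vary:

```python
def parse_syncrep_sessions(output):
    """Parse 'id name vdm_name remote_system session_status' rows from nas_syncrep output."""
    sessions = []
    lines = [line for line in output.strip().splitlines() if line.strip()]
    for line in lines[1:]:  # skip the header row
        fields = line.split()
        sessions.append({"id": fields[0], "name": fields[1],
                         "vdm": fields[2], "status": fields[-1]})
    return sessions

sample = """id name vdm_name remote_system session_status
4096 session1 VDM_1 <--CS in_sync"""
for session in parse_syncrep_sessions(sample):
    print(session["name"], session["status"])  # session1 in_sync
```

A status of in_sync confirms that the SRDF-backed session is replicating again; anything else warrants investigation before attempting the Restore.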

Cleaning up all VDMs on the primary site:
$ nas_syncrep Clean -all
id name vdm_name remote_system session_status
4096 session1 VDM_1 <--CS in_sync
To clean up specific VDMs on the primary site:
$ nas_syncrep Clean session1
WARNING: You have just issued the nas_syncrep -Clean command. This may result in a reboot of the original source Data Mover that the VDM was failed over from. Verify whether or not you have working VDM(s)/FS(s)/checkpoint(s) on this Data Mover and plan for this reboot accordingly. Running the nas_syncrep -Clean command while you have working VDM(s)/FS(s)/checkpoint(s) on this Data Mover will result in Data Unavailability during the reboot.
Are you sure you want to proceed? [yes or no] yes
Now cleaning session session1 (may take several minutes)... done
Now rebooting Data Mover server_2... done
Now starting session session1... done
Figure 20. Restoring VDMs back to the primary site using FARM
Test results: AFM with FAR on VMAX3 enas handles unplanned failover with minimal impact on application availability. Once the primary site is up, the Restore operation migrates the file services back to the primary site.
USE CASE 3 VDM MIGRATION TO ANOTHER SYSTEM FOR LOAD BALANCING
Objective: The purpose of this test case is to understand how FAR can be used to provide load balancing.
Test case execution steps:
1. Identify the VDMs that need to be migrated to another site for load balancing.
2. Configure the failover of the VDMs using AFM.
3. Use the planned failover steps outlined in Use Case 1 to migrate the VDMs to the other site.
4. Mount the SMB shares and start the applications on the remote site after the VDM failover.
Test results: VMAX3 enas with FAR allows load balancing across the enas sites using planned failover of specific VDMs from the primary site to the secondary site. Figure 21 shows the effect of load balancing file services using the primary and secondary sites.
As shown, migrating a VDM to the secondary site improved performance for both databases through effective resource utilization at both sites.

Figure 21. SQL DB load balancing after VDM FAR migration (SQL batch requests/sec for VDM1 and VDM2, with both VDMs on the same enas versus VDM2 migrated to the remote enas)
CONCLUSION
VMAX3 with enas provides a consolidation platform for Microsoft server applications in SMB environments. It provides an easy way to provision, manage, and operate file and block environments while keeping application performance needs in mind. SLO management allows applications to meet compliance and latency requirements. With SMB 3.0 support, enas on VMAX3 provides data transfer offloading and load balancing benefits. Seamless, easy-to-use Unisphere UIs help the user perform file system provisioning in just a few steps. File Auto Recovery integrates enas with proven block-level replication using SRDF to allow load balancing as well as planned and unplanned failover for enas-based applications.
REFERENCES
- EMC VMAX3 Family Documentation Set
- Deployment Best Practice for SQL Server with VMAX3 Service Level Objective Management
- EMC VNX2 Series Documentation
- Managing Volumes and File Systems on VNX Manually
- VNX Replicator Documentation
- Microsoft Offloaded Data Transfer
- VNX SnapSure Documentation
- Virtual Data Movers on EMC VNX
- Configuring and Managing Network High Availability on VNX
- VMAX enas File Auto Recovery with SRDF/S
- EMC VMAX3 Family Embedded NAS File Auto Recovery Manager Product Guide
APPENDIX I STEP-BY-STEP STORAGE PROVISIONING USING UNISPHERE
CREATE STORAGE FOR ENAS USING UNISPHERE FOR VMAX
Device creation and masking on VMAX3 includes the following tasks:

- Create a Storage Group (SG): an SG is a grouping of devices. The SLO management and masking view controls are applied at the storage group level.
- Create a Masking View (MV): a masking view brings together a combination of storage group, port group, and initiator group. You do not need to create an initiator group or a port group for enas, as they are already created by the system at enas install time.
To provision storage for file using the System Dashboard in Unisphere for VMAX:
1. Select Provision Storage for File from COMMON TASKS.
2. In the new window that opens, provide a Storage Group Name, select the Service Level for the storage group, and indicate the number of devices and the size per device, as shown in Figure 22. Create eight devices, or multiples of eight devices, for each storage group. The Storage Group Name provided in this step will appear as the Storage Pool Name on enas when configuring file systems. Do not add devices to the system-created storage group EMBEDDED_NAS_DM_SG; it is exclusively for enas boot and control LUNs and is for internal use only.
Figure 22. Storage provisioning for enas using Unisphere for VMAX
CREATE MASKING VIEW (MV)
Provisioning storage for file as described above creates a masking view as well, so you do not need to create one. However, if an existing storage group needs to be used by enas, use the system pre-configured port group EMBEDDED_NAS_DM_PG and initiator group EMBEDDED_NAS_DM_IG to create a masking view.
CREATE FILE SYSTEMS AND SMB SHARE
Use Unisphere for VNX to create file systems and export them as an SMB share or an NFS export. Storage for file (enas) that was created using Unisphere for VMAX is automatically discovered by enas as storage pools. SMB share creation is a two-step process:
- Create a file system
- Configure the file system as an SMB share
Figure 23 shows the Unisphere screen from which these tasks are accomplished.
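The device-count guidance above (eight devices, or multiples of eight, per storage group) can be wrapped in a quick provisioning check. An illustrative sketch; the function name is ours:

```python
def validate_enas_storage_group(device_count, device_size_gb):
    """Enforce the multiples-of-eight guideline and report the resulting pool capacity."""
    if device_count <= 0 or device_count % 8 != 0:
        raise ValueError("create eight devices, or multiples of eight, per storage group")
    # Capacity the enas storage pool will expose, in GB.
    return device_count * device_size_gb

print(validate_enas_storage_group(8, 200))  # 1600
```

Eight 200GB devices yield a 1600GB pool, matching the storage group used for the test bed in this paper.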

Figure 23. File system creation and export using Unisphere for VNX
To create a file system, select the storage pool to be used for the file system, indicate the size of the file system and its maximum capacity, and enable Auto Extend (if required), as shown in Figure 24. Check the Slice Volumes option if you need to create multiple volumes from the same pool; otherwise the file system will consume all of the available space in the storage pool. If you are using manual volume management, first create a metavolume for the file system.
Figure 24. Creating enas file system
Any file system created on enas can be exported as a CIFS or NFS share. To configure a CIFS share, complete the Data Mover, CIFS Share Name, File System, and CIFS Server fields, as shown in Figure 25. Set up a CIFS server on the Data Mover before creating any CIFS shares. The system administrator must configure the CIFS server, including registration with an Active Directory server, before configuring SMB shares.
Note: Creating and configuring enas Data Movers and domain controllers are prerequisites that need to be completed ahead of time, and are beyond the scope of this paper.
An SMB/CIFS share is configured on top of a file system. Select the file system to be shared and give it a share name, which is the name by which hosts will access it. If multiple CIFS servers were created on the Data Mover, you can select a particular CIFS server. Figure 25 shows the parameters required for creating an SMB share.

Figure 25. Creating an SMB share via enas

APPENDIX II VMAX AND ENAS CLI
SAMPLE VMAX3 SOLUTIONS ENABLER COMMANDS TO CREATE STORAGE FOR ENAS
Create a storage group to be used for enas consumption, using the default SRP:
# symsg -sid 115 create SQL_SG -srp DEFAULT_SRP
Assign an SLO to the storage group, with workload type OLTP:
# symsg -sid 115 -sg SQL_SG set -slo gold -wl oltp
Create a masking view for the pre-created or already existing storage group SQL_SG, using the system-defined port group and initiator group:
# symaccess -sid 115 create view -name NAS_SQL -sg SQL_SG -pg EMBEDDED_NAS_DM_PG -ig EMBEDDED_NAS_DM_IG -celerra
Add devices for enas. The -lun option is required and should have a value of 10 or greater, since LUN IDs 00 to 0F are reserved for system use:
# symaccess -sid 115 -type storage -name NAS_SQL add devs 153:162 -lun 153 -celerra
SAMPLE VMAX ENAS CLI COMMANDS TO CREATE FILE SYSTEMS, MOUNT POINTS, AND CIFS EXPORTS
Create a file system from the storage pool backed by the storage group created with Solutions Enabler above:
# nas_fs -name FS1 -type uxfs -create size=200g pool=SQL_SG -option slice=y worm=off
For backup or disaster recovery, if the enas file system journal logs must be placed on the created file system itself, add log_type=split to the command:
# nas_fs -name FS1 -type uxfs -create size=200g pool=SQL_SG -option slice=y worm=off log_type=split
Optional: To create a file system from an existing metavolume instead of a pool (the metavolume name is M_1_2 in this example):
# nas_fs -name FS1 -type uxfs -create M_1_2 worm=off
Create a mount point for the file system created above:
# server_mountpoint server_2 -c /FS1
Mount the file system (default):
# server_mount server_2 FS1 /FS1
Mount the file system for SMB 3.0 (with continuous availability):
# server_mount server_2 -o smbca FS1 /FS1
Export the file system as a CIFS export (default):
# server_export server_2 -P cifs -name FS1 /FS1
Export the file system as a CIFS export with type=ca (continuous availability, SMB 3.0):
# server_export server_2 -P cifs -name FS1 -o type=ca /FS1
Add a DNS server:
# server_dns server_2 -p tcp domainsql.local ,
Start the CIFS service:
# server_setup server_2 -P cifs -o start
Note: Using the enas CLI requires SSH access to the enas Control Station.

Add the computer name:
# server_cifs server_2 -add compname=cifs_1, domain=domainsql.local
Join the domain and authenticate:
# server_cifs server_2 -Join compname=cifs_1.domainsql.local, domain=domainsql.local, admin=sqladmin
server_2: Enter Password: *********

APPENDIX III DISCOVERING ENAS SMI-S PROVIDER WITH SCVMM
The enas SMI-S Provider is pre-installed and runs natively on the Control Station itself. This section describes the steps needed to discover the enas Control Station SMI-S Provider for SMB share provisioning in the System Center Virtual Machine Manager (SCVMM) console. This process enables the following operations:
- Discover existing SMB shares created by VNX Unisphere or the enas CLI
- Create new file systems and SMB shares on enas
- Delete unused file systems from enas
Install the Control Station root certificate on the VMM server
1. Display the contents of the root CA certificate on the Control Station:
# /nas/sbin/nas_ca_certificate display
2. Copy the entire contents, from the -----BEGIN CERTIFICATE----- line to the -----END CERTIFICATE----- line, to the clipboard.
3. Open Notepad on the SCVMM server, paste the contents of the certificate, and save the file as root.cer.
4. Import the certificate into the SCVMM server by double-clicking the root.cer file and making selections from the dialogs, as shown in Figure 26.
Figure 26. Import the certificate wizard
Modify settings in the enas SMI-S ECOM administration page
Configure settings for security and SSLClientAuthentication through the ECOM webpage at the Control Station URL, as shown in Figure 27.

Figure 27. ECOM configuration control station URL
Click Dynamic settings on the ECOM Administration Page and locate the setting for SSLClientAuthentication, as shown in Figure 28. Change the setting to "None", then click Apply, as shown in Figure 28. For more information, visit the Microsoft TechNet blog.
Figure 28. SSL Client Authentication security settings
ENAS FILE SHARE DISCOVERY IN SCVMM CONSOLE
1. Launch the SCVMM console and highlight the Fabric Resources icon.
2. Expand the storage tree and click Providers.
3. Add storage devices by selecting Add a storage device that is managed by an SMI-S provider. See Figure 29.

Figure 29. SCVMM console for adding storage discovered by SMI-S Provider
4. Complete the following information, with the storage provider discovery connection settings as shown in Figure 30:
- Protocol: SMI-S CIMXML
- Provider IP address or FQDN
- TCP/IP port: 5989
- Use Secure Sockets Layer (SSL) connection
- Create a Run As account (the standard nasadmin user account) and then select it, as shown in Figure 30.
Figure 30. Connection settings for storage provider

5. Click Next in the Storage Devices wizard. The enas CIFS exports should appear under Storage Devices for selection and classification. See Figure 31 for the storage devices and their assigned classifications.
Figure 31. Discovered enas cifs export for classification
6. At this point, the information about the enas CIFS exports can be found under Storage > File Servers. See Figure 32 for the list of file servers with their attributes.
Figure 32. enas file shares after discovery
ENAS SYSTEM FILE MANAGEMENT TASKS USING MICROSOFT SCVMM
Once the enas SMI-S provider is discovered, Microsoft System Center Virtual Machine Manager (SCVMM) can be used to:
1. Create and discover new CIFS exports on enas
2. Remove existing CIFS exports if they contain no user data
3. Discover new CIFS exports created outside of SCVMM (for example, with Unisphere for VNX)
4. Use discovered enas file exports as storage for virtual hard disks in VMM by specifying a UNC path and size for the VHDX, for example: New-VHD -Path \\SFSERVER00\SHARE00\VM00.VHDX -Dynamic -SizeBytes 100GB


More information

Replication is the process of creating an

Replication is the process of creating an Chapter 13 Local tion tion is the process of creating an exact copy of data. Creating one or more replicas of the production data is one of the ways to provide Business Continuity (BC). These replicas

More information