LONG-DISTANCE APPLICATION MOBILITY ENABLED BY EMC VPLEX GEO


White Paper LONG-DISTANCE APPLICATION MOBILITY ENABLED BY EMC VPLEX GEO An Architectural Overview EMC GLOBAL SOLUTIONS Abstract This white paper describes the design, deployment, and validation of a virtualized application environment incorporating Microsoft Windows 2008 R2 with Hyper-V, SAP ERP 6.0 EHP4, Microsoft SharePoint 2010, and Oracle Database 11gR2 on virtualized EMC VNX and EMC VMAX storage presented by EMC VPLEX Geo. June 2011

Copyright 2011 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All trademarks used herein are the property of their respective owners. Part Number H

Contents

Executive summary
  Business case
  Solution overview
  Key results
Introduction
  Purpose
  Scope
  Audience
  Terminology
Solution Overview
  Overview
  Physical architecture
  Hardware resources
  Software resources
Key components
  Introduction
  Common elements
EMC VPLEX Geo
  EMC VPLEX Geo overview
  EMC VPLEX Geo design considerations
  EMC VPLEX Geo configuration
EMC VPLEX Geo administration
  EMC VPLEX Geo administration overview
  EMC VPLEX Geo administration process
EMC VNX
  EMC VNX5700 overview
  EMC VNX5700 configuration
    Pool configuration
    LUN configuration
EMC Symmetrix VMAX
  EMC Symmetrix VMAX overview
  EMC Symmetrix VMAX configuration
    Symmetrix volume configuration
    Meta device configuration
Microsoft Hyper-V
  Microsoft Hyper-V overview
  Microsoft Hyper-V configuration
    Hyper-V networking configuration
  Microsoft SCVMM configuration
Networking infrastructure
  Networking infrastructure overview
  Network design considerations
  Network configuration
Silver Peak WAN optimization
  WAN optimization overview
  Silver Peak NX appliance
  Silver Peak design considerations
  Silver Peak WAN optimization results
Microsoft Office SharePoint Server
  SharePoint overview
  Microsoft SharePoint Server 2010 configuration
    SharePoint Server configuration overview
    SharePoint Server design considerations
    SharePoint Server farm virtual machine configurations
    SharePoint virtual machine configuration and resources
    SharePoint farm test methodology
  SharePoint Server environment validation
    Test summary
    SharePoint baseline test
    SharePoint encapsulated test
    SharePoint live migration test
    SharePoint live migration with compression test
    SharePoint and Silver Peak WAN optimization
SAP
  SAP overview
  SAP configuration
    SAP configuration overview
    SAP design considerations
    SAP virtual machine configurations
    HP LoadRunner configuration
    SAP ERP workload profile
  SAP environment validation
    Test summary
    SAP test methodology
    SAP load generation
    SAP test procedure
    SAP test results
    SAP and Silver Peak WAN optimization
Oracle
  Oracle overview
  Oracle configuration
    Oracle configuration overview
    Oracle virtual machine configuration
    Oracle database configuration and resources
    SwingBench utility configuration
  Oracle environment validation
    Test summary
    Oracle baseline test
    Oracle encapsulated test
    Oracle distance simulation test
    Oracle distance simulation with compression test
    Oracle live migration test
    Oracle and Silver Peak WAN optimization
Conclusion
  Summary
  Findings
References
  White papers
  Product documentation
  Other documentation

Executive summary Business case Today's global enterprise demands always-on availability of applications and information in order to remain competitive. The priority is mission-critical applications: the applications whose downtime results in lost productivity, lost customers, and ultimately, lost revenue. EMC has continuously led in products, services, and solutions that ensure uptime and protect businesses from disastrous losses. EMC VPLEX enables customers to seamlessly migrate workloads over distance to protect information or to better support initiatives and employees around the globe. Leveraging the architecture documented here, customers can migrate workloads between physical locations up to 2,000 km apart. This provides an unprecedented level of flexibility while ensuring application and information availability. The business day is no longer 9-to-5; companies are working around the clock at offices across the globe. Information and applications are needed to keep the business running smoothly. EMC VPLEX helps customers easily migrate workloads around the globe to: Increase ROI by increasing utilization of hardware and software assets Ensure availability of information and applications Minimize interruption of revenue-generating processes Optimize application and data access to better meet specific geographic demands Solution overview The EMC VPLEX family is a solution for federating EMC and non-EMC storage. The VPLEX platform logically resides between the servers and heterogeneous storage assets, supporting a variety of arrays from various vendors. VPLEX simplifies storage management by allowing LUNs, provisioned from various arrays, to be managed through a centralized management interface. The EMC VPLEX platform removes physical barriers within, across, and between data centers. VPLEX Local provides simplified management and non-disruptive data mobility across heterogeneous arrays. 
VPLEX Metro provides mobility, availability, and collaboration between two VPLEX clusters within synchronous distances. VPLEX Geo further dissolves those distances by extending these use cases to asynchronous distances. With a unique scale-up and scale-out architecture, VPLEX's advanced data caching and distributed cache coherency provide workload resiliency, automatic sharing, balancing, and failover of storage domains, and enable both local and remote data access with predictable service levels.

Key results VPLEX Geo provides a more effective way of managing virtual storage environments by enabling transparent integration with existing applications and infrastructure, and by providing the ability to migrate data between remote data centers with no interruption in service. Organizations no longer need to perform the traditionally complex, time-consuming tasks required to migrate data between geographically dispersed data centers, such as making physical backups or using data replication services. With VPLEX Geo employed as described in this solution, organizations can: Easily migrate applications in real time from one site to another with no downtime, using standard infrastructure tools such as Microsoft Hyper-V. Provide an application-transparent and non-disruptive solution for interruption avoidance and data migration. This reduces the operational impact associated with traditional solutions (such as tape backup and data replication) from days or weeks to minutes. Transparently share and balance resources between geographically dispersed data centers with standard infrastructure tools.

Introduction Purpose The purpose of this document is to provide readers with an overall understanding of the VPLEX Geo technology and how it can be used with tools such as Microsoft Hyper-V to provide effective resource distribution and sharing between data centers across distances of up to 2,000 km with no downtime. VPLEX Geo enables application mobility between data centers at asynchronous distances. Using VPLEX Geo in conjunction with Microsoft Hyper-V, IT administrators can guarantee application mobility across existing WANs. With the addition of Silver Peak compression, existing WAN bandwidth can be optimized for maximum AccessAnywhere performance between locations. Scope The scope of this white paper is to document the: Environment configuration for multiple applications using virtualized storage presented by EMC VPLEX Geo Migration from traditional, SAN-attached storage to a virtualized storage environment presented by EMC VPLEX Geo Application mobility within a geographically dispersed VPLEX Geo virtualized storage environment Audience This white paper is intended for EMC employees, partners, and customers, including IT planners, virtualization architects and administrators, and any other IT professionals involved in evaluating, acquiring, managing, operating, or designing infrastructure that leverages EMC technologies.
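As a sanity check on the 2,000 km figure, the speed-of-light floor on round-trip latency can be estimated in a few lines of Python. This is a back-of-the-envelope sketch: the 200,000 km/s fiber propagation speed is a common rule of thumb, not a value measured in this solution, and real WAN paths add router, switch, and routing-detour delay on top.

```python
# Rough propagation-delay estimate for a 2,000 km inter-site separation.
# Assumes light travels through optical fiber at roughly 200,000 km/s
# (about 5 microseconds per km) - a rule-of-thumb figure, not a measurement.

FIBER_KM_PER_S = 200_000  # approximate speed of light in fiber

def round_trip_ms(distance_km: float) -> float:
    """One-way delay is distance/speed; RTT doubles it. Returns milliseconds."""
    one_way_s = distance_km / FIBER_KM_PER_S
    return 2 * one_way_s * 1000

rtt = round_trip_ms(2000)
print(f"Theoretical best-case RTT over 2,000 km: {rtt:.0f} ms")
# Comfortably under the 50 ms RTT limit for VPLEX Geo asynchronous distances,
# leaving headroom for real-world equipment and routing overhead.
assert rtt <= 50
```

At 2,000 km the propagation floor is about 20 ms RTT, which is why asynchronous (write-back) behavior, rather than synchronous mirroring, is required at these distances.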

Terminology This document includes the following terminology. Table 1. Terminology

Asynchronous group: Asynchronous consistency groups are used for distributed volumes in VPLEX Geo to ensure that I/O to all volumes in the group is coordinated across both clusters, and all directors in each cluster. All volumes in an asynchronous group share the same detach rule, are in write-back cache mode, and behave the same way in the event of an inter-cluster link failure. Only distributed virtual volumes can be included in an asynchronous consistency group.

CNA: Converged Network Adapter

COM: Communication; identifies inter- and intra-cluster communication links

Consistency group: Consistency groups allow you to group volumes together and apply a set of properties to the entire group. In a VPLEX Geo, where clusters are separated by asynchronous distances (up to 50 ms RTT), consistency groups are required for asynchronous I/O between the clusters. In the event of a director, cluster, or inter-cluster link failure, consistency groups ensure consistency in the order in which data is written to the back-end arrays, preventing possible data corruption.

Distributed device: Distributed devices use storage from both clusters. A distributed device's components must be other devices, and those devices must be created from storage in both clusters in the Geo-plex.

DR: Disaster Recovery

HA: High Availability

OLTP: Online transaction processing

SAP ABAP: SAP Advanced Business Application Programming

SAP ERP: SAP Enterprise Resource Planning

Synchronous group: Synchronous consistency groups provide a convenient way to apply rule sets and other properties to a group of volumes at a time, simplifying system configuration and administration on large systems. Volumes in a synchronous group behave the same in a VPLEX, and can have global or local visibility. Synchronous consistency groups can contain local, global, or distributed volumes.

UCS: Cisco Unified Computing System

VM: Virtual Machine. A software implementation of a machine that executes programs like a physical machine.

VPLEX Geo: Provides distributed federation within, across, and between two clusters (within asynchronous distances).

VHD: Virtual Hard Disk. A Hyper-V virtual hard disk (VHD) is a file that encapsulates a hard disk image.

10 Solution Overview Overview The validated solution is built in a Microsoft Hyper-V environment on EMC VPLEX Geo infrastructure that incorporates EMC Symmetrix VMAX and EMC VNX storage arrays. The key components of the physical architecture are: EMC VPLEX Geo infrastructure blocks providing access and management of virtualized storage An EMC VNX5700 storage array EMC Symmetrix VMAX storage arrays Microsoft Hyper-V clusters supporting SAP, Microsoft SharePoint, and Oracle Silver Peak NX WAN optimization appliances Physical architecture Figure 1 illustrates the physical architecture of the use case solution. Figure 1. Physical architecture diagram 10

Hardware resources Table 2 describes the hardware resources used in this solution. Table 2. Hardware resources

Equipment | Quantity | Configuration
Rack servers (Production Site, Site A) | 4 | 2 six-core Xeon 5650 CPUs, 96 GB RAM; 2 10-Gb Emulex CNA adapters
Unified computing blade servers (Disaster Recovery Site, Site B) | 4 | 2 quad-core Xeon 5670 CPUs, 48 GB RAM; 2 10-Gb QLogic CNA adapters
EMC Symmetrix VMAX | 1 | FC, 600 GB/15k FC drives, 200 GB Flash drives
EMC VNX5700 | | FC connectivity, 600 GB/15k FC drives, 200 GB Flash drives
WAN emulation | | GbE network emulators
EMC VPLEX | 2 | VPLEX Geo cluster with two engines and four directors on each cluster
WAN compression appliances | 2 | 1-GbE hardware appliances
Enterprise-class switches | 4 | Converged network switches, 2 per site for array and server connectivity

Software resources Table 3 describes the software resources used in this solution environment. Table 3. Software resources

Software | Version
EMC PowerPath | 5.5
Microsoft Windows 2008 R2 | SP1
Microsoft Windows 2008 R2 Hyper-V | SP1
Microsoft Office SharePoint Server | 2010
Microsoft SQL Server | 2008 R2
Red Hat Enterprise Linux | 5.5
SAP ERP | 6.0 EHP4
SAP NetWeaver | 7.0 EHP 1 Unicode 64-bit
Oracle RDBMS | 11gR2
Visual Studio Test Suite | 2008 SP1
KnowledgeLake Document Loader | 1.1
SwingBench |
HP LoadRunner |

Key components Introduction The virtualized data center environment described in this white paper was designed and deployed using a shared infrastructure. All layers of the environment are shared to create the greatest return on infrastructure investment, while supporting multiple application requirements for functionality and performance. Using server virtualization based on Microsoft Hyper-V, Intel x86-based servers are shared across applications and clustered to achieve redundancy and failover capability. VPLEX Geo is used to present shared data stores across the physical data center locations, enabling migration of the application virtual machines (VMs) between the physical sites. Physical Site A storage consists of a Symmetrix VMAX Single Engine (SE) and a VNX5700 for the SAP, Microsoft, and Oracle environments. A VNX5700 is used for the physical Site B data center infrastructure and storage. Common elements The following sections briefly describe the components used in this solution, including: EMC VPLEX Geo EMC VPLEX Geo administration EMC VNX5700 EMC Symmetrix VMAX SE Microsoft Windows 2008 R2 with Hyper-V Microsoft System Center Virtual Machine Manager (SCVMM) Silver Peak NX-9000 WAN optimization appliance

14 EMC VPLEX Geo EMC VPLEX Geo overview EMC VPLEX Geo is a storage virtualization platform for the private and hybrid cloud. EMC VPLEX Geo is a SAN-based block solution for local and distributed federation that allows the physical storage provided by traditional storage arrays to be virtualized, accessed, and managed across the boundaries between data centers. This form of access, called AccessAnywhere, removes many of the constraints of the physical data center boundaries and its storage arrays. AccessAnywhere storage allows data to be moved, accessed, and mirrored transparently between data centers, effectively allowing storage and applications to work between data centers as though those physical boundaries were not there. EMC VPLEX Geo design considerations In this solution, we designed our VPLEX Geo plexes using a routed topology with the following environmental characteristics: The routers are situated between the clusters. An Empirix network emulator is used between clusters. Figure 2 shows the VPLEX Geo cluster in a routed topology. Figure 2. VPLEX Geo cluster using routed topology 14

15 EMC VPLEX Geo configuration For this solution, the VPLEX Geo clusters consist of two clusters in two geographical locations. On each cluster, there are two port groups as described in Table 4 and Table 5. Table 4. VPLEX Geo-plex 1 port groups Subnet attributes for Port Group 0: prefix subnet mask cluster-address gateway mtu 1500 remote-subnet /24 Subnet attributes for Port Group 1: prefix subnet mask cluster-address gateway mtu 1500 remote-subnet /24 Table 5. VPLEX Geo-plex 2 port groups Subnet attributes for Port Group 0: prefix subnet mask cluster-address gateway mtu 1500 remote-subnet /24 Subnet attributes for Port Group 1: prefix subnet mask cluster-address gateway mtu 1500 remote-subnet /24 15

After both clusters join to form a VPLEX Geo, network connectivity is established from each director across the two clusters. Figure 3 shows the connectivity status of director-1-1-A. Figure 3. Director-1-1-A connectivity status Figure 4 shows the connectivity status of director-2-1-A. Figure 4. Director-2-1-A connectivity status When distributed devices are created, they are in synchronous mode by default. VPLEX Geo clusters require consistency groups to be configured in order to place distributed devices in asynchronous mode.
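To illustrate why asynchronous mode goes hand in hand with a consistency group, the following Python sketch models, purely conceptually (this is not VPLEX's internal implementation), how write-back caching acknowledges writes locally and then ships them to the remote cluster in ordered batches, so every volume in the group advances to the same consistent point in time.

```python
# Conceptual sketch (not VPLEX internals): an asynchronous consistency group
# acknowledges writes from cache, then exchanges them with the remote
# cluster as one ordered batch ("delta"), so all volumes in the group
# roll forward together after a failure.

class AsyncConsistencyGroup:
    def __init__(self, volumes):
        self.remote = {v: [] for v in volumes}  # remote cluster's view
        self.open_delta = []                    # writes not yet exchanged

    def write(self, volume, data):
        # Write-back: acknowledged as soon as it lands in the open delta.
        self.open_delta.append((volume, data))
        return "ack"

    def exchange_delta(self):
        # The whole delta is applied remotely as one unit, preserving
        # cross-volume write ordering within the group.
        for volume, data in self.open_delta:
            self.remote[volume].append(data)
        self.open_delta = []

group = AsyncConsistencyGroup(["vol1", "vol2"])
group.write("vol1", "A")
group.write("vol2", "B")
# Before the delta exchange, the remote cluster has seen neither write...
assert group.remote["vol1"] == [] and group.remote["vol2"] == []
group.exchange_delta()
# ...and after it, both writes arrive together, in order.
assert group.remote["vol1"] == ["A"] and group.remote["vol2"] == ["B"]
```

The key property the sketch demonstrates is all-or-nothing delta application across volumes, which is why only grouped (and not individual) distributed volumes can safely run asynchronously.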

Figure 5 shows that the consistency group VPLEX-Async-Group was created on both clusters. There are a total of nine virtual volumes in the group. Figure 5. VPLEX consistency group To verify that all virtual volumes are in asynchronous mode, the VPLEX CLI can be used, as shown in Figure 6. Figure 6. Verify asynchronous mode

After the VPLEX Geo cluster WAN connection was established, and distance latency was introduced between the two clusters (representing two geographical locations), additional delay was observed on both the network traffic and the SAN. Figure 7 shows the VPLEX WAN port status. Figure 7. VPLEX WAN port status Figure 8 shows the packet round-trip time (RTT) between the VPLEX directors. Figure 8. RTT between VPLEX directors

EMC VPLEX Geo administration EMC VPLEX Geo administration overview When bringing an existing storage array into a virtualized storage environment, the options are to either: Encapsulate storage volumes from existing storage arrays that have already been used by hosts, or Create a new VPLEX Geo LUN and migrate the existing data to that LUN VPLEX Geo provides an option to encapsulate the existing data using VPlexcli. When application consistency is set (using the appc flag), the volumes claimed are data-protected and no data is lost. Note: There is no GUI equivalent for the appc flag. In this solution, we encapsulated existing storage volumes with real data and brought them into the VPLEX Geo clusters, as shown in Figure 9. The data was protected when storage volumes were claimed with the appc flag to make the storage volumes application-consistent. Figure 9. Encapsulated storage volumes

20 EMC VPLEX Geo administration process In this solution, administration of VPLEX Geo was done primarily through the Management Console, although the same functionality exists with VPlexcli. On authenticating to the secure web-based GUI, the user is presented with a set of on-screen configuration options, listed in the order of completion. For more information about each step in the workflow, refer to the EMC VPLEX Management Console online help. Table 6 summarizes the steps to be taken, from the discovery of the arrays up to the storage being visible to the host. Table 6. Step VPLEX Geo administration process Action 1 Discover available storage VPLEX Geo automatically discovers storage arrays that are zoned to the backend ports. All arrays connected to each director in the cluster are listed in the Storage Arrays view. 2 Claim storage volumes Storage volumes must be claimed before they can be used in the cluster (with the exception of the metadata volume, which is created from an unclaimed storage volume). Only after a storage volume is claimed, can it be used to create extents, devices, and then virtual volumes. 3 Create extents Create extents for the selected storage volumes and specify the capacity. 4 Create devices from extents A simple device is created from one extent and uses storage in one cluster only. 5 Create a virtual volume Create a virtual volume using the device created in the previous step. 6 Register initiators When initiators (hosts accessing the storage) are connected directly or through a Fibre Channel fabric, VPLEX Geo automatically discovers them and populates the Initiators view. Once discovered, you must register the initiators with VPLEX Geo before they can be added to a storage view and access storage. Registering an initiator gives a meaningful name to the port s WWN, which is typically the server s DNS name, to allow you to easily identify the host. 
7 Create a storage view For storage to be visible to a host, first create a storage view and then add VPLEX Geo front-end ports and virtual volumes to the view. Virtual volumes are not visible to the hosts until they are in a storage view with associated ports and initiators. The Create Storage View wizard enables you to create a storage view and add initiators, ports, and virtual volumes to the view. Once all components are added to the view, it automatically becomes active. When a storage view is active, hosts can see the storage and begin I/O to the virtual volumes. After creating a storage view, you can only add or remove virtual volumes through the GUI. To add or remove ports and initiators, use the CLI. For more information, refer to the EMC VPLEX CLI Guide. 20
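The provisioning hierarchy that Table 6 walks through (claimed storage volume, extent, device, virtual volume, storage view) can be summarized as a simple containment model. The Python below is an illustrative stand-in only; the class and object names are hypothetical and are not VPlexcli objects or commands.

```python
# Illustrative data model of the VPLEX provisioning hierarchy from Table 6.
# Hypothetical names for illustration - not the VPlexcli object model.

from dataclasses import dataclass, field

@dataclass
class StorageVolume:          # step 1-2: discovered, then claimed
    name: str
    claimed: bool = False

@dataclass
class Extent:                 # step 3: carved from a claimed storage volume
    source: StorageVolume

@dataclass
class Device:                 # step 4: built from one or more extents
    extents: list

@dataclass
class VirtualVolume:          # step 5: the host-visible object
    device: Device

@dataclass
class StorageView:            # steps 6-7: initiators + ports + volumes
    initiators: list = field(default_factory=list)
    ports: list = field(default_factory=list)
    volumes: list = field(default_factory=list)

    @property
    def active(self):
        # A view starts serving I/O only once initiators, front-end
        # ports, and virtual volumes are all present (step 7).
        return bool(self.initiators and self.ports and self.volumes)

# Steps 1-7 in miniature:
sv = StorageVolume("array_lun_01"); sv.claimed = True
vv = VirtualVolume(Device([Extent(sv)]))
view = StorageView(initiators=["host1_hba0"], ports=["FC00"], volumes=[vv])
assert view.active
```

The model makes the dependency ordering explicit: a virtual volume cannot exist without the chain beneath it, and hosts see nothing until the storage view has all three component types.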

EMC VNX5700 EMC VNX5700 overview The EMC VNX family delivers industry-leading innovation and enterprise capabilities for file, block, and object storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's enterprises. The VNX series is designed to meet the high-performance, high-scalability requirements of midsize and large enterprises, delivering leadership performance, efficiency, and simplicity for demanding virtual application environments. EMC VNX5700 configuration This section describes how the VNX5700 was configured in this solution. Pool configuration Table 7 describes the VNX5700 pool configuration used in this solution. Table 7. VNX5700 pool configuration

Pool | Protection type | Drive count | Drive technology | Drive capacity
VNXPool0 | RAID 1/0 | 16 | SAS | 300 GB
VNXPool1 | RAID 5 | 30 | SAS | 300 GB
VNXPool2 | RAID 1/0 | 16 | SAS | 300 GB
VNXPool3 | RAID 1/0 | 2 | SATA Flash | 200 GB
VNXPool4 | RAID 1/0 | 2 | SAS | 300 GB

LUN configuration Table 8 describes the VNX5700 LUN configuration used in this solution. Table 8. VNX5700 LUN configuration

LUN | LUN ID | LUN size | Pool
R10CSV | | TB | VNXPool0
R5CSV | | TB | VNXPool1
R5CSV | | TB | VNXPool1
R5CSV | | TB | VNXPool1
R10CSV | | TB | VNXPool2
R10CSV | | GB | VNXPool3
ORALOG | | GB | VNXPool4
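The raw-to-usable capacity of the pools in Table 7 can be estimated from the protection type. This is a back-of-the-envelope sketch: RAID 1/0 mirrors (half the raw capacity is usable), and the RAID 5 pool is assumed here to be built from 4+1 private RAID groups, a common VNX layout that the table does not actually state; it also ignores formatting and metadata overhead.

```python
# Back-of-the-envelope usable capacity for the pools in Table 7.
# Assumption: the RAID 5 pool uses 4+1 groups (one parity drive in five);
# the actual group width is not stated in the table.

def usable_gb(drives: int, drive_gb: int, protection: str) -> float:
    if protection == "RAID 1/0":
        return drives * drive_gb / 2        # mirroring halves raw capacity
    if protection == "RAID 5 (4+1)":
        return drives * drive_gb * 4 / 5    # one drive in five holds parity
    raise ValueError(f"unknown protection type: {protection}")

print(usable_gb(16, 300, "RAID 1/0"))       # VNXPool0: 2400.0 GB
print(usable_gb(30, 300, "RAID 5 (4+1)"))   # VNXPool1: 7200.0 GB
```

Under these assumptions, the 16-drive RAID 1/0 pools each yield about 2.4 TB and the 30-drive RAID 5 pool about 7.2 TB before overhead.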

EMC Symmetrix VMAX EMC Symmetrix VMAX overview The EMC Symmetrix VMAX series is the latest generation of the Symmetrix product line. Built on the strategy of simple, intelligent, modular storage, it incorporates a scalable fabric interconnect design that allows the storage array to seamlessly grow from an entry-level configuration into a large-scale enterprise storage system. Symmetrix VMAX arrays provide improved performance and scalability for demanding enterprise environments such as those found in large virtualization environments, while maintaining support for EMC's broad portfolio of platform software offerings. Symmetrix VMAX systems deliver software capabilities that improve capacity use, ease of use, business continuity, and security. These features provide significant advantage to customer deployments in a virtualized environment where ease of management and protection of virtual machine assets and data assets are required. EMC Symmetrix VMAX configuration This section describes how the EMC Symmetrix VMAX was configured in this solution. Symmetrix volume configuration Table 9 describes the Symmetrix VMAX volume configuration used in this solution. Table 9. Symmetrix VMAX volume configuration

Volume ID | Protection type | Device size | Drive technology | Drive capacity
1FD:229 | RAID 5 (7+1) | 240 GB | Fibre Channel | 450 GB 15k
22A:22B | RAID 5 (7+1) | 150 GB | Fibre Channel | 450 GB 15k

Note: Devices 22A:22B are used as stand-alone devices. Meta device configuration Table 10 describes the Symmetrix VMAX metadevice configuration used in this solution. Table 10. Symmetrix VMAX metadevice configuration

Volume ID | Protection type | Meta configuration | Meta members | Volume size
1FD | RAID 5 (7+1) | striped | 1FE: | TB
206 | RAID 5 (7+1) | striped | 207:20E | 2.1 TB
20F | RAID 5 (7+1) | striped | 210: | TB
218 | RAID 5 (7+1) | striped | 219: | TB
221 | RAID 5 (7+1) | striped | 222: | TB

Microsoft Hyper-V Microsoft Hyper-V overview Hyper-V is a hypervisor-based virtualization technology from Microsoft that organizations use to reduce costs through virtualization with Windows Server 2008 R2. Microsoft Hyper-V enables customers to make the best use of their server hardware by consolidating multiple server roles as separate virtual machines running on a single physical machine. Microsoft Hyper-V configuration This section describes the configuration of the Microsoft Hyper-V environment used in this solution. Windows Failover Clustering is used to provide high-availability features as well as live migration and Cluster Shared Volume capability. Using the VPLEX volume for the cluster shared volume allows multiple virtual machines to be hosted on a single LUN, while still allowing live migration of a virtual machine from one site to another, independent of the other virtual machines on that volume. Figure 10 shows that four nodes are used at each site, and three Ethernet network connections are used for heartbeat, live migration, and client access at the respective site. System Center Virtual Machine Manager (SCVMM) is used to manage the virtual machines on the Hyper-V cluster. Figure 10. Hyper-V cluster nodes and connections
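The benefit of a Cluster Shared Volume described above (many VMs on one LUN, each independently migratable) can be sketched conceptually. This plain-Python model is an illustration only, not the Hyper-V or Failover Clustering API; the VM and node names are made up.

```python
# Conceptual sketch of Cluster Shared Volume behavior: several VMs share
# one LUN, yet each VM's owning node can change independently of the
# others. Illustrative stand-in, not the Windows clustering API.

class ClusterSharedVolume:
    def __init__(self):
        self.vm_owner = {}  # VM name -> current Hyper-V node

    def place(self, vm: str, node: str):
        self.vm_owner[vm] = node

    def live_migrate(self, vm: str, target_node: str):
        # Only the migrating VM moves; co-resident VMs on the same
        # LUN keep their current owners.
        self.vm_owner[vm] = target_node

csv = ClusterSharedVolume()
csv.place("sap-app1", "SiteA-Node1")
csv.place("oracle-db1", "SiteA-Node2")
csv.live_migrate("sap-app1", "SiteB-Node1")
assert csv.vm_owner == {"sap-app1": "SiteB-Node1",
                        "oracle-db1": "SiteA-Node2"}
```

Without CSV, a LUN could be online on only one node at a time, so moving one VM would force every VM on that LUN to move with it; the shared-access model removes that coupling.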

Hyper-V networking configuration One Hyper-V virtual switch, named MSvSwitch, is created for the virtual machine network. Figure 11 shows the relationship between the virtual machine, virtual switch, and physical NIC on one of the cluster nodes. Figure 11. Virtual switch relationship The virtual machine network is configured to use VLAN tagging, as shown in Figure 12.

Figure 12. VLAN tagging on the virtual machine network EMC PowerPath is installed on the cluster nodes, as shown in Figure 13, to provide load balancing and fault tolerance on the FC network. To provide support for the VPLEX device, Invista Devices Support must be selected during installation. It can also be changed after installation using Add/Remove Programs. Figure 13. EMC PowerPath installed

The distributed volume on VPLEX Geo is presented to the nodes on both sites. A basic volume is created, formatted with NTFS on one of the nodes, and then added to the cluster. With Windows 2008 R2, the Cluster Shared Volumes feature can be activated by right-clicking on the cluster and selecting Enable Cluster Shared Volumes, as shown in Figure 14. The disk resource can then be added into the cluster shared volume. Figure 14. Enabling cluster shared volumes Note: Make sure all nodes are available when enabling the Cluster Shared Volumes feature; otherwise you will need to use the Cluster CLI command later to add the node into the owner list of the cluster resource. See Figure 15. Figure 15. Cluster resource owner list Cluster Shared Volumes mounts the disk under C:\SharedStorage on every node of the cluster, as shown in Figure 16.

27 Figure 16. Cluster shared volume location The virtual machine can be configured to use that path to place the virtual disk on the shared volume, as shown in Figure 17. Figure 17. Virtual hard disk path If the storage network between the host and array fails, there is an option to redirect the traffic over the LAN to the node that owns the cluster shared disk resource. 27

Microsoft SCVMM configuration If you need to move a virtual machine's disk to a different volume, the Migrate Storage option can be used with Microsoft System Center Virtual Machine Manager (SCVMM), as shown in Figure 18. Figure 18. SCVMM Migrate Storage option Then provide the target volume to move to, as shown in Figure 19. Figure 19. Target volume The virtual machine must be in a saved state or powered off to move the underlying storage.

Networking infrastructure Networking infrastructure overview This section describes the virtual machine network environment used in this solution. Topics include: Network design considerations Network configuration Network design considerations The virtual machine network environment in this solution consists of a single Layer-2 network extended across the WAN between Site A and Site B. The following design considerations apply to this environment: This extension was done using Cisco's Overlay Transport Virtualization (OTV) rather than by bridging the VLAN over the WAN. OTV allows for Ethernet LAN extension over any WAN transport by dynamically encapsulating Layer 2 MAC frames in IP and routing them across the WAN. Edge devices and Nexus 7000 switches exchange information about learned devices on the extended VLAN at each site via multicast, which negates the need for ARP and other broadcasts to be propagated across the WAN. Additionally, using OTV rather than bridging eliminates BPDU forwarding (part of normal spanning tree operations in a bridged VLAN scenario) and provides the ability to eliminate or rate-limit other broadcasts to conserve bandwidth. Note: For recommendations about using live migration in your own Hyper-V environment, refer to the Hyper-V: Live Migration Network Configuration Guide at the Microsoft TechNet site. Network configuration Table 11 lists the OTV configuration for each edge device in the virtual machine network. Table 11. Virtual machine network OTV configuration

Site: Site A: Pcloud-7000-OTV
Configuration:
feature otv
otv site-vlan 1
interface Overlay1
description VPLEX-WAN
otv isis authentication key-chain VPLEX
otv join-interface Ethernet3/16
otv control-group
otv data-group /29
otv extend-vlan 580
otv site-identifier 1

Site: Site B: Pcloud-7000-VPLEX-SITE-B
Configuration:
feature otv
interface Overlay1
description VPLEX-WAN
otv isis authentication key-chain VPLEX
otv join-interface Ethernet3/33
otv control-group
otv data-group /29
otv extend-vlan 580
otv site-identifier 1

Note: For more detail on Cisco OTV, refer to the Cisco Quick Start Guide.

Silver Peak WAN optimization WAN optimization overview WAN optimization is the process of improving network traffic flow by increasing efficiency and minimizing bandwidth roadblocks through the use of data compression, caching, and other techniques. There are many choices of WAN optimization products and vendors that can be deployed to meet your networking needs. In this solution, we used Silver Peak WAN optimization appliances. Note: Bandwidth savings from WAN optimization are independent of distance, and are present over all distances. Acceleration benefits from WAN optimization increase substantially as latency increases. Detailed test results for acceleration beyond 20 ms are available at Silver Peak NX appliance Silver Peak's appliances are data-center-class network devices designed to meet the rigorous WAN optimization requirements of large enterprises, delivering top performance, scalability, and reliability. Silver Peak's high-capacity NX appliance scales from megabytes per second to gigabytes per second of WAN capacity in a single device. By optimizing primarily at the network layer, the appliance can optimize all IP traffic, regardless of transport protocol or application software version. Silver Peak's optimization technology helps to overcome common WAN bandwidth, latency, and quality challenges. As shown in Figure 20, Silver Peak appliances can be deployed between VPLEX Geo clusters at both ends of the WAN. Silver Peak, when deployed with VPLEX Geo, mitigates many challenges associated with deploying a geographically distributed architecture, including limited bandwidth, high latency (due to distance), and WAN quality. Figure 20. Silver Peak WAN optimization appliances and EMC VPLEX Geo

Silver Peak design considerations In this solution, the Silver Peak appliances were configured as follows: Silver Peak compression appliances were configured in Routed mode using policy-based routing, and inserted inline on each side of the 1-Gigabit Ethernet WAN link. Site-to-site traffic was routed across the WAN, entering the appliance through the Site A LAN interface. It was then compressed and sent through a GRE tunnel between the two appliances, where it was uncompressed on the other side and sent out the Site B LAN interface into the Site B network. Under an appliance failure scenario, the appliances fail open, meaning all traffic continues to pass through, although uncompressed. In addition, in Bridged mode, System Bypass can be used to pass all traffic uncompressed between sites if required. The Silver Peak appliances can also be configured as a redirect target using WCCP, rather than being deployed inline. See the Silver Peak Configuration Guide available at for more detail. Silver Peak WAN optimization results Our test results showed that this solution benefited from Silver Peak WAN optimization across all applications. Figure 21 shows the traffic across the LAN and WAN, and describes the peak and average deduplication ratio for all applications over the testing period for both sites. The average deduplication was 66 percent on Site B and 56 percent on Site A. Figure 21. Silver Peak WAN optimization performance
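A reduction (deduplication) percentage like the 66 percent and 56 percent figures above is derived by comparing the bytes entering the appliance on the LAN side with the bytes actually sent across the WAN. The short sketch below shows the arithmetic; the byte counts are made-up illustrations, not measurements from this solution.

```python
# How a WAN-optimization reduction percentage is computed: compare LAN-side
# (pre-optimization) bytes with WAN-side (post-optimization) bytes.
# The byte counts below are illustrative, not measured values.

def reduction_pct(lan_bytes: int, wan_bytes: int) -> float:
    """Percentage of LAN-side traffic that never had to cross the WAN."""
    return (1 - wan_bytes / lan_bytes) * 100

print(f"{reduction_pct(100_000_000, 34_000_000):.0f}%")  # -> 66%
print(f"{reduction_pct(100_000_000, 44_000_000):.0f}%")  # -> 56%
```

Put another way, a 66 percent reduction means the WAN carried roughly one third of the bytes the applications generated, which is equivalent to tripling the effective link capacity for that traffic mix.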

Microsoft Office SharePoint Server 2010

SharePoint overview

This section covers the following topics:

- Microsoft SharePoint Server 2010 configuration
- SharePoint Server environment validation

Microsoft SharePoint Server 2010 configuration

SharePoint Server configuration overview

When customers move their physical SharePoint environments to a virtualized infrastructure, they also seek all the benefits that virtualization brings to a complex, federated application such as SharePoint Server. Mobility of the entire farm, rather than building a mirror farm, is a rising requirement of enterprise-level application owners. Migrating the entire SharePoint farm to a remote site as a form of disaster avoidance can be complex. A solution needs to meet two leading challenges:

- Moving the entire SharePoint farm between different data centers without interrupting operations on the farm
- Reducing storage maintenance costs

The virtualized SharePoint Server 2010 farm used in our solution overcomes these challenges by building on Microsoft Hyper-V enabled by VPLEX Geo technology, which allows disparate storage arrays at multiple locations to present a single, shared array to the SharePoint 2010 farm.

SharePoint Server design considerations

In this SharePoint 2010 environment design, the major configuration highlights include:

- The SharePoint farm is designed as a publishing portal. There is around 400 GB of user content, consisting of four SharePoint site collections (document centers) with four content databases, each populated with 100 GB of random user data.
- Microsoft network load balancing (NLB) was enabled on three web front-end (WFE) servers for load balancing and local failover consideration.
- The SharePoint farm uses seven virtual machines hosted on four physical Hyper-V servers at the production site.
- Three web front-end (WFE) servers are also configured with query roles for load balancing. The query components have been scaled out to three partitions.
- Each query server contains a part of the index partitions and a mirror of another index partition for fault-tolerance considerations.
- Two index components are provisioned for fault tolerance and better crawl performance.

SharePoint Server farm virtual machine configurations

Table 12 describes the virtual machine configurations for the SharePoint Server 2010 farm.

Table 12. Microsoft SharePoint Server 2010 farm virtual machines

- Three WFE virtual machines: The division of resources offers the best search performance and redundancy in a virtualized SharePoint farm. As the WFE and query roles are CPU-intensive, the WFE VMs were allocated four virtual CPUs. The query components have been scaled out into three partitions. Each query server contains a part of the index partitions and a mirror of another index partition for fault-tolerance consideration.
- Two index virtual machines: Two index components were partitioned in this farm for better crawl performance, with multiple crawl components mapped to the same crawl database to achieve fault tolerance. The index components were designed to crawl themselves without impacting the production WFE servers. Four virtual CPUs and 6 GB of memory were allocated for each index server. The incremental crawl was scheduled to run every two hours.
- Application Excel virtual machine: Two virtual CPUs and 2 GB of memory were allocated for the application server, as these roles require fewer resources.
- SQL Server virtual machine: Four virtual CPUs and 16 GB of memory were allocated for the SQL Server virtual machine, as CPU utilization and memory requirements for SQL Server in a SharePoint farm can be high. With more memory allocated to the SQL virtual machine, SQL Server becomes more effective at caching SharePoint user data, leading to fewer required physical IOPS and better performance. Four tempdb data files were created, equal to the number of SQL Server CPU cores, as Microsoft recommends.

SharePoint virtual machine configuration and resources

Table 13 lists the virtual machine configuration of the SharePoint farm and the allocated resources.

Table 13. SharePoint farm virtual machine configuration

Server role | Quantity | vCPUs | Memory (GB) | Boot disk (GB) | Search disk (GB)
WFE servers | | | | |
Index servers | | | | |
Application server (hosts Central Admin) | | | | |
SQL Server 2008 R2 Enterprise | | | | Not applicable | Not applicable

SharePoint farm test methodology

The data population tool uses a set of sample documents. Altering the document names and metadata (before insertion) makes each document unique. One load-agent host is allocated for each WFE, allowing data to be loaded in parallel until the targeted 400 GB data size is reached. The data is spread evenly across the four site collections (each collection is a unique content database).

The user profiles consist of a mix of three user operations: browse, search, and modify. KnowledgeLake DocLoaderLite was used to populate SharePoint with random user data, while Microsoft VSTS 2008 SP1 emulated the client user load. Third-party vendor code was used to ensure an unbiased and validated test approach.

During validation, a Microsoft heavy-user load profile was used to determine the maximum user count that the Microsoft SharePoint 2010 server farm could sustain while ensuring the average response times remained within acceptable limits. Microsoft standards state that a heavy user performs 60 requests in each hour; that is, there is a request every 60 seconds. The user profiles in this testing consist of three user operations:

- 80 percent browse
- 10 percent search
- 10 percent modify

Note: Microsoft publishes default service-level agreement (SLA) response times for each SharePoint user operation. Common operations (such as browse and search) should be completed within 3 seconds or less, and uncommon operations (such as modify) should be completed within 5 seconds or less. These response time SLAs were comfortably met and exceeded.

SharePoint Server environment validation

Test summary

Our testing validated SharePoint Server 2010 operations before and after encapsulation into the VPLEX Geo cluster:

- A baseline test was performed first to log the SharePoint 2010 farm's base performance.
- The next test validated the effect on performance when the SharePoint farm virtual machines were encapsulated into the VPLEX Geo cluster.
- Live migration tests were performed, using a distance emulator, on the whole SharePoint farm after the encapsulation of storage into the VPLEX Geo cluster. Latency was set at 20 ms, equivalent to 2,000 km.
- Live migration tests were performed on the whole SharePoint farm with the insertion of Silver Peak compression. Latency was set to 20 ms, equivalent to 2,000 km.

SharePoint 2010, VPLEX Geo, and Hyper-V performance data was logged for analysis during the tests. This data presents an account of results from VSTS 2008 SP1, which generates a continuous workload (browse/search/modify) against the WFEs of the SharePoint 2010 farm, while simultaneously consolidating the SQL and Oracle OLTP workloads on the same Hyper-V clusters.

SharePoint baseline test

With a mixed user profile of 80/10/10, the virtualized SharePoint farm can support a maximum of 11,520 users with 10 percent concurrency, while satisfying Microsoft's acceptable response time criteria, as shown in Table 14 and Table 15.

Table 14. SharePoint user activity baseline performance

User activity | Acceptable response time | Baseline response time
Browse/Search/Modify 80% / 10% / 10% | <3 / <3 / <5 sec | 2.47 / 2.00 / 1.33 sec

Table 15. SharePoint content mix baseline performance

Content mix (Browse/Search/Modify) | Requests per second (RPS) | Microsoft user profile | Concurrency | Maximum user capacity
80% / 10% / 10% | 19.2 | Heavy | 10% | 11,520
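The 11,520-user figure follows directly from the measured request rate, Microsoft's heavy-user profile (60 requests per hour), and the assumed 10 percent concurrency. A minimal sketch of that arithmetic:

```python
def max_user_capacity(rps: float, requests_per_hour: int, concurrency: float) -> int:
    """Users supportable at a measured request rate.

    Active users = rps * 3600 / requests_per_hour; the total user base
    then scales by the assumed concurrency (fraction active at once).
    """
    active_users = rps * 3600 / requests_per_hour
    return round(active_users / concurrency)

# Baseline: 19.2 RPS, heavy profile (60 requests/hour), 10% concurrency
print(max_user_capacity(19.2, 60, 0.10))  # 11520
# Post-encapsulation: 21.3 RPS
print(max_user_capacity(21.3, 60, 0.10))  # 12780
```

The same formula reproduces the post-encapsulation capacity reported later in this section.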

SharePoint encapsulated test

The storage for the entire SharePoint farm was encapsulated and virtualized in this test. The storage was active across both sites and made available to the SharePoint and SQL servers through VPLEX Geo. After that, all SharePoint virtual machines were started up.

After encapsulation, with a mixed user profile of 80/10/10, the virtualized SharePoint farm can support a maximum of 12,780 users with 10 percent concurrency, while satisfying Microsoft's acceptable response time criteria. Figure 22 shows the rate of passed tests per second after encapsulation into the VPLEX LUNs on the SharePoint virtual machines.

Figure 22. Performance of passed tests per second after SharePoint encapsulation

Table 16 and Table 17 show the performance results when encapsulation is used.

Table 16. SharePoint user activity performance after encapsulation

User activity | Acceptable response time | Response time
Browse/Search/Modify 80% / 10% / 10% | <3 / <3 / <5 sec | 2.41 / 2.33 / 1.26 sec

Table 17. SharePoint content mix performance after encapsulation

Content mix (Browse/Search/Modify) | Requests per second (RPS) | Microsoft user profile | Concurrency | Maximum user capacity
80% / 10% / 10% | 21.3 | Heavy | 10% | 12,780

SharePoint live migration test

In this test environment, the whole SharePoint farm was migrated from the production site to the DR site, at a simulated 2,000 km distance, by using Hyper-V live migration without loss of service. During the live migration process, the SharePoint farm was running with a full load. All SharePoint virtual machines were migrated in sequence from the production site to the DR site in the eight-node Hyper-V clusters. The 10 GbE connection was used for the live-migration network, as live migration requires high bandwidth.

Figure 23 shows the rate of passed tests per second during the live migration. When running live migration between the sites, the transactions per second fluctuated. The drop in the number of passed tests per second was due to the migration of the SQL Server virtual machine, as Hyper-V was transferring its memory across the sites. Because the live migration process affects the maximum user capacity of the entire SharePoint farm, we recommend performing a whole-farm live migration during non-peak hours. As shown in Figure 23, there was no loss of service for the whole SharePoint farm during the live migration.

Figure 23. Passed tests per second during live migration of the entire SharePoint farm across sites with a 2,000 km distance

Table 18 and Table 19 list the SharePoint farm performance results during the live migration across the sites with a 2,000 km distance. Running the entire SharePoint farm on the DR site decreased the number of passed tests per second because of the 20 ms latency between the clients and the SharePoint farm, including the web front-end servers.

Table 18. User activity performance including the live migration between sites with a 2,000 km distance

User activity | Acceptable response time | Response time
Browse/Search/Modify 80% / 10% / 10% | <3 / <3 / <5 sec | 2.90 / 1.73 / 1.49 sec

Table 19. Content mix performance during the live migration between sites with a 2,000 km distance

Content mix (Browse/Search/Modify) | Requests per second (RPS) | Concurrency | Maximum user capacity | Successful request rate
80% / 10% / 10% | | | |

Virtual machines with large memory configurations take longer to migrate than virtual machines with smaller memory configurations. This is because active memory is copied over the network to the receiving cluster node before migration completes. Table 20 lists the live migration durations, with and without latency, for the entire SharePoint farm. Note how a 2,000 km distance between the two data centers caused a longer cross-site live migration duration.

Table 20. Live migration duration of all SharePoint virtual machines

SharePoint farm server role | Duration without distance latency (mm:ss) | Duration with 2,000 km distance and 20 ms latency (mm:ss)
Application Excel | 0:28 | 1:14
Web Front End Server 1 | 0:56 | 1:25
Web Front End Server 2 | 0:44 | 1:35
Web Front End Server 3 | 0:38 | 1:52
SQL Server | 3:05 | 4:48
Crawler Server 1 | 1:06 | 1:45
Crawler Server 2 | 0:48 | 2:31
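Because live migration is dominated by copying active memory over the inter-site link, a first-order duration estimate is memory size divided by effective link bandwidth. The sketch below is illustrative only: the efficiency factor and the single-pass assumption are ours, and real migrations (such as the busy SQL Server VM above) run longer because dirtied pages must be re-copied and latency slows convergence.

```python
def estimate_migration_seconds(active_memory_gb: float,
                               link_gbps: float,
                               efficiency: float = 0.7) -> float:
    """First-order live-migration estimate: memory copied once over the
    link at an assumed effective efficiency. Ignores dirty-page re-copy
    rounds, which lengthen real migrations under load.
    """
    bits_to_copy = active_memory_gb * 8 * 2**30
    return bits_to_copy / (link_gbps * 1e9 * efficiency)

# Hypothetical: a VM with 16 GB of active memory on a 10 Gb/s link
print(round(estimate_migration_seconds(16, 10)))  # 20 -> ~20 s, best case
```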

SharePoint live migration with compression test

In this test, the entire SharePoint farm was migrated across the sites, at a 2,000 km distance, with the insertion of Silver Peak WAN optimization. Latency was set to 20 ms, equivalent to 2,000 km. During the live migration process, the SharePoint farm was running with a full load. All SharePoint virtual machines were migrated in sequence across sites in the eight-node Hyper-V clusters.

In this scenario, the 1 GbE connection with Silver Peak WAN optimization enabled was used for the live-migration network. The live migration duration with Silver Peak WAN compression was similar to that over the 10 GbE network connection. Table 21 details the migration duration for all the SharePoint virtual machines.

Table 21. WAN-optimized SharePoint live migration results

SharePoint farm server role | Duration with a 2,000 km distance and Silver Peak enabled (mm:ss)
Application Excel | 1:49
Web Front End Server 1 | 1:36
Web Front End Server 2 | 2:47
Web Front End Server 3 | 2:49
SQL Server | 6:32
Crawler Server 1 | 3:12
Crawler Server 2 | 3:00

SharePoint and Silver Peak WAN optimization

Figure 24 shows the data reduction ratio during full load on the SharePoint farm. Our testing showed that data reduction can reach up to 68.5 percent, including the live migration process across the sites.

Figure 24. Reduction of SharePoint traffic with Silver Peak

SAP

SAP overview

Large and midsize organizations deploy SAP ERP 6.0 EHP4 to meet their core business needs, such as financial analysis, human capital management, procurement and logistics, product development and manufacturing, and sales and service, supported by analytics, corporate services, and end-user service delivery.

EMC VPLEX Geo enables virtualized storage for applications to access LUNs between data center sites, and provides the ability to move virtual machines between data centers. This optimizes data center resources and results in zero downtime for data center relocation and server maintenance. Because SAP applications and modules can be distributed among several virtual servers (see Figure 25), and normal operations involve extensive communication between them, it is critical that communication is not disrupted when individual virtual machines are moved from site to site.

Figure 25. SAP environment

The rest of this section covers the following topics:

- SAP configuration overview
- Validation of the virtualized SAP environment

SAP configuration

SAP configuration overview

The SAP ERP system PRD was installed as a high-availability system with the International Demonstration and Education System (IDES) database and ABAP stack on Windows 2008 Enterprise SP2 and Microsoft SQL Server 2008 R2 Enterprise. IDES represents a model international company with subsidiaries in several countries. IDES contains application data for the various business scenarios that can be run in the system. The business processes in the IDES system are designed to reflect real-life business requirements and characteristics.

SAP design considerations

In this SAP ERP 6.0 EHP4 environment, the major configuration considerations include:

- SAP patches, parameters, basis settings, and load balancing, as well as Windows 2008 and Hyper-V, were all installed and configured according to SAP procedures and guidelines.
- SAP update processes (UPD/UP2) were configured on the application server instances.
- Some IDES functionality (for example, synchronization with the external GTS system) was deactivated to eliminate unnecessary external interfaces that were outside the scope of the test.
- The system was configured and customized to enable LoadRunner automated scripts to run business processes in the functional areas of Sales and Distribution (SD), Materials Management (MM), and Finance and Controlling (FI/CO). The Order to Cash (OTC) business scenario was used as an example in this use case.
- The storage for the entire SAP environment was encapsulated and virtualized in this test. The storage was active across the two sites and made available to the SAP servers through VPLEX Geo.

SAP virtual machine configurations

The sample SAP system PRD consists of one SAP database instance, one ABAP system central services (ASCS) instance, and two application server (AS) instances. All instances are installed on Hyper-V virtual machines with the configurations described in Table 22.

Table 22. SAP virtual machine resources

Server role | Quantity | vCPUs | Memory (GB) | OS boot disk (GB) | Additional disks (GB)
SAPERPDB | | | | |
SAPASCS | | | | |
SAPERPDI | | | | |

HP LoadRunner configuration

The HP LoadRunner application emulates concurrent users to apply production workloads on an application platform or environment. LoadRunner applies consistent, measurable, and repeatable loads to an application from end to end. The LoadRunner system consists of one LoadRunner controller and the associated virtual user generator in a virtual machine with the configuration listed in Table 23.

Table 23. LoadRunner virtual machine

Server role | vCPUs | Memory (GB)
Controller | 2 | 16

The parameters were configured according to best practices, including enabling IP spoofing, running virtual users as processes instead of threads, and setting think time to a limited value.

SAP ERP workload profile

In our testing, LoadRunner ran an order-to-cash (OTC) business process scenario to generate the application-specific workload. This process covers a sell-from-stock scenario, which includes the creation of a customer order with six line items and the corresponding delivery with subsequent goods movement and invoicing. Special pricing conditions were also used. The process consists of the following transactions:

1. Create an order with six line items (Transaction VA01).
2. Create a delivery for this order (VL01N).
3. Display the customer order (VA03).
4. Change the delivery (VL02N) and post goods issue.
5. Create an invoice (VF01).
6. Create an accounting document.

SAP environment validation

Test summary

The test objective was to validate the non-disruptive movement of the SAP database, central services, and application server instance virtual machines across data centers, enabled by Microsoft Hyper-V live migration and EMC VPLEX Geo.

During validation, 100 sales orders were created in the PRD system. The purpose of this scenario was to maintain active connections between the user GUI, the database instance, and the application server instances during a Hyper-V live migration, so that business continuity and a federated solution landscape under live migration could be verified by a successful SAP sales order creation process.

SAP test methodology

Table 24 describes the SAP test scenarios.

Table 24. SAP validation test scenarios

- Baseline: 100 sales orders were created on the SAP ERP 6.0 EHP4 system while the SAP database, central services, and application server instances resided in Data Center 1.
- Live migration: 100 sales orders were initiated on the SAP ERP 6.0 EHP4 system while the SAP database, central services, and application server instances resided in Data Center 1. During the sales order creation process, live migrations were conducted to move the SAP VMs from Data Center 1 to Data Center 2.
- Live migration with WAN optimization (compression): 100 sales orders were initiated on the SAP ERP 6.0 EHP4 system while the SAP database, central services, and application server instances resided in Data Center 1. During the sales order creation process, live migrations were conducted to move the SAP VMs from Data Center 1 to Data Center 2. The live migrations were conducted using Silver Peak WAN optimization.
We used the following SAP key performance indicators to evaluate functionality and throughput during the tests:

- Business volume (number of SAP business documents processed)
- SAP average response time for the dialog work process

Other statistics were collected at the Windows OS level from Microsoft SCVMM.

SAP load generation

The SAP ERP 6.0 EHP4 system used to validate this solution was a standard IDES system with a custom configuration and additional master data and transactional data. The database size was 511 GB and the SAP SID was PRD.

The LoadRunner controller ramped up one virtual user every 20 seconds until the number of virtual users reached 10 concurrent and active virtual users. All virtual users generated system workload activity during the entire testing period. All virtual users connected to PRD through the predefined logon group in order to distribute the workload evenly across both SAP application server instances.

SAP test procedure

Table 25 lists the test procedure steps for each phase of testing.

Table 25. SAP test procedure

1. Count existing SAP sales order documents.
2. Reset the LoadRunner environment.
3. Start OS performance collection.
4. Run the LoadRunner scenario.
5. Stop OS performance collection.
6. Count existing SAP sales order documents.
7. Collect performance metrics from SAP and Microsoft SCVMM.
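The ramp-up policy above (one new virtual user every 20 seconds until 10 are active) can be expressed as a simple schedule; this sketch just makes the timing explicit:

```python
RAMP_INTERVAL_S = 20   # one new virtual user every 20 seconds
TARGET_VUSERS = 10     # concurrent, active virtual users

# Start offset (seconds from test start) for each virtual user
start_times = [i * RAMP_INTERVAL_S for i in range(TARGET_VUSERS)]
print(start_times[-1])  # 180 -> full load is reached three minutes in
```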

SAP test results

Our results showed that the SAP sales order creation process was not interrupted during the test. The virtual users experienced longer response times during the live migration, but performance soon returned to previous levels once the live migration completed. Table 26 lists the test results.

Table 26. SAP test results

Metric | Baseline | Live migration (no Silver Peak) | Live migration (with Silver Peak)
Number of sales orders | 100 | 100 | 100
Total dialog steps | 2,343 | 2,342 | 2,341
DIA avg. resp. time (ms) | | 1,360 | 1,450
CPU time % | | |
DB time % | | |
LoadRunner duration (mm:ss) | 12:40 | 15:12 | 15:34

Live migration times in sequence (mm:ss):
SAPERPDB | - | 4:49 | 4:40
SAPERPENQ1 | - | 1:08 | 1:13
SAPERPDI1 | - | 2:36 | 3:01
SAPERPDI2 | - | 2:37 | 3:07
Total migration duration | - | 11:10 | 12:01

Figure 26 compares the metrics of the three test scenarios described in Table 26.

Figure 26. SAP test results compared
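The per-instance migration times in Table 26 sum to the totals shown. A small helper for the mm:ss arithmetic makes the check easy to repeat:

```python
def to_seconds(mmss: str) -> int:
    """Parse a 'mm:ss' duration into seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

def total(times) -> str:
    """Sum a list of 'mm:ss' durations and format the result as 'mm:ss'."""
    t = sum(to_seconds(x) for x in times)
    return f"{t // 60}:{t % 60:02d}"

# Per-VM live migration times without Silver Peak (Table 26)
print(total(["4:49", "1:08", "2:36", "2:37"]))  # 11:10
# ...and with Silver Peak WAN optimization
print(total(["4:40", "1:13", "3:01", "3:07"]))  # 12:01
```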

Figure 27 compares the migration sequence times for the two live migration scenarios (without and with compression), as described in Table 26.

Figure 27. SAP live migration times compared

Figure 28 shows the virtual user response time from the LoadRunner controller during the test.

Figure 28. LoadRunner virtual user response time

SAP and Silver Peak WAN optimization

The test results showed that the Silver Peak WAN optimization appliance was able to compress, on average, 33 percent of the outgoing traffic and 39 percent of the incoming traffic during the testing period. Figure 29 compares the amount of traffic before and after Silver Peak compression.

Figure 29. Reduction of SAP traffic with Silver Peak

Oracle

Oracle overview

Oracle Database 11g Enterprise Edition delivers industry-leading performance, scalability, security, and reliability on a choice of clustered or single servers running Windows, Linux, and UNIX. It provides comprehensive features to easily manage the most demanding transaction processing, business intelligence, and content management applications.

In this solution, SwingBench was used to exercise the Oracle database. SwingBench is a load generator and benchmark tool designed to test Oracle databases. The SwingBench Order Entry - PL/SQL (SOE) workload models a TPC-C-like OLTP order entry workload.

Note: If you are considering implementing Oracle Database in a Hyper-V environment, refer to the Oracle support website, My Oracle Support, and the document Certification Information for Oracle Database on Microsoft Windows (64-bit) [ID ].

Oracle configuration

Oracle configuration overview

In this Oracle environment, the major configuration highlights include:

- A 200 GB OLTP Oracle Database 11g
- The Oracle Database 11g was running in archivelog mode with Flashback enabled

Oracle virtual machine configuration

Table 27 describes the virtual machine configuration for the Oracle environment.

Table 27. Oracle virtual machines

Component | Description
Operating system | Red Hat Enterprise Linux 5 (64-bit) release 5.5
Kernel | el5 #1 SMP
CPU | 4 vCPUs
Memory | 24 GB
Oracle database version | Oracle Database 11g Enterprise Edition Release, 64-bit

Oracle database configuration and resources

This section describes the Oracle 11g database configuration. Table 28 describes the key configuration parameters for the database.

Table 28. Key database parameters

Instance parameter | Value
db_name | VPGEO
db_block_size | 8192
log_buffer |
memory_max_target |
memory_target |
sort_area_size |

Table 29 describes the sizing allocation and usage of the database tablespaces.

Table 29. Database tablespaces

Tablespace | Size (MB) | Used (MB) | Free (MB)
SOE_DATA | | |
SOE_INDEX | | |
SYSAUX | | |
SYSTEM | | |
TEMP | | |
UNDOTBS | | |
USERS | | |

SwingBench utility configuration

The SwingBench SOE database schema models a traditional OLTP database. Tables and indexes reside in separate tablespaces and are shown in Table 30.

Table 30. Schema tables and indexes

Table name | Indexes
CUSTOMERS | CUSTOMERS_PK (UNIQUE), CUST_ACCOUNT_MANAGER_IX, CUST_ _IX, CUST_LNAME_IX, CUST_UPPER_NAME_IX
INVENTORIES | INVENTORY_PK (UNIQUE), INV_PRODUCT_IX, INV_WAREHOUSE_IX
ORDERS | ORDER_PK (UNIQUE), ORD_CUSTOMER_IX, ORD_ORDER_DATE_IX, ORD_SALES_REP_IX, ORD_STATUS_IX
ORDER_ITEMS | ORDER_ITEMS_PK (UNIQUE), ITEM_ORDER_IX, ITEM_PRODUCT_IX
PRODUCT_DESCRIPTIONS | PRD_DESC_PK (UNIQUE), PROD_NAME_IX
PRODUCT_INFORMATION | PRODUCT_INFORMATION_PK (UNIQUE), PROD_SUPPLIER_IX
WAREHOUSES | WAREHOUSES_PK (UNIQUE)
LOGON | n/a

Table 31 shows the table and index sizes for the SOE schema.

Table 31. Table and index sizes

Table name | Table size (MB) | Index size (MB)
ORDER_ITEMS | |
CUSTOMERS | |
ORDERS | |
LOGON | |
INVENTORIES | |
PRODUCT_DESCRIPTIONS | |
PRODUCT_INFORMATION | |
WAREHOUSES | |

Oracle environment validation

Test summary

Our testing validated the VPLEX Geo configuration and the use of live migration for non-disruptive movement of the virtual machine across data centers. There was testing at each stage of the solution build:

- Baseline tests were performed on the Oracle virtual machine prior to encapsulation of the storage into the VPLEX Geo cluster.
- Encapsulated tests were performed on the Oracle virtual machine after the encapsulation of storage into the VPLEX Geo cluster.
- Distance simulation tests were performed on the Oracle virtual machine after the encapsulation of storage into the VPLEX Geo cluster. Latency was set at 20 ms, equivalent to 2,000 km.
- Distance simulation tests were performed on the Oracle virtual machine after the encapsulation of storage into the VPLEX Geo cluster and the insertion of Silver Peak compression. Latency was set at 20 ms, equivalent to 2,000 km.

At each stage, an availability test using a SwingBench Order Entry - PL/SQL (SOE) workload of 20 users was run against the Oracle 11g database, with and without live migration, and the results were compared. After each test run the database was flashed back to a common restore point to ensure consistency.

Oracle baseline test

A baseline SwingBench Order Entry - PL/SQL (SOE) workload test for 20 virtual users was run. This produced an average of 240 transactions per minute over the hour of the baseline test, with an average response time of 15.6 ms per transaction, as shown in Table 32 and Figure 30.

Table 32. Oracle baseline test results

SwingBench transaction | Average response | Number of transactions
Customer Registration | 16 ms | 2316
Browse Products | 5 ms | 4918
Order Products | 30 ms | 4009
Process Orders | 24 ms | 2395
Browse Orders | 3 ms |
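Table 32 omits the Browse Orders transaction count, but it can be estimated from the other rows and the reported hourly average of 240 transactions per minute. The estimate below is approximate, since 240 TPM is a rounded figure:

```python
avg_tpm = 240          # reported average over the one-hour baseline run
known_counts = {
    "Customer Registration": 2316,
    "Browse Products": 4918,
    "Order Products": 4009,
    "Process Orders": 2395,
}
# Total transactions in the hour minus the four known rows
implied_browse_orders = avg_tpm * 60 - sum(known_counts.values())
print(implied_browse_orders)  # 762 (approximate, given the rounded TPM)
```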

Figure 30. TPM output from SwingBench for the Oracle baseline test

Oracle encapsulated test

After encapsulating the volumes into VPLEX Geo, the SwingBench Order Entry - PL/SQL (SOE) workload test for 20 virtual users was repeated. This produced an average of 236 transactions per minute over the hour of the post-encapsulation test, with an average response time of 7.4 ms per transaction, as shown in Table 33 and Figure 31.

Table 33. Oracle encapsulated test results

SwingBench transaction | Average response | Number of transactions
Customer Registration | 4 ms | 2358
Browse Products | 1 ms | 4730
Order Products | 1 ms | 4008
Process Orders | 22 ms | 2307
Browse Orders | 9 ms |

Figure 31. Oracle encapsulated test results

Oracle distance simulation test

After encapsulation testing was complete, a distance emulator was employed to insert a latency of 20 ms between sites. The SwingBench Order Entry - PL/SQL (SOE) workload test for 20 virtual users was repeated. This produced an average of 263 transactions per minute over the hour of the distance simulation test, with an average response time of 46 ms per transaction, as shown in Table 34 and Figure 32.

Table 34. Oracle distance simulation test results

SwingBench transaction | Average response | Number of transactions
Customer Registration | 27 ms | 2630
Browse Products | 56 ms | 5251
Order Products | 83 ms | 4495
Process Orders | 38 ms | 2616
Browse Orders | 30 ms | 842

Figure 32. Oracle distance simulation test results

Oracle distance simulation with compression test

For this test, Silver Peak compression was enabled, along with the distance emulator configured for a latency of 20 ms between sites. The SwingBench Order Entry - PL/SQL (SOE) workload test for 20 virtual users was repeated. This produced an average of 251 transactions per minute over the hour of the distance simulation with compression test, with an average response time of 136 ms per transaction, as shown in Table 35 and Figure 33.

Table 35. Oracle distance simulation with compression test results

SwingBench transaction | Average response | Number of transactions
Customer Registration | 60 ms | 2486
Browse Products | 49 ms | 5101
Order Products | 187 ms | 4116
Process Orders | 163 ms | 2551
Browse Orders | 221 ms | 821

Figure 33. Oracle distance simulation with compression test results

Oracle live migration test

After starting the SwingBench Order Entry - PL/SQL (SOE) workload test for 20 virtual users, the Oracle virtual machine was migrated from Site A to Site B. Table 36 and Figure 34 show the migration test results and the effects of distance on the migration times.

Table 36. Oracle live migration test results

Stage | Live migration (m:ss) | Average TPM
Baseline test | 5: |
Encapsulated test | 5: |
Distance simulation test | 6: |
Distance simulation with compression enabled | 7: |

The database remained available throughout the live migration. There was a temporary dip in the transaction rate as the virtual machine completed its migration, but transactions soon returned to their previous level.

Figure 34. Oracle live migration test results

Oracle and Silver Peak WAN optimization

Figure 35 compares the bandwidth use for Oracle with and without Silver Peak WAN optimization. Our testing showed that traffic reduction reached 75 percent on Site B and 65 percent on Site A during live migration.

Figure 35. Reduction of Oracle traffic with Silver Peak
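A traffic-reduction percentage translates directly into an effective-bandwidth multiplier for the link, which is why a 1 GbE link with WAN optimization can approach the behavior of a much faster connection. A minimal sketch of that relationship:

```python
def effective_multiplier(reduction: float) -> float:
    """If a fraction `reduction` of traffic is removed before crossing the
    WAN, the link carries 1 / (1 - reduction) times as much application data.
    """
    return 1.0 / (1.0 - reduction)

print(effective_multiplier(0.75))  # 4.0 -> Site B's 75% reduction quadruples effective capacity
print(effective_multiplier(0.65))  # ~2.86x for Site A's 65% reduction
```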


More information

EMC Business Continuity for Microsoft SharePoint Server (MOSS 2007)

EMC Business Continuity for Microsoft SharePoint Server (MOSS 2007) EMC Business Continuity for Microsoft SharePoint Server (MOSS 2007) Enabled by EMC Symmetrix DMX-4 4500 and EMC Symmetrix Remote Data Facility (SRDF) Reference Architecture EMC Global Solutions 42 South

More information

EMC VPLEX Metro with HP Serviceguard A11.20

EMC VPLEX Metro with HP Serviceguard A11.20 White Paper EMC VPLEX Metro with HP Serviceguard A11.20 Abstract This white paper describes the implementation of HP Serviceguard using EMC VPLEX Metro configuration. October 2013 Table of Contents Executive

More information

Verron Martina vspecialist. Copyright 2012 EMC Corporation. All rights reserved.

Verron Martina vspecialist. Copyright 2012 EMC Corporation. All rights reserved. Verron Martina vspecialist 1 TRANSFORMING MISSION CRITICAL APPLICATIONS 2 Application Environments Historically Physical Infrastructure Limits Application Value Challenges Different Environments Limits

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V EMC VSPEX Abstract This describes the steps required to deploy a Microsoft Exchange Server 2013 solution on

More information

Virtualizing SQL Server 2008 Using EMC VNX Series and VMware vsphere 4.1. Reference Architecture

Virtualizing SQL Server 2008 Using EMC VNX Series and VMware vsphere 4.1. Reference Architecture Virtualizing SQL Server 2008 Using EMC VNX Series and VMware vsphere 4.1 Copyright 2011, 2012 EMC Corporation. All rights reserved. Published March, 2012 EMC believes the information in this publication

More information

Implementing SharePoint Server 2010 on Dell vstart Solution

Implementing SharePoint Server 2010 on Dell vstart Solution Implementing SharePoint Server 2010 on Dell vstart Solution A Reference Architecture for a 3500 concurrent users SharePoint Server 2010 farm on vstart 100 Hyper-V Solution. Dell Global Solutions Engineering

More information

EMC Integrated Infrastructure for VMware. Business Continuity

EMC Integrated Infrastructure for VMware. Business Continuity EMC Integrated Infrastructure for VMware Business Continuity Enabled by EMC Celerra and VMware vcenter Site Recovery Manager Reference Architecture Copyright 2009 EMC Corporation. All rights reserved.

More information

EMC XTREMCACHE ACCELERATES MICROSOFT SQL SERVER

EMC XTREMCACHE ACCELERATES MICROSOFT SQL SERVER White Paper EMC XTREMCACHE ACCELERATES MICROSOFT SQL SERVER EMC XtremSF, EMC XtremCache, EMC VNX, Microsoft SQL Server 2008 XtremCache dramatically improves SQL performance VNX protects data EMC Solutions

More information

EMC Backup and Recovery for Microsoft Exchange 2007

EMC Backup and Recovery for Microsoft Exchange 2007 EMC Backup and Recovery for Microsoft Exchange 2007 Enabled by EMC CLARiiON CX4-120, Replication Manager, and Hyper-V on Windows Server 2008 using iscsi Reference Architecture Copyright 2009 EMC Corporation.

More information

EMC Information Infrastructure Solutions

EMC Information Infrastructure Solutions EMC Tiered Storage for Microsoft Office SharePoint Server 2007 BLOB Externalization Enabled by EMC CLARiiON, EMC Atmos, Microsoft Hyper-V, and Metalogix StoragePoint Applied Technology EMC Information

More information

Vblock Architecture. Andrew Smallridge DC Technology Solutions Architect

Vblock Architecture. Andrew Smallridge DC Technology Solutions Architect Vblock Architecture Andrew Smallridge DC Technology Solutions Architect asmallri@cisco.com Vblock Design Governance It s an architecture! Requirements: Pretested Fully Integrated Ready to Go Ready to Grow

More information

EMC DATA PROTECTION, FAILOVER AND FAILBACK, AND RESOURCE REPURPOSING IN A PHYSICAL SECURITY ENVIRONMENT

EMC DATA PROTECTION, FAILOVER AND FAILBACK, AND RESOURCE REPURPOSING IN A PHYSICAL SECURITY ENVIRONMENT White Paper EMC DATA PROTECTION, FAILOVER AND FAILBACK, AND RESOURCE REPURPOSING IN A PHYSICAL SECURITY ENVIRONMENT Genetec Omnicast, EMC VPLEX, Symmetrix VMAX, CLARiiON Provide seamless local or metropolitan

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 TRANSFORMING MISSION CRITICAL APPLICATIONS 2 Application Environments Historically Physical Infrastructure Limits Application Value Challenges Different Environments Limits On Performance Underutilized

More information

EMC Business Continuity for Microsoft Office SharePoint Server 2007

EMC Business Continuity for Microsoft Office SharePoint Server 2007 EMC Business Continuity for Microsoft Office SharePoint Server 27 Enabled by EMC CLARiiON CX4, EMC RecoverPoint/Cluster Enabler, and Microsoft Hyper-V Proven Solution Guide Copyright 21 EMC Corporation.

More information

Dell EMC. VxBlock Systems for VMware NSX 6.2 Architecture Overview

Dell EMC. VxBlock Systems for VMware NSX 6.2 Architecture Overview Dell EMC VxBlock Systems for VMware NSX 6.2 Architecture Overview Document revision 1.6 December 2018 Revision history Date Document revision Description of changes December 2018 1.6 Remove note about

More information

EMC Virtual Architecture for Microsoft SharePoint Server Reference Architecture

EMC Virtual Architecture for Microsoft SharePoint Server Reference Architecture EMC Virtual Architecture for Microsoft SharePoint Server 2007 Enabled by EMC CLARiiON CX3-40, VMware ESX Server 3.5 and Microsoft SQL Server 2005 Reference Architecture EMC Global Solutions Operations

More information

EMC STORAGE FOR MILESTONE XPROTECT CORPORATE

EMC STORAGE FOR MILESTONE XPROTECT CORPORATE Reference Architecture EMC STORAGE FOR MILESTONE XPROTECT CORPORATE Milestone multitier video surveillance storage architectures Design guidelines for Live Database and Archive Database video storage EMC

More information

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange Enabled by MirrorView/S

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange Enabled by MirrorView/S Enterprise Solutions for Microsoft Exchange 2007 EMC CLARiiON CX3-40 Metropolitan Exchange Recovery (MER) for Exchange in a VMware Environment Enabled by MirrorView/S Reference Architecture EMC Global

More information

Virtualized SQL Server Performance and Scaling on Dell EMC XC Series Web-Scale Hyper-converged Appliances Powered by Nutanix Software

Virtualized SQL Server Performance and Scaling on Dell EMC XC Series Web-Scale Hyper-converged Appliances Powered by Nutanix Software Virtualized SQL Server Performance and Scaling on Dell EMC XC Series Web-Scale Hyper-converged Appliances Powered by Nutanix Software Dell EMC Engineering January 2017 A Dell EMC Technical White Paper

More information

Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini

Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini White Paper Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini June 2016 2016 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 9 Contents

More information

SAP High-Performance Analytic Appliance on the Cisco Unified Computing System

SAP High-Performance Analytic Appliance on the Cisco Unified Computing System Solution Overview SAP High-Performance Analytic Appliance on the Cisco Unified Computing System What You Will Learn The SAP High-Performance Analytic Appliance (HANA) is a new non-intrusive hardware and

More information

EMC Backup and Recovery for Microsoft Exchange 2007 SP1. Enabled by EMC CLARiiON CX4-120, Replication Manager, and VMware ESX Server 3.

EMC Backup and Recovery for Microsoft Exchange 2007 SP1. Enabled by EMC CLARiiON CX4-120, Replication Manager, and VMware ESX Server 3. EMC Backup and Recovery for Microsoft Exchange 2007 SP1 Enabled by EMC CLARiiON CX4-120, Replication Manager, and VMware ESX Server 3.5 using iscsi Reference Architecture Copyright 2009 EMC Corporation.

More information

EMC CLARiiON CX3-80. Enterprise Solutions for Microsoft SQL Server 2005

EMC CLARiiON CX3-80. Enterprise Solutions for Microsoft SQL Server 2005 Enterprise Solutions for Microsoft SQL Server 2005 EMC CLARiiON CX3-80 EMC Long Distance Recovery for SQL Server 2005 Enabled by Replication Manager and RecoverPoint CRR Reference Architecture EMC Global

More information

EMC Backup and Recovery for Microsoft SQL Server

EMC Backup and Recovery for Microsoft SQL Server EMC Backup and Recovery for Microsoft SQL Server Enabled by Microsoft SQL Native Backup Reference Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information

More information

EMC CLARiiON CX3 Series FCP

EMC CLARiiON CX3 Series FCP EMC Solutions for Microsoft SQL Server 2005 on Windows 2008 EMC CLARiiON CX3 Series FCP EMC Global Solutions 42 South Street Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com www.emc.com Copyright 2008

More information

Reasons to Deploy Oracle on EMC Symmetrix VMAX

Reasons to Deploy Oracle on EMC Symmetrix VMAX Enterprises are under growing urgency to optimize the efficiency of their Oracle databases. IT decision-makers and business leaders are constantly pushing the boundaries of their infrastructures and applications

More information

Vendor: EMC. Exam Code: E Exam Name: Cloud Infrastructure and Services Exam. Version: Demo

Vendor: EMC. Exam Code: E Exam Name: Cloud Infrastructure and Services Exam. Version: Demo Vendor: EMC Exam Code: E20-002 Exam Name: Cloud Infrastructure and Services Exam Version: Demo QUESTION NO: 1 In which Cloud deployment model would an organization see operational expenditures grow in

More information

Microsoft SharePoint Server 2010 on Dell Systems

Microsoft SharePoint Server 2010 on Dell Systems Microsoft SharePoint Server 2010 on Dell Systems Solutions for up to 10,000 users This document is for informational purposes only. Dell reserves the right to make changes without further notice to any

More information

Dell EMC. VxBlock Systems for VMware NSX 6.3 Architecture Overview

Dell EMC. VxBlock Systems for VMware NSX 6.3 Architecture Overview Dell EMC VxBlock Systems for VMware NSX 6.3 Architecture Overview Document revision 1.1 March 2018 Revision history Date Document revision Description of changes March 2018 1.1 Updated the graphic in Logical

More information

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange 2007

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange 2007 Enterprise Solutions for Microsoft Exchange 2007 EMC CLARiiON CX3-40 Metropolitan Exchange Recovery (MER) for Exchange Server Enabled by MirrorView/S and Replication Manager Reference Architecture EMC

More information

Virtualization of the MS Exchange Server Environment

Virtualization of the MS Exchange Server Environment MS Exchange Server Acceleration Maximizing Users in a Virtualized Environment with Flash-Powered Consolidation Allon Cohen, PhD OCZ Technology Group Introduction Microsoft (MS) Exchange Server is one of

More information

Surveillance Dell EMC Storage with FLIR Latitude

Surveillance Dell EMC Storage with FLIR Latitude Surveillance Dell EMC Storage with FLIR Latitude Configuration Guide H15106 REV 1.1 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published June 2016 Dell believes the information

More information

Surveillance Dell EMC Storage with Digifort Enterprise

Surveillance Dell EMC Storage with Digifort Enterprise Surveillance Dell EMC Storage with Digifort Enterprise Configuration Guide H15230 REV 1.1 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published August 2016 Dell believes the

More information

EMC XTREMCACHE ACCELERATES ORACLE

EMC XTREMCACHE ACCELERATES ORACLE White Paper EMC XTREMCACHE ACCELERATES ORACLE EMC XtremSF, EMC XtremCache, EMC VNX, EMC FAST Suite, Oracle Database 11g XtremCache extends flash to the server FAST Suite automates storage placement in

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7 and Microsoft Hyper-V for up to 2,000 Virtual Desktops Enabled by EMC Next-Generation VNX and EMC Powered Backup EMC VSPEX Abstract

More information

EMC Virtual Infrastructure for Microsoft Exchange 2010 Enabled by EMC Symmetrix VMAX, VMware vsphere 4, and Replication Manager

EMC Virtual Infrastructure for Microsoft Exchange 2010 Enabled by EMC Symmetrix VMAX, VMware vsphere 4, and Replication Manager EMC Virtual Infrastructure for Microsoft Exchange 2010 Enabled by EMC Symmetrix VMAX, VMware vsphere 4, and Replication Manager Reference Architecture Copyright 2010 EMC Corporation. All rights reserved.

More information

Externalizing Large SharePoint 2010 Objects with EMC VNX Series and Metalogix StoragePoint. Proven Solution Guide

Externalizing Large SharePoint 2010 Objects with EMC VNX Series and Metalogix StoragePoint. Proven Solution Guide Externalizing Large SharePoint 2010 Objects with EMC VNX Series and Metalogix StoragePoint Copyright 2011 EMC Corporation. All rights reserved. Published March, 2011 EMC believes the information in this

More information

Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini

Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini White Paper Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini February 2015 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 9 Contents

More information

Data Center Interconnect Solution Overview

Data Center Interconnect Solution Overview CHAPTER 2 The term DCI (Data Center Interconnect) is relevant in all scenarios where different levels of connectivity are required between two or more data center locations in order to provide flexibility

More information

A High-Performance Storage and Ultra- High-Speed File Transfer Solution for Collaborative Life Sciences Research

A High-Performance Storage and Ultra- High-Speed File Transfer Solution for Collaborative Life Sciences Research A High-Performance Storage and Ultra- High-Speed File Transfer Solution for Collaborative Life Sciences Research Storage Platforms with Aspera Overview A growing number of organizations with data-intensive

More information

Reference Architecture - Microsoft SharePoint Server 2013 on Dell PowerEdge R630

Reference Architecture - Microsoft SharePoint Server 2013 on Dell PowerEdge R630 Reference Architecture - Microsoft SharePoint Server 2013 on Dell PowerEdge R630 A Dell reference architecture for 5000 Users Dell Global Solutions Engineering June 2015 A Dell Reference Architecture THIS

More information

Reference Architecture

Reference Architecture EMC Solutions for Microsoft SQL Server 2005 on Windows 2008 in VMware ESX Server EMC CLARiiON CX3 Series FCP EMC Global Solutions 42 South Street Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com www.emc.com

More information

EMC Solutions for Enterprises. EMC Tiered Storage for Oracle. ILM Enabled by EMC Symmetrix V-Max. Reference Architecture. EMC Global Solutions

EMC Solutions for Enterprises. EMC Tiered Storage for Oracle. ILM Enabled by EMC Symmetrix V-Max. Reference Architecture. EMC Global Solutions EMC Solutions for Enterprises EMC Tiered Storage for Oracle ILM Enabled by EMC Symmetrix V-Max Reference Architecture EMC Global Solutions Copyright and Trademark Information Copyright 2009 EMC Corporation.

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING VMware Horizon View 6.0 and VMware vsphere for up to 500 Virtual Desktops Enabled by EMC VNXe3200 and EMC Data Protection EMC VSPEX Abstract This describes

More information

EMC CLARiiON CX3-80 EMC Metropolitan Recovery for SQL Server 2005 Enabled by Replication Manager and MirrorView/S

EMC CLARiiON CX3-80 EMC Metropolitan Recovery for SQL Server 2005 Enabled by Replication Manager and MirrorView/S Enterprise Solutions for Microsoft SQL Server 2005 EMC CLARiiON CX3-80 EMC Metropolitan Recovery for SQL Server 2005 Enabled by Replication Manager and MirrorView/S Reference Architecture EMC Global Solutions

More information

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE Design Guide APRIL 0 The information in this publication is provided as is. Dell Inc. makes no representations or warranties

More information

Surveillance Dell EMC Storage with Milestone XProtect Corporate

Surveillance Dell EMC Storage with Milestone XProtect Corporate Surveillance Dell EMC Storage with Milestone XProtect Corporate Sizing Guide H14502 REV 1.5 Copyright 2014-2018 Dell Inc. or its subsidiaries. All rights reserved. Published January 2018 Dell believes

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 2 AccessAnywhere TM ProtectEverywhere TM Application Availability and Recovery in Distributed Datacenter Environments Horia Constantinescu Sales Territory Manager, EMEA EMC RecoverPoint EMC VPLEX T:

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH HYPER-V

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH HYPER-V IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH HYPER-V EMC VSPEX Abstract This describes, at a high level, the steps required to deploy a Microsoft Exchange 2013 organization

More information

High performance and functionality

High performance and functionality IBM Storwize V7000F High-performance, highly functional, cost-effective all-flash storage Highlights Deploys all-flash performance with market-leading functionality Helps lower storage costs with data

More information

EMC RECOVERPOINT: ADDING APPLICATION RECOVERY TO VPLEX LOCAL AND METRO

EMC RECOVERPOINT: ADDING APPLICATION RECOVERY TO VPLEX LOCAL AND METRO White Paper EMC RECOVERPOINT: ADDING APPLICATION RECOVERY TO VPLEX LOCAL AND METRO Abstract This white paper discusses EMC RecoverPoint local, remote, and Concurrent (local and remote) data protection

More information

EMC VPLEX VIRTUAL EDITION: USE CASES AND PERFORMANCE PLANNING

EMC VPLEX VIRTUAL EDITION: USE CASES AND PERFORMANCE PLANNING White Paper EMC VPLEX VIRTUAL EDITION: USE CASES AND PERFORMANCE PLANNING Abstract This white paper provides an overview of VPLEX/VE use cases and performance characteristics Copyright 2014 EMC Corporation.

More information

HCI: Hyper-Converged Infrastructure

HCI: Hyper-Converged Infrastructure Key Benefits: Innovative IT solution for high performance, simplicity and low cost Complete solution for IT workloads: compute, storage and networking in a single appliance High performance enabled by

More information

EMC VPLEX WITH SUSE HIGH AVAILABILITY EXTENSION BEST PRACTICES PLANNING

EMC VPLEX WITH SUSE HIGH AVAILABILITY EXTENSION BEST PRACTICES PLANNING EMC VPLEX WITH SUSE HIGH AVAILABILITY EXTENSION BEST PRACTICES PLANNING ABSTRACT This White Paper provides a best practice to install and configure SUSE SLES High Availability Extension (HAE) with EMC

More information

VMAX3 AND VMAX ALL FLASH WITH CLOUDARRAY

VMAX3 AND VMAX ALL FLASH WITH CLOUDARRAY VMAX3 AND VMAX ALL FLASH WITH CLOUDARRAY HYPERMAX OS Integration with CloudArray ABSTRACT With organizations around the world facing compliance regulations, an increase in data, and a decrease in IT spending,

More information

Using EMC FAST with SAP on EMC Unified Storage

Using EMC FAST with SAP on EMC Unified Storage Using EMC FAST with SAP on EMC Unified Storage Applied Technology Abstract This white paper examines the performance considerations of placing SAP applications on FAST-enabled EMC unified storage. It also

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.1 and VMware vsphere for up to 500 Virtual Desktops Enabled by EMC VNXe3200 and EMC Powered Backup EMC VSPEX Abstract This describes

More information

vsan Mixed Workloads First Published On: Last Updated On:

vsan Mixed Workloads First Published On: Last Updated On: First Published On: 03-05-2018 Last Updated On: 03-05-2018 1 1. Mixed Workloads on HCI 1.1.Solution Overview Table of Contents 2 1. Mixed Workloads on HCI 3 1.1 Solution Overview Eliminate the Complexity

More information

EMC Unified Storage for Oracle Database 11g/10g Virtualized Solution. Enabled by EMC Celerra and Linux using FCP and NFS. Reference Architecture

EMC Unified Storage for Oracle Database 11g/10g Virtualized Solution. Enabled by EMC Celerra and Linux using FCP and NFS. Reference Architecture EMC Unified Storage for Oracle Database 11g/10g Virtualized Solution Enabled by EMC Celerra and Linux using FCP and NFS Reference Architecture Copyright 2009 EMC Corporation. All rights reserved. Published

More information

EMC VSPEX SERVER VIRTUALIZATION SOLUTION

EMC VSPEX SERVER VIRTUALIZATION SOLUTION Reference Architecture EMC VSPEX SERVER VIRTUALIZATION SOLUTION VMware vsphere 5 for 100 Virtual Machines Enabled by VMware vsphere 5, EMC VNXe3300, and EMC Next-Generation Backup EMC VSPEX April 2012

More information

EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE

EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE Applied Technology Abstract This white paper is an overview of the tested features and performance enhancing technologies of EMC PowerPath

More information

Upgrade to Microsoft SQL Server 2016 with Dell EMC Infrastructure

Upgrade to Microsoft SQL Server 2016 with Dell EMC Infrastructure Upgrade to Microsoft SQL Server 2016 with Dell EMC Infrastructure Generational Comparison Study of Microsoft SQL Server Dell Engineering February 2017 Revisions Date Description February 2017 Version 1.0

More information

Dell EMC SAP HANA Appliance Backup and Restore Performance with Dell EMC Data Domain

Dell EMC SAP HANA Appliance Backup and Restore Performance with Dell EMC Data Domain Dell EMC SAP HANA Appliance Backup and Restore Performance with Dell EMC Data Domain Performance testing results using Dell EMC Data Domain DD6300 and Data Domain Boost for Enterprise Applications July

More information

Solutions for Demanding Business

Solutions for Demanding Business Solutions for Demanding Business Asseco Driving Competitive Advantage with VPLEX for VMware Availability in Volksbank EMC FORUM Bucharest, 16.10.2014 2 Agenda About VOLSKBANK About Asseco SEE Challenging

More information

EMC Celerra Replicator V2 with Silver Peak WAN Optimization

EMC Celerra Replicator V2 with Silver Peak WAN Optimization EMC Celerra Replicator V2 with Silver Peak WAN Optimization Applied Technology Abstract This white paper discusses the interoperability and performance of EMC Celerra Replicator V2 with Silver Peak s WAN

More information

Copyright 2015 EMC Corporation. All rights reserved. Published in the USA.

Copyright 2015 EMC Corporation. All rights reserved. Published in the USA. This solution guide describes the disaster recovery modular add-on to the Federation Enterprise Hybrid Cloud Foundation solution for SAP. It introduces the solution architecture and features that ensure

More information

Video Surveillance EMC Storage with Digifort Enterprise

Video Surveillance EMC Storage with Digifort Enterprise Video Surveillance EMC Storage with Digifort Enterprise Sizing Guide H15229 01 Copyright 2016 EMC Corporation. All rights reserved. Published in the USA. Published August 2016 EMC believes the information

More information

Microsoft SharePoint Server 2010 Implementation on Dell Active System 800v

Microsoft SharePoint Server 2010 Implementation on Dell Active System 800v Microsoft SharePoint Server 2010 Implementation on Dell Active System 800v A Design and Implementation Guide for SharePoint Server 2010 Collaboration Profile on Active System 800 with VMware vsphere Dell

More information

Video Surveillance EMC Storage with Godrej IQ Vision Ultimate

Video Surveillance EMC Storage with Godrej IQ Vision Ultimate Video Surveillance EMC Storage with Godrej IQ Vision Ultimate Sizing Guide H15052 01 Copyright 2016 EMC Corporation. All rights reserved. Published in the USA. Published May 2016 EMC believes the information

More information

Reference Architectures for designing and deploying Microsoft SQL Server Databases in Active System800 Platform

Reference Architectures for designing and deploying Microsoft SQL Server Databases in Active System800 Platform Reference Architectures for designing and deploying Microsoft SQL Server Databases in Active System800 Platform Discuss database workload classification for designing and deploying SQL server databases

More information

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure Nutanix Tech Note Virtualizing Microsoft Applications on Web-Scale Infrastructure The increase in virtualization of critical applications has brought significant attention to compute and storage infrastructure.

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 WITH MICROSOFT HYPER-V

EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 WITH MICROSOFT HYPER-V IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 WITH MICROSOFT HYPER-V EMC VSPEX Abstract This describes, at a high level, the steps required to deploy multiple Microsoft SQL Server

More information

INTEGRATED INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNXE3300, VMWARE VSPHERE 4.1, AND VMWARE VIEW 4.5

INTEGRATED INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNXE3300, VMWARE VSPHERE 4.1, AND VMWARE VIEW 4.5 White Paper INTEGRATED INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNXE3300, VMWARE VSPHERE 4.1, AND VMWARE VIEW 4.5 EMC GLOBAL SOLUTIONS Abstract This white paper describes a simple, efficient,

More information

EMC CLOUD-ENABLED INFRASTRUCTURE FOR SAP BUSINESS CONTINUITY SERIES: HIGH AVAILABILITY AND APPLICATION MOBILITY BUNDLE VNX

EMC CLOUD-ENABLED INFRASTRUCTURE FOR SAP BUSINESS CONTINUITY SERIES: HIGH AVAILABILITY AND APPLICATION MOBILITY BUNDLE VNX White Paper EMC CLOUD-ENABLED INFRASTRUCTURE FOR SAP BUSINESS CONTINUITY SERIES: HIGH AVAILABILITY AND APPLICATION MOBILITY BUNDLE VNX EMC VPLEX, EMC Next-Generation VNX, VMware vcloud Suite, and VMware

More information

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results Dell Fluid Data solutions Powerful self-optimized enterprise storage Dell Compellent Storage Center: Designed for business results The Dell difference: Efficiency designed to drive down your total cost

More information

Surveillance Dell EMC Storage with Cisco Video Surveillance Manager

Surveillance Dell EMC Storage with Cisco Video Surveillance Manager Surveillance Dell EMC Storage with Cisco Video Surveillance Manager Configuration Guide H14001 REV 1.1 Copyright 2015-2017 Dell Inc. or its subsidiaries. All rights reserved. Published May 2015 Dell believes

More information

SAN Virtuosity Fibre Channel over Ethernet
