Microsoft Storage Spaces Direct (S2D) Deployment Guide


Front cover

Microsoft Storage Spaces Direct (S2D) Deployment Guide

Last Update: January 2017

Microsoft Software Defined Storage solution based on Windows Server 2016

Microsoft Software Defined Storage using Lenovo rack-based servers

Designed for Enterprise MSPs/CSPs, and HPC

High-performing, highly available, scale-out solution with growth potential

David Feisthammel
Daniel Lu
David Ye
Michael Miller

Abstract

As the demand for storage continues to accelerate for enterprises, Lenovo and Microsoft have teamed up to craft a software-defined storage solution leveraging the advanced feature set of Windows Server 2016 and the flexibility of the Lenovo System x3650 M5 rack server and RackSwitch G8272 switch. This solution provides a solid foundation for customers looking to consolidate both storage and compute capabilities on a single hardware platform, and for those enterprises that wish to have distinct storage and compute environments. In both situations, this solution provides outstanding performance, high availability protection and effortless scale-out growth potential to accommodate evolving business needs.

This deployment guide provides insight into the setup of this environment and guides the reader through a set of well-proven procedures leading to readiness of this solution for production use. This guide is based on Storage Spaces Direct as implemented in Windows Server 2016 RTM (Release to Manufacturing).

Do you have the latest version? Check whether you have the latest version of this document by clicking the Check for Updates button on the front page of the PDF. Pressing this button will take you to a web page that will tell you if you are reading the latest version of the document and give you a link to the latest if needed. While you're there, you can also sign up to get notified via email whenever we make an update.

Contents
- Storage Spaces Direct Solution Overview
- Solution configuration
- Overview of the installation tasks
- Configure the physical network switches
- Prepare the servers and storage
- Install Windows Server 2016
- Install Windows Server roles and features
- Configure the operating system
- Configure networking parameters
- Create the Failover Cluster
- Enable and configure Storage Spaces Direct
- Summary
- Lenovo Professional Services
- Appendix: Bill of Materials for hyperconverged solution
- Change history
- Authors
- Notices
- Trademarks

Storage Spaces Direct Solution Overview

The initial offering of software-defined storage (SDS) in Windows Server 2012 was called Storage Spaces. The next iteration of this solution has been introduced in Windows Server 2016 under the name Storage Spaces Direct (S2D), and continues the concept of collecting a pool of affordable drives to form a large usable and shareable storage repository. In Windows Server 2016, the solution expands to encompass support for SATA and SAS drives, as well as NVMe devices, that reside internally in the server. Figure 1 shows an overview of the Storage Spaces Direct stack.

Figure 1 Storage Spaces Direct stack

When discussing high-performance and shareable storage pools, many IT professionals think of expensive SAN infrastructure. Thanks to the evolution of disk and virtualization technology, as well as ongoing advancements in network throughput, an economical, highly redundant, high-performance storage subsystem is now within reach.

Key considerations of S2D are as follows:

S2D capacity and storage growth

Leveraging the 14x 3.5-inch drive bays of the x3650 M5 and high-capacity drives such as the 4 TB drives in this solution, each server node is itself a JBOD (just a bunch of disks) repository.

As demand for storage and/or compute resources grows, additional x3650 M5 systems are added into the environment to provide the necessary storage expansion.

S2D performance

Using a combination of solid-state drives (SSDs) and regular hard disk drives (HDDs) as the building blocks of the storage volume, an effective method for storage tiering is available. Faster-performing SSDs act as a cache repository to the capacity tier, which is placed on traditional HDDs in this solution. Data is striped across multiple drives, thus allowing for very fast retrieval from multiple read points. At the physical network layer, 10 GbE links are employed today; in the future, additional throughput needs can be satisfied by using higher-bandwidth adapters. For now, the dual 10 GbE network paths that carry both Windows Server operating system and storage replication traffic are more than sufficient to support the workloads and show no indication of bandwidth saturation.

S2D resilience

Traditional disk subsystem protection relies on RAID storage controllers. In S2D, high availability of the data is achieved using a non-RAID adapter and adopting redundancy measures provided by Windows Server 2016 itself. The storage can be configured as simple spaces, mirror spaces, or parity spaces:

- Simple spaces: Stripes data across a set of pool disks, and is not resilient to any disk failures. Suitable for high-performance workloads where resiliency is either not necessary, or is provided by the application.
- Mirror spaces: Stripes and mirrors data across a set of pool disks, supporting a two-way or three-way mirror, which are respectively resilient to single disk or double disk failures. Suitable for the majority of workloads, in both clustered and non-clustered deployments.
- Parity spaces: Stripes data across a set of pool disks, with a single disk write block used to store parity information, and is resilient to a single disk failure. Suitable for large block append-style workloads, such as archiving, in non-clustered deployments.

S2D use cases

The importance of having a SAN in the enterprise space as the high-performance and high-resilience storage platform is changing. The S2D solution is a direct replacement for this role. Whether the primary function of the environment is to provide Windows applications or a Hyper-V virtual machine farm, S2D can be configured as the principal storage provider to these environments. Another use for S2D is as a repository for backup or archival of VHD(X) files. Wherever a shared volume is applicable, S2D can be the new solution to support this function.

S2D supports two general deployment scenarios, which have been called disaggregated and hyperconverged. Microsoft sometimes uses the term converged to describe the disaggregated deployment scenario. Both scenarios provide storage for Hyper-V, specifically focusing on Hyper-V Infrastructure as a Service (IaaS) for service providers and enterprises.

In the disaggregated approach, the environment is separated into compute and storage components. An independent pool of servers running Hyper-V acts to provide the CPU and memory resources (the compute component) for the running of VMs that reside on the storage environment. The storage component is built using S2D and Scale-Out File Server (SOFS) to provide an independently scalable storage repository for the running of VMs and applications.
This method, as illustrated in Figure 2 on page 5, allows for the independent scaling and expanding of the compute farm (Hyper-V) and the storage farm (S2D).

Figure 2 Disaggregated configuration - nodes do not run Hyper-V

For the hyperconverged approach, there is no separation between the resource pools for compute and storage. Instead, each server node provides hardware resources to support the running of VMs under Hyper-V, as well as the allocation of its internal storage to contribute to the S2D storage repository. Figure 3 on page 6 demonstrates this all-in-one configuration for a four-node hyperconverged solution. When it comes to growth, each additional node added to the environment will mean both compute and storage resources are increased together. Perhaps workload metrics dictate that a specific resource increase is sufficient to cure a bottleneck (e.g., CPU resources). Nevertheless, any scaling will mean the addition of both compute and storage resources. This is a fundamental limitation for all hyperconverged solutions.

Figure 3 Hyperconverged configuration - nodes provide shared storage and Hyper-V hosting

Solution configuration

The primary difference between configuring the two deployment scenarios is that no vswitch creation is necessary in the disaggregated solution, since the S2D cluster is used only for the storage component and does not host VMs. This document specifically addresses the deployment of a Storage Spaces Direct hyperconverged solution. If a disaggregated solution is preferred, it is a simple matter of skipping a few configuration steps, which will be highlighted along the way. The following components and information are relevant to the test environment used to develop this guide.

This solution consists of two key components, a high-throughput network infrastructure and a storage-dense, high-performance server farm. In this solution, the networking component consists of a pair of Lenovo RackSwitch G8272 switches, which are connected to each node via 10GbE Direct Attach Copper (DAC) cables.

In addition to the Mellanox ConnectX-4 NICs described in this document, Lenovo also supports Chelsio T520-LL-CR dual-port 10GbE network cards that use the iWARP protocol. This Chelsio NIC can be ordered via the CORE special-bid process as Lenovo part number 46W0609. Contact your local Lenovo client representative for more information. Although the body of this document details the steps required to configure the Mellanox cards, it is a simple matter to substitute Chelsio NICs in the solution.

The server/storage farm is built using four Lenovo System x3650 M5 servers equipped with multiple storage devices. Supported storage devices include HDD, SSD, and NVMe media types, although Microsoft currently advises against configuring a solution using all three media types. A four-node cluster is the minimum configuration required to harness the failover capability of losing any two nodes. Figure 4 shows high-level details of the configuration. The four server/storage nodes and two switches take up a combined total of 10 rack units of space.

The use of RAID controllers: Microsoft does not support any RAID controller attached to the storage devices used by S2D, regardless of a controller's ability to support pass-through or JBOD mode. As a result, the N2215 SAS HBA is used in this solution. The ServeRAID M1215 controller is used only for the pair of mirrored (RAID-1) boot drives and has nothing to do with S2D.

Networking: Two Lenovo RackSwitch G8272 switches, each containing:
- 48 ports at 10 Gbps SFP+
- 4 ports at 40 Gbps QSFP+

Compute: Four Lenovo System x3650 M5 servers, each containing:
- Two 14-core Intel Xeon E5 v4 processors
- 256 GB memory
- One quad-port 1GbE adapter (not used in solution)
- One dual-port 10GbE Mellanox ConnectX-4 PCIe adapter with RoCE support

Storage in each x3650 M5 server:
- Twelve 3.5-inch HDD at front
- Two 3.5-inch HDD + two 2.5-inch HDD at rear
- ServeRAID M1215 SAS RAID adapter
- N2215 SAS HBA (LSI 12 Gbps SAS)

Figure 4 Solution rack configuration using System x3650 M5 systems

Figure 5 on page 8 shows the layout of the drives. There are 14x 3.5-inch drives in the server, 12 at the front of the server and two at the rear of the server. Four are 800 GB SSD devices, while the remaining ten drives are 4 TB SATA HDDs. These 14 drives form the tiered storage pool of S2D and are connected to the N2215 SAS HBA. Two 2.5-inch drive bays at the rear of the server contain a pair of 600 GB SAS HDDs that are mirrored (RAID-1) for the boot drive and connected to the ServeRAID M1215 SAS RAID adapter.

One of the requirements for this solution is that a non-RAID storage controller is used for the S2D data volume. Note that using a RAID storage controller set to pass-through mode is not supported at the time of this writing. The ServeRAID adapter is required for high availability of the operating system and is not used by S2D for its storage repository.

Figure 5 x3650 M5 storage subsystem

Network wiring of this solution is straightforward, with each server being connected to each switch to enhance availability. Each system contains a dual-port 10 GbE Mellanox ConnectX-4 adapter to handle operating system traffic and storage communications.

Figure 6 Server to switch network connectivity

To allow for redundant network links in the event of a network port or external switch failure, the recommendation calls for the connection from Port 1 on the Mellanox adapter to be joined to a port on the first G8272 switch ("S2DSwitch1"), plus a connection from Port 2 on the same Mellanox adapter to be linked to an available port on the second G8272 switch ("S2DSwitch2"). This cabling construct is illustrated in Figure 6. Defining an Inter-Switch Link (ISL) ensures failover capabilities on the switches.

The last construction on the network subsystem is to leverage the virtual network capabilities of Hyper-V on each host to create a SET-enabled team from both 10 GbE ports on the Mellanox adapter. From this, a virtual switch (vswitch) is defined and logical network adapters (vnics) are created to facilitate the operating system and storage traffic. Note that for the disaggregated solution, the SET team, vswitch, and vnics are not created.

Also, for the disaggregated solution, the servers are configured with 128 GB of memory, rather than 256 GB, and the CPU has 10 cores instead of 14 cores. The higher-end specifications of the hyperconverged solution account for the dual functions of compute and storage that each server node will take on, whereas in the disaggregated solution there is a separation of duties, with one server farm dedicated to Hyper-V hosting and a second devoted to S2D.

Overview of the installation tasks

This document specifically addresses the deployment of a Storage Spaces Direct hyperconverged solution. Although nearly all configuration steps presented apply to the disaggregated solution as well, there are a few differences between these two solutions. We have included notes regarding steps that do not apply to the disaggregated solution. These notes are also included as comments in PowerShell scripts.

A number of tasks need to be performed in order to configure this solution. If completed in a stepwise fashion, this is not a difficult endeavor. The high-level steps described in the remaining sections of the paper are as follows:
1. Configure the physical network switches
2. Prepare the servers and storage
3. Install Windows Server 2016
4. Install Windows Server roles and features
5. Configure the operating system
6. Configure networking parameters
7. Create the Failover Cluster
8. Enable and configure Storage Spaces Direct

Configure the physical network switches

Like Windows Server 2012 R2, Windows Server 2016 includes a feature called SMB Direct, which supports the use of network adapters that have the Remote Direct Memory Access (RDMA) capability. Network adapters that support RDMA can function at full speed with very low latency, while using very little CPU. For workloads such as Hyper-V or Microsoft SQL Server, this enables a remote file server to resemble local storage. SMB Direct provides the following benefits:

- Increased throughput: Leverages the full throughput of high speed networks, where the network adapters coordinate the transfer of large amounts of data at line speed.
- Low latency: Provides extremely fast responses to network requests and, as a result, makes remote file storage feel as if it is directly attached block storage.
- Low CPU utilization: Uses fewer CPU cycles when transferring data over the network, which leaves more power available to server applications, including Hyper-V.

Leveraging the benefits of SMB Direct comes down to a few simple principles. First, using hardware that supports SMB Direct and RDMA is critical. Use the Bill of Materials found in Appendix: Bill of Materials for hyperconverged solution as a guide. This solution utilizes a pair of Lenovo RackSwitch G8272 10/40 Gigabit Ethernet switches and a dual-port 10GbE Mellanox ConnectX-4 PCIe adapter for each node.

Redundant physical network connections are a best practice for resiliency as well as bandwidth aggregation. This is a simple matter of connecting each node to each switch. In our solution, Port 1 of each Mellanox adapter is connected to Switch 1 and Port 2 of each Mellanox adapter is connected to Switch 2, as shown in Figure 7.

Figure 7 Switch to node cabling

As a final bit of network cabling, we configure an Inter-Switch Link (ISL) between our pair of switches to support the redundant node-to-switch cabling described above. To do this, we need redundant high-throughput connectivity between the switches, so we connect Ports 53 and 54 on each switch to each other using a pair of 40Gbps QSFP+ cables. Note that these connections are not shown in Figure 7.

In order to leverage the SMB Direct benefits listed above, a set of cascading requirements must be met. Using RDMA over Converged Ethernet (RoCE) requires a lossless fabric, which is typically not provided by standard TCP/IP Ethernet network infrastructure, since the TCP protocol is designed as a best-effort transport protocol. Datacenter Bridging (DCB) is a set of enhancements to IP Ethernet, which is designed to eliminate loss due to queue overflow, as well as to allocate bandwidth between various traffic types. To sort out priorities and provide lossless performance for certain traffic types, DCB relies on Priority Flow Control (PFC). Rather than using the typical Global Pause method of standard Ethernet, PFC specifies individual pause parameters for eight separate priority classes. Since the priority class data is contained within the VLAN tag of any given traffic, VLAN tagging is also a requirement for RoCE and, therefore, SMB Direct.

Once the network cabling is done, it's time to begin configuring the switches. These configuration commands need to be executed on both switches. We start by enabling Converged Enhanced Ethernet (CEE), which automatically enables Priority-Based Flow Control (PFC) for all Priority 3 traffic on all ports. Enabling CEE also automatically configures Enhanced Transmission Selection (ETS) so that at least 50% of the total bandwidth is always available for our storage (PGID 1) traffic. These automatic default configurations are suitable for our solution. The commands are listed in Example 1.

Example 1 Enable CEE on the switch

enable
configure terminal
cee enable

After enabling CEE, we configure the vlans. Although we could use multiple vlans for different types of network traffic (storage, client, management, cluster heartbeat, Live Migration, etc.), the simplest choice is to use a single vlan (12) to carry all our SMB Direct solution traffic. Employing 10GbE links makes this a viable scenario. Enabling vlan tagging is important in this solution, since RDMA requires it.

Example 2 Establish vlan for all solution traffic

vlan 12
name SMB
exit
interface port 1-4,53-54
switchport mode trunk
switchport trunk allowed vlan add 12
exit

For redundancy, we configure an ISL between a pair of 40GbE ports on each switch. We use the last two ports, 53 and 54, for this purpose. Physically, each port is connected to the same port on the other switch using a 40Gbps QSFP+ cable. Configuring the ISL is a simple matter of joining the two ports into a port trunk group. See Example 3.

Example 3 Configure an ISL between switches for resiliency

interface port 53-54
pvid 4094
switchport mode trunk
lacp mode active
lacp key 100
exit

Once we've got the configuration complete on the switch, we need to copy the running configuration to the startup configuration. Otherwise, our configuration changes would be lost once the switch is reset or reboots. This is achieved using the write command, Example 4.

Example 4 Use the write command to copy the running configuration to startup

write

Repeat the entire set of commands above (Example 1 through Example 4) on the other switch, defining the same vlan and port trunk on that switch. Since we are using the same ports on both switches for identical purposes, the commands that are run on each switch are identical. Remember to commit the configuration changes on both switches using the write command.

Note: If the solution uses another switch model or switch vendor's equipment, other than the RackSwitch G8272, it is essential to perform the equivalent command sets for the switches. The commands themselves may differ from what is stated above, but it is imperative that the same functions are executed on the switches to ensure proper operation of this solution.

Prepare the servers and storage

In this section, we describe updating firmware and drivers, and configuring the RAID subsystem for the boot drive in the server nodes.

Firmware and drivers

Best practices dictate that with a new server deployment, the first task is to review the system firmware and drivers relevant to the incoming operating system. If the system has the latest firmware and drivers installed, it will expedite tech support calls and may reduce the need for such calls. Lenovo has a useful tool for this important task called UpdateXpress.

UpdateXpress can be utilized in two ways. The first option allows the system administrator to download and install the tool on the target server, perform a verification to identify any firmware and drivers that need attention, download the update packages from the Lenovo web site, and then proceed with the updates. The second method lets the server owner download the new packages to a local network share or repository and then install the updates during a maintenance window. This flexibility in the tool grants full control to the server owner and ensures that these important updates are performed at a convenient time.

Windows Server 2016 contains all the drivers necessary for this solution with the exception of the Mellanox ConnectX-4 driver, which was updated by Mellanox after the final Release to Manufacturing (RTM) build of the OS was released. The latest CX-4 driver can be obtained from the Mellanox website.

In addition, it is recommended to install the Lenovo IMM2 PBI mailbox driver. Although this is actually a null driver and is not required for the solution, installing this driver removes the warning icon ("bang") from the Unknown device in the Windows Device Manager. The driver is available from the Lenovo support site.

Physical storage subsystem

Follow these steps to configure a RAID-1 array for the operating system:
1. Power on the server to review the drive subsystem in preparation for the installation of the operating system.
2. During the system boot process, press the F1 key to initiate the UEFI menu screen, Figure 8. Navigate to System Settings > Storage, and then access the ServeRAID M1215 controller.

Figure 8 UEFI main menu

3. Create a RAID-1 pool from the two 2.5-inch HDDs installed at the rear of the system. Leave the remaining 12 drives (four 800 GB SSDs and ten 4 TB HDDs) that are connected to the N2215 SAS HBA as unconfigured. They will be managed directly by the operating system when it is time to create the storage pool.

Install Windows Server 2016

You can install Windows from a variety of sources:
- Remote ISO media mount via the IMM
- Bootable USB media with the installation content
- Installation DVD

System x servers, including the x3650 M5, feature an Integrated Management Module (IMM) to provide remote out-of-band management, including remote control and remote media. Select the source that is appropriate for your situation. The following steps describe the installation:
1. With the method of Windows deployment selected, power the server on to begin the installation process.
2. Select the appropriate language pack, correct input device, and the geography, then select the desired OS edition (GUI or Core components only).
3. Select the RAID-1 array connected to the ServeRAID M1215 controller as the target to install Windows (you might need to scroll through a list of available drives).
4. Follow the prompts to complete the installation of the OS.
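Because the inbox ConnectX-4 driver predates the updated Mellanox release noted in "Firmware and drivers" above, it is worth confirming which driver version Windows is actually using once the OS is up. A minimal sketch:

# List the Mellanox adapters and the driver version/date currently bound to them
Get-NetAdapter | Where-Object InterfaceDescription -Like "*Mellanox*" |
    Format-List Name, InterfaceDescription, DriverVersion, DriverDate

If the reported version is older than the download from Mellanox, install the updated driver before proceeding.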

Install Windows Server roles and features

Several Windows Server roles and features are used by this solution. It makes sense to install them all at the same time, then perform specific configuration tasks later. To make this installation quick and easy, use the following PowerShell script, Example 5.

Example 5 PowerShell script to install necessary server roles and features

Install-WindowsFeature -Name File-Services
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

Note that it is a good idea to install the Hyper-V role on all nodes even if you plan to implement the disaggregated solution. Although you may not regularly use the storage cluster to host VMs, if the Hyper-V role is installed, you will have the option to deploy an occasional VM if the need arises. Once the roles and features have been installed and the nodes are back online, operating system configuration can begin.

Configure the operating system

Next, we configure the operating system, including Windows Update, AD Domain join, and internal drive verification. To ensure that the latest fixes and patches are applied, update the Windows Server components via Windows Update. It is a good idea to reboot each node after the final update is applied to ensure that all updates have been fully installed, regardless of what Windows Update indicates.

Upon completing the Windows Update process, join each server node to the Windows Active Directory Domain. Use the following PowerShell command to accomplish this task.

Example 6 PowerShell command to add system to an Active Directory Domain

Add-Computer -DomainName <DomainName> -Reboot

From this point onward, when working with cluster services, be sure to log onto the systems with a Domain account and not the local Administrator account. Ensure that a Domain account is part of the local Administrators Security Group, as shown in Figure 9.

Figure 9 Group membership of the Administrator account
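Before continuing, you can confirm that the roles and features installed in Example 5 are present on every node. A minimal sketch, using the node names that appear throughout this guide:

# Verify the role/feature install state across all four nodes
Invoke-Command -ComputerName S2D01, S2D02, S2D03, S2D04 -ScriptBlock {
    Get-WindowsFeature -Name File-Services, Failover-Clustering, Hyper-V |
        Format-Table Name, InstallState -AutoSize }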

Verify that the internal drives are online by going to Server Manager > Tools > Computer Management > Disk Management. If any are offline, select the drive, right-click it, and click Online. Alternatively, PowerShell can be used to bring all 14 drives in each host online with a single command.

Example 7 PowerShell command to bring all 14 drives online

Get-Disk | ? FriendlyName -Like "*ATA*" | Set-Disk -IsOffline $False

Since all systems have been joined to the domain, we can execute the PowerShell command remotely on the other hosts while logged in as a Domain Administrator. To do this, use the command shown in Example 8.

Example 8 PowerShell command to bring drives online in remote systems

Invoke-Command -ComputerName S2D02, S2D03, S2D04 -ScriptBlock {
    Get-Disk | ? FriendlyName -Like "*ATA*" | Set-Disk -IsOffline $False }
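To confirm the result, you can query disk status across all four nodes; a minimal sketch:

# Every disk should now report an OperationalStatus of Online
Invoke-Command -ComputerName S2D01, S2D02, S2D03, S2D04 -ScriptBlock {
    Get-Disk | Sort-Object Number |
        Format-Table Number, FriendlyName, OperationalStatus, Size -AutoSize }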

Configure networking parameters

Now that the required Windows Server roles and features have been installed, we turn our attention to some network configuration details. For the Mellanox NICs used in this solution, we need to enable Data Center Bridging (DCB), which is required for RDMA. Then we create a policy to establish network Quality of Service (QoS) to ensure that the software-defined storage system has enough bandwidth to communicate between the nodes, ensuring resiliency and performance. We also need to disable regular Flow Control (Global Pause) on the Mellanox adapters, since Priority Flow Control (PFC) and Global Pause cannot operate together on the same interface. To make all these changes quickly and consistently, we again use a PowerShell script, as shown in Example 9.

Note: If using Chelsio NICs, the configuration steps shown in Example 9 are not necessary.

Example 9 PowerShell script to configure required network parameters on servers

# Enable Data Center Bridging (required for RDMA)
Install-WindowsFeature -Name Data-Center-Bridging
# Configure a QoS policy for SMB-Direct
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
# Turn on Flow Control for SMB
Enable-NetQosFlowControl -Priority 3
# Make sure flow control is off for other traffic
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
# Apply a Quality of Service (QoS) policy to the target adapters
Enable-NetAdapterQos -Name "Mellanox 1","Mellanox 2"
# Give SMB Direct a minimum bandwidth of 50%
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
# Disable Flow Control on physical adapters
Set-NetAdapterAdvancedProperty -Name "Mellanox 1" -RegistryKeyword "*FlowControl" -RegistryValue 0
Set-NetAdapterAdvancedProperty -Name "Mellanox 2" -RegistryKeyword "*FlowControl" -RegistryValue 0

For an S2D hyperconverged solution, we deploy a SET-enabled Hyper-V switch and add RDMA-enabled host virtual NICs to it for use by Hyper-V. Since many switches won't pass traffic class information on untagged vlan traffic, we need to make sure that the vnics using RDMA are on vlans. To keep this hyperconverged solution as simple as possible, and since we are using dual-port 10GbE NICs, we will pass all traffic on vlan 12. If you need to segment your network traffic more, for example to isolate VM Live Migration traffic, you can use additional vlans. Example 10 shows the PowerShell script that can be used to perform the SET configuration, enable RDMA, and assign vlans to the vnics. These steps are necessary only for configuring a hyperconverged solution. For a disaggregated solution these steps can be skipped, since Hyper-V is not enabled on the S2D storage nodes.

Example 10 PowerShell script to create a SET-enabled vswitch in hyperconverged solution

# Create a SET-enabled vswitch supporting multiple uplinks provided by the Mellanox adapter
New-VMSwitch -Name S2DSwitch -NetAdapterName "Mellanox 1", "Mellanox 2" -EnableEmbeddedTeaming $true -AllowManagementOS $false
# Add host vnics to the vswitch just created
Add-VMNetworkAdapter -SwitchName S2DSwitch -Name SMB1 -ManagementOS
Add-VMNetworkAdapter -SwitchName S2DSwitch -Name SMB2 -ManagementOS
# Enable RDMA on the vnics just created
Enable-NetAdapterRDMA -Name "vEthernet (SMB1)","vEthernet (SMB2)"
# Assign the vnics to a vlan
Set-VMNetworkAdapterVlan -VMNetworkAdapterName SMB1 -VlanId 12 -Access -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapterName SMB2 -VlanId 12 -Access -ManagementOS
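Before assigning IP addresses, it is worth confirming that the SET-enabled vswitch and host vnics were created as expected. A minimal sketch (assuming the Hyper-V module exposes the EmbeddedTeamingEnabled property, as it does in Windows Server 2016):

# The vswitch should report embedded teaming enabled; both vnics should be attached to it
Get-VMSwitch -Name S2DSwitch | Format-List Name, EmbeddedTeamingEnabled
Get-VMNetworkAdapter -ManagementOS | Format-Table Name, SwitchName -AutoSize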

Now that all network interfaces have been created (including the vnics required by a hyperconverged deployment, if necessary), IP address configuration can be completed, as follows:
1. Configure a static IP address for the operating system or public-facing interface on the SMB1 vnic. Configure default gateway and DNS server settings as appropriate for your environment.
2. Configure a static IP address on the SMB2 vnic, using a different subnet if desired. Again, configure default gateway and DNS server settings as appropriate for your environment.
3. Perform a ping command from each interface to the corresponding server nodes in this environment to confirm that all connections are functioning properly. Both interfaces on each node should be able to communicate with both interfaces on all other nodes.

Of course, PowerShell can be used to make IP address assignments if desired. Example 11 shows the commands used to specify a static IP address and DNS server assignment for Node 1 in our environment. Make sure to change the IP addresses and subnet masks (prefix length) to appropriate values for your environment.

Example 11 PowerShell commands used to configure the SMB vnic interfaces on Node 1

Set-NetIPInterface -InterfaceAlias "vEthernet (SMB1)" -Dhcp Disabled
New-NetIPAddress -InterfaceAlias "vEthernet (SMB1)" -IPAddress <IPAddress> -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (SMB1)" -ServerAddresses <DNSServerIP>
Set-NetIPInterface -InterfaceAlias "vEthernet (SMB2)" -Dhcp Disabled
New-NetIPAddress -InterfaceAlias "vEthernet (SMB2)" -IPAddress <IPAddress> -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (SMB2)" -ServerAddresses <DNSServerIP>

It's a good idea to disable any network interfaces that won't be used for the solution before creating the Failover Cluster. This includes the IBM USB Remote NDIS Network device. The only interfaces that will be used in this solution are the SMB1 and SMB2 vnics. Figure 10 shows the network connections. The top two connections (in the blue box) represent the two physical ports on the Mellanox adapter and must remain enabled. The next connection (in the red box) represents the IBM USB Remote NDIS Network device, which can be disabled. Finally, the bottom two connections (in the green box) are the SMB Direct vnics that will be used for all solution network traffic. There may be additional network interfaces listed, such as those for multiple Broadcom NetXtreme Gigabit Ethernet NICs. These should be disabled as well.

Figure 10 Windows network connections
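Rather than disabling these interfaces through the Network Connections UI, the same can be scripted. A minimal sketch, assuming the interface descriptions match those shown above (they vary by system, so verify with Get-NetAdapter first):

# Disable the NDIS management device and any unused Broadcom NICs
Get-NetAdapter | Where-Object { $_.InterfaceDescription -Like "*Remote NDIS*" -or
    $_.InterfaceDescription -Like "*NetXtreme*" } | Disable-NetAdapter -Confirm:$false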

Since RDMA is so critical to the performance of the final solution, it's a good idea to make sure each piece of the configuration is correct as we move through the steps. We can't look for RDMA traffic yet, but we can verify that the vnics (in a hyperconverged solution) have RDMA enabled. Example 12 shows the PowerShell command we use for this purpose and Figure 11 shows the output of that command in our environment.

Example 12 PowerShell command to verify that RDMA is enabled on the vnics just created

Get-NetAdapterRdma | ? Name -Like "*SMB*" | ft Name, Enabled

Figure 11 PowerShell command verifies that RDMA is enabled on a pair of vnics

Using Virtual Machine Queue

For the 10GbE Mellanox adapters in our solution, the operating system automatically enables dynamic VMQ and RSS, which improve network performance and throughput to the VMs. VMQ is a scaling network technology for the Hyper-V switch that improves network throughput by distributing processing of network traffic for multiple VMs among multiple processors. When VMQ is enabled, a dedicated queue is established on the physical NIC for each vnic that has requested a queue. As packets arrive for a vnic, the physical NIC places them in that vnic's queue. These queues are managed by the system's processors.

Although not strictly necessary, it is a best practice to assign base and maximum processors for VMQ queues on each server in order to ensure maximum efficiency of queue management. Although the concept is straightforward, there are a few things to keep in mind when determining proper processor assignment. First, only physical processors are used to manage VMQ queues. Therefore, if Hyper-Threading (HT) Technology is enabled, only the even-numbered processors are considered viable. Next, since processor 0 is assigned to many internal tasks, it is best not to assign queues to this particular processor.

Before configuring VMQ queue management, execute a couple of PowerShell commands to gather information. We need to know if HT is enabled and how many processors are available. You can issue a WMI query for this, comparing the NumberOfCores field to the NumberOfLogicalProcessors field. As an alternative, issue the Get-NetAdapterRSS command to see a list of viable processors (remember not to use Processor 0:0/0), as shown in Example 13.

Example 13 PowerShell commands used to determine processors available for VMQ queues

# Check for Hyper-Threading (if there are twice as many logical procs as number of cores, HT is enabled)
Get-WmiObject -Class win32_processor | ft -Property NumberOfCores, NumberOfLogicalProcessors -AutoSize
# Check procs available for queues (check the RssProcessorArray field)
Get-NetAdapterRSS

Once you have this information, it's a simple math problem. We have a pair of 14-core CPUs in each host, providing 28 processors total, or 56 logical processors including Hyper-Threading. Excluding processor 0 and eliminating all odd-numbered processors leaves us with 27 processors to assign. Given the dual-port Mellanox adapter, this means we can assign 14 processors to one port and 13 processors to the other. This results in the following processor assignment:

Mellanox 1: procs 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28
Mellanox 2: procs 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54
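The same arithmetic can be scripted rather than worked out by hand. A minimal sketch that derives the viable processor list on a node (splitting the resulting list between the two ports is left as shown above):

# Count cores and logical processors across both sockets to detect Hyper-Threading
$procs = Get-WmiObject -Class win32_processor
$cores = ($procs | Measure-Object -Property NumberOfCores -Sum).Sum
$lps   = ($procs | Measure-Object -Property NumberOfLogicalProcessors -Sum).Sum
$step  = if ($lps -gt $cores) { 2 } else { 1 }   # skip HT siblings when HT is enabled
# Viable processors: every physical processor except processor 0
$viable = ($step..($lps - 1)) | Where-Object { $_ % $step -eq 0 }
"{0} processors available for VMQ queues: {1}" -f $viable.Count, ($viable -join ', ')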

Use the following PowerShell script to define the base (starting) processor, as well as how many processors to use, for managing VMQ queues on each physical NIC consumed by the vswitch (in our solution, the two Mellanox ports).

Example 14 PowerShell script to assign processors for VMQ queue management

# Configure the base and maximum processors to use for VMQ queues
Set-NetAdapterVmq -Name "Mellanox 1" -BaseProcessorNumber 2 -MaxProcessors 14
Set-NetAdapterVmq -Name "Mellanox 2" -BaseProcessorNumber 30 -MaxProcessors 13
# Check VMQ queues
Get-NetAdapterVmqQueue

Now that we've got the networking internals configured for one system, we use PowerShell remote execution to replicate this configuration to the other three hosts. Example 15 shows the PowerShell commands, this time without comments. These commands are for configuring a hyperconverged solution using Mellanox NICs. If Chelsio NICs are being used, eliminate the DCB and QoS commands at the beginning (through the two Set-NetAdapterAdvancedProperty lines). If configuring a disaggregated solution, eliminate the vswitch, vnic, and VMQ commands at the end (from New-VMSwitch onward).

Example 15 PowerShell remote execution script to configure networking on remaining hosts

Invoke-Command -ComputerName S2D02, S2D03, S2D04 -ScriptBlock {
Install-WindowsFeature -Name Data-Center-Bridging
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
Enable-NetAdapterQos -Name "Mellanox 1","Mellanox 2"
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Set-NetAdapterAdvancedProperty -Name "Mellanox 1" -RegistryKeyword "*FlowControl" -RegistryValue 0
Set-NetAdapterAdvancedProperty -Name "Mellanox 2" -RegistryKeyword "*FlowControl" -RegistryValue 0
New-VMSwitch -Name S2DSwitch -NetAdapterName "Mellanox 1", "Mellanox 2" -EnableEmbeddedTeaming $true -AllowManagementOS $false
Add-VMNetworkAdapter -SwitchName S2DSwitch -Name SMB1 -ManagementOS
Add-VMNetworkAdapter -SwitchName S2DSwitch -Name SMB2 -ManagementOS
Enable-NetAdapterRDMA -Name "vEthernet (SMB1)","vEthernet (SMB2)"
Set-VMNetworkAdapterVlan -VMNetworkAdapterName SMB1 -VlanId 12 -Access -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapterName SMB2 -VlanId 12 -Access -ManagementOS
Set-NetAdapterVmq -Name "Mellanox 1" -BaseProcessorNumber 2 -MaxProcessors 14
Set-NetAdapterVmq -Name "Mellanox 2" -BaseProcessorNumber 30 -MaxProcessors 13 }

The final piece of preparing the infrastructure for S2D is to create the Failover Cluster.

Create the Failover Cluster

Before creating the Failover Cluster, we need to validate the components that are necessary to form the cluster. As an alternative to using the GUI, the following PowerShell commands can be used to test and create the Failover Cluster, Example 16.

Example 16 PowerShell commands to test and create a failover cluster

Test-Cluster -Node S2D01,S2D02,S2D03,S2D04 -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"
New-Cluster -Name S2DCluster -Node S2D01,S2D02,S2D03,S2D04 -NoStorage

Once the cluster is built, you can also use PowerShell to query the health status of the cluster storage.

Example 17 PowerShell command to check the status of cluster storage

Get-StorageSubSystem S2DCluster

The default behavior of Failover Cluster creation is to set aside the non-public facing subnet (configured on the SMB2 vnic) as a cluster heartbeat network. When 1 GbE was the standard, this made perfect sense. However, since we are using 10 GbE in this solution, we don't want to dedicate half our bandwidth to this important, but mundane, task. We use Failover Cluster Manager to resolve this issue as follows:

1. In Failover Cluster Manager, navigate to Failover Cluster Manager > Clustername > Networks in the left navigation panel, as shown in Figure 12.

Figure 12 Networks available for the cluster

2. Note the Cluster Use setting for each network. If this setting is Cluster Only, right-click on the network entry and select Properties.
3. In the Properties window that opens, ensure that the Allow cluster network communication on this network radio button is selected. Also, select the Allow clients to connect through this network checkbox, as shown in Figure 13. Optionally, change the network Name to one that makes sense for your installation and click OK.

Figure 13 SMB2 network set to allow cluster and client traffic

After making this change, both networks should show Cluster and Client in the Cluster Use column, as shown in Figure 14.

It is generally a good idea to use the cluster network Properties window to specify cluster network names that make sense and will aid in troubleshooting later. To be consistent, we name our cluster networks after the vnics that carry the traffic for each, as shown in Figure 14.

Figure 14 Cluster networks shown with names to match the vnics that carry their traffic

It is also possible to accomplish the cluster network role and name changes using PowerShell. Example 18 provides a script to do this.

Example 18 PowerShell script to change names and roles of cluster networks

# Update the cluster networks that were created by default
# First, look at what's there
Get-ClusterNetwork | ft Name, Role, Address
# Change the cluster network names so they're consistent with the individual nodes
(Get-ClusterNetwork -Name "Cluster Network 1").Name = "SMB1"
(Get-ClusterNetwork -Name "Cluster Network 2").Name = "SMB2"
# Enable Client traffic on the second cluster network
(Get-ClusterNetwork -Name "SMB2").Role = 3
# Check to make sure the cluster network names and roles are set properly
Get-ClusterNetwork | ft Name, Role, Address

Figure 15 shows output of the PowerShell commands to display the initial cluster network parameters, modify the cluster network names, enable client traffic on the second cluster network, and check to make sure cluster network names and roles are set properly.

Figure 15 PowerShell output showing cluster network renaming and results

You can also verify the cluster network changes by viewing them in Failover Cluster Manager, navigating to Failover Cluster Manager > Clustername > Networks in the left navigation panel.

Cluster file share witness

It is recommended to create a cluster file share witness. The cluster file share witness quorum configuration enables the 4-node cluster to withstand up to two node failures. For information on how to create a cluster file share witness, read the Microsoft article "Configuring a File Share Witness on a Scale-Out File Server."

Note: Make sure the file share for the cluster file share witness has the proper permissions for the cluster name object, as in the example shown in Figure 16.

Figure 16 Security tab of the Permissions screen
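Once the share exists with the correct permissions, the witness can also be configured from PowerShell. A minimal sketch, where \\FS01\S2DWitness is a hypothetical file share used for illustration:

# Point the cluster quorum at the file share witness
Set-ClusterQuorum -Cluster S2DCluster -FileShareWitness \\FS01\S2DWitness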

Once the cluster is operational and the file share witness has been established, it is time to enable and configure the Storage Spaces Direct feature.

Enable and configure Storage Spaces Direct

Once the failover cluster has been created, run the PowerShell command in Example 19 to enable S2D on the cluster.

Example 19 PowerShell command to enable Storage Spaces Direct

Enable-ClusterStorageSpacesDirect -CimSession S2DCluster -PoolFriendlyName S2DPool

This PowerShell command will do the following automatically:
1. Create a single storage pool that has a name as specified by the -PoolFriendlyName parameter.
2. Configure the S2D cache tier using the highest-performance storage devices available, such as NVMe or SSD.
3. Create two storage tiers, one called Capacity and the other called Performance.

Note: You may notice that during the process of enabling S2D, the process pauses for an extended period with the message "Waiting until physical disks are claimed...". In our testing we saw this delay at roughly 24-28%, and it lasted anywhere from 20 minutes to over an hour. This is a known issue that is being worked on by Microsoft. This pause does not affect S2D configuration or performance once complete.

Take a moment to run a few PowerShell commands at this point to verify that all is as expected. First, run the command shown in Example 20. The results should be similar to those in our environment, shown in Figure 17.

Example 20 PowerShell command to check S2D storage tiers

Get-StorageTier | ft FriendlyName, ResiliencySettingName

Figure 17 PowerShell query showing resiliency settings for storage tiers

At this point we can also check to make sure RDMA is working. We provide two suggested approaches for this. First, Figure 18 shows a simple netstat command that can be used to verify that listeners are in place on port 445 (in the yellow boxes). This is the port typically used for SMB and the port specified when we created the network QoS policy for SMB in Example 9 on page 17.

Figure 18 The netstat command can be used to confirm listeners configured for port 445

The second method for verifying that RDMA is configured and working properly is to use PerfMon to create an RDMA monitor. To do this, follow these steps:
1. At the PowerShell or Command prompt, type perfmon and press Enter.
2. In the Performance Monitor window that opens, select Performance Monitor in the left pane and click the green plus sign ("+") at the top of the right pane.

Figure 19 Initial Performance Monitor window before configuration

3. In the Add Counters window that opens, select RDMA Activity in the upper left pane. In the Instances of selected object area in the lower left, choose the instances that represent your vnics (for our environment, these are "Hyper-V Virtual Ethernet Adapter #2" and "Hyper-V Virtual Ethernet Adapter #3"). Once the instances are selected, click the Add button to move them to the Added counters pane on the right. Click OK.

Figure 20 The Add counters window for Performance Monitor

4. Back in the Performance Monitor window, click the drop-down icon to the left of the green plus sign and choose Report.

Figure 21 Choose the Report format

5. This should show a report of RDMA activity for your vnics. Here you can view key performance metrics for RDMA connections in your environment, as shown in Figure 22.

Figure 22 Key RDMA performance metrics

Create virtual disks

After the S2D cluster is created, create virtual disks or volumes based on your performance requirements. There are three common volume types for general deployments:
- Mirror
- Parity
- Multi-resilient

Table 1 shows the volume types supported by Storage Spaces Direct and several characteristics of each.

Table 1 Summary of characteristics associated with common storage volume types

                     Mirror           Parity            Multi-resilient
Optimized for        Performance      Efficiency        Balanced performance and efficiency
Use case             All data is hot  All data is cold  Mix of hot and cold data
Storage efficiency   Least (33%)      Most (50+%)       Medium (~50%)
File system          ReFS or NTFS     ReFS or NTFS      ReFS only
Minimum nodes

Use the PowerShell commands in Example 21 through Example 23 to create and configure the virtual disks. Choose any or all types of volumes shown, adjusting the volume names and sizes to suit your needs. This solution yields a total pool size of about 146 TB to be consumed by the volumes you create. However, the amount of pool space consumed by each volume will depend on which storage tier is used. For example, the commands below create three volumes that consume a total of 88 TB from the pool (a three-way mirror consumes three times the volume size, while parity at roughly 50% efficiency consumes about twice the volume size).

Create a mirror volume using the command in Example 21.

Example 21 PowerShell command to create a new mirror volume

New-Volume -StoragePoolFriendlyName S2DPool -FriendlyName "Mirror" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance -StorageTierSizes 6TB

Create a parity volume using the command in Example 22.

Example 22 PowerShell command to create a new parity volume

New-Volume -StoragePoolFriendlyName S2DPool -FriendlyName "Parity" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Capacity -StorageTierSizes 24TB

Create a multi-resilient volume using the command in Example 23.

Example 23 PowerShell command to create a new multi-resilient volume

New-Volume -StoragePoolFriendlyName S2DPool -FriendlyName "Resilient" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 2TB, 8TB

Once S2D installation is complete and volumes have been created, the final step is to verify that there is fault tolerance in this storage environment. Example 24 shows the PowerShell command to verify the fault tolerance of the S2D storage pool, and Figure 23 shows the output of that command in our environment.

Example 24 PowerShell command to determine S2D storage pool fault tolerance

Get-StoragePool -FriendlyName S2DPool | FL FriendlyName, Size, FaultDomainAwarenessDefault
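As a final check, you can confirm that each new volume reports healthy and carries the expected resiliency settings. A minimal sketch:

# Each virtual disk should report Healthy and the resiliency you requested
Get-VirtualDisk | ft FriendlyName, ResiliencySettingName, HealthStatus, OperationalStatus -AutoSize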


More information

StarWind Virtual SAN Configuring HA Shared Storage for Scale-Out File Servers in Windows Server 2012R2

StarWind Virtual SAN Configuring HA Shared Storage for Scale-Out File Servers in Windows Server 2012R2 One Stop Virtualization Shop StarWind Virtual SAN Configuring HA Shared Storage for Scale-Out File Servers in Windows Server 2012R2 DECEMBER 2017 TECHNICAL PAPER Trademarks StarWind, StarWind Software

More information

Upgrade to Microsoft SQL Server 2016 with Dell EMC Infrastructure

Upgrade to Microsoft SQL Server 2016 with Dell EMC Infrastructure Upgrade to Microsoft SQL Server 2016 with Dell EMC Infrastructure Generational Comparison Study of Microsoft SQL Server Dell Engineering February 2017 Revisions Date Description February 2017 Version 1.0

More information

Microsoft Office SharePoint Server 2007

Microsoft Office SharePoint Server 2007 Microsoft Office SharePoint Server 2007 Enabled by EMC Celerra Unified Storage and Microsoft Hyper-V Reference Architecture Copyright 2010 EMC Corporation. All rights reserved. Published May, 2010 EMC

More information

Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage

Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage A Dell Technical White Paper Dell Database Engineering Solutions Anthony Fernandez April 2010 THIS

More information

Step-by-Step Guide to Installing Cluster Service

Step-by-Step Guide to Installing Cluster Service Page 1 of 23 TechNet Home > Products & Technologies > Windows 2000 Server > Deploy > Configure Specific Features Step-by-Step Guide to Installing Cluster Service Topics on this Page Introduction Checklists

More information

Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell Storage PS Series Arrays

Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell Storage PS Series Arrays Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell Storage PS Series Arrays Dell EMC Engineering December 2016 A Dell Best Practices Guide Revisions Date March 2011 Description Initial

More information

Implementing SharePoint Server 2010 on Dell vstart Solution

Implementing SharePoint Server 2010 on Dell vstart Solution Implementing SharePoint Server 2010 on Dell vstart Solution A Reference Architecture for a 3500 concurrent users SharePoint Server 2010 farm on vstart 100 Hyper-V Solution. Dell Global Solutions Engineering

More information

SAN Virtuosity Fibre Channel over Ethernet

SAN Virtuosity Fibre Channel over Ethernet SAN VIRTUOSITY Series WHITE PAPER SAN Virtuosity Fibre Channel over Ethernet Subscribe to the SAN Virtuosity Series at www.sanvirtuosity.com Table of Contents Introduction...1 VMware and the Next Generation

More information

PCI Express x8 Single Port SFP+ 10 Gigabit Server Adapter (Intel 82599ES Based) Single-Port 10 Gigabit SFP+ Ethernet Server Adapters Provide Ultimate

PCI Express x8 Single Port SFP+ 10 Gigabit Server Adapter (Intel 82599ES Based) Single-Port 10 Gigabit SFP+ Ethernet Server Adapters Provide Ultimate NIC-PCIE-1SFP+-PLU PCI Express x8 Single Port SFP+ 10 Gigabit Server Adapter (Intel 82599ES Based) Single-Port 10 Gigabit SFP+ Ethernet Server Adapters Provide Ultimate Flexibility and Scalability in Virtual

More information

Vblock Architecture. Andrew Smallridge DC Technology Solutions Architect

Vblock Architecture. Andrew Smallridge DC Technology Solutions Architect Vblock Architecture Andrew Smallridge DC Technology Solutions Architect asmallri@cisco.com Vblock Design Governance It s an architecture! Requirements: Pretested Fully Integrated Ready to Go Ready to Grow

More information

iscsi Boot from SAN with Dell PS Series

iscsi Boot from SAN with Dell PS Series iscsi Boot from SAN with Dell PS Series For Dell PowerEdge 13th generation servers Dell Storage Engineering September 2016 A Dell Best Practices Guide Revisions Date November 2012 September 2016 Description

More information

Consolidating Microsoft SQL Server databases on PowerEdge R930 server

Consolidating Microsoft SQL Server databases on PowerEdge R930 server Consolidating Microsoft SQL Server databases on PowerEdge R930 server This white paper showcases PowerEdge R930 computing capabilities in consolidating SQL Server OLTP databases in a virtual environment.

More information

DELL EMC READY BUNDLE FOR MICROSOFT EXCHANGE

DELL EMC READY BUNDLE FOR MICROSOFT EXCHANGE DELL EMC READY BUNDLE FOR MICROSOFT EXCHANGE EXCHANGE SERVER 2016 Design Guide ABSTRACT This Design Guide describes the design principles and solution components for Dell EMC Ready Bundle for Microsoft

More information

Assessing performance in HP LeftHand SANs

Assessing performance in HP LeftHand SANs Assessing performance in HP LeftHand SANs HP LeftHand Starter, Virtualization, and Multi-Site SANs deliver reliable, scalable, and predictable performance White paper Introduction... 2 The advantages of

More information

Dell EMC Microsoft Exchange 2016 Solution

Dell EMC Microsoft Exchange 2016 Solution Dell EMC Microsoft Exchange 2016 Solution Design Guide for implementing Microsoft Exchange Server 2016 on Dell EMC R740xd servers and storage Dell Engineering October 2017 Design Guide Revisions Date October

More information

Guide for Deploying a Software-Defined Data Center (SDDC) with Solutions from Lenovo, VMware, and Intel

Guide for Deploying a Software-Defined Data Center (SDDC) with Solutions from Lenovo, VMware, and Intel Guide for Deploying a Software-Defined Data Center (SDDC) with Solutions from Lenovo, VMware, and Intel Installation Guide Intel Builders Lenovo vsan ReadyNodes Deploying a Software-Defined Data Center

More information

StarWind Virtual SAN Configuring HA SMB File Server in Windows Server 2016

StarWind Virtual SAN Configuring HA SMB File Server in Windows Server 2016 One Stop Virtualization Shop StarWind Virtual SAN Configuring HA SMB File Server in Windows Server 2016 APRIL 2018 TECHNICAL PAPER Trademarks StarWind, StarWind Software and the StarWind and the StarWind

More information

Running Microsoft SQL Server 2012 on a Scale-Out File Server Cluster via SMB Direct Connection Solution Utilizing IBM System x Servers

Running Microsoft SQL Server 2012 on a Scale-Out File Server Cluster via SMB Direct Connection Solution Utilizing IBM System x Servers Highly Available Scale-Out File Server on IBM System x3650 M4 November 2012 Running Microsoft SQL Server 2012 on a Scale-Out File Server Cluster via SMB Direct Connection Solution Utilizing IBM System

More information

Microsoft SharePoint Server 2013 on Dell PowerEdge R630 with Microsoft Hyper-V Virutalization Deployment Guide

Microsoft SharePoint Server 2013 on Dell PowerEdge R630 with Microsoft Hyper-V Virutalization Deployment Guide Microsoft SharePoint Server 2013 on Dell PowerEdge R630 with Microsoft Hyper-V Virutalization Deployment Guide Notes, cautions, and warnings NOTE: A NOTE indicates important information that helps you

More information

TPC-E testing of Microsoft SQL Server 2016 on Dell EMC PowerEdge R830 Server and Dell EMC SC9000 Storage

TPC-E testing of Microsoft SQL Server 2016 on Dell EMC PowerEdge R830 Server and Dell EMC SC9000 Storage TPC-E testing of Microsoft SQL Server 2016 on Dell EMC PowerEdge R830 Server and Dell EMC SC9000 Storage Performance Study of Microsoft SQL Server 2016 Dell Engineering February 2017 Table of contents

More information

Lenovo XClarity Administrator Quick Start Guide Configuring Servers Using Lenovo XClarity Administrator

Lenovo XClarity Administrator Quick Start Guide Configuring Servers Using Lenovo XClarity Administrator Lenovo XClarity Administrator Quick Start Guide Configuring Servers Using Lenovo XClarity Administrator Version 2.3.0 Note Before using this information and the product it supports, read the general and

More information

Virtualizing your Datacenter

Virtualizing your Datacenter Virtualizing your Datacenter with Windows Server 2012 R2 & System Center 2012 R2 Hands-On Lab Step-by-Step Guide For the VMs the following credentials: Username: Contoso\Administrator Password: Passw0rd!

More information

IBM Spectrum NAS. Easy-to-manage software-defined file storage for the enterprise. Overview. Highlights

IBM Spectrum NAS. Easy-to-manage software-defined file storage for the enterprise. Overview. Highlights IBM Spectrum NAS Easy-to-manage software-defined file storage for the enterprise Highlights Reduce capital expenditures with storage software on commodity servers Improve efficiency by consolidating all

More information

StarWind Virtual SAN Installing and Configuring SQL Server 2017 Failover Cluster Instance on Windows Server 2016

StarWind Virtual SAN Installing and Configuring SQL Server 2017 Failover Cluster Instance on Windows Server 2016 One Stop Virtualization Shop Installing and Configuring SQL Server 2017 Failover Cluster Instance on Windows Server 2016 OCTOBER 2018 TECHNICAL PAPER Trademarks StarWind, StarWind Software and the StarWind

More information

vsphere Networking Update 1 ESXi 5.1 vcenter Server 5.1 vsphere 5.1 EN

vsphere Networking Update 1 ESXi 5.1 vcenter Server 5.1 vsphere 5.1 EN Update 1 ESXi 5.1 vcenter Server 5.1 vsphere 5.1 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check

More information

Deploy Microsoft SQL Server 2014 on a Cisco Application Centric Infrastructure Policy Framework

Deploy Microsoft SQL Server 2014 on a Cisco Application Centric Infrastructure Policy Framework White Paper Deploy Microsoft SQL Server 2014 on a Cisco Application Centric Infrastructure Policy Framework August 2015 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.

More information

Reference Architecture Microsoft Exchange 2013 on Dell PowerEdge R730xd 2500 Mailboxes

Reference Architecture Microsoft Exchange 2013 on Dell PowerEdge R730xd 2500 Mailboxes Reference Architecture Microsoft Exchange 2013 on Dell PowerEdge R730xd 2500 Mailboxes A Dell Reference Architecture Dell Engineering August 2015 A Dell Reference Architecture Revisions Date September

More information

Microsoft Exchange Server 2010 workload optimization on the new IBM PureFlex System

Microsoft Exchange Server 2010 workload optimization on the new IBM PureFlex System Microsoft Exchange Server 2010 workload optimization on the new IBM PureFlex System Best practices Roland Mueller IBM Systems and Technology Group ISV Enablement April 2012 Copyright IBM Corporation, 2012

More information

Dell Storage with Microsoft Storage Spaces Best Practices Guide

Dell Storage with Microsoft Storage Spaces Best Practices Guide Dell Storage with Microsoft Storage Spaces Best Practices Guide Notes, cautions, and warnings NOTE: A NOTE indicates important information that helps you make better use of your computer. CAUTION: A CAUTION

More information

StarWind Virtual SAN. Installing and Configuring SQL Server 2014 Failover Cluster Instance on Windows Server 2012 R2. One Stop Virtualization Shop

StarWind Virtual SAN. Installing and Configuring SQL Server 2014 Failover Cluster Instance on Windows Server 2012 R2. One Stop Virtualization Shop One Stop Virtualization Shop StarWind Virtual SAN Installing and Configuring SQL Server 2014 Failover Cluster Instance on Windows Server 2012 R2 OCTOBER 2018 TECHNICAL PAPER Trademarks StarWind, StarWind

More information

Pass-Through Technology

Pass-Through Technology CHAPTER 3 This chapter provides best design practices for deploying blade servers using pass-through technology within the Cisco Data Center Networking Architecture, describes blade server architecture,

More information

Dell PowerEdge R630 Configuration for Microsoft Private Cloud Fast Track v4

Dell PowerEdge R630 Configuration for Microsoft Private Cloud Fast Track v4 Dell PowerEdge R630 Configuration for Microsoft Private Cloud Fast Track v4 Deploying a scalable Microsoft Private Cloud on Dell PowerEdge servers Dell ESG Cloud Solutions Marketing Dell Enterprise Solutions

More information

StarWind Virtual SAN AWS EC2 Deployment Guide

StarWind Virtual SAN AWS EC2 Deployment Guide One Stop Virtualization Shop StarWind Virtual SAN AUGUST 2018 TECHNICAL PAPER Trademarks StarWind, StarWind Software and the StarWind and the StarWind Software logos are registered trademarks of StarWind

More information

SUSE OpenStack Cloud Production Deployment Architecture. Guide. Solution Guide Cloud Computing.

SUSE OpenStack Cloud Production Deployment Architecture. Guide. Solution Guide Cloud Computing. SUSE OpenStack Cloud Production Deployment Architecture Guide Solution Guide Cloud Computing Table of Contents page Introduction... 2 High Availability Configuration...6 Network Topography...8 Services

More information

Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage

Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage Database Solutions Engineering By Raghunatha M, Ravi Ramappa Dell Product Group October 2009 Executive Summary

More information

Microsoft SQL Server 2012 Fast Track Reference Configuration Using PowerEdge R720 and EqualLogic PS6110XV Arrays

Microsoft SQL Server 2012 Fast Track Reference Configuration Using PowerEdge R720 and EqualLogic PS6110XV Arrays Microsoft SQL Server 2012 Fast Track Reference Configuration Using PowerEdge R720 and EqualLogic PS6110XV Arrays This whitepaper describes Dell Microsoft SQL Server Fast Track reference architecture configurations

More information

DELL EMC READY BUNDLE FOR MICROSOFT SQL SERVER 2016

DELL EMC READY BUNDLE FOR MICROSOFT SQL SERVER 2016 DELL EMC READY BUNDLE FOR MICROSOFT SQL SERVER 2016 Enabled by Hyper-V Virtualization on Windows Server 2016, PowerEdge R740 Servers, and Unity 400 Hybrid Flash Storage January 2018 Abstract This deployment

More information

Storage Optimization with Oracle Database 11g

Storage Optimization with Oracle Database 11g Storage Optimization with Oracle Database 11g Terabytes of Data Reduce Storage Costs by Factor of 10x Data Growth Continues to Outpace Budget Growth Rate of Database Growth 1000 800 600 400 200 1998 2000

More information

SAN Design Best Practices for the Dell PowerEdge M1000e Blade Enclosure and EqualLogic PS Series Storage (1GbE) A Dell Technical Whitepaper

SAN Design Best Practices for the Dell PowerEdge M1000e Blade Enclosure and EqualLogic PS Series Storage (1GbE) A Dell Technical Whitepaper Dell EqualLogic Best Practices Series SAN Design Best Practices for the Dell PowerEdge M1000e Blade Enclosure and EqualLogic PS Series Storage (1GbE) A Dell Technical Whitepaper Storage Infrastructure

More information

3.1. Storage. Direct Attached Storage (DAS)

3.1. Storage. Direct Attached Storage (DAS) 3.1. Storage Data storage and access is a primary function of a network and selection of the right storage strategy is critical. The following table describes the options for server and network storage.

More information

Cisco UCS S3260 System Storage Management

Cisco UCS S3260 System Storage Management Storage Server Features and Components Overview, page 1 Cisco UCS S3260 Storage Management Operations, page 9 Disk Sharing for High Availability, page 10 Storage Enclosure Operations, page 15 Storage Server

More information

Overview. About the Cisco UCS S3260 System

Overview. About the Cisco UCS S3260 System About the Cisco UCS S3260 System, on page 1 How to Use This Guide, on page 3 Cisco UCS S3260 System Architectural, on page 5 Connectivity Matrix, on page 7 Deployment Options, on page 7 Management Through

More information

Using Switches with a PS Series Group

Using Switches with a PS Series Group Cisco Catalyst 3750 and 2970 Switches Using Switches with a PS Series Group Abstract This Technical Report describes how to use Cisco Catalyst 3750 and 2970 switches with a PS Series group to create a

More information

SOFTWARE-DEFINED BLOCK STORAGE FOR HYPERSCALE APPLICATIONS

SOFTWARE-DEFINED BLOCK STORAGE FOR HYPERSCALE APPLICATIONS SOFTWARE-DEFINED BLOCK STORAGE FOR HYPERSCALE APPLICATIONS SCALE-OUT SERVER SAN WITH DISTRIBUTED NVME, POWERED BY HIGH-PERFORMANCE NETWORK TECHNOLOGY INTRODUCTION The evolution in data-centric applications,

More information

Storage Protocol Offload for Virtualized Environments Session 301-F

Storage Protocol Offload for Virtualized Environments Session 301-F Storage Protocol Offload for Virtualized Environments Session 301-F Dennis Martin, President August 2016 1 Agenda About Demartek Offloads I/O Virtualization Concepts RDMA Concepts Overlay Networks and

More information

HCI: Hyper-Converged Infrastructure

HCI: Hyper-Converged Infrastructure Key Benefits: Innovative IT solution for high performance, simplicity and low cost Complete solution for IT workloads: compute, storage and networking in a single appliance High performance enabled by

More information

StarWind Virtual SAN Installing and Configuring SQL Server 2019 (TP) Failover Cluster Instance on Windows Server 2016

StarWind Virtual SAN Installing and Configuring SQL Server 2019 (TP) Failover Cluster Instance on Windows Server 2016 One Stop Virtualization Shop StarWind Virtual SAN Installing and Configuring SQL Server 2019 (TP) Failover Cluster Instance on Windows Server 2016 OCTOBER 2018 TECHNICAL PAPER Trademarks StarWind, StarWind

More information

Dell Compellent Storage Center

Dell Compellent Storage Center Dell Compellent Storage Center How to Setup a Microsoft Windows Server 2012 Failover Cluster Reference Guide Dell Compellent Technical Solutions Group January 2013 THIS BEST PRACTICES GUIDE IS FOR INFORMATIONAL

More information

Best Practices for Mixed Speed Devices within a 10 Gb EqualLogic Storage Area Network Using PowerConnect 8100 Series Switches

Best Practices for Mixed Speed Devices within a 10 Gb EqualLogic Storage Area Network Using PowerConnect 8100 Series Switches Best Practices for Mixed Speed Devices within a 10 Gb EqualLogic Storage Area Network Using PowerConnect 8100 Series Switches A Dell EqualLogic Best Practices Technical White Paper Storage Infrastructure

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V EMC VSPEX Abstract This describes the steps required to deploy a Microsoft Exchange Server 2013 solution on

More information

Deep Dive on SimpliVity s OmniStack A Technical Whitepaper

Deep Dive on SimpliVity s OmniStack A Technical Whitepaper Deep Dive on SimpliVity s OmniStack A Technical Whitepaper By Hans De Leenheer and Stephen Foskett August 2013 1 Introduction This paper is an in-depth look at OmniStack, the technology that powers SimpliVity

More information

InfiniBand Networked Flash Storage

InfiniBand Networked Flash Storage InfiniBand Networked Flash Storage Superior Performance, Efficiency and Scalability Motti Beck Director Enterprise Market Development, Mellanox Technologies Flash Memory Summit 2016 Santa Clara, CA 1 17PB

More information

vstart 50 VMware vsphere Solution Specification

vstart 50 VMware vsphere Solution Specification vstart 50 VMware vsphere Solution Specification Release 1.3 for 12 th Generation Servers Dell Virtualization Solutions Engineering Revision: A00 March 2012 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES

More information

RocketU 1144BM Host Controller

RocketU 1144BM Host Controller RocketU 1144BM Host Controller USB 3.0 Host Adapters for Mac User s Guide Revision: 1.0 Oct. 22, 2012 HighPoint Technologies, Inc. 1 Copyright Copyright 2012 HighPoint Technologies, Inc. This document

More information

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE Design Guide APRIL 0 The information in this publication is provided as is. Dell Inc. makes no representations or warranties

More information

StarWind Virtual SAN Compute and Storage Separated with Windows Server 2012 R2

StarWind Virtual SAN Compute and Storage Separated with Windows Server 2012 R2 One Stop Virtualization Shop StarWind Virtual SAN Compute and Storage Separated with Windows Server 2012 R2 FEBRUARY 2018 TECHNICAL PAPER Trademarks StarWind, StarWind Software and the StarWind and the

More information

StarWind Virtual SAN 2-Node Stretched Hyper-V Cluster on Windows Server 2016

StarWind Virtual SAN 2-Node Stretched Hyper-V Cluster on Windows Server 2016 One Stop Virtualization Shop StarWind Virtual SAN 2-Node Stretched Hyper-V Cluster on Windows Server 2016 APRIL 2018 TECHNICAL PAPER Trademarks StarWind, StarWind Software and the StarWind and the StarWind

More information

Benefits of 25, 40, and 50GbE Networks for Ceph and Hyper- Converged Infrastructure John F. Kim Mellanox Technologies

Benefits of 25, 40, and 50GbE Networks for Ceph and Hyper- Converged Infrastructure John F. Kim Mellanox Technologies Benefits of 25, 40, and 50GbE Networks for Ceph and Hyper- Converged Infrastructure John F. Kim Mellanox Technologies Storage Transitions Change Network Needs Software Defined Storage Flash Storage Storage

More information

Lenovo - Excelero NVMesh Reference Architecture

Lenovo - Excelero NVMesh Reference Architecture Lenovo - Excelero NVMesh Reference Architecture How adding a dash of software to your server platform turns DAS into a high performance shared storage solution. Introduction Following the example of Tech

More information

Accelerate Applications Using EqualLogic Arrays with directcache

Accelerate Applications Using EqualLogic Arrays with directcache Accelerate Applications Using EqualLogic Arrays with directcache Abstract This paper demonstrates how combining Fusion iomemory products with directcache software in host servers significantly improves

More information

N V M e o v e r F a b r i c s -

N V M e o v e r F a b r i c s - N V M e o v e r F a b r i c s - H i g h p e r f o r m a n c e S S D s n e t w o r k e d f o r c o m p o s a b l e i n f r a s t r u c t u r e Rob Davis, VP Storage Technology, Mellanox OCP Evolution Server

More information

Upgrading Your Skills to MCSA: Windows Server 2016

Upgrading Your Skills to MCSA: Windows Server 2016 Upgrading Your Skills to MCSA: Windows Server 2016 Audience Profile: Candidates for this exam are IT professionals who implement the Windows Server 2016 core infrastructure services. Candidates have already

More information

QuickSpecs. HP Z 10GbE Dual Port Module. Models

QuickSpecs. HP Z 10GbE Dual Port Module. Models Overview Models Part Number: 1Ql49AA Introduction The is a 10GBASE-T adapter utilizing the Intel X722 MAC and X557-AT2 PHY pairing to deliver full line-rate performance, utilizing CAT 6A UTP cabling (or

More information

Dell PowerEdge R720xd with PERC H710P: A Balanced Configuration for Microsoft Exchange 2010 Solutions

Dell PowerEdge R720xd with PERC H710P: A Balanced Configuration for Microsoft Exchange 2010 Solutions Dell PowerEdge R720xd with PERC H710P: A Balanced Configuration for Microsoft Exchange 2010 Solutions A comparative analysis with PowerEdge R510 and PERC H700 Global Solutions Engineering Dell Product

More information

PowerEdge FX2 - Upgrading from 10GbE Pass-through Modules to FN410S I/O Modules

PowerEdge FX2 - Upgrading from 10GbE Pass-through Modules to FN410S I/O Modules PowerEdge FX - Upgrading from 0GbE Pass-through Modules to FN40S I/O Modules Dell Networking Solutions Engineering June 06 A Dell EMC Deployment and Configuration Guide Revisions Date Revision Description

More information

SMB Direct Update. Tom Talpey and Greg Kramer Microsoft Storage Developer Conference. Microsoft Corporation. All Rights Reserved.

SMB Direct Update. Tom Talpey and Greg Kramer Microsoft Storage Developer Conference. Microsoft Corporation. All Rights Reserved. SMB Direct Update Tom Talpey and Greg Kramer Microsoft 1 Outline Part I Ecosystem status and updates SMB 3.02 status SMB Direct applications RDMA protocols and networks Part II SMB Direct details Protocol

More information

InfoSphere Warehouse with Power Systems and EMC CLARiiON Storage: Reference Architecture Summary

InfoSphere Warehouse with Power Systems and EMC CLARiiON Storage: Reference Architecture Summary InfoSphere Warehouse with Power Systems and EMC CLARiiON Storage: Reference Architecture Summary v1.0 January 8, 2010 Introduction This guide describes the highlights of a data warehouse reference architecture

More information

Chelsio Communications. Meeting Today s Datacenter Challenges. Produced by Tabor Custom Publishing in conjunction with: CUSTOM PUBLISHING

Chelsio Communications. Meeting Today s Datacenter Challenges. Produced by Tabor Custom Publishing in conjunction with: CUSTOM PUBLISHING Meeting Today s Datacenter Challenges Produced by Tabor Custom Publishing in conjunction with: 1 Introduction In this era of Big Data, today s HPC systems are faced with unprecedented growth in the complexity

More information

Microsoft SharePoint Server 2010 Implementation on Dell Active System 800v

Microsoft SharePoint Server 2010 Implementation on Dell Active System 800v Microsoft SharePoint Server 2010 Implementation on Dell Active System 800v A Design and Implementation Guide for SharePoint Server 2010 Collaboration Profile on Active System 800 with VMware vsphere Dell

More information

Dell PowerVault MD Family. Modular storage. The Dell PowerVault MD storage family

Dell PowerVault MD Family. Modular storage. The Dell PowerVault MD storage family Dell MD Family Modular storage The Dell MD storage family Dell MD Family Simplifying IT The Dell MD Family simplifies IT by optimizing your data storage architecture and ensuring the availability of your

More information

Jake Howering. Director, Product Management

Jake Howering. Director, Product Management Jake Howering Director, Product Management Solution and Technical Leadership Keys The Market, Position and Message Extreme Integration and Technology 2 Market Opportunity for Converged Infrastructure The

More information

VMware vsphere Storage Appliance Installation and Configuration

VMware vsphere Storage Appliance Installation and Configuration VMware vsphere Storage Appliance Installation and Configuration vsphere Storage Appliance 1.0 vsphere 5.0 This document supports the version of each product listed and supports all subsequent versions

More information

Cisco HyperFlex Systems and Veeam Backup and Replication

Cisco HyperFlex Systems and Veeam Backup and Replication Cisco HyperFlex Systems and Veeam Backup and Replication Best practices for version 9.5 update 3 on Microsoft Hyper-V What you will learn This document outlines best practices for deploying Veeam backup

More information

FlashGrid Software Enables Converged and Hyper-Converged Appliances for Oracle* RAC

FlashGrid Software Enables Converged and Hyper-Converged Appliances for Oracle* RAC white paper FlashGrid Software Intel SSD DC P3700/P3600/P3500 Topic: Hyper-converged Database/Storage FlashGrid Software Enables Converged and Hyper-Converged Appliances for Oracle* RAC Abstract FlashGrid

More information

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure Nutanix Tech Note Virtualizing Microsoft Applications on Web-Scale Infrastructure The increase in virtualization of critical applications has brought significant attention to compute and storage infrastructure.

More information