
Mellanox CloudX, Mirantis Fuel 5.1/5.1.1/6.0 Solution Guide
Rev 1.2
www.mellanox.com

NOTE: THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ("PRODUCT(S)") AND ITS RELATED DOCUMENTATION ARE PROVIDED BY MELLANOX TECHNOLOGIES "AS-IS" WITH ALL FAULTS OF ANY KIND AND SOLELY FOR THE PURPOSE OF AIDING THE CUSTOMER IN TESTING APPLICATIONS THAT USE THE PRODUCTS IN DESIGNATED SOLUTIONS. THE CUSTOMER'S MANUFACTURING TEST ENVIRONMENT HAS NOT MET THE STANDARDS SET BY MELLANOX TECHNOLOGIES TO FULLY QUALIFY THE PRODUCT(S) AND/OR THE SYSTEM USING IT. THEREFORE, MELLANOX TECHNOLOGIES CANNOT AND DOES NOT GUARANTEE OR WARRANT THAT THE PRODUCTS WILL OPERATE WITH THE HIGHEST QUALITY. ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT ARE DISCLAIMED. IN NO EVENT SHALL MELLANOX BE LIABLE TO CUSTOMER OR ANY THIRD PARTIES FOR ANY DIRECT, INDIRECT, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES OF ANY KIND (INCLUDING, BUT NOT LIMITED TO, PAYMENT FOR PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY FROM THE USE OF THE PRODUCT(S) AND RELATED DOCUMENTATION EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Mellanox Technologies
350 Oakmead Parkway Suite 100
Sunnyvale, CA 94085 U.S.A.
www.mellanox.com
Tel: (408) 970-3400
Fax: (408) 970-3403

Mellanox Technologies, Ltd.
Beit Mellanox
PO Box 586 Yokneam 20692
Israel
www.mellanox.com
Tel: +972 (0)74 723 7200
Fax: +972 (0)4 959 3245

Copyright 2014. Mellanox Technologies. All Rights Reserved. Mellanox, Mellanox logo, BridgeX, ConnectX, Connect-IB, CoolBox, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, MetroX, MLNX-OS, PhyX, ScalableHPC, SwitchX, UFM, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. ExtendX, FabricIT, Mellanox Open Ethernet, Mellanox Virtual Modular Switch, MetroDX, TestX, Unbreakable-Link are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

MLNX-15-3736

Table of Contents

Preface
1 Overview
2 Virtualization
  2.1 eswitch Capabilities and Characteristics
  2.2 Performance Measurements
3 Storage Acceleration
4 Networking
  4.1 Network Types
    4.1.1 Admin (PXE) Network
    4.1.2 Storage Network
    4.1.3 Management Network
    4.1.4 Private Networks
    4.1.5 Public Network
  4.2 Physical Connectivity
  4.3 Network Separation
  4.4 Lossless Fabric (Flow Control)
5 Requirements
  5.1 Hardware Requirements
  5.2 Operating Systems
6 Rack Configuration
  6.1 32 Compute Node Setup with HA (3 Controllers)
7 Installation and Configuration

Preface

About this Manual
This manual is a reference architecture and installation guide for a small-size OpenStack cloud of 2-32 compute nodes based on Mellanox interconnect hardware and Mirantis Fuel software.

Audience
This manual is intended for IT engineers, system architects, and anyone interested in understanding or deploying Mellanox CloudX using Mirantis Fuel.

Related Documentation
For additional information, see the following documents:

Mellanox OpenStack Reference Architecture: http://www.mellanox.com/openstack/
Mellanox MLNX-OS User Manual: http://support.mellanox.com/ (NOTE: an active support account is required to access the manual)
HowTo Install Mirantis Fuel 5.1/5.1.1 OpenStack with Mellanox Adapters Support: http://community.mellanox.com/docs/doc-1474
HowTo Install Mirantis Fuel 6.0 OpenStack with Mellanox Adapters Support: https://community.mellanox.com/docs/doc-2077
HowTo Configure 56GbE Link on Mellanox Adapters and Switches: http://community.mellanox.com/docs/doc-1460
Firmware - Driver Compatibility Matrix: http://www.mellanox.com/page/mlnx_ofed_matrix?mtag=linux_sw_drivers
Mirantis OpenStack documentation guides: http://docs.mirantis.com/openstack/fuel/fuel-5.1/ and http://docs.mirantis.com/openstack/fuel/fuel-6.0/

Revision History
1.0 - First document release
1.1 - Minor changes for 5.1.1 support
1.2 - Minor changes for 6.0 support

1 Overview

Mellanox CloudX is a reference architecture for an efficient cloud infrastructure that runs open source cloud software (i.e., OpenStack) on Mellanox interconnect technology. CloudX utilizes off-the-shelf building blocks (servers, storage, interconnect, and software) to form flexible and cost-effective private, public, and hybrid clouds. It combines virtualization with high-bandwidth, low-latency interconnect solutions while significantly reducing data center costs.

Built around 40Gb/s and 56Gb/s Ethernet interconnect technology, CloudX provides fast data transfer and effective utilization of computing, storage, and flash SSD components. Based on the Mellanox high-speed, low-latency converged fabric, CloudX delivers significant CAPEX and OPEX reductions through the following means:
- High VM rate per compute node
- Efficient CPU utilization due to hardware offloads
- High throughput per server for compute and hypervisor tasks
- Fast, low-latency access to storage

Mirantis OpenStack is one of the most progressive, flexible, and open distributions of OpenStack. In a single commercially supported package, Mirantis OpenStack combines the latest innovations from the open source community with the testing and reliability expected of enterprise software. The integration of Mirantis Fuel software and Mellanox hardware produces a solid, high-performing solution for cloud providers.

The solution discussed in this guide is based on Mirantis OpenStack (OpenStack Icehouse release). Single Root I/O Virtualization (SR-IOV) based networking and iSER block storage over the Mellanox ConnectX-3 adapter family are integrated into Mirantis OpenStack and offer the following features:
- Fabric speeds of up to 56GbE based on Mellanox SX1036 Ethernet switch systems
- iSER (high-performance iSCSI protocol over RDMA) storage transport for Cinder
- SR-IOV high-performance VM links provided by the Mellanox SR-IOV plugin for OpenStack (included in the ML2 plugin)

2 Virtualization

SR-IOV allows a single physical PCIe device to present itself as multiple devices on the PCIe bus. Mellanox ConnectX-3 adapters can expose up to 127 virtual instances, called virtual functions (VFs), which can then be provisioned separately. Each VF can be viewed as an additional device associated with a physical function (PF). A VF shares resources with its PF, and its number of ports equals that of the PF. SR-IOV is commonly used in conjunction with an SR-IOV-enabled hypervisor to provide virtual machines (VMs) with direct hardware access to network resources, thereby improving performance.

Mellanox ConnectX-3 adapters are equipped with an onboard embedded switch (eswitch) capable of performing Layer-2 (L2) switching for the different VMs running on the server. Using the eswitch yields even higher performance levels and improves security and isolation.

The installation handles Mellanox NIC cards automatically: it updates the adapter to a firmware version that enables SR-IOV and defines 16 VFs by default. Each spawned VM is provisioned with one VF per attached network. The solution therefore supports up to 16 VMs on a single compute node connected to a single network, 8 VMs connected to 2 networks each, or any other combination that sums up to 16 vNICs in total. To support more than 16 vNICs, contact Mellanox Support.

Note: SR-IOV support for OpenStack is under development. Security groups are not supported with SR-IOV.

If the setup is based on Mellanox OEM NICs, make sure the NICs run at least the following minimum firmware version and that SR-IOV is enabled in that firmware (consult the vendor for more information):
- Fuel 5.1/5.1.1: OFED version 2.2-1.0.0 (FW version 2.31.5050) or later
- Fuel 6.0: OFED version 2.3-2.0.0 (FW version 2.32.5100) or later
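The VF count defined by the installation can be verified on a compute node through sysfs. The following is a minimal sketch, assuming a hypothetical interface name (enp4s0); substitute the actual ConnectX-3 port name on your host. It only reads the standard sriov_totalvfs and sriov_numvfs attributes exposed by the kernel for SR-IOV capable devices.

    #!/usr/bin/env python
    # Minimal sketch: inspect SR-IOV VF counts for a ConnectX-3 port via sysfs.
    # The interface name below is an assumption; replace it with your port name.
    import os

    IFACE = "enp4s0"  # hypothetical interface name
    DEV = "/sys/class/net/{}/device".format(IFACE)

    def read_attr(name):
        with open(os.path.join(DEV, name)) as f:
            return f.read().strip()

    if __name__ == "__main__":
        total = read_attr("sriov_totalvfs")    # VFs supported by the firmware
        current = read_attr("sriov_numvfs")    # VFs currently configured (16 by default here)
        print("SR-IOV VFs on {}: {} configured out of {} supported".format(IFACE, current, total))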

Figure 1 - eswitch Architecture

2.1 eswitch Capabilities and Characteristics

The main capabilities and characteristics of the eswitch are as follows:
- Virtual switching: creates multiple logical virtualized networks. The eswitch offload engines handle all networking operations toward the VM, dramatically reducing software overhead and cost.
- Performance: switching is handled in hardware, as opposed to solutions that use a software-based switch. This enhances performance by reducing CPU overhead.
- Security: the eswitch enables network isolation (using VLANs) and anti-MAC spoofing.
- Monitoring: port counters are supported.

2.2 Performance Measurements

Many data center applications benefit from low-latency network communication, while others require deterministic latency. Regular TCP connectivity between VMs can create high latency and unpredictable delay behavior. Figure 2 shows the dramatic difference (a 20X improvement) delivered by SR-IOV connectivity running RDMA compared to a paravirtualized vNIC running a TCP stream. With the direct connection provided by SR-IOV and ConnectX-3, the hardware eliminates the software processing that delays packet movement. The result is consistently low latency, allowing application software to rely on deterministic packet-transfer times.

Figure 2 - Latency Comparison

3 Storage Acceleration

Data centers rely on communication between compute and storage nodes, as compute servers constantly read and write data from storage servers. To maximize application performance, communication between the compute and storage nodes must have the lowest possible latency and CPU utilization, and the highest possible bandwidth.

Figure 3 - OpenStack Based IaaS Cloud POD Deployment Example

Storage applications that rely on the iSCSI-over-TCP protocol stack continuously interrupt the processor to perform basic data movement tasks (packet sequencing and reliability tests, reordering, acknowledgements, block-level translations, memory buffer copying, etc.). Data center applications that rely heavily on storage communication therefore suffer from reduced CPU efficiency, as the processor is busy sending data to and from the storage servers rather than performing application processing. The data path for applications and system processes must wait in line with protocols such as TCP, UDP, NFS, and iSCSI for their turn to use the CPU. This not only slows down the network, but also consumes system resources that could otherwise have been used to execute applications faster.

The Mellanox OpenStack solution extends the Cinder project by adding iSCSI running over RDMA (iSER). By leveraging RDMA, Mellanox OpenStack delivers up to 6X higher data throughput (for example, increasing from 1GB/s to 6GB/s) while simultaneously reducing CPU utilization by up to 80% (see Figure 4). Mellanox ConnectX-3 adapters bypass the operating system and CPU by using RDMA, allowing much more efficient data movement. iSER capabilities are used to accelerate hypervisor traffic, including storage access, VM migration, and data and VM replication. The use of RDMA shifts data-movement processing to the Mellanox ConnectX-3 hardware, which provides zero-copy message transfers for SCSI packets to the application, producing significantly faster performance, lower network latency, lower access time, and lower CPU overhead. iSER can provide 6X faster performance than traditional TCP/IP-based iSCSI. The iSER protocol also unifies the software development efforts of the Ethernet and InfiniBand communities, and reduces the number of storage protocols a user must learn and maintain.

RDMA bypass allows the application data path to effectively skip to the front of the line. Data is delivered directly to the application upon receipt, without being subject to the variable delays of CPU load-dependent software queues. This has three effects:
- The latency of transactions is dramatically reduced.
- Because there is no contention for resources, the latency is deterministic, which is essential for offering end users a guaranteed SLA.
- Bypassing the OS using RDMA results in significant savings in CPU cycles. With a more efficient system in place, those saved CPU cycles can be used to accelerate application performance.

Figure 4 shows that by offloading the data transfers to hardware using the iSER protocol, the full capacity of the link is utilized, up to the PCIe limit.

To summarize, network performance is a significant element in the overall delivery of data center services and benefits from high-speed interconnects. Unfortunately, the high CPU overhead associated with traditional storage adapters prevents systems from taking full advantage of these high-speed interconnects. The iSER protocol uses RDMA to shift data-movement tasks to the network adapter, thus freeing up CPU cycles that would otherwise be consumed executing traditional TCP and iSCSI protocols. Using RDMA-based fast interconnects therefore significantly increases data center application performance levels.

Figure 4 - RDMA Acceleration
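To make the combined effect of the throughput and CPU figures above concrete, the following back-of-the-envelope sketch uses the example numbers quoted in this section (1GB/s versus 6GB/s, up to 80% lower CPU utilization). These are illustrative figures, not measurements from this setup.

    # Back-of-the-envelope sketch using the example numbers quoted above.
    tcp_throughput_gbs = 1.0       # example iSCSI-over-TCP throughput
    iser_throughput_gbs = 6.0      # example iSER throughput (6X)

    tcp_cpu_pct = 100.0                            # normalized CPU cost of the TCP path
    iser_cpu_pct = tcp_cpu_pct * (1.0 - 0.80)      # up to 80% reduction

    tcp_efficiency = tcp_throughput_gbs / tcp_cpu_pct
    iser_efficiency = iser_throughput_gbs / iser_cpu_pct

    # Roughly 30x more data moved per unit of CPU spent on storage I/O in this example.
    print("Throughput per CPU%% improves by ~%.0fx" % (iser_efficiency / tcp_efficiency))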

4 Networking

This solution defines the following node functions:
- Fuel node (master)
- Compute nodes
- Controller (and network) nodes
- Storage node

The following five networks are required for this solution:
- Public network
- Admin (PXE) network
- Storage network
- Management network
- Private network

In this solution all nodes are connected to all five networks, except the Fuel node, which is connected only to the Public and Admin (PXE) networks. Although not every node strictly needs to connect to every network, this is done by Fuel design. The five networks are implemented using three physical networks.

Figure 5 - Solution Networking

4.1 Network Types

4.1.1 Admin (PXE) Network
The Admin (PXE) network is used for PXE boot of the cloud servers and for the OpenStack installation. It uses a 1GbE port on each server.

Figure 6 - Admin (PXE) Network

4.1.2 Storage Network
The storage network carries tenant storage traffic. It is connected via the SX1036 (40/56GbE) switch.

The iSER protocol runs between the hypervisors and the storage node over the 40/56GbE storage network. The VLAN used for the storage network is configured in the Fuel UI.

4.1.3 Management Network
The management network is an internal network; all OpenStack components communicate with each other over it. It is connected via the SX1036 Ethernet switch (40/56GbE). The VLAN used for the management network is configured in the Fuel UI.

Figure 7 - Management and Storage Networks

4.1.4 Private Networks
The private networks are used for communication among the tenant VMs. Each tenant may have several networks. If connectivity is required between networks owned by the same tenant, the traffic passes through the network node, which is responsible for routing.

Fuel 5.1/5.1.1/6.0 is based on OpenStack Icehouse, which supports only one network technology per deployment. This means that all private networks in the OpenStack deployment use the Mellanox Neutron agent, which is based on VLANs assigned to VFs. The VLAN range used for the private networks is configured in the Fuel UI.

Note: Allocate a number of VLANs that matches the number of private networks to be used.

Figure 8 - Private Network

4.1.5 Public Network
The public network enables external connectivity (e.g., to the internet) for all nodes. It runs over the 1GbE ports of each server and is also used to access the various OpenStack APIs.

The public network range is split into two parts:
- The public range, which allows external connectivity to the compute hypervisors and all other hosts.
- The floating IP range, a subset of addresses within the public network that enables VMs to communicate with the outside world (via the controller node).

Public access to the hypervisors or node OS may also be required (e.g., for SSH access). The cloud owner must decide how to allow external connectivity to the cloud servers, or cloud servers' access to the internet (either the PXE or the public network may be used).

Figure 9 - Public Network

Floating IP traffic uses the private network to pass from the VMs to the network node, and then the public network to pass from the network node to the internet.

Figure 10 - Floating-IP Network
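As a concrete illustration of the public/floating split described in Section 4.1.5, the sketch below divides an example public subnet into a host (public) range and a floating IP range. The 10.7.208.0/24 subnet and the boundary chosen here are assumptions for the example, not values mandated by Fuel; use the ranges planned for your cloud.

    # Sketch: splitting an example public subnet into a public (host) range
    # and a floating IP range. Subnet and boundary are hypothetical.
    import ipaddress

    public_net = ipaddress.ip_network("10.7.208.0/24")
    hosts = list(public_net.hosts())

    # First part: addresses assigned to hypervisors, controllers and other hosts.
    public_range = (hosts[0], hosts[127])
    # Second part: addresses handed out to VMs as floating IPs (via the controller node).
    floating_range = (hosts[128], hosts[-1])

    print("Public range:   {} - {}".format(*public_range))
    print("Floating range: {} - {}".format(*floating_range))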

4.2 Physical Connectivity
The five networks discussed above are connected via three ports on each server: 1GbE ports for the admin (PXE) and public networks, and a single ConnectX-3 Pro adapter port for the private, storage, and management networks.

Figure 11 - Physical Connectivity

4.3 Network Separation
VLANs should be configured on the switches to provide network separation. The VLANs configured on the switches must be aligned with the VLAN ranges configured in Fuel for the management, private, and storage networks.

4.4 Lossless Fabric (Flow Control)
RDMA requires a lossless fabric. To achieve this, flow control (global pause) should be enabled on all ports that may carry RDMA traffic and are connected to the SX1036 switch system. This is achieved by configuring global pause across all network hardware components.
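On the server side, global pause can be checked and, if needed, enabled with ethtool on the ConnectX-3 port, as in the minimal sketch below. The interface name is an assumption, the commands require root, and the matching flow-control setting must also be applied on the SX1036 switch ports for the fabric to be lossless end to end.

    # Sketch: verify/enable global pause (flow control) on the ConnectX-3 port
    # carrying the storage/management/private networks. Interface name is
    # hypothetical; switch ports must be configured for global pause as well.
    import subprocess

    IFACE = "enp4s0"  # hypothetical ConnectX-3 port name

    def pause_settings(iface):
        """Return the current pause settings as reported by 'ethtool -a'."""
        return subprocess.check_output(["ethtool", "-a", iface]).decode()

    def enable_global_pause(iface):
        """Enable RX and TX pause frames on the given interface (requires root)."""
        subprocess.check_call(["ethtool", "-A", iface, "rx", "on", "tx", "on"])

    if __name__ == "__main__":
        print(pause_settings(IFACE))
        enable_global_pause(IFACE)
        print(pause_settings(IFACE))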

5 Requirements

Mirantis Fuel defines the following node functions:
- Fuel node
- Compute nodes
- Controller nodes
- Storage nodes

5.1 Hardware Requirements
Table 1 lists the minimum hardware requirements.

Table 1 - Hardware Requirements

Fuel node (master), quantity 1:
  Intel server, 2 x 1GbE ports, >4-core CPU, >=8GB RAM, 0.5TB SATA HDD.

Compute nodes, quantity 2-32:
  Intel PCIe Gen-3 server with at least one x8 PCIe Gen-3 slot, 2 x 1GbE ports, 2 x >4-core CPUs, >=128GB RAM, 0.5TB SATA HDD, SR-IOV support in BIOS.
  ConnectX-3 Pro EN or ConnectX-3 Pro VPI single-port network adapter (P/N EN: MCX313A-BCCT, VPI: MCX353A-FCCT).

Controller nodes, quantity 3:
  Intel PCIe Gen-3 server with at least one x8 PCIe Gen-3 slot, >2 x 1GbE ports, 2 x >4-core CPUs, >=32GB RAM, 1TB SAS HDD, SR-IOV support in BIOS.
  ConnectX-3 Pro EN or ConnectX-3 Pro VPI single-port network adapter (P/N EN: MCX313A-BCCT, VPI: MCX353A-FCCT).

Storage node, quantity 1:
  Intel PCIe Gen-3 server with at least one x8 PCIe Gen-3 slot, 2 x 1GbE ports, 2 x >4-core CPUs, >=64GB RAM, 0.5TB SATA HDD.
  ConnectX-3 Pro EN or ConnectX-3 Pro VPI single-port network adapter (P/N EN: MCX313A-BCCT, VPI: MCX353A-FCCT).

Storage, management, and private switch, quantity 1:
  Mellanox SX1036 40/56GbE switch, 36 ports.

Public and admin (PXE) switch, quantity 1 or 2:
  1Gb switch (any switch).

56Gb/s cables, 1 per server:
  FDR InfiniBand/56GbE copper cables, up to 2m (P/N: MC2207130-XXX).

1Gb/s cables, 2 per server.

Note: For an example of a high-performance storage solution using a RAID adapter, click here.

5.2 Operating Systems
All servers are installed with CentOS 6.5 or Ubuntu 12.04.4 (the Mirantis distribution) via the Fuel server. The VMs must include the appropriate driver to support SR-IOV mode over Mellanox ConnectX-3.
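Inside a guest, the presence of the ConnectX-3 (mlx4) driver can be checked with a short sketch like the one below. The module names are the standard upstream ones for ConnectX-3; adjust the list if your image uses a different packaging (e.g., MLNX_OFED) or also needs RDMA in the guest (mlx4_ib).

    # Sketch: check inside a VM image that the mlx4 driver modules needed for an
    # SR-IOV virtual function are loaded.
    import subprocess

    EXPECTED = ("mlx4_core", "mlx4_en")  # add "mlx4_ib" if RDMA is used in the guest

    def loaded_modules():
        out = subprocess.check_output(["lsmod"]).decode()
        return {line.split()[0] for line in out.splitlines()[1:] if line.strip()}

    if __name__ == "__main__":
        missing = [m for m in EXPECTED if m not in loaded_modules()]
        if missing:
            print("Missing ConnectX-3 driver modules in this guest: %s" % ", ".join(missing))
        else:
            print("ConnectX-3 VF driver modules are loaded.")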

6 Rack Configuration

6.1 32 Compute Node Setup with HA (3 Controllers)
This section provides a recommended rack design for a basic cloud setup of up to 36 nodes (32 compute nodes).

Figure 12 - Rack Configuration
The rack contains a Public 1Gb switch, an Admin (PXE) 1Gb switch, and the 40/56Gb cloud network switch (management, storage, and private networks), along with the 32 compute nodes, 3 controller nodes, the storage (Cinder) node, and the Fuel node. All 32 compute cloud nodes have identical wiring to the Admin (PXE) switch, the Public switch, and the cloud network switch; the Fuel node is not connected to the cloud network switch.

7 Installation and Configuration

For information about cloud installation and configuration for Fuel 5.1/5.1.1, see "HowTo Install Mirantis Fuel 5.1/5.1.1 OpenStack with Mellanox Adapters Support" (http://community.mellanox.com/docs/doc-1474).
For information about cloud installation and configuration for Fuel 6.0, see "HowTo Install Mirantis Fuel 6.0 OpenStack with Mellanox Adapters Support" (https://community.mellanox.com/docs/doc-2077).
For a custom storage server installation and configuration example, click here.