MCP Standard Configuration
version latest

Contents

Copyright notice
Preface
  Intended audience
  Documentation history
Cloud sizes
  OpenStack environment sizes
  Kubernetes cluster sizes
  StackLight LMA and environment sizes
  Ceph cluster sizes
Minimum hardware requirements
Infrastructure nodes disk layout
Networking
  Server networking
  Access networking
  Switching fabric capabilities
  NFV considerations
StackLight LMA scaling
  StackLight LMA server roles and services distribution
  StackLight LMA resource requirements per cloud size
OpenStack environment scaling
  OpenStack server roles
  Services distribution across nodes
  Operating systems and versions
  OpenStack compact cloud architecture
  OpenStack small cloud architecture
  OpenStack medium cloud architecture with Neutron OVS DVR/non-DVR
  OpenStack medium cloud architecture with OpenContrail
  OpenStack large cloud architecture
Kubernetes cluster scaling
  Small Kubernetes cluster architecture
  Medium Kubernetes cluster architecture
  Large Kubernetes cluster architecture
Ceph hardware requirements
  Ceph cluster considerations
  Ceph hardware considerations


Copyright notice

2018 Mirantis, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. No part of this publication may be reproduced in any written, electronic, recording, or photocopying form without written permission of Mirantis, Inc. Mirantis, Inc. reserves the right to modify the content of this document at any time without prior notice. Functionality described in the document may not be available at the moment. The document contains the latest information at the time of publication. Mirantis, Inc. and the Mirantis Logo are trademarks of Mirantis, Inc. and/or its affiliates in the United States and other countries. Third party trademarks, service marks, and names mentioned in this document are the properties of their respective owners.

Preface

This documentation provides information on how to use Mirantis products to deploy cloud environments. The information is for reference purposes and is subject to change.

Intended audience

This documentation is intended for deployment engineers, system administrators, and developers; it assumes that the reader is already familiar with network and cloud concepts.

Documentation history

This is the latest version of the Mirantis Cloud Platform documentation. It is updated continuously to reflect the recent changes in the product. To switch to any release-tagged version of the Mirantis Cloud Platform documentation, use the Current version drop-down menu on the home page.

Cloud sizes

The Mirantis Cloud Platform (MCP) enables you to deploy OpenStack environments and Kubernetes clusters of different scales. This document uses the terms compact, small, medium, and large clouds to correspond to the number of virtual machines that you can run in your OpenStack environment or the number of pods that you can run in your Kubernetes cluster. This section describes the sizes of MCP clusters.

OpenStack environment sizes

All cloud environments require you to configure a staging environment which mimics the exact production setup and enables you to test configuration changes prior to applying them in production. However, if you have multiple clouds of the same type, one staging environment is sufficient. Environments configured with Neutron OVS as a networking solution require additional hardware nodes called network nodes or tenant network gateway nodes.

The following table describes the number of virtual machines for each scale:

OpenStack environment sizes

Environment size   Number of virtual machines   Number of compute nodes   Number of infrastructure nodes
Compact            Up to 1,000                  Up to 50                  3
Small              1,000-2,500                  50-100                    6
Medium             2,500-5,000                  100-200                   9 (Neutron OVS or OpenContrail)
Large              5,000-10,000                 200-500                   12 (OpenContrail only)

Kubernetes cluster sizes

Kubernetes and etcd are lightweight applications that typically do not require a lot of resources. However, each Kubernetes component scales differently and some of them may have limitations on scaling. Based on performance testing, an optimal density of pods is 100 pods per Kubernetes Node.

Note

If you run both Kubernetes clusters and OpenStack environments in one MCP installation, you do not need to configure additional infrastructure nodes. Instead, the same infrastructure nodes are used for both.

The following table describes the number of Kubernetes Master nodes and Kubernetes Nodes for each scale:

Kubernetes cluster sizes

Cluster size   Number of pods    Number of Nodes   Number of Master nodes
Small          2,000-5,000       20-50             3
Medium         5,000-20,000      50-200            6
Large          20,000-50,000     200-500           9

The Node counts follow from the optimal density of 100 pods per Kubernetes Node described above.

StackLight LMA and environment sizes

The following table contains StackLight LMA sizing recommendations for different sizes of monitored clouds.

Environment size   Number of monitored nodes   Number of StackLight LMA nodes   Notes and comments
Compact            Up to 50                    0 (colocated with VCP)           StackLight LMA is installed on virtual machines running on the same KVM infrastructure nodes as VCP.
Small              50-100                      3                                StackLight LMA is installed on dedicated infrastructure nodes running Prometheus long-term storage and/or InfluxDB, Elasticsearch, and the Docker Swarm mode cluster.
Medium             100-200                     3                                StackLight LMA is installed on dedicated infrastructure nodes running Prometheus long-term storage and/or InfluxDB, Elasticsearch, and the Docker Swarm mode cluster.
Large              200-500                     6                                StackLight LMA is installed on 3 dedicated infrastructure nodes running Prometheus long-term storage and/or InfluxDB and Elasticsearch, and on 3 bare metal servers running the Docker Swarm mode cluster. The Elasticsearch cluster can be scaled up to 5 nodes, which raises the number of StackLight LMA nodes to 8.

Ceph cluster sizes

The following table summarizes the number of storage nodes required depending on cloud size to meet the performance characteristics described in Ceph cluster considerations.

Ceph storage nodes per cloud size

Cloud size   Number of compute nodes   Number of virtual machines   Storage nodes
Compact      Up to 50                  Up to 1,000                  Up to 9 (20-disk chassis)
Small        50-100                    1,000-2,000                  20-disk chassis
Medium       100-200                   2,000-4,000                  20-disk chassis
Large        200-500                   4,000-10,000                 36-disk chassis

The number of Ceph Monitors is typically 3 for each cloud size; see the note in Ceph hardware considerations.

See also

Ceph hardware considerations

Minimum hardware requirements

When calculating hardware requirements, you need to plan for the infrastructure nodes, compute nodes, storage nodes, and, if you are installing a Kubernetes cluster, Kubernetes Master nodes and Kubernetes Nodes.

Note

For more details about services distribution throughout the nodes, see: OpenStack environment scaling and Kubernetes cluster scaling.

The following tables list minimal hardware requirements for the corresponding nodes of the cloud:

MCP foundation node

CPU: 1 x 12 Core CPU Intel Xeon E5-2670v3 or 1 x 14 Core CPU Intel Xeon Gold 5120 (Skylake-SP)
RAM: 32 GB
Disk: 1 x 2 TB HDD
Network: 1 x 1/10 GB Intel X710 dual-port NIC

OpenStack infrastructure node

CPU: 2 x 12 Core CPUs Intel Xeon E5-2670v3, 2 x 12 Core CPUs Intel Xeon Gold 6126 (Skylake-SP), or 2 x 14 Core CPUs Intel Xeon Gold 5120 (Skylake-SP)
RAM: 256 GB
Disk: 2 x 1.6 TB SSD Intel S3520 or similar
Network: 2 x 10 GB Intel X710 dual-port NICs

OpenStack compute node

CPU: 2 x 12 Core CPUs Intel Xeon E5-2670v3, 2 x 12 Core CPUs Intel Xeon Gold 6126 (Skylake-SP), or 2 x 14 Core CPUs Intel Xeon Gold 5120 (Skylake-SP)
RAM: 256 GB
Disk: 1 x 2 TB HDD for system storage; 2 x 960 GB SSD Intel S3520 for VM disk storage [1]
Network: 2 x 10 GB Intel X710 dual-port NICs

[1] Only required if local storage for VM ephemeral disks is used. Not required if network distributed storage is set up in the cloud, such as Ceph.

Ceph storage node

CPU: 2 x 8 Core CPU Intel E5-2630v4 or 2 x 8 Core CPU Intel Xeon Silver
RAM: … GB
Disk: 4 x 200 GB SSD Intel S3710 for journal storage; 20 x 2 TB HDD for data storage
Boot drives: 2 x 128 GB Disk on Module (DOM) or 2 x 150 GB SSD Intel S3520 or similar
Network: 1 x 10 GB Intel X710 dual-port NIC or similar

Kubernetes Master node or Kubernetes Node

CPU: 2 x 12 Core CPUs Intel Xeon E5-2670v3, 2 x 12 Core CPUs Intel Xeon Gold 6126, or 2 x 14 Core CPUs Intel Xeon Gold 5120
RAM: … GB
Disk: 2 x 960 GB SSD Intel S3520 or similar for system storage
Network: 2 x 10 GB Intel X710 dual-port NICs

Infrastructure nodes disk layout

Infrastructure nodes are typically installed on hardware servers. These servers run all components of the management and control plane for both MCP and the cloud itself. It is very important to configure hardware servers properly upfront, because changing their configuration after the initial deployment is costly. For instructions on how to configure the disk layout for MAAS to provision the hardware machines, see MCP Deployment Guide: Add a custom disk layout per node in the MCP model.

Consider the following recommendations:

Layout
Mirantis recommends using the LVM layout for disks on infrastructure machines. This option allows for more operational flexibility, such as resizing the Volume Groups and Logical Volumes for scale-out.

LVM Volume Groups
According to Minimum hardware requirements, an infrastructure node typically has two SSD disks. These disks must be configured as LVM Physical Volumes and joined into a Volume Group. The name of the Volume Group must be the same across all infrastructure nodes to ensure consistency of LCM operations. Mirantis recommends following the vg01 naming convention for the Volume Group.

LVM Logical Volumes
The following table summarizes the recommended Logical Volume schema for infrastructure nodes. Follow the instructions in the MCP Deployment Guide to configure this in your cluster model.

Logical Volume schema for infrastructure nodes

Logical Volume path    Mount point                 Size
/dev/vg01/root         /                           70 GB
/dev/vg01/gluster      /srv/glusterfs              200 GB
/dev/vg01/images       /var/lib/libvirt/images/    700 GB
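For illustration only, the same schema can be laid down with standard LVM tooling. The following sketch assumes the two SSDs appear as /dev/sda and /dev/sdb (hypothetical device names); in practice, MAAS applies this layout from the cluster model rather than by hand:

    # Both SSDs become LVM Physical Volumes and join a single Volume Group.
    pvcreate /dev/sda /dev/sdb
    vgcreate vg01 /dev/sda /dev/sdb          # same VG name on every infrastructure node
    # Logical Volumes matching the recommended schema.
    lvcreate -L 70G  -n root    vg01         # mounted at /
    lvcreate -L 200G -n gluster vg01         # mounted at /srv/glusterfs
    lvcreate -L 700G -n images  vg01         # mounted at /var/lib/libvirt/images/

Leaving unallocated space in vg01 preserves the room for the scale-out resizing mentioned above.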

Networking

This section describes the key hardware recommendations on server and data networking, as well as switching fabric capabilities and NFV considerations.

Server networking

The minimal network configuration for hardware servers includes two dual-port 1/10 Gbit Ethernet Network Interface Cards (NICs).

The first pair of 1/10 Gbit Ethernet interfaces is used for the PXE, management, and control plane traffic. These interfaces should be connected to an access switch in 1 or 10 GbE mode. The following options are recommended for configuring this NIC in terms of bonding and distribution of the traffic:

- Use one 1/10 GbE interface for the PXE or management network, and the other 1/10 GbE interface for control plane traffic. This is the simplest and most generic option, recommended for most use cases.
- Create a bond interface with this pair of 1/10 GbE physical interfaces and configure it in the balance-alb mode, following the Linux Ethernet Bonding driver documentation. This configuration allows using the interfaces separately from the bond for PXE purposes and does not require support from the switch fabric.
- Create a bond interface with this pair of 1/10 GbE interfaces in 802.3ad mode and set the load balancing policy to the transmission hash policy based on TCP/UDP port numbers (xmit_hash_policy encap3+4). This configuration provides better availability of the connection. It requires that the switching fabric support the LACP Fallback capability. See Switching fabric capabilities for details.

The second NIC with two interfaces is used for the data plane traffic and storage traffic. On the operating system level, ports on this 1/10 GbE card are joined into an LACP bond (Linux bond mode 802.3ad). The recommended LACP load balancing method for this bond interface is the transmission hash policy based on TCP/UDP port numbers (xmit_hash_policy encap3+4). This NIC must be connected to an access switch in 10 GbE mode.

Note

The LACP configuration in 802.3ad mode on the server side must be supported by the corresponding configuration of the switching fabric. See Switching fabric capabilities for details.

See also

Linux Ethernet Bonding Driver Documentation
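As an illustration, the data plane bond described above could be expressed as follows in ifupdown terms on Ubuntu 16.04 (requires the ifenslave package; the interface names are hypothetical, and MCP normally renders this configuration from the cluster model):

    # /etc/network/interfaces fragment -- illustrative sketch only.
    auto bond1
    iface bond1 inet manual
        bond-slaves enp5s0f0 enp5s0f1       # ports of the second dual-port 10 GbE NIC
        bond-mode 802.3ad                   # LACP
        bond-xmit-hash-policy encap3+4      # balance on TCP/UDP port numbers
        bond-lacp-rate fast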

Access networking

The top of the rack (ToR) switches provide connectivity to servers on the physical and data-link levels. They must provide support for LACP and other technologies used on the server side, for example, 802.1q VLAN segmentation. Access layer switches are used in stacked pairs.

Mirantis recommends installing the following 10 GbE switches as the top of the rack (ToR) for the Public, Storage, and Tenant networks in MCP:

- Juniper QFX5100-48S: 48x 1/10 GbE ports, 6x 40 GbE ports
- Quanta T3048-LY9: 48x 1/10 GbE ports, 4x 40 GbE ports
- Arista 7050T-64: 48x 1/10 GbE ports, 4x 40 GbE QSFP+ ports

The recommended IPMI switch is Dell PowerConnect 6248 (no stacking or uplink cards required) or a Juniper EX4300 Series switch.

The following 1/10 GbE switches are recommended to install as ToR for the PXE and Management networks in MCP:

- Juniper EX3300-48T-BF
- Arista 7010T-48
- QuantaMesh T1048-LB9

The following diagram shows how a server is connected to the switching fabric, and how the fabric itself is configured.

For external networking with OpenContrail, the following hardware is recommended:

- Juniper MX104
- Virtual routers vMX or vSRX for lower bandwidth requirements (up to 80 Gbps)

Switching fabric capabilities

The following table summarizes the requirements for the switching fabric capabilities:

Switch fabric capabilities summary

LACP TCP/UDP hash balance mode
Level 4 LACP hash balance mode is recommended to support services that employ TCP sessions. This helps to avoid fragmentation and asymmetry in traffic flows.

Multihome server connection support
There are two major options to support multihomed server connections:
- Switch stacking. Stacked switches work as a single logical unit from the configuration and data path standpoint. This allows you to configure an IP default gateway on the logical stacked switch to support the multi-rack use case.
- Multipath Link Aggregation Groups (MLAG). MLAG support is recommended to allow cross-switch bonding across stacked ToR switches. Using MLAG allows you to maintain and upgrade the switches separately without network interruption for servers. On the other hand, the Layer-3 network configuration is more complicated when using MLAG. Therefore, Mirantis recommends using MLAG with a plain access network topology.

LAG/port-channel links
The number of supported LAGs/port-channel links per switch must be twice the number of ports. Take this parameter into account so that you can create the required number of LAGs to accommodate all servers connected to the switch.

Note

LACP configurations on the access and server levels must be compatible with each other. In general, this might require additional design and testing effort in every particular case, depending on the models of switching hardware and the requirements for networking performance.

The following capability is required from the switching fabric to support the PXE and BOOTP protocols over an LACP bond interface. See Server networking for details.
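Once the links are up, LACP compatibility is easy to verify from the server side. A small sketch, assuming the data plane bond is named bond1 as in the example under Server networking:

    # Inspect the negotiated LACP state of the bond and its member ports.
    cat /proc/net/bonding/bond1
    # Look for "Bonding Mode: IEEE 802.3ad Dynamic link aggregation" and a
    # non-zero partner system MAC address on each slave: both indicate that
    # the switch-side LAG negotiated correctly.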

Switch fabric capabilities summary

LACP fallback mode
Servers must be able to boot with the bootp/pxe protocol over a network connected through an LACP bond interface in the 802.3ad mode. The LACP fallback capability allows you to dynamically assemble a LAG based on the status of member NICs on the server side. Upon the initial boot, the interfaces are not bonded, the LAG is disassembled, and the server boots normally over a single PXE NIC. This requirement applies to the multihomed Management network option only. Note that different switch vendors have different notations for this functionality. See the links to the examples below.

See also

- Configuring LACP Fallback on Arista switches
- Forcing MC-LAG Links or Interfaces With Limited LACP Capability to Be Up
- Configure LACP auto ungroup on Dell switches

NFV considerations

Network function virtualization (NFV) is an important factor to consider while planning hardware capacity for the Mirantis Cloud Platform. Mirantis recommends separating nodes that support NFV from other nodes and reserving them as Data Plane nodes that use network virtualization functions.

The following types of Data Plane nodes use NFV:

1. Compute nodes that can run virtual machines with hardware-assisted Virtualized Networking Functions (VNF) in terms of the OPNFV architecture.
2. Networking nodes that provide gateways and routers to the OVS-based tenant networks using network virtualization functions.

The following table describes the compatibility of NFV features for different MCP deployments (a host configuration sketch follows the table).

NFV for MCP compatibility matrix

Type            Host OS   Kernel   Huge pages   DPDK   SR-IOV   NUMA   CPU pinning   Multiqueue
OVS             Xenial    4.8      Yes          No     Yes      Yes    Yes           Yes
Kernel vrouter  Xenial    4.8      Yes          No     Yes      Yes    Yes           Yes
DPDK vrouter    Xenial    4.8      Yes          Yes    No       Yes    Yes           No (version 3.2)
DPDK OVS        Xenial    4.8      Yes          Yes    No       Yes    Yes           Yes
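Several features in the matrix translate into host-level boot parameters on the Data Plane nodes. A hedged sketch for a DPDK or SR-IOV compute node follows; the CPU ranges and page counts are placeholders and must be derived from the actual NUMA topology of your servers:

    # /etc/default/grub fragment -- placeholder values, adjust per host.
    GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=128 isolcpus=2-11,14-23 intel_iommu=on iommu=pt"
    # hugepages back the memory of DPDK-enabled VNF guests, isolcpus keeps the
    # pinned vCPUs away from the host scheduler, and intel_iommu/iommu enable
    # PCI passthrough for SR-IOV virtual functions.
    # Apply with update-grub and a reboot.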

StackLight LMA scaling

The MCP StackLight LMA enables the user to monitor all kinds of MCP clusters, including OpenStack, Kubernetes, and Ceph environments. This section describes the distribution of StackLight LMA services and hardware requirements for different sizes of clouds.

StackLight LMA server roles and services distribution

The following list describes the roles of StackLight LMA nodes and their group names in the Salt Reclass metadata model:

StackLight LMA nodes

- StackLight LMA metering node (mtr): servers that run Prometheus long-term storage and/or InfluxDB.
- StackLight LMA log storage and visualization node (log): servers that run Elasticsearch and Kibana.
- StackLight LMA monitoring node (mon): servers that run the Prometheus, Grafana, Pushgateway, Alertmanager, and Alerta services in containers in Docker Swarm mode.

StackLight LMA resource requirements per cloud size

Compact cloud

The following table summarizes the resource requirements of all StackLight LMA node roles for compact clouds (up to 50 compute nodes).

Resource requirements per StackLight LMA role for compact cloud

Virtual server role   # of VMs   CPU vcores per VM   Memory (GB) per VM   Disk space (GB) per VM
mon
mtr
log

Small cloud

The following table summarizes the resource requirements of all StackLight LMA node roles for small clouds (50-100 compute nodes).

Resource requirements per StackLight LMA role for small cloud

Virtual server role   # of VMs   CPU vcores per VM   Memory (GB) per VM   Disk space (GB) per VM
mon
mtr
log

Medium cloud

The following table summarizes the resource requirements of all StackLight LMA node roles for medium clouds (100-200 compute nodes).

Resource requirements per StackLight LMA role for medium cloud

Virtual server role   # of VMs   CPU vcores per VM   Memory (GB) per VM   Disk space (GB) per VM   Disk type
mon                                                                                                SSD
mtr                                                                                                SSD
log                                                                                                SSD

Large cloud

The following table summarizes the resource requirements of all StackLight LMA node roles for large clouds (200-500 compute nodes).

Resource requirements per StackLight LMA role for large cloud

Virtual server role   # of VMs   CPU vcores per VM   Memory (GB) per VM   Disk space (GB) per VM   Disk type
mon                                                                                                SSD
mtr                                                                                                SSD
log                                                                                                SSD

OpenStack environment scaling

The Mirantis Cloud Platform (MCP) enables you to deploy OpenStack environments at different scales. This document defines the following sizes of environments: small, medium, and large. Each environment size requires a different number of infrastructure nodes. The Virtualized Control Plane (VCP) services are distributed among the physical infrastructure nodes for optimal performance. This section describes the VCP services distribution, as well as hardware requirements for different sizes of clouds.

OpenStack server roles

Components of the Virtualized Control Plane (VCP) have roles that define their functions. Each role can be assigned to a specific set of virtual servers, which allows you to adjust the number of VMs with a particular role independently of other roles, providing greater flexibility to the environment architecture. The following list describes the OpenStack roles and their group names in the SaltStack formulas:

OpenStack infrastructure nodes

- Infrastructure node (kvm): infrastructure KVM hosts that run MCP component services as virtual machines.
- Network node (gtw): nodes that provide tenant network data plane services.
- DriveTrain Salt Master node (cfg): the Salt Master node that is responsible for sending commands to Salt Minion nodes.
- DriveTrain / StackLight OSS node (cid): nodes that run StackLight OSS and DriveTrain services in containers in Docker Swarm mode.
- RabbitMQ server node (msg): nodes that run the message queue service RabbitMQ.
- Database server node (dbs): nodes that run the database cluster called Galera.
- OpenStack controller nodes (ctl): nodes that run the Virtualized Control Plane services, including the API servers and scheduler components.
- OpenStack compute nodes (cmp): nodes that run the hypervisor service and VM workloads.
- OpenStack monitoring database nodes (mdb): nodes that run the Telemetry monitoring database services.
- Proxy node (prx): nodes that run a reverse proxy that exposes the OpenStack API, dashboards, and other components externally.
- Contrail controller nodes (ntw): nodes that run the OpenContrail controller services.
- Contrail analytics nodes (nal): nodes that run the OpenContrail analytics services.
- StackLight LMA log nodes (log): nodes that run the StackLight LMA logging and visualization services.
- StackLight LMA database nodes (mtr): nodes that run the StackLight database services.
- StackLight LMA and OSS nodes (mon): nodes that run the StackLight LMA monitoring services and StackLight OSS (DevOps Portal) services.

See also

StackLight LMA server roles and services distribution

Services distribution across nodes

The distribution of services across physical nodes depends on their resource consumption profile and the number of compute nodes they administer. Mirantis recommends the following distribution of services across nodes (for day-to-day targeting of these role groups, see the sketch after the table):

Distribution of services across nodes in an OpenStack environment

Service                          Physical server group   Virtual   VM role group
aodh-api                         kvm                     yes       mdb
aodh-evaluator                   kvm                     yes       mdb
aodh-listener                    kvm                     yes       mdb
aodh-notifier                    kvm                     yes       mdb
ceilometer-agent-central         kvm                     yes       mdb or ctl [2]
ceilometer-agent-notification    kvm                     yes       mdb or ctl [2]
ceilometer-api                   kvm                     yes       mdb or ctl [2]
cinder-api                       kvm                     yes       ctl
cinder-scheduler                 kvm                     yes       ctl
cinder-volume                    kvm                     yes       ctl
DriveTrain [3]                   kvm                     yes       cid
glance-api                       kvm                     yes       ctl
glance-registry                  kvm                     yes       ctl
gnocchi-metricd [4]              kvm                     yes       mdb
horizon/apache2                  kvm                     yes       prx
keystone-all                     kvm                     yes       ctl
mysql-server                     kvm                     yes       dbs
neutron-dhcp-agent [5]           gtw                     no        N/A
neutron-l2-agent [5]             cmp, gtw                no        N/A
neutron-l3-agent [5]             gtw                     no        N/A
neutron-metadata-agent [5]       gtw                     no        N/A
neutron-server [5]               kvm                     yes       ctl
nova-api                         kvm                     yes       ctl
nova-compute                     cmp                     no        N/A
nova-conductor                   kvm                     yes       ctl
nova-scheduler                   kvm                     yes       ctl
OSS Tools [3]                    kvm                     yes       cid
panko-api [4]                    kvm                     yes       mdb
rabbitmq-server                  kvm                     yes       msg

[2] The mdb role is for Pike, the ctl role is for Ocata.
[3] DriveTrain and OSS Tools services run in the Docker Swarm Mode cluster.
[4] Gnocchi and Panko services are added starting Pike.
[5] Services are related to Neutron OVS only.
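For day-to-day operations, these role group names double as targeting prefixes on the Salt Master. A minimal sketch, assuming the default MCP hostname convention (ctl01, cmp001, and so on; adjust the globs to your deployment):

    # Run from the Salt Master (the cfg node).
    salt 'ctl*' test.ping                      # reach all OpenStack controller VMs
    salt 'cmp*' grains.item osrelease          # query the OS release on all compute nodes
    salt -C 'mon* or mtr* or log*' test.ping   # compound match for all StackLight nodes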

Operating systems and versions

The following table lists the operating systems used for different roles in the Virtualized Control Plane (VCP):

Operating systems and versions

Server role name                                     Group name   Ubuntu version
Infrastructure node                                  kvm          xenial/16.04
Network node                                         gtw          xenial/16.04
StackLight LMA monitoring node                       mon          xenial/16.04
StackLight LMA metering and tenant telemetry node    mtr          xenial/16.04
StackLight LMA log storage and visualization node    log          xenial/16.04
DriveTrain Salt Master node                          cfg          xenial/16.04
DriveTrain StackLight OSS node                       cid          xenial/16.04
RabbitMQ server node                                 msg          xenial/16.04
Database server node                                 dbs          xenial/16.04
OpenStack controller node                            ctl          xenial/16.04
OpenStack compute node                               cmp          xenial/16.04 (depends on whether NFV features are enabled or not)
Proxy node                                           prx          xenial/16.04
OpenContrail controller node                         ntw          trusty/14.04 for v3.2, xenial/16.04 for v4.x
OpenContrail analytics node                          nal          trusty/14.04 for v3.2, xenial/16.04 for v4.x

OpenStack compact cloud architecture

A compact OpenStack cloud includes up to 50 compute nodes and requires you to have at least three infrastructure nodes. A compact cloud includes all the roles described in OpenStack server roles. The following diagram describes the distribution of VCP and other services throughout the infrastructure nodes.

The following table describes the hardware nodes in an OpenStack environment of an MCP cluster, roles assigned to them, and resources per node:

Physical server roles and hardware requirements

Node type                  Role name   Number of servers
Infrastructure nodes       kvm         3
OpenStack compute nodes    cmp         up to 50

The following table summarizes the VCP virtual machines mapped to physical servers.

Resource requirements per VCP and DriveTrain roles

Virtual server role   Physical servers        # of VMs   CPU vcores per VM   Memory (GB) per VM   Disk space (GB) per VM
ctl                   kvm01, kvm02, kvm03
msg                   kvm01, kvm02, kvm03
dbs                   kvm01, kvm02, kvm03
prx                   kvm02, kvm03
cfg                   kvm
mon                   kvm01, kvm02, kvm03
mtr                   kvm01, kvm02, kvm03
log                   kvm01, kvm02, kvm03
cid                   kvm01, kvm02, kvm03
gtw                   kvm01, kvm02, kvm03

Note

The gtw VM should have four separate NICs for the following interfaces: dhcp, primary, tenant, and external. This simplifies the host networking: you do not need to pass VLANs to VMs. The prx VM should have an additional NIC for the Proxy network. All other nodes should have two NICs for the DHCP and Primary networks.

OpenStack small cloud architecture

A small OpenStack cloud includes up to 100 compute nodes and requires you to have at least 6 infrastructure nodes, including additional servers for KVM infrastructure and RabbitMQ/Galera services. The StackLight LMA services are distributed across 3 additional infrastructure nodes. See detailed resource requirements for StackLight LMA in StackLight LMA resource requirements per cloud size.

A small cloud includes all the roles described in OpenStack server roles. The number of nodes is the same for both OpenContrail and Neutron OVS-based clouds. However, the roles on network nodes are different. The following diagram describes the distribution of VCP and other services throughout the infrastructure nodes for Neutron OVS-based small clouds.

The following table describes the hardware nodes in MCP OpenStack, roles assigned to them, and resources per node:

Physical server roles and quantities

Node type                                  Role name   Number of servers
Infrastructure nodes (VCP, OpenContrail)   kvm         3
Infrastructure nodes (StackLight LMA)      kvm         3
OpenStack compute nodes                    cmp         up to 100
Staging infrastructure nodes               kvm         6
Staging OpenStack compute nodes            cmp         2-5

The following table summarizes the VCP virtual machines mapped to physical servers.

Resource requirements per VCP and DriveTrain roles

Virtual server role   Physical servers        # of VMs   CPU vcores per VM   Memory (GB) per VM   Disk space (GB) per VM
ctl                   kvm01, kvm02, kvm03
msg                   kvm01, kvm02, kvm03
dbs                   kvm01, kvm02, kvm03
prx                   kvm02, kvm03
cfg                   kvm
cid                   kvm01, kvm02, kvm03
gtw                   kvm01, kvm02, kvm03

The following table summarizes the OpenContrail virtual machines mapped to physical servers (optional). This is only needed when OpenContrail is used for OpenStack tenant networking.

Resource requirements per OpenContrail roles (optional)

Virtual server role   Physical servers        # of VMs   CPU vcores per VM   Memory (GB) per VM   Disk space (GB) per VM
ntw                   kvm01, kvm02, kvm03
nal                   kvm01, kvm02, kvm03

Note

A vcore is the number of available virtual cores considering hyper-threading and overcommit ratio. Assuming an overcommit ratio of 1, the number of vcores in a physical CPU is roughly the number of physical cores multiplied by 1.3.

Note

The gtw VM should have four separate NICs for the following interfaces: dhcp, primary, tenant, and external. This simplifies the host networking: you do not need to pass VLANs to VMs. The prx VM should have an additional NIC for the Proxy network. All other nodes should have two NICs for the DHCP and Primary networks.

OpenStack medium cloud architecture with Neutron OVS DVR/non-DVR

A medium OpenStack cloud includes up to 200 compute nodes and requires you to have at least 9 infrastructure nodes, including additional servers for KVM infrastructure and RabbitMQ/Galera services.

For Neutron OVS as a networking solution, 3 bare-metal network nodes are installed to accommodate network traffic, as opposed to virtualized controllers in the case of OpenContrail. The StackLight LMA services are distributed across 3 additional infrastructure nodes. See detailed resource requirements for StackLight LMA in StackLight LMA resource requirements per cloud size.

A medium cloud includes all roles described in OpenStack server roles. The following diagram describes the distribution of VCP and other services throughout the infrastructure nodes.

The following table describes the hardware nodes in MCP OpenStack, roles assigned to them, and resources per node:

Physical server roles and quantities

Node type                               Role name   Number of servers
Infrastructure nodes (VCP)              kvm         6
Infrastructure nodes (StackLight LMA)   kvm         3
Tenant network gateway nodes            gtw         3
OpenStack compute nodes                 cmp         up to 200
Staging infrastructure nodes            kvm         9
Staging tenant network gateway nodes    gtw         3
Staging compute nodes                   cmp         2-5

The following table summarizes the VCP virtual machines mapped to physical servers.

Resource requirements per VCP role

Virtual server role   Physical servers        # of VMs   CPU vcores per VM   Memory (GB) per VM   Disk space (GB) per VM
ctl                   kvm01, kvm02, kvm03
dbs                   kvm01, kvm02, kvm03
msg                   kvm04, kvm05, kvm06
prx                   kvm04, kvm05

Resource requirements per DriveTrain role

Virtual server role   Physical servers        # of VMs   CPU vcores per VM   Memory (GB) per VM   Disk space (GB) per VM
cfg                   kvm
cid                   kvm01, kvm02, kvm03

Note

A vcore is the number of available virtual cores considering hyper-threading and overcommit ratio. Assuming an overcommit ratio of 1, the number of vcores in a physical CPU is roughly the number of physical cores multiplied by 1.3.

OpenStack medium cloud architecture with OpenContrail

A medium OpenStack cloud includes up to 200 compute nodes and requires you to have at least 9 infrastructure nodes, including servers for KVM infrastructure and RabbitMQ/Galera services. For OpenContrail as a networking solution, a virtualized OpenContrail controller is installed onto the infrastructure nodes instead of the bare metal network nodes used in the case of Neutron OVS. The StackLight LMA services are distributed across 3 additional infrastructure nodes. See detailed resource requirements for StackLight LMA in StackLight LMA resource requirements per cloud size.

The following diagram describes the distribution of VCP and other services throughout the infrastructure nodes.

Note

The diagram displays the OpenContrail 4.0 nodes that have the control, config, database, and analytics services running as plain Docker containers managed by docker-compose. In OpenContrail 3.2, these services are not Docker-based.

The following table describes the hardware nodes in MCP OpenStack, roles assigned to them, and resources per node:

Physical server roles and quantities

Node type                               Role name   Number of servers
Infrastructure nodes (VCP)              kvm         6
Infrastructure nodes (StackLight LMA)   kvm         3
Infrastructure nodes (OpenContrail)     kvm         3
OpenStack compute nodes                 cmp         up to 200
Staging infrastructure nodes            kvm         12
Staging OpenStack compute nodes         cmp         2-5

The following table summarizes the VCP virtual machines mapped to physical servers.

Resource requirements per VCP role

Virtual server role   Physical servers        # of VMs   CPU vcores per VM   Memory (GB) per VM   Disk space (GB) per VM
ctl                   kvm01, kvm02, kvm03
dbs                   kvm01, kvm02, kvm03
msg                   kvm04, kvm05, kvm06
prx                   kvm04, kvm05

Resource requirements per DriveTrain role

Virtual server role   Physical servers        # of VMs   CPU vcores per VM   Memory (GB) per VM   Disk space (GB) per VM
cfg                   kvm
cid                   kvm01, kvm02, kvm03

Resource requirements per OpenContrail role

Virtual server role   Physical servers        # of VMs   CPU vcores per VM   Memory (GB) per VM   Disk space (GB) per VM
ntw                   kvm07, kvm08, kvm09
nal                   kvm07, kvm08, kvm09

Note

A vcore is the number of available virtual cores considering hyper-threading and overcommit ratio. Assuming an overcommit ratio of 1, the number of vcores in a physical CPU is roughly the number of physical cores multiplied by 1.3.

OpenStack large cloud architecture

A large OpenStack cloud includes up to 500 compute nodes and requires you to have at least 12 infrastructure nodes, including dedicated bare-metal servers for RabbitMQ, OpenStack API services, and database servers. Use OpenContrail as the networking solution for large OpenStack clouds; Neutron OVS is not recommended. The StackLight LMA services are distributed across 6 additional infrastructure nodes, including 3 bare metal nodes for the Docker Swarm Mode cluster. See detailed resource requirements for StackLight LMA in StackLight LMA resource requirements per cloud size.

A large cloud includes all roles described in OpenStack server roles. The following diagram describes the distribution of VCP and other services throughout the infrastructure nodes.

Note

The diagram displays the OpenContrail 4.0 nodes that have the control, config, database, and analytics services running as plain Docker containers managed by docker-compose. In OpenContrail 3.2, these services are not Docker-based.

The following table describes the hardware nodes in MCP OpenStack, roles assigned to them, and resources per node:

Physical server roles and quantities

Node type                               Role name   Number of servers
Infrastructure nodes (VCP)              kvm         9
Infrastructure nodes (OpenContrail)     kvm         3
Monitoring nodes (StackLight LMA)       mon         3
Infrastructure nodes (StackLight LMA)   kvm         3
OpenStack compute nodes                 cmp         up to 500
Staging infrastructure nodes            kvm         18
Staging OpenStack compute nodes         cmp         2-5

The following table summarizes the VCP virtual machines mapped to physical servers.

Resource requirements per VCP role

Virtual server role   Physical servers                     # of VMs   CPU vcores per VM   Memory (GB) per VM   Disk space (GB) per VM
ctl                   kvm02, kvm03, kvm04, kvm05, kvm06
dbs                   kvm04, kvm05, kvm06
msg                   kvm07, kvm08, kvm09
prx                   kvm07, kvm08

Resource requirements per DriveTrain role

Virtual server role   Physical servers        # of VMs   CPU vcores per VM   Memory (GB) per VM   Disk space (GB) per VM
cfg                   kvm
cid                   kvm01, kvm02, kvm03

Resource requirements per OpenContrail role

Virtual server role   Physical servers        # of VMs   CPU vcores per VM   Memory (GB) per VM   Disk space (GB) per VM
ntw                   kvm10, kvm11, kvm12
nal                   kvm10, kvm11, kvm12

Note

A vcore is the number of available virtual cores considering hyper-threading and overcommit ratio. Assuming an overcommit ratio of 1, the number of vcores in a physical CPU is roughly the number of physical cores multiplied by 1.3.

Kubernetes cluster scaling

As described in Kubernetes cluster sizes, depending on the anticipated number of pods, your Kubernetes cluster may be a small, medium, or large scale deployment. A different number of Kubernetes Nodes is required for each size of cluster. This section describes the services distribution across hardware nodes for different sizes of Kubernetes clusters.

Small Kubernetes cluster architecture

A small Kubernetes cluster includes 2,000-5,000 pods spread across roughly 20-50 physical Kubernetes Nodes (at the optimal density of 100 pods per Node) and 3 Kubernetes Master nodes. Mirantis recommends separating the Kubernetes control plane, which includes etcd and the Kubernetes Master node components, from Kubernetes workloads. In addition, dedicate separate physical servers for the shared storage services GlusterFS and Ceph. Running these components on the same physical servers as the control plane components may result in insufficient cycles for the etcd cluster to stay synced and for the kubelet agent to report statuses reliably.
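The 100-pods-per-Node density is straightforward to keep an eye on in a running cluster. A sketch using standard kubectl (the NODE column position in the wide output may vary between kubectl versions):

    # Count scheduled pods per Node and compare against the ~100 pods/Node target.
    kubectl get pods --all-namespaces -o wide --no-headers \
      | awk '{count[$8]++} END {for (n in count) print count[n], n}' \
      | sort -rn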

The following diagram displays the layout of services per physical node for a small Kubernetes cluster.

Medium Kubernetes cluster architecture

A medium Kubernetes cluster includes 5,000-20,000 pods spread across roughly 50-200 physical Kubernetes Nodes (at the optimal density of 100 pods per Node) and 6 Kubernetes Master nodes. Mirantis recommends separating the Kubernetes control plane, which includes etcd and the Kubernetes Master node components, from Kubernetes workloads. While Kubernetes components can run on the same host, run etcd on dedicated servers, as the etcd workload increases due to constant recording and checking of pod and kubelet statuses. You can place the shared storage services GlusterFS and Ceph on the same physical nodes as the Kubernetes control plane components.

The following diagram displays the layout of services per physical node for a medium Kubernetes cluster.

Large Kubernetes cluster architecture

A large Kubernetes cluster includes 20,000-50,000 pods spread across roughly 200-500 physical Kubernetes Nodes (at the optimal density of 100 pods per Node) and 9 Kubernetes Master nodes. Mirantis recommends separating the Kubernetes control plane, which includes etcd and the Kubernetes Master node components, from the Kubernetes workloads.

Note

While Kubernetes components can run on the same host, run etcd on dedicated servers, as the etcd workload increases due to constant recording and checking of pod and kubelet statuses.

In addition, Mirantis recommends placing kube-scheduler separately from the rest of the Kubernetes control plane components: due to the high rate of pod turnover and rescheduling, kube-scheduler requires more resources than other Kubernetes components. You can place the shared storage services GlusterFS and Ceph on the same physical nodes as the Kubernetes control plane components.

The following diagram displays the layout of services per physical node for a large Kubernetes cluster.

Ceph hardware requirements

Mirantis recommends using a Ceph cluster as the primary storage solution for all types of ephemeral and persistent storage. A Ceph cluster that is built in conjunction with MCP must be designed to accommodate:

- Capacity requirements
- Performance requirements
- Operational requirements

This section describes differently sized clouds that use the same Ceph building blocks.

Ceph cluster considerations

When planning storage for your cloud, you must consider performance, capacity, and operational requirements that affect the efficiency of your MCP environment. Based on those considerations and operational experience, Mirantis recommends no less than nine-node Ceph clusters for OpenStack production environments. The recommendation for test, development, or PoC environments is a minimum of five nodes. See details in Ceph cluster sizes.

Note

This section provides simplified calculations for your reference. Each Ceph cluster must be evaluated by a Mirantis Solution Architect.

Capacity

When planning capacity for your Ceph cluster, consider the following:

Total usable capacity
The existing amount of data plus the expected increase of data volume over the projected life of the cluster.

Data protection (replication)
Typically, for persistent storage a factor of 3 is recommended, while for ephemeral storage a factor of 2 is sufficient. However, with a replication factor of 2, an object cannot be recovered if one of the replicas is damaged.

Cluster overhead
To ensure cluster integrity, Ceph stops writing if the cluster is 90% full. Therefore, you need to plan accordingly.

Administrative overhead
To catch spikes in cluster usage or unexpected increases in data volume, an additional 10-15% of the raw capacity should be set aside.

The following table describes an example of capacity calculation:

Example calculation

Parameter                            Value
Current capacity, persistent         500 TB
Expected growth over 3 years         300 TB
Required usable capacity             800 TB
Replication factor for all pools     3
Raw capacity                         2.4 PB
With 10% cluster internal reserve    2.64 PB
With operational reserve of 15%      3.03 PB
Total cluster capacity               3 PB
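The chain of factors in the table can be reproduced with simple arithmetic; a quick check with bc(1), using the figures above:

    echo "800 * 3" | bc                  # raw capacity with replication 3: 2400 TB = 2.4 PB
    echo "800 * 3 * 1.1" | bc            # plus 10% cluster internal reserve: 2640 TB
    echo "800 * 3 * 1.1 * 1.15" | bc     # plus 15% operational reserve: 3036 TB, rounded to 3 PB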

Overall sizing

When you have both performance and capacity requirements, scale the cluster size to the higher requirement. For example, if a Ceph cluster requires 10 nodes for capacity and 20 nodes for performance to meet requirements, size the cluster to 20 nodes.

Operational recommendations

- A minimum of 9 Ceph OSD nodes is recommended to ensure that a node failure does not impact cluster performance.
- Mirantis does not recommend using servers with an excessive number of disks, such as more than 24 disks.
- All Ceph OSD nodes must have identical CPU, memory, disk, and network hardware configurations.
- If you use multiple availability zones (AZ), the number of nodes must be evenly divisible by the number of AZs.

Performance considerations

When planning performance for your Ceph cluster, consider the following:

- Raw performance capability of the storage devices. For example, a SATA hard drive provides 150 IOPS for 4k blocks.
- Ceph read IOPS performance. Calculate it using the following formula:

  number of raw read IOPS per device X number of storage devices X 80%

- Ceph write IOPS performance. Calculate it using the following formula:

  number of raw write IOPS per device X number of storage devices / replication factor X 65%

- Ratio between reads and writes. Perform a rough calculation using the following formula:

  read IOPS X % reads + write IOPS X % writes

Note

Do not use this formula for a Ceph cluster that is based on SSDs only. Contact Mirantis for evaluation.

Storage device considerations

The expected number of IOPS that a storage device can carry out, as well as its throughput, depends on the type of device. For example, a hard disk may be rated for 150 IOPS and 75 MB/s. These numbers are complementary because IOPS are measured with very small files while the throughput is typically measured with big files.

Read IOPS and write IOPS differ depending on the device. Considering typical usage patterns helps determine how many read and write IOPS the cluster must provide. A ratio of 70/30 is fairly common for many types of clusters. The cluster size must also be considered, since the maximum number of write IOPS that a cluster can push is divided by the cluster size. Furthermore, Ceph cannot guarantee the full IOPS numbers that a device could theoretically provide, because the numbers are typically measured in testing environments, which the Ceph cluster cannot offer, and also because of the OSD and network overhead.

You can calculate estimated read IOPS by multiplying the read IOPS number for the device type by the number of devices, and then multiplying by ~0.8. Write IOPS are calculated as follows:

  (the device IOPS * number of devices * 0.65) / cluster size

If the cluster size for the pools is different, an average can be used. If the number of devices is required, the respective formulas can be solved for the device number instead.

Ceph hardware considerations

When sizing a Ceph cluster, you must consider the number of drives needed for capacity and the number of drives required to accommodate performance requirements. You must also consider the largest number of drives that ensures all requirements are met. The following list describes generic hardware considerations for a Ceph cluster:

- Create one Ceph Object Storage Device (OSD) per HDD in Ceph OSD nodes.
- Allocate 1 CPU thread per OSD.
- Allocate 1 GB of RAM per 1 TB of disk storage on the OSD node.
- Do not use RAID arrays for Ceph disks. Instead, all drives must be available to Ceph individually.
- Select storage controllers with support for host bus adapter (HBA) or just-a-bunch-of-disks (JBOD) mode for Ceph OSD nodes. Preferably, configure storage controllers on OSD nodes to work in the HBA or initiator target (IT) mode.

- If the RAID controller does not support HBA mode, you can configure the JBOD mode with disks exposed individually. Confirm that disks are exposed directly to the operating system by checking the device parameters.
- The Ceph monitors can run in virtual machines on the infrastructure nodes, as they use relatively few resources.
- Place Ceph write journals on write-optimized SSDs instead of OSD HDD disks. Use at least 1 journal device per 5 OSD devices.

See the detailed hardware requirements for Ceph OSD storage nodes in Minimum hardware requirements.

The following table provides an example of input parameters for a Ceph cluster calculation:

Example of input parameters

Parameter                       Value
Virtual size                    40 GB
Read IOPS                       14
Read to write IOPS ratio        70/30
Number of availability zones    3

For 50 compute nodes, 1,000 VMs

Number of OSD nodes: 9, 20-disk 2U chassis. This configuration provides 360 TB of raw storage and, with a cluster size of 3 and 60% used initially, the initial amount of data should not exceed 72 TB (out of 120 TB of replicated storage). The expected read IOPS for this cluster is approximately 20,000 and write IOPS 5,000, or 15,000 IOPS in a 70/30 pattern.

Note

In this case performance is the driving factor, and so the capacity is greater than required.

For 300 compute nodes, 6,000 VMs

Number of OSD nodes: 54, 36-disk chassis. The cost per node is low compared to the cost of the storage devices, and with a larger number of nodes the failure of one node is proportionally less critical. A separate replication network is recommended.

For 500 compute nodes, 10,000 VMs

Number of OSD nodes: 60, 36-disk chassis. You may consider using a larger chassis. A separate replication network is required.
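The first example above can be sanity-checked against the IOPS formulas from Ceph cluster considerations. Assuming 9 nodes x 20 data disks = 180 devices, SATA drives rated at roughly 150 IOPS, and a replication factor of 3:

    echo "180 * 150 * 0.8" | bc           # estimated read IOPS:  21600 (quoted as ~20,000)
    echo "180 * 150 * 0.65 / 3" | bc      # estimated write IOPS:  5850 (quoted as ~5,000)
    echo "20000 * 0.7 + 5000 * 0.3" | bc  # 70/30 mix on the quoted figures: 15500, ~15,000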

Note

For a large cluster, three monitors is the upper recommended limit. If scale-out is planned, deploy five monitors initially.


More information

Huawei FusionSphere 6.0 Technical White Paper on OpenStack Integrating FusionCompute HUAWEI TECHNOLOGIES CO., LTD. Issue 01.

Huawei FusionSphere 6.0 Technical White Paper on OpenStack Integrating FusionCompute HUAWEI TECHNOLOGIES CO., LTD. Issue 01. Technical White Paper on OpenStack Integrating Issue 01 Date 2016-04-30 HUAWEI TECHNOLOGIES CO., LTD. 2016. All rights reserved. No part of this document may be reproduced or transmitted in any form or

More information

Helion OpenStack Carrier Grade 4.0 RELEASE NOTES

Helion OpenStack Carrier Grade 4.0 RELEASE NOTES Helion OpenStack Carrier Grade 4.0 RELEASE NOTES 4.0 Copyright Notice Copyright 2016 Hewlett Packard Enterprise Development LP The information contained herein is subject to change without notice. The

More information

Introduction to Neutron. Network as a Service

Introduction to Neutron. Network as a Service Introduction to Neutron Network as a Service Assaf Muller, Associate Software Engineer, Cloud Networking, Red Hat assafmuller.wordpress.com, amuller@redhat.com, amuller on Freenode (#openstack) The Why

More information

Project Calico v3.1. Overview. Architecture and Key Components

Project Calico v3.1. Overview. Architecture and Key Components Project Calico v3.1 Overview Benefits Simplicity. Traditional Software Defined Networks (SDNs) are complex, making them hard to deploy and troubleshoot. Calico removes that complexity, with a simplified

More information

Cloud CPE Solution Release Notes

Cloud CPE Solution Release Notes Cloud CPE Solution Release Notes Release 3.1 20 December 2017 Revision 10 These Release Notes accompany Release 3.1 of the Juniper Networks Cloud CPE Solution. They contain installation information, and

More information

T-CAP (Converged Appliance Platform)

T-CAP (Converged Appliance Platform) T-CAP (Converged Appliance Platform) 2016. 6 Sohn, Minho / SDI Tech. Lab 0 Trends Data Center Networking is changing. New Architecture for Virtualization, Big Storage, Overlay N/W, Computing & Storage

More information

DEEP DIVE: OPENSTACK COMPUTE

DEEP DIVE: OPENSTACK COMPUTE DEEP DIVE: OPENSTACK COMPUTE Stephen Gordon Technical Product Manager, Red Hat @xsgordon AGENDA OpenStack architecture refresher Compute architecture Instance life cycle Scaling compute

More information

Product Description. Architecture and Key Components OSS/BSS. VNF Element Management Systems VNF1 NFVI. Virtual Computing Red Hat.

Product Description. Architecture and Key Components OSS/BSS. VNF Element Management Systems VNF1 NFVI. Virtual Computing Red Hat. Cloud Product Overview Since the inception of NFV, service providers have been trying to reap its business and operational benefits. However, the road to telco cloud is fraught with challenges, including

More information

Overview. SUSE OpenStack Cloud Monitoring

Overview. SUSE OpenStack Cloud Monitoring Overview SUSE OpenStack Cloud Monitoring Overview SUSE OpenStack Cloud Monitoring Publication Date: 08/04/2017 SUSE LLC 10 Canal Park Drive Suite 200 Cambridge MA 02141 USA https://www.suse.com/documentation

More information

StarlingX. StarlingX is aligned with the OpenStack Foundation Edge Working Group and the Linux Foundation Akraino Edge Stack.

StarlingX. StarlingX is aligned with the OpenStack Foundation Edge Working Group and the Linux Foundation Akraino Edge Stack. StarlingX 1 StarlingX The StarlingX Project provides a fully integrated openstack platform with focus on High Availability, Quality of Service, Performance and Low Latency needed for industrial and telco

More information

Building a Video Optimized Private Cloud Platform on Cisco Infrastructure Rohit Agarwalla, Technical

Building a Video Optimized Private Cloud Platform on Cisco Infrastructure Rohit Agarwalla, Technical Building a Video Optimized Private Cloud Platform on Cisco Infrastructure Rohit Agarwalla, Technical Leader roagarwa@cisco.com, @rohitagarwalla DEVNET-1106 Agenda Cisco Media Blueprint Media Workflows

More information

Extremely Fast Distributed Storage for Cloud Service Providers

Extremely Fast Distributed Storage for Cloud Service Providers Solution brief Intel Storage Builders StorPool Storage Intel SSD DC S3510 Series Intel Xeon Processor E3 and E5 Families Intel Ethernet Converged Network Adapter X710 Family Extremely Fast Distributed

More information

Evaluation and Integration of OCP Servers from Software Perspective. Internet Initiative Japan Inc. Takashi Sogabe

Evaluation and Integration of OCP Servers from Software Perspective. Internet Initiative Japan Inc. Takashi Sogabe Evaluation and Integration of OCP Servers from Software Perspective Internet Initiative Japan Inc. Takashi Sogabe Who am I? Takashi Sogabe @rev4t Software Engineer, Internet Initiative Japan Inc. Focusing

More information

Xen and CloudStack. Ewan Mellor. Director, Engineering, Open-source Cloud Platforms Citrix Systems

Xen and CloudStack. Ewan Mellor. Director, Engineering, Open-source Cloud Platforms Citrix Systems Xen and CloudStack Ewan Mellor Director, Engineering, Open-source Cloud Platforms Citrix Systems Agenda What is CloudStack? Move to the Apache Foundation CloudStack architecture on Xen The future for CloudStack

More information

Red Hat OpenStack Platform 12

Red Hat OpenStack Platform 12 Red Hat OpenStack Platform 12 Director Installation and Usage An end-to-end scenario on using Red Hat OpenStack Platform director to create an OpenStack cloud Last Updated: 2018-07-02 Red Hat OpenStack

More information

Red Hat OpenStack Platform 11

Red Hat OpenStack Platform 11 Red Hat OpenStack Platform 11 Director Installation and Usage An end-to-end scenario on using Red Hat OpenStack Platform director to create an OpenStack cloud Last Updated: 2018-10-11 Red Hat OpenStack

More information

Building NFV Solutions with OpenStack and Cisco ACI

Building NFV Solutions with OpenStack and Cisco ACI Building NFV Solutions with OpenStack and Cisco ACI Domenico Dastoli @domdastoli INSBU Technical Marketing Engineer Iftikhar Rathore - INSBU Technical Marketing Engineer Agenda Brief Introduction to Cisco

More information

Windows Server 2012: Server Virtualization

Windows Server 2012: Server Virtualization Windows Server 2012: Server Virtualization Module Manual Author: David Coombes, Content Master Published: 4 th September, 2012 Information in this document, including URLs and other Internet Web site references,

More information

TALK THUNDER SOFTWARE FOR BARE METAL HIGH-PERFORMANCE SOFTWARE FOR THE MODERN DATA CENTER WITH A10 DATASHEET YOUR CHOICE OF HARDWARE

TALK THUNDER SOFTWARE FOR BARE METAL HIGH-PERFORMANCE SOFTWARE FOR THE MODERN DATA CENTER WITH A10 DATASHEET YOUR CHOICE OF HARDWARE DATASHEET THUNDER SOFTWARE FOR BARE METAL YOUR CHOICE OF HARDWARE A10 Networks application networking and security solutions for bare metal raise the bar on performance with an industryleading software

More information

vstart 50 VMware vsphere Solution Specification

vstart 50 VMware vsphere Solution Specification vstart 50 VMware vsphere Solution Specification Release 1.3 for 12 th Generation Servers Dell Virtualization Solutions Engineering Revision: A00 March 2012 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES

More information

Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage

Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage A Dell Technical White Paper Dell Database Engineering Solutions Anthony Fernandez April 2010 THIS

More information

Weiterentwicklung von OpenStack Netzen 25G/50G/100G, FW-Integration, umfassende Einbindung. Alexei Agueev, Systems Engineer

Weiterentwicklung von OpenStack Netzen 25G/50G/100G, FW-Integration, umfassende Einbindung. Alexei Agueev, Systems Engineer Weiterentwicklung von OpenStack Netzen 25G/50G/100G, FW-Integration, umfassende Einbindung Alexei Agueev, Systems Engineer ETHERNET MIGRATION 10G/40G à 25G/50G/100G Interface Parallelism Parallelism increases

More information

Red Hat OpenStack Platform 13

Red Hat OpenStack Platform 13 Red Hat OpenStack Platform 13 Director Installation and Usage An end-to-end scenario on using Red Hat OpenStack Platform director to create an OpenStack cloud Last Updated: 2018-10-11 Red Hat OpenStack

More information

Tiered IOPS Storage for Service Providers Dell Platform and Fibre Channel protocol. CloudByte Reference Architecture

Tiered IOPS Storage for Service Providers Dell Platform and Fibre Channel protocol. CloudByte Reference Architecture Tiered IOPS Storage for Service Providers Dell Platform and Fibre Channel protocol CloudByte Reference Architecture Table of Contents 1 Executive Summary... 3 2 Performance Specifications by Tier... 4

More information

Installation and Cluster Deployment Guide for KVM

Installation and Cluster Deployment Guide for KVM ONTAP Select 9 Installation and Cluster Deployment Guide for KVM Using ONTAP Select Deploy 2.9 August 2018 215-13526_A0 doccomments@netapp.com Updated for ONTAP Select 9.4 Table of Contents 3 Contents

More information

Project Calico v3.2. Overview. Architecture and Key Components. Project Calico provides network security for containers and virtual machine workloads.

Project Calico v3.2. Overview. Architecture and Key Components. Project Calico provides network security for containers and virtual machine workloads. Project Calico v3.2 Overview Benefits Simplicity. Traditional Software Defined Networks (SDNs) are complex, making them hard to deploy and troubleshoot. Calico removes that complexity, with a simplified

More information

Best Practices for Mixed Speed Devices within a 10 Gb EqualLogic Storage Area Network Using PowerConnect 8100 Series Switches

Best Practices for Mixed Speed Devices within a 10 Gb EqualLogic Storage Area Network Using PowerConnect 8100 Series Switches Best Practices for Mixed Speed Devices within a 10 Gb EqualLogic Storage Area Network Using PowerConnect 8100 Series Switches A Dell EqualLogic Best Practices Technical White Paper Storage Infrastructure

More information

Lenovo Database Configuration Guide

Lenovo Database Configuration Guide Lenovo Database Configuration Guide for Microsoft SQL Server OLTP on ThinkAgile SXM Reduce time to value with validated hardware configurations up to 2 million TPM in a DS14v2 VM SQL Server in an Infrastructure

More information

CLOUD ARCHITECTURE & PERFORMANCE WORKLOADS. Field Activities

CLOUD ARCHITECTURE & PERFORMANCE WORKLOADS. Field Activities CLOUD ARCHITECTURE & PERFORMANCE WORKLOADS Field Activities Matt Smith Senior Solution Architect Red Hat, Inc @rhmjs Jeremy Eder Principal Performance Engineer Red Hat, Inc @jeremyeder CLOUD ARCHITECTURE

More information

Accelerate OpenStack* Together. * OpenStack is a registered trademark of the OpenStack Foundation

Accelerate OpenStack* Together. * OpenStack is a registered trademark of the OpenStack Foundation Accelerate OpenStack* Together * OpenStack is a registered trademark of the OpenStack Foundation Considerations to Build a Production OpenStack Cloud Ruchi Bhargava, Intel IT Shuquan Huang, Intel IT Kai

More information

Citrix CloudPlatform (powered by Apache CloudStack) Version 4.5 Concepts Guide

Citrix CloudPlatform (powered by Apache CloudStack) Version 4.5 Concepts Guide Citrix CloudPlatform (powered by Apache CloudStack) Version 4.5 Concepts Guide Revised January 30, 2015 06:00 pm IST Citrix CloudPlatform Citrix CloudPlatform (powered by Apache CloudStack) Version 4.5

More information

Fast packet processing in the cloud. Dániel Géhberger Ericsson Research

Fast packet processing in the cloud. Dániel Géhberger Ericsson Research Fast packet processing in the cloud Dániel Géhberger Ericsson Research Outline Motivation Service chains Hardware related topics, acceleration Virtualization basics Software performance and acceleration

More information

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE Design Guide APRIL 0 The information in this publication is provided as is. Dell Inc. makes no representations or warranties

More information

INSTALLATION RUNBOOK FOR Netronome Agilio OvS. MOS Version: 8.0 OpenStack Version:

INSTALLATION RUNBOOK FOR Netronome Agilio OvS. MOS Version: 8.0 OpenStack Version: INSTALLATION RUNBOOK FOR Netronome Agilio OvS Product Name: Agilio OvS Driver Version: 2.2-r4603 MOS Version: 8.0 OpenStack Version: Liberty Product Type: Network Offload Driver 1. Introduction 1.1 Target

More information

Part2: Let s pick one cloud IaaS middleware: OpenStack. Sergio Maffioletti

Part2: Let s pick one cloud IaaS middleware: OpenStack. Sergio Maffioletti S3IT: Service and Support for Science IT Cloud middleware Part2: Let s pick one cloud IaaS middleware: OpenStack Sergio Maffioletti S3IT: Service and Support for Science IT, University of Zurich http://www.s3it.uzh.ch/

More information

Provisioning Intel Rack Scale Design Bare Metal Resources in the OpenStack Environment

Provisioning Intel Rack Scale Design Bare Metal Resources in the OpenStack Environment Implementation guide Data Center Rack Scale Design Provisioning Intel Rack Scale Design Bare Metal Resources in the OpenStack Environment NOTE: If you are familiar with Intel Rack Scale Design and OpenStack*

More information

RACKSPACE ONMETAL I/O V2 OUTPERFORMS AMAZON EC2 BY UP TO 2X IN BENCHMARK TESTING

RACKSPACE ONMETAL I/O V2 OUTPERFORMS AMAZON EC2 BY UP TO 2X IN BENCHMARK TESTING RACKSPACE ONMETAL I/O V2 OUTPERFORMS AMAZON EC2 BY UP TO 2X IN BENCHMARK TESTING EXECUTIVE SUMMARY Today, businesses are increasingly turning to cloud services for rapid deployment of apps and services.

More information

FUJITSU Software ServerView Cloud Monitoring Manager V1.0. Overview

FUJITSU Software ServerView Cloud Monitoring Manager V1.0. Overview FUJITSU Software ServerView Cloud Monitoring Manager V1.0 Overview J2UL-2073-01ENZ0(00) November 2015 Trademarks Copyright FUJITSU LIMITED 2015 LINUX is a registered trademark of Linus Torvalds. The OpenStack

More information

DRIVESCALE-MAPR Reference Architecture

DRIVESCALE-MAPR Reference Architecture DRIVESCALE-MAPR Reference Architecture Table of Contents Glossary of Terms.... 4 Table 1: Glossary of Terms...4 1. Executive Summary.... 5 2. Audience and Scope.... 5 3. DriveScale Advantage.... 5 Flex

More information

Contrail Networking: Evolve your cloud with Containers

Contrail Networking: Evolve your cloud with Containers Contrail Networking: Evolve your cloud with Containers INSIDE Containers and Microservices Transformation of the Cloud Building a Network for Containers Juniper Networks Contrail Solution BUILD MORE THAN

More information

Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell EqualLogic Storage Arrays

Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell EqualLogic Storage Arrays Dell EqualLogic Best Practices Series Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell EqualLogic Storage Arrays A Dell Technical Whitepaper Jerry Daugherty Storage Infrastructure

More information

Virtualization of Customer Premises Equipment (vcpe)

Virtualization of Customer Premises Equipment (vcpe) Case Study Virtualization of Customer Premises Equipment (vcpe) Customer Profile Customer: A Cloud Service Provider Reach: Global Industry: Telecommunications The Challenge A Cloud Service Provider serving

More information

Scality RING on Cisco UCS: Store File, Object, and OpenStack Data at Scale

Scality RING on Cisco UCS: Store File, Object, and OpenStack Data at Scale Scality RING on Cisco UCS: Store File, Object, and OpenStack Data at Scale What You Will Learn Cisco and Scality provide a joint solution for storing and protecting file, object, and OpenStack data at

More information

Virtualization Design

Virtualization Design VMM Integration with UCS-B, on page 1 VMM Integration with AVS or VDS, on page 3 VMM Domain Resolution Immediacy, on page 6 OpenStack and Cisco ACI, on page 8 VMM Integration with UCS-B About VMM Integration

More information

Reference Architecture: Red Hat Enterprise Linux OpenStack Platform 7

Reference Architecture: Red Hat Enterprise Linux OpenStack Platform 7 Reference Architecture: Red Hat Enterprise Linux OpenStack Platform 7 Last update: 11 March 2016 Version 1.1 Provides both economic and high performance options for cloud workloads Describes Lenovo E5-2600

More information

Docker All The Things

Docker All The Things OpenStack Services Docker All The Things and Kubernetes and Atomic OpenStack Summit Paris, November 2014 @brentholden @jameslabocki Agenda The Problem Current Solutions Tomorrow s Improvements Demonstration

More information

SSD and Container Native Storage for High- Performance Databases

SSD and Container Native Storage for High- Performance Databases SSD and Native Storage for High- Performance Databases Earle F. Philhower, III Sr. Technical Marketing Manager, Western Digital August 2018 Agenda There Will Be Math * 1 Databases s = Null Set? 2 Integral(VMs

More information

Veeam Backup & Replication on IBM Cloud Solution Architecture

Veeam Backup & Replication on IBM Cloud Solution Architecture Veeam Backup & Replication on IBM Cloud Solution Architecture Date: 2018 07 20 Copyright IBM Corporation 2018 Page 1 of 12 Table of Contents 1 Introduction... 4 1.1 About Veeam Backup & Replication...

More information

Cloud CPE Solution Release Notes

Cloud CPE Solution Release Notes Cloud CPE Solution Release Notes Release 2.1 10 February 2017 Revision 3 These Release Notes accompany Release 2.1 of the Juniper Networks Cloud CPE Solution. They contain installation information, and

More information

OpenStack Architectural Decisions

OpenStack Architectural Decisions OpenStack Architectural Decisions How to architect OpenStack for your organization Thomas Goh Agenda The General Purpose Cloud General Considerations Compute Considerations Network Considerations Considerations

More information

OpenContrail Overview Architecture & Demo

OpenContrail Overview Architecture & Demo www.opencontrail.org OpenContrail Overview Architecture & Demo Qasim Arham Oct, 2014 Agenda Introduction OpenStack Architecture and Overview OpenContrail and OpenStack Integration OpenStack Neutron Overview

More information

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE VSAN INFRASTRUCTURE

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE VSAN INFRASTRUCTURE DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE VSAN INFRASTRUCTURE Design Guide APRIL 2017 1 The information in this publication is provided as is. Dell Inc. makes no representations or warranties

More information

SUN CUSTOMER READY HPC CLUSTER: REFERENCE CONFIGURATIONS WITH SUN FIRE X4100, X4200, AND X4600 SERVERS Jeff Lu, Systems Group Sun BluePrints OnLine

SUN CUSTOMER READY HPC CLUSTER: REFERENCE CONFIGURATIONS WITH SUN FIRE X4100, X4200, AND X4600 SERVERS Jeff Lu, Systems Group Sun BluePrints OnLine SUN CUSTOMER READY HPC CLUSTER: REFERENCE CONFIGURATIONS WITH SUN FIRE X4100, X4200, AND X4600 SERVERS Jeff Lu, Systems Group Sun BluePrints OnLine April 2007 Part No 820-1270-11 Revision 1.1, 4/18/07

More information

INSTALLATION RUNBOOK FOR. VNF (virtual firewall) 15.1X49-D30.3. Liberty. Application Type: vsrx Version: MOS Version: 8.0. OpenStack Version:

INSTALLATION RUNBOOK FOR. VNF (virtual firewall) 15.1X49-D30.3. Liberty. Application Type: vsrx Version: MOS Version: 8.0. OpenStack Version: INSTALLATION RUNBOOK FOR Juniper vsrx Application Type: vsrx Version: VNF (virtual firewall) 15.1X49-D30.3 MOS Version: 8.0 OpenStack Version: Liberty 1 Introduction 1.1 Target Audience 2 Application Overview

More information

Deterministic Storage Performance

Deterministic Storage Performance Deterministic Storage Performance 'The AWS way' for Capacity Based QoS with OpenStack and Ceph Federico Lucifredi - Product Management Director, Ceph, Red Hat Sean Cohen - A. Manager, Product Management,

More information

An Oracle Technical White Paper October Sizing Guide for Single Click Configurations of Oracle s MySQL on Sun Fire x86 Servers

An Oracle Technical White Paper October Sizing Guide for Single Click Configurations of Oracle s MySQL on Sun Fire x86 Servers An Oracle Technical White Paper October 2011 Sizing Guide for Single Click Configurations of Oracle s MySQL on Sun Fire x86 Servers Introduction... 1 Foundation for an Enterprise Infrastructure... 2 Sun

More information

Technical Presales Guidance for Partners

Technical Presales Guidance for Partners Technical Presales Guidance for Partners Document version Document release date 1.1 25th June 2012 document revisions Contents 1. OnApp Deployment Types... 3 1.1 Public Cloud Deployments... 3 1.2 Private

More information

PUBLIC SAP Vora Sizing Guide

PUBLIC SAP Vora Sizing Guide SAP Vora 2.0 Document Version: 1.1 2017-11-14 PUBLIC Content 1 Introduction to SAP Vora....3 1.1 System Architecture....5 2 Factors That Influence Performance....6 3 Sizing Fundamentals and Terminology....7

More information