SolidFire Mirantis Unlocked Reference Architecture


SolidFire Inc.
US Headquarters: 525 Almanor Ave, Sunnyvale, CA
Boulder, CO: 1600 Pearl St, Suite 200, Boulder, Colorado

SolidFire Mirantis Unlocked Reference Architecture
Original Release: December 2015
Revision 1: April 2016
Authors: Ed Balduf (Cloud Solutions Architect, SolidFire), Christian Huebner (Sr. Systems Architect, Mirantis)

All Rights Reserved

Table of Contents

Abstract
  About OpenStack
  Why OpenStack
  About SolidFire
  About Mirantis
  About this Reference Architecture
Target Audience
OpenStack
  Compute Service (Nova)
  Block Storage Service (Cinder)
  Image Service (Glance)
  Keystone
  Other OpenStack Services
  Neutron
OpenStack Storage
  Storage Types in OpenStack
  Ephemeral Storage
  Block Storage
  Object Storage
  File Storage
  Image Storage
  SolidFire Storage
  SolidFire Element OS
  SolidFire Active Support
  SolidFire Active IQ
  SolidFire Storage Value for OpenStack
Reference Architecture
  Architecture
  Physical Design
  Network layout
  OpenStack Storage
  Block Storage
  SolidFire
  Ceph
  Image storage
  Object Storage
Best Practices
  Mirantis OpenStack Best Practices
  Networking Best Practices
  Storage Best Practices
  Database Best Practices
  Hypervisor
  Operating System Tuning
Implementation
  SolidFire Fuel plugin Deployment Workflow
  Verification and Testing
  Health Check
  Functional testing
  Verify Backend Placement
  Configure and Verify SolidFire Image caching
  System testing
  Performing functional testing
Support
Conclusion
Software and Documentation
Addendum - Network Config
  Configuring the network switch

Table of Images

Image 1: OpenStack Architecture
Image 2: OpenStack and NetApp FAS network architecture
Image 3: Control flow (black) and data flow (red) between participants
Image 4: OpenStack and NetApp E Series architecture
Image 5: Control flow (black) and data flow (red) between participants
Image 6: Fuel Boot
Image 7: Fuel Configuration Screen
Image 8: Fuel Plugin Installation
Image 9: OpenStack Environment Creation
Image 10: Assign Roles in Fuel UI
Image 11: Configure Networking
Image 12: Enable and Configure NetApp Plugin
Image 13: Deploy Changes

1 Abstract

Cloud computing has transformed the IT landscape and ushered in an era of Infrastructure-as-a-Service (IaaS). Applications running on cloud infrastructure can take advantage of its flexibility, cost-effectiveness, disaster-recovery options, and security. OpenStack is the leading free and open-source cloud computing IaaS platform and offers a rich set of features beyond traditional compute, storage, and networking.

This reference architecture was prepared jointly by Mirantis Unlocked and SolidFire. It details architectural considerations, best practices, and deployment methodologies to create a highly available Mirantis OpenStack 7.0 (OpenStack Kilo release) cloud using SolidFire storage. The architectural design described has been verified by Mirantis and SolidFire and can be deployed using the Mirantis 7.0 Fuel plugin for SolidFire.

1.1 About OpenStack

OpenStack is an open-source project, released under the Apache 2.0 license, implementing services to support Infrastructure as a Service (IaaS) along with additional Platform as a Service (PaaS) components. The project is managed by the OpenStack Foundation, a nonprofit corporate entity established in 2012 to promote, protect, and empower OpenStack software and its associated community. OpenStack technology comprises a series of modular projects that control large pools of processing, storage, and networking resources throughout a datacenter, all managed through a single dashboard that gives administrators control and enables self-service resource provisioning by users.

1.2 Why OpenStack

OpenStack provides the best mechanism for IT organizations and service providers to leverage flexible, cost-effective, enterprise-ready Infrastructure-as-a-Service (IaaS). Contributions made by SolidFire, Mirantis, and others can be deployed with existing hardware in your data center or proof-of-concept labs. In most cases, drivers for specific infrastructure solutions are provided open source and free of cost. OpenStack use cases include public cloud providers, private cloud deployments,

and hybrid clouds. For more information on OpenStack benefits and enterprise use-cases, see the OpenStack Foundation website.

1.3 About SolidFire

SolidFire delivers the industry's most comprehensive OpenStack block storage integration. Combining this integration with SolidFire's guaranteed performance, high availability, and scale, customers can now confidently host performance-sensitive applications in their OpenStack cloud infrastructure. SolidFire began contributing code to OpenStack in 2012, released its first Cinder driver with OpenStack Folsom in the fall of that year, and is an OpenStack Foundation Corporate Sponsor. Further information about SolidFire's OpenStack integration and contributions can be found on the SolidFire website.

1.4 About Mirantis

Mirantis is the pure-play OpenStack company. Mirantis delivers all the software, services, training, and support needed for running OpenStack. More customers rely on Mirantis than any other company to get to production deployment of OpenStack at scale. Mirantis is among the top three companies worldwide in contributing open source software to OpenStack, and has helped build and deploy some of the largest OpenStack clouds in the world, at companies such as Cisco, Comcast, Ericsson, NASA, Samsung, and Symantec. Mirantis is venture-backed by August Capital, Ericsson, Intel Capital, Insight Venture Partners, Sapphire Ventures, and WestSummit Capital, with headquarters in Sunnyvale, California. Follow us on Twitter.

1.5 About this Reference Architecture

OpenStack has steadily gained features and deployment tools that allow for high availability at the control plane, hypervisor, and storage tiers. Enterprises are now looking for a single solution that can meet the requirements of all their workloads, rather than piecing together various different solutions. In this reference architecture we focus on architectural considerations and deployment methodologies for a highly available Mirantis OpenStack 7.0 (MOS 7.0) cloud with SolidFire storage. We will also cover architectural considerations for multiple storage back-ends, including

SolidFire, and perform some basic performance testing. We will cover the use of the Mirantis OpenStack 7.0 SolidFire Fuel plugin. The reference architecture described in this document is based on the SolidFire OpenStack Configuration Guide and the Mirantis OpenStack Operations Guide and User Guide.

2 Target Audience

The target audience for this document includes the following groups:

- Systems engineers, solutions architects, consultants, or IT administrators involved in basic configuration and proof-of-concept efforts.
- Technical decision makers. This document describes the value of using SolidFire storage solutions with OpenStack cloud services to create a cloud environment that enterprises and service providers can use to meet their respective needs.
- Service provider and enterprise cloud architects. SolidFire storage provides an enterprise-class underpinning for OpenStack-based clouds. This document describes architecture, best practices, and key design considerations for deploying OpenStack in a highly available manner.
- SolidFire and Mirantis partners and implementers. It is important for SolidFire and Mirantis partners to understand the solution architecture of OpenStack on SolidFire storage in order to meet and exceed customer requirements and expectations for their cloud solutions.

The document assumes the reader has an architectural understanding of OpenStack and has reviewed related content in the OpenStack documentation. The OpenStack documentation center can be found at http://docs.openstack.org.

3 OpenStack

OpenStack is a cloud operating system that controls pools of compute, storage, and networking. OpenStack consists of a number of different services, some of which are shown below.

Image 1: OpenStack Architecture

3.1 Compute Service (Nova)

The OpenStack Nova compute service manages the instance lifecycle from creation to scheduling, management, and eventually deletion. Nova manages compute resources and provides a common REST API, which developers can use to automate cloud operations and provide external orchestration. The API is also used by administrators to control and manage compute resources and instances. Nova is designed to scale horizontally on standard hardware.

3.2 Block Storage Service (Cinder)

The OpenStack Cinder block storage service provides dynamic provisioning and portability of block storage devices for Nova instances. Cinder provides persistent block devices mapped to OpenStack compute instances (which use ephemeral, transient storage by default). Cinder manages the creation, attachment, and detachment of block devices to instances. Persistent block devices provided by Cinder also support instance booting and provide mechanisms for creating Cinder snapshots.

Cinder is built as a pluggable architecture, which permits using a wide range of storage back-ends, ranging from a simplistic Cinder reference back-end using Linux LVM to enterprise-class scale-out storage platforms such as SolidFire. Regardless of the back-end, Cinder provides a standard subset of functionality to Nova and the hypervisors. Some back-ends, including those from SolidFire, provide additional functionality, such as the Quality of Service features inherent to SolidFire block storage volumes. Although fully

integrated with Nova and Horizon, Cinder can also be used independently from OpenStack to provide a standardized abstraction for block storage provisioning.

3.3 Image Service (Glance)

OpenStack Image Service (Glance) provides registration and delivery services for disk and server images. The ability to copy a server image and immediately store it away is a powerful feature of OpenStack. When multiple servers are being provisioned, a stored image can be used as a template to get new servers up and running more quickly and consistently than by installing a server OS and individually configuring additional services.

Keystone

OpenStack Identity Service (Keystone) provides a central directory of users mapped to the OpenStack services they can access. It acts as a common authentication system across the cloud operating system, and it can integrate with existing back-end directory services such as Microsoft Active Directory (AD) and Lightweight Directory Access Protocol (LDAP).

3.4 Other OpenStack Services

Neutron

OpenStack Network Service (Neutron) is a pluggable, scalable, and API-driven system for managing networks and IP addresses. Like other aspects of the cloud OS, it can be used by administrators and users to increase the value of existing data center assets. Neutron prevents the network from becoming a bottleneck or limiting factor in a cloud deployment and provides users with self-service capability for their network configurations.

Horizon

The OpenStack Dashboard (Horizon) offers administrators and users a graphical interface to access, provision, and automate cloud-based resources.

Ceilometer

OpenStack Telemetry (Ceilometer) provides a common infrastructure for collecting usage and performance measurements in an OpenStack cloud. Its primary initial targets are monitoring and metering.

Heat

OpenStack Orchestration (Heat) implements a service to orchestrate multiple composite cloud applications by using the AWS CloudFormation template format, through an OpenStack-native and CloudFormation-compatible API.

4 OpenStack Storage

Instances created in an OpenStack cloud can be configured to use various forms of back-end storage, ranging from a simple LVM-on-compute-node approach to enterprise-class storage infrastructure. This reference architecture will focus on SolidFire scale-out, all-flash, enterprise-class storage.

4.1 Storage Types in OpenStack

Ephemeral Storage

Ephemeral storage can be used to boot instances on a compute node. It is controlled by OpenStack Nova. As it is not persistent beyond the instance's lifecycle, it cannot be used to store any kind of persistent data. Ephemeral storage in OpenStack is generally not shared between compute nodes, and when used in this manner, instances cannot be migrated among nodes. However, if the ephemeral storage location is shared and the proper configuration is in place, migration may take place.

Block Storage

In OpenStack, the generic term block storage refers to storage that is persistent and survives when the instance is destroyed. As block storage resources are generally shared, the replacement instance may be on an arbitrary compute node. Cinder is the OpenStack component that provides access to and manages block storage. To OpenStack hosts, the storage appears as a

block device, but it uses iSCSI, Fibre Channel, NFS, or a range of other proprietary protocols for back-end connectivity.

Cinder serves two distinct roles around block storage. The first is allocation and control of block storage resources using any storage back-end with a proper driver. Cinder's second role is to provide an example reference implementation of a block storage back-end, using common Linux tools. The Cinder reference back-end is mainly used for testing during OpenStack development.

Cinder is capable of addressing more than one back-end with the multi-back-end feature. This feature is useful for multiple performance tiers, e.g. SolidFire as tier 1 for IOPS-heavy workloads, with NetApp FAS over NFS for general cloud storage.

Object Storage

Object storage stores reference data as binary objects (rather than as files or LUNs), typically storing or retrieving the entire object in one command. Objects are stored and referenced using HTTP (web-based) protocols with simple commands such as PUT and GET.

File Storage

The File Share Service, in a manner similar to Cinder, provides coordinated access to shared or distributed file systems. While the primary consumption of file shares would be across OpenStack Compute instances, the service is also intended to be accessible as an independent capability in line with the modular design established by other OpenStack services. The development name for this project is Manila. The Manila project has taken on the task of managing file-system-based storage (NFS, CIFS, HDFS) in an OpenStack context. Manila graduated incubation and was designated as a core service as of OpenStack's Kilo release (May 2015).

Image Storage

The Glance component of OpenStack provides a service where users can upload and discover data assets that are meant to be used with other services. This currently includes images and metadata

definitions. While not strictly a storage subsystem, Glance is a core component of OpenStack and relies heavily on other storage subsystems for its actual storage needs (1). Glance image services include discovering, registering, and retrieving virtual machine images. Glance has a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image. Like Cinder and Manila, Glance works with multiple back-ends, among them object storage, ephemeral storage, file storage, and Cinder storage.

(1) Glance itself does not provide storage, but relies upon other facilities to store images. All file operations are performed using the glance_store library, which is responsible for interactions with external storage back ends or local filesystems.

4.2 SolidFire Storage

SolidFire provides a scale-out, all-flash storage platform designed to deliver guaranteed storage performance to thousands of application workloads side by side, allowing consolidation under a single storage platform. The scale-out system is built specifically for the next generation data center and has the unique ability to manage storage performance completely separately from storage capacity, meeting both technology pressures and business demands. Any SolidFire storage nodes can be mixed and matched within a system to take advantage of the most current and cost-effective flash technology available. SolidFire nodes can be combined over a 10Gb Ethernet network, or connected to 8/16 Gb FC clients, in clusters ranging from 4 to 100 nodes. The SolidFire primary storage solution scales up to 100 nodes, provides capacity from 35TB to 3.4PB, and can deliver between 200,000 and 7.5M guaranteed IOPS to more than 100,000 volumes/applications within a single system, increasing overall business productivity through consolidation, automation, and granular scalability.

4.3 SolidFire Element OS

SolidFire Element OS uniquely answers all five key storage requirements for a true next generation data center. This all-inclusive, industry-leading storage operating system is at the core of every SolidFire infrastructure, giving you capabilities no other storage technology can match. Element OS enables:

- Scale-Out Growth: Grow in response to business demands by seamlessly and granularly adding capacity and performance for each new business need.

- Guaranteed Performance: Optimize your infrastructure by managing performance independent of capacity.
- Automated Management: Set your infrastructure to automatic and respond to business demands rapidly with on-the-fly adjustments.
- Data Assurance: Consolidate with confidence using SolidFire's Helix data protection for your most critical information.
- Global Efficiencies: Maximize TCO with always-on deduplication, compression, and thin provisioning that reduce storage, space, and power costs.

4.4 SolidFire Active Support

Support shouldn't be something that happens every once in a while. At SolidFire, support is an active process that happens consistently from the moment a SolidFire cluster is deployed. SolidFire Active Support continuously monitors and diagnoses systems, ensuring SolidFire products are maintained and operated at the highest possible level of availability and performance.

SolidFire Active Support continuously monitors your systems, proactively alerting you when a problem is present. SolidFire support can remotely and securely log into systems to provide hands-on, real-time support. SolidFire's support is global; international offerings are available, with up to 4-hour break-fix SLAs. All calls and cases are handled by tier-three support engineers who can resolve your issues or answer your questions the first time, every time.

4.5 SolidFire Active IQ

The SolidFire Active IQ SaaS platform, a key element of Active Support, provides real-time health diagnostics, historical performance, and trending from the system level all the way down to each individual volume. This holistic approach to infrastructure monitoring, combined with SolidFire's unique abilities to upgrade, tune, and scale on demand without disruption, redefines operational success for storage infrastructure.

- Anticipate business demands: Real-time usage modeling and granular system data increase agility, enabling you to proactively plan for evolving business demands and simplifying storage resource optimization.
- Increased productivity: Consolidated monitoring saves time when managing multiple clusters/sites, and comprehensive storage metrics reduce assessment efforts. This means you have continuous visibility into changing conditions, empowering you to avoid issues rather than react to them.

- Reduced risk: Customizable alerts notify you instantly of possible issues. Faster response reduces risks to the business.

4.6 SolidFire Storage Value for OpenStack

Reliable and flexible infrastructure is critical to any business, and the era of cloud services has raised the bar on what it means to deliver IT services. When you deploy SolidFire storage with OpenStack, you can focus up the stack on application deployments, iterate and innovate more, and get back to the business of your business. SolidFire's all-flash, scale-out storage provides the following features to OpenStack:

- Provides one of the industry's best Cinder integrations (2) and can be deployed for use in OpenStack in under a minute, by inserting a minimum of four lines of configuration into the Cinder configuration file (3).
- Scales non-disruptively, enabling you to grow your Cinder block storage resources without impact as business needs demand.
- Guarantees performance to all tenants and applications in your cloud (4).
- Ensures all your cloud data is available and protected against loss and corruption.
- Is an easily programmable resource through a full-featured API and complete automation, reducing manual coding and errors.
- Is fully compatible with Swift object storage for integrated backup and restore.

(2) As defined by the Cinder driver support matrix and the lack of any additional installation requirements beyond the standard Cinder install.
(3) May be dependent on how fast one can type.
(4) Through use of Quality of Service parameters.

SolidFire's Cinder driver is elegant and straightforward, allowing customers to configure OpenStack for SolidFire quickly, without the need for additional configuration libraries or add-ons. The maturity of the driver reveals itself in its easy setup for users and the completeness of the feature set once setup is complete:

- Clone a full environment with no impact to storage capacity
- Guarantee performance through full Quality of Service (QoS)
- Speed deployment with Glance image caching and boot-from-volume capabilities
- Live-migrate instances between OpenStack compute servers completely non-disruptively

- Easily triage issues through 1:1 ID mapping between the Cinder volume ID and the SolidFire volume name
- 1:1 mapping and automated creation of tenant/project IDs as SolidFire accounts
- Automate configuration of SolidFire's Cinder driver through the available Puppet module

5 Reference Architecture

5.1 Architecture

For this use case, we have targeted the automated deployment and configuration of OpenStack services and multiple OpenStack storage back-ends. With this goal in mind, we have deployed only two compute nodes, with a small SolidFire cluster and a small Ceph storage cluster. We will focus on the steps necessary to automate the deployment and utilize the two storage back-ends. The basic stack deployed is shown below, with the absence of the blue third-party tools; although Fuel has many plugins to support these tools, we will leave the selection and implementation details to the tool vendors. Fuel will deploy the operating system components, OpenStack components, Ceph storage, and connections among the components as necessary.

5.2 Physical Design

The hardware used to build this reference architecture is shown below, configured in a rack. The hardware setup comprises eight Dell R620 servers (one deployment node, three controller nodes, three Ceph storage nodes, and two compute nodes), a four-node SolidFire cluster, two 10 Gbps top-of-rack (TOR) switches, and two 1 Gbps management switches. The configuration consumes 16 rack units, leaving 26 rack units vacant for growth. We have chosen to place the storage at the top and the bottom, and the compute nodes in the middle, to allow for scaling of storage (both storage types) and compute.

5.3 Network Layout

Administrative / Provisioning TOR Access Switches

The Dell S55 switches provide 1 Gb/s connectivity for administrative in-band access to all infrastructure components, as well as dedicated connectivity for infrastructure bare-metal provisioning.

These switches are configured with stacking. In the stacked configuration, the switches provide both redundancy and a single point of switch administration. Stacked switches are seen as a single switch, and thus have the added benefit of allowing the directly connected systems to utilize LACP bonding for aggregated bandwidth, in addition to providing redundant connectivity. When connecting to the data center aggregation layer, it is recommended that each admin switch be connected to each upstream aggregation switch independently with at least two links. As the diagram below shows, this facilitates uptime and availability during an upstream aggregation switch outage or maintenance period.

Virtual Link Trunking (VLT) is another good practice in environments where downstream devices, such as access switches, need to build port channels/link aggregation groups across two separate upstream switches. The downstream switches view the VLT peers as a single logical chassis, allowing the LAG group to be split across the two peers. The other benefit is that, because the peers have distinct control planes, it is possible to perform maintenance one switch at a time without taking the entire fabric down.

Data and Storage TOR Access Switches

The Dell S4810 switches provide the high-bandwidth 10 Gb/s connectivity required to support OpenStack cluster operations and SolidFire storage access. Dual S4810 switches are configured to utilize Virtual Link Trunking (VLT). VLT facilitates enhanced high availability and greater aggregated bandwidth, both upstream to the data center aggregation/core layer and downstream to the directly connected hosts, by allowing the switch pair to be seen as one switch. This allows the formation of link aggregation groups (LAGs) that combine multiple links into one logical link and aggregate bandwidth.

The VLT trunk utilizes 2 x 40GbE links to provide a total of 80Gb/s of throughput between TOR switches. One or two of the remaining 40GbE ports may be used as uplinks to the data center aggregation switches. This configuration will deliver up to 160Gb/s total throughput between the TOR data/storage switches as the solution scales out rack by rack. The redundant link configuration shown in the diagram below facilitates high availability and maintains a minimum number of network hops through the infrastructure in the event of an upstream outage.

Cabling Considerations

This reference architecture utilizes copper cabling throughout the design. For 10G connectivity, copper direct-attach cabling, also known as TwinAx cabling, is more cost effective than comparable optical options, and the SFPs are directly attached to the cable. Depending on port availability,

connectivity to the aggregation layer switches can be via QSFP+ cables, or QSFP+ to 4x10GbE SFP+ breakout cables. Both options deliver up to 40Gb/s per QSFP+ port. Fiber is also an option; however, SFP+ direct-attach cables tend to be more cost effective than optical SFP+ fiber options.

Physical Node Connectivity

As a general design guideline, all Agile Infrastructure nodes (OpenStack and SolidFire) are dual-connected to both the administrative TOR access switches and the TOR data/storage access switches. This provides switch fault tolerance and increased aggregated bandwidth when needed via multiple NIC bonding options. In this reference architecture, all nodes are configured to utilize NIC bonding to facilitate minimum downtime and added performance when needed. This provides simple, consistent physical connectivity across the deployment. However, different bonding modes were used for OpenStack nodes and SolidFire storage nodes, as explained below.

OpenStack nodes are configured to use NIC bonding mode 1, or active-passive. In addition to providing physical NIC failover in case of a failed link, active-passive bonding allows for simple, predictable traffic flow over the designated active link. However, if bandwidth becomes a concern

at the compute layer or higher aggregated bandwidth is desired, changing to bonding mode 5 (TLB) is recommended to alleviate bandwidth constraints while maintaining redundancy. Either bonding mode 1 (active-passive) or bonding mode 5 (TLB) is recommended for OpenStack nodes because switch-side port configuration is not necessary, making scaling of compute nodes simple.

SolidFire nodes are configured to use NIC bonding mode 4, or LACP bonding. This bonding mode provides both redundant switch connectivity and aggregated bandwidth; one main difference from bonding mode 5 (TLB) is that it uses the same MAC address for all outbound traffic, and traffic is load-balanced inbound and outbound across all bonded links. LACP bonding suits the storage nodes well, giving up to 20 Gb/s of aggregated storage throughput per node.

Jumbo Frames

For maximum performance, use of jumbo frames across the 10G network infrastructure is recommended. The design presented in this reference architecture relies on an MTU setting of 9212 or higher being configured end-to-end across the network path for proper operation and performance. OpenStack nodes and SolidFire nodes have all 10G interfaces set to use an MTU of 9000. This MTU is less than that defined on the switches because of the overhead required to process jumbo frames: the switches add additional information to each frame prior to transmission, and thus the MTU must be higher than 9000 on the switch side.

VLANs

We have chosen to configure the four OpenStack networks using VLANs on the bonded 10Gbps network pairs (20 Gbps total throughput) as shown below, using four VLANs assigned by our lab administrator.
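For illustration, a host-side configuration matching this bonding, VLAN, and MTU scheme might look like the following sketch in Ubuntu's /etc/network/interfaces format. This is a sketch only: Fuel renders the actual network configuration during deployment, the physical interface names and IP address are placeholders, and the VLAN ID follows the mapping table later in this section.

    # Illustrative only; Fuel generates the real configuration.
    # eth2/eth3 and the address below are placeholders for this sketch.
    auto bond0
    iface bond0 inet manual
        bond-mode active-backup
        # switch to balance-tlb (mode 5) if more aggregate bandwidth is needed
        bond-miimon 100
        bond-slaves eth2 eth3
        mtu 9000

    # Management VLAN carried as a tagged sub-interface on the bond
    auto bond0.1004
    iface bond0.1004 inet static
        vlan-raw-device bond0
        address 192.168.0.10
        netmask 255.255.255.0
        mtu 9000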

Logical Network Layout

Some notable points about our layout choices:

1) Using Neutron networking without the virtual router option means that the public network is only required to be connected to the controller nodes.

2) The storage network needs to be connected to the controller nodes to allow Cinder to manipulate data on storage volumes (i.e., for image-to-volume conversions).

3) Our corporate router/firewall complex does not pass DHCP requests from network to network; therefore, we do not need a completely private DHCP network for this architecture. We have a router port allowing access to the Fuel deployment node through the PXE network. The Fuel deployment node must still be the only DHCP server on the network.

4) All of the storage in our lab is connected to its own control network for management, and we have chosen to route management traffic to it through our corporate router complex. We have moved the data path from the corporate storage network to this architecture's storage network to provide a fast storage network.

5) The PXE network and the corporate storage control network are the only two networks running on 1Gbps infrastructure.

Networks Summary

Networks used in this reference architecture are described below.

- PXE Boot: Used by nodes to PXE boot from the Fuel provisioning server to provision operating systems and OpenStack.
- OpenStack (Public): Provides access between OpenStack VMs and the rest of the corporate or data center networks. It also provides portal/API access to the OpenStack cloud controllers. Also known as the Public or Floating network; floating IPs are assigned from this network to individual VMs to enable external connectivity to networks outside the OpenStack environment. The term "public" does not necessarily imply that the network is Internet accessible.
- Management: Provides isolated access for OpenStack cloud internal communications such as DB queries, messaging, HA services, etc.
- Storage: Isolated network access between OpenStack nodes and the SolidFire cluster or other storage (Ceph).
- Private (mesh): Internal network providing connectivity between tenant VMs across the entire OpenStack cloud. Also known as a fixed network; fixed IPs are assigned to individual VMs.

Physical and Logical Interface Mapping

(Network / Physical Interface / VLAN ID / Logical Interface)

Role: OpenStack Controller
  PXE / Port 1, Port 2 / - / bond1
  OpenStack / Port 3, Port 4 / 1001 / bond0.1001@bond0
  Storage / Port 3, Port 4 / 25 / bond0.25@bond0
  Management / Port 3, Port 4 / 1004 / bond0.1004@bond0
  Private / Port 3, Port 4 / 1003 / bond0.1003@bond0

Role: OpenStack Compute
  PXE / Port 1, Port 2 / - / bond1
  Storage / Port 3, Port 4 / 25 / bond0.25@bond0
  Management / Port 3, Port 4 / 1004 / bond0.1004@bond0
  Private / Port 3, Port 4 / 1003 / bond0.1003@bond0

Role: Provisioning Server
  PXE / Port 1, Port 2 / - / bond1

Role: SolidFire Storage Node
  Corporate Admin / Port 1, Port 2 / - / bond1g
  Storage / Port 3, Port 4 / 25 / bond10g

Role: Ceph Nodes
  PXE / Port 1, Port 2 / - / bond1
  Storage / Port 3, Port 4 / 25 / bond0.25@bond0
  Management / Port 3, Port 4 / 1004 / bond0.1004@bond0
  Private / Port 3, Port 4 / 1003 / bond0.1003@bond0

5.4 OpenStack Storage

OpenStack storage takes on several flavors in every OpenStack installation. In this section we address each one in some detail, but we focus on the block storage aspects first and in the most depth.

Block Storage

Cinder, the OpenStack block storage subsystem, consists of five cooperating components, which provide block storage services to VMs managed mainly by the Nova compute service. The five components are:

API service - This service provides a standardized RESTful interface to accept requests; it may update the database and places requests on the message bus to other services within Cinder. The API service may send requests directly to the Volume Manager or Volume service for requests going to an existing, known volume.

Scheduler service - The scheduler is responsible for determining where and when storage resources should be allocated. Requests are routed here when decisions of where or when need to be made.

Volume Manager service - The Volume Manager service is responsible for communication with the actual storage device through a driver. The Volume Manager service does not necessarily provide any storage, especially when a third-party storage device/cluster is used.

Volume service (optional) - The Volume service is the reference implementation of storage provided with Cinder. The Volume service uses standard Linux mechanisms (LVM and TGT) to provide storage. The Volume service does not provide the highly available, scalable storage demanded by a production cloud, but instead quick, convenient storage for testing OpenStack. Unfortunately, the Volume service is included as part of the code in the Volume Manager, and therefore many people regard the Volume service and the Volume Manager as one unit; they are not. In this reference architecture we will use two third-party storage services and not the Volume service.

Backup service (optional) - The Cinder backup service allows Cinder to coordinate backup of volumes under its control to OpenStack Swift object stores. Cinder backup will mount those snapshots on the Cinder controller and copy the data to the Swift service.

The diagram below is a good representation of the architecture of Cinder and how we will utilize it in this reference architecture. It does, however, leave out the fact that Cinder (and Nova) are running in the Fuel high availability architecture utilizing HAProxy, Galera, and Pacemaker, which means

there are three copies of each Cinder service, one on each of the three controller nodes. Notice that two drivers have been configured in the Cinder Volume Manager service to communicate with the two back-end storage devices (SolidFire and Ceph).

Data Flow

As shown in the diagram above, the SolidFire driver communicates with the SolidFire cluster using a JSON-RPC RESTful interface, and the bulk storage data flows from the VMs and hypervisor via iSCSI to the SolidFire cluster without interference from Cinder. Within the SolidFire cluster, data for each volume is spread across the nodes (load balanced) by the SolidFire Element operating system. No configuration is necessary to take advantage of this load balancing. Data is compressed and deduplicated inline in 4KB blocks across the cluster.

Nova requests and Cinder provides the connection information; the SolidFire driver provides the SVIP (Storage Virtual IP) as the initial connection point. When the hypervisor contacts the SVIP, the SolidFire cluster redirects the iSCSI session to the optimal location within the cluster for the volume being requested. This initial contact with the SVIP interface also allows the SolidFire cluster to load balance iSCSI connections across the networking connections of the cluster. The redirection of the iSCSI connection to a node within the cluster means that all hypervisors need a high-speed connection to

all SolidFire nodes; that is why it is important to configure the networking based on the SolidFire networking best practices. In fact, the SolidFire nodes also use the same network for internode communication.

Multi-Back-End and Message Routing

Cinder is designed to support multiple storage devices, or back-ends. Cinder will decide which back-end to assign volumes to based on characteristics of the back-end and the request; the Cinder scheduler is the mechanism that makes these decisions. A key point about Cinder and multiple back-ends is that Cinder stores information in its database differently when it is configured for multiple back-ends than when it is configured for a single back-end. The difference is related to the routing of requests to back-end devices. When there is a single back-end, it is always accessed through the same host on the message bus and therefore little to no routing information is needed. Once multiple back-ends are configured, additional routing information is required.

In this reference architecture we will experience this because Fuel is not currently designed for multiple storage back-ends. Once deployment is complete, we will quickly (before provisioning any storage) configure the OpenStack instance for multiple back-ends. The definition of the old single back-end will remain, but it will be inactive. Because changing from single to multiple back-ends would require a database change for any storage previously configured, it is recommended to change all configurations to multi-back-end constructs immediately (before any allocations), even if only a single back-end is configured.

In addition to supporting multiple back-ends, the Cinder database contains routing information to the instance of cinder-volume which supports the storage. In the case of the OpenStack reference implementation, cinder-volume is providing the storage (i.e., the Cinder Volume service) and has locational demands (i.e., the storage lives on the host running cinder-volume and therefore all requests must go to that host). In the case of most third-party storage, cinder-volume communicates over the network to orchestrate the storage, such that multiple cinder-volume services may be able to satisfy a request by communicating with the storage back-end over the network. In the first case, the routing information is generally the hostname or IP address of the cinder-volume service; in the second case, because we want any and all cinder-volume services to be available to handle the request, we use a generic routing name defined by the host directive in cinder.conf to indicate a common host name. An example may help:
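For orientation, a minimal multi-back-end cinder.conf sketch is shown below; the address and credentials are placeholders, and in this reference architecture the equivalent settings are generated by Fuel and the SolidFire plugin rather than written by hand. The [solidfire] stanza also illustrates the handful of lines (volume_driver, san_ip, san_login, san_password) referred to in section 4.6.

    [DEFAULT]
    # Route requests to both back-ends; "host" gives every controller's
    # cinder-volume the same routing name so any of them can service requests.
    enabled_backends = solidfire,ceph
    host = solidfire

    [solidfire]
    volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
    volume_backend_name = solidfire
    # Cluster MVIP and admin credentials (placeholders)
    san_ip = 10.10.25.200
    san_login = openstack-admin
    san_password = secret

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes
    rbd_user = volumes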

In this screenshot, the last three sections of cinder.conf are shown along with the Cinder service list. You can see that Cinder uses the host= configuration line to identify the host name. Any cinder-volume service with that hostname will be able to retrieve message requests for that back-end (the message bus enforces atomic operations, so only a single host will process each request), but the third storage system will always be routed to the host 'node-25'. The ability of multiple hosts to process requests is important for high availability of the OpenStack controller nodes. The SolidFire plugin configures the host to always be "solidfire" on all OpenStack controllers deployed by Fuel.

In addition, if we look at specific volumes, we can see their routing information in the database. Notice that the pool name has been appended to the routing information (after the "#"). If your services change from what is recorded in the volume entries in the database, a migration of the data for the volumes is necessary.

Volume Types

Volume types provide a tagging system to characterize the storage OpenStack makes available to users. If volume types are not used, all block storage is considered to be the same and the OpenStack scheduler will select the least-full back-end to place any requested volumes.

If, however, volume types are set up, sophisticated direction of volume requests to back-end characteristics, including quality of service (QoS), may be assigned. Volume types are given simple names to identify them for users, but under the covers, key/value pairs provide direction to the storage subsystem and the scheduler to determine which back-end should satisfy a request. The most common practice is to name the various back-ends in cinder.conf, and then assign a specific volume type to a back-end by that name. Other options are to assign back-ends by vendor name or by some other characteristic, like replication.

Scheduling/Assignment

The Cinder scheduler takes information from the volume request and uses it to find a placement for the volume on a configured storage back-end. Cinder automatically polls configured back-ends every 20 seconds to determine their type and capacity. The information from the poll and configuration information in cinder.conf, mainly pools and volume_backend_name, are then used to determine on which back-end to place the requested volume. The scheduler allows users to write their own filters, but most users interact with the scheduler through volume types and volume_backend_name values. The volume_backend_name parameter is configured in the back-end's stanza of the cinder.conf file and may be the same for multiple back-ends of the same type, but should be different for back-ends with different characteristics. When configuring a volume type, the volume_backend_name can and should be specified, and the information will be carried along to the scheduler so that it can decide where to place the volume. If there are multiple back-ends with the same volume_backend_name (or all other characteristics lead to multiple back-ends), the scheduler will use fullness information obtained from the 20-second poll operation and place the volume on the less-full back-end.

SolidFire

All configuration in this reference architecture is based on version 8.0 of the SolidFire Element operating system.

OpenStack SolidFire Integration

SolidFire's integration with OpenStack is through the Cinder driver and the standard iSCSI protocol. The integration through iSCSI allows Cinder to perform any needed manipulations of data within volumes, as needed for image-to-volume and backup operations. The SolidFire driver communicates with the SolidFire cluster using JSON-RPC over an HTTPS (TCP) connection to the SolidFire

cluster's Management Virtual IP (MVIP) interface. The MVIP interface will move throughout the cluster in response to load and/or failure events. The SolidFire Cinder driver contains all functionality for volume manipulations, including create, delete, attach, detach, extend, snapshot, and others. The SolidFire driver also implements the Cinder functionality for Quality of Service (QoS) designations on volumes through the volume-type specifications (see the QoS section below for details). SolidFire also includes a unique capability to carry tenant information into the storage subsystem and create unique iSCSI CHAP credentials for each tenant.

The example below details the integration of names between OpenStack and SolidFire. You can see in the display from OpenStack that the tenant/project ID 18e... has an account created (automatically by the driver) on the SolidFire cluster, and that there is one volume (a3f654...) for the project in OpenStack attached to a running instance. The ability to filter the list of volumes by account allows for simple location of the volume, and the attributes provide some vital information about the volume, including creation date, clone count, and the instance the volume is attached to. The volume name is the volume_id from OpenStack prepended with the UUID- prefix. This information may also be obtained from the SolidFire API, allowing scripts and programs to interact with the SolidFire cluster and the OpenStack API at the same time.
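As a small illustration of this naming convention, the following shell sketch derives the SolidFire volume and account names from a Cinder volume; the volume name my-volume is a placeholder and OpenStack admin credentials are assumed.

    # Placeholder volume name; run with OpenStack admin credentials.
    VOL_ID=$(cinder show my-volume | awk '/ id /{print $4}')
    TENANT_ID=$(cinder show my-volume | awk '/os-vol-tenant-attr:tenant_id/{print $4}')
    echo "SolidFire volume name:  UUID-${VOL_ID}"
    echo "SolidFire account name: ${TENANT_ID}"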

In SolidFire, accounts provide the means to specify iSCSI CHAP credentials. Therefore, if an administrator needs to track down connection information, they can access the CHAP credentials for the tenant/project as needed (see below). The CHAP credentials are randomly generated by the SolidFire OpenStack driver and applied to the hypervisor by Nova after being requested from Cinder.
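As a sketch of such a lookup, an administrator could query the Element API for the account matching the tenant/project ID and read the CHAP secrets from the response. The MVIP address, admin credentials, API version, and tenant ID below are placeholders, and the exact method name and fields should be confirmed against the Element API reference.

    # Placeholders throughout: MVIP, admin credentials, API version, tenant ID.
    curl -sk -u admin:password \
         -H "Content-Type: application/json" \
         -d '{"method": "GetAccountByName",
              "params": {"username": "<tenant-project-id>"},
              "id": 1}' \
         https://10.10.25.200/json-rpc/8.0
    # The account object in the response carries the initiatorSecret and
    # targetSecret fields used as the iSCSI CHAP credentials.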

On the hypervisor, because the iSCSI protocol mounts the device, one can, if needed, investigate the connections to the SolidFire array using standard Linux tools, including the following commands:

    ls -al /dev/disk/by-path/ip*
    iscsiadm -m node --show -T <iscsi target>
    iostat -mtx 5
    fdisk -l

QoS

The SolidFire cluster implements Quality of Service metrics and enforcement for each and every volume. Through tight integration with OpenStack via the Cinder driver, OpenStack can and will allocate every volume on SolidFire with QoS metrics (if nothing else, the cluster default metrics will be applied). There are two possible ways to assign QoS levels to volumes in OpenStack. The first option involves the OpenStack administrator configuring OpenStack volume types for each disk service offering. This method gives the OpenStack administrator full control, and tenants may only create volumes based on administrator-created types. The second option allows the tenant to either choose from pre-configured values or create new performance types. Use the second option with caution.

Providing multiple volume types and utilizing OpenStack Ceilometer gives administrators the ability to present a self-service, performance-enforced, and chargeback-capable storage system to their users. SolidFire has worked with many customers to build public and private clouds which charge a differentiated price structure for various performance levels of storage. Many of these customers label their volume types with the expected performance (min, max, and burst) and price per GB per month; for example, Gold-1000/2000/2500-$0.75/GB/month.

Administrator: Assigning QoS with OpenStack Cinder QoS Specs

To assign QoS specifications to a SolidFire volume, the OpenStack QoS specs are used. The SolidFire driver in OpenStack will check the volume type and QoS spec information by default when the SolidFire volume is created. Following is an example of setting up and using Quality of Service settings with a SolidFire back-end. Notice that the name for the volume type can be of your choosing; however, specific keys are expected in the QoS specs (miniops, maxiops, and burstiops).

Note: Admin access is required.

Note: Key names are case sensitive.

As the OpenStack admin user, navigate to Admin -> System -> Volumes -> Volume Types. Click the Create a QoS Spec button and the Create QoS Spec dialog box will appear. Enter a name for the QoS specification and indicate that the Consumer will be back-end. The consumer indicates where the Quality of Service is going to be enforced: the back-end storage array, the front-end hypervisor (KVM, QEMU, ESXi), or both. Upon completion, a new line will appear in the QoS Specs list. Next, click the Manage Specs button and the details for the Gold QoS spec will be shown. Click the + Create button and the Create Spec dialog box (shown below) will appear.

Use this dialog to create the three QoS keys (miniops, maxiops, burstiops) and their respective values. Once complete, the Gold QoS spec should look like the following:

Click Close, and the QoS spec list on the Volumes -> Volume Types page should now show the new spec. Next, create a volume type to associate these QoS specs with by clicking Create Volume Type in the upper right corner of the Volumes -> Volume Types page. Click Create Volume Type, and then, in the Actions column of the list, pull down the menu and select Manage QoS Spec Association. The Associate QoS Spec with Volume Type dialog will appear and allow selection of the QoS spec assigned to this volume type. Once the QoS spec has been selected, you will see the word Gold in the second column of the volume type list.

Tenants may change volume types on volumes as they wish, and Ceilometer will log the changes so that critical jobs (at month end, for example) may be accelerated for the appropriate fee.
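The same Gold service level can also be set up from the command line. The sketch below mirrors the Horizon walkthrough above, using the 1000/2000/2500 min/max/burst values from the earlier naming example and the key names given in this guide; the IDs in angle brackets are placeholders taken from cinder qos-list and cinder type-list output.

    # Create the QoS spec, enforced on the back-end (the SolidFire cluster)
    cinder qos-create Gold consumer=back-end miniops=1000 maxiops=2000 burstiops=2500

    # Create a volume type and point it at the SolidFire back-end
    cinder type-create Gold
    cinder type-key Gold set volume_backend_name=solidfire

    # Associate the QoS spec with the volume type
    cinder qos-associate <qos_spec_id> <volume_type_id>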

Snapshots/Clones

The SolidFire Cinder driver implements OpenStack snapshots as clones and indicates this through the use of the is_clone attribute on the volume (shown below). Clones on SolidFire clusters are lightweight and initially consume only additional metadata space. Clones and snapshots (which are clones) are named by the driver using the same convention as volumes, with the UUID- prefix.

Data Efficiency

SolidFire clusters provide full deduplication and two levels of compression across the cluster by default. The efficiency information may be obtained from the Reporting tab of the SolidFire UI, and overall efficiency, including thin provisioning savings, is reported in the bottom left corner.

Image Caching

SolidFire image caching is used to eliminate the copying of Glance images to volumes every time a bootable volume is needed. SolidFire image caching is built into the OpenStack Cinder driver and is enabled by default, but it provides the administrator with the ability to indicate which images to cache with a special property placed on the image in the Glance database. The image cache, upon the first copy operation from Glance to a volume (if the property is set), will copy the image to a volume stored under the OpenStack administrative user, and then, using the lightweight cloning operation on the SolidFire cluster, clone a copy for the user who has requested the volume. On subsequent image-to-volume operations for the same image, the volume for the admin user will be checked to make sure it is still current, and if so, a clone operation will be performed to quickly present the next volume. Depending on the size of the image, and on network and Glance storage performance, this cloning operation has been known to save up to one-half hour of copy time from Glance, per image, thus

providing great workflow efficiency for development and test environments or production on-demand workloads.

Limits and Capabilities

The SolidFire Element 8.0 cluster has the following design specifications that should be adhered to:

System Design / Capability
- Nodes in a cluster: 4 minimum, 30 (validated), 100 maximum
- Volumes: 2,000 per account; 17,500 total volumes per cluster; 700 per node, with 400 mounted with active I/O
- Volume size: 8TB
- Volume snapshots: 32 per volume
- Node IOPS: 50,000 (80% read, 4KB random) per node for SF3010, SF2405, SF6010, SF4805 and SF...; ...,000 (80% read, 4KB random) per node for SF...
- Volume IOPS: ...,000 (80% read, 4KB random) per volume
- Sequential read BW: 350 MB/s per node
- Sequential write BW: 190 MB/s per node
- Cloning: 2 simultaneous clones in progress per volume
- Integrated backup/restore operations: 2 simultaneous bulk volume operations in progress per volume

Ceph

Ceph is based on RADOS: the Reliable Autonomic Distributed Object Store. RADOS distributes objects across the storage cluster and replicates objects for fault tolerance. RADOS contains the following major components:

- Object Storage Device (OSD) daemon: The storage daemon for the RADOS service, which interacts with the OSD (the physical or logical storage unit for your data).
- Metadata Server (MDS): Stores metadata. MDSs build a POSIX file system on top of objects for Ceph clients. However, if you do not use the Ceph file system, you do not need a metadata server.
- Monitor (MON): A lightweight daemon that handles all communications with external applications and clients. It also provides consensus for distributed decision making in a Ceph/RADOS cluster.

The components of Ceph are installed and configured by Fuel in accordance with the best practices detailed by Mirantis. In our reference design we have instructed Fuel to configure a minimal three-node cluster with the OSD and MON daemons. Our Ceph cluster consists of Dell R620s with the same CPU and memory configuration as our controller and compute nodes, but with two additional 300GB SSDs. We are not using the Ceph Filesystem, which is still considered beta at this writing.

Ceph clients include a number of service interfaces. These include:

- Object storage: The Ceph Object Storage (a.k.a. RGW) service provides RESTful APIs with interfaces that are compatible with Amazon S3 and OpenStack Swift.
- Block devices: The Ceph Block Device (a.k.a. RBD) service provides resizable, thin-provisioned block devices with snapshotting and cloning.
- Filesystem: The Ceph Filesystem (CephFS) service provides a POSIX-compliant file system usable with mount or as a filesystem in userspace (FUSE).

Object Storage

The Ceph Object Storage daemon, radosgw, is a FastCGI service that provides a RESTful HTTP API to store objects and metadata. It layers on top of the Ceph Storage Cluster with its own data formats, and maintains its own user database, authentication, and access control. The RADOS Gateway uses a unified namespace, which means you can use either the OpenStack Swift-compatible API or the Amazon S3-compatible API. For example, you can write data using the S3-compatible API with one application and then read data using the Swift-compatible API with another application.

Ceph's Object Storage uses the term "object" to describe the data it stores. S3 and Swift objects are not the same as the objects that Ceph writes to the Ceph Storage Cluster. Ceph Object Storage objects are mapped to Ceph Storage Cluster objects. The S3 and Swift objects do not necessarily

correspond in a 1:1 manner with objects stored in the storage cluster. It is possible for an S3 or Swift object to map to multiple Ceph objects.

Block Storage

A Ceph Block Device stripes a block device image over multiple objects in the Ceph Storage Cluster, where each object gets mapped to a placement group and distributed, and the placement groups are spread across separate ceph-osd daemons throughout the cluster. Striping allows RBD block devices to perform better than a single disk could, but they do compete with all other operations active in the Ceph storage cluster.

In virtual machine scenarios, a Ceph Block Device with the rbd network storage driver is integrated in QEMU/KVM, where the host machine uses librbd to provide a block device service to the guest. The RBD devices and protocol, which are open but exclusive to Ceph, have been integrated with libvirt and any hypervisors that use it. OpenStack can use thin-provisioned Ceph Block Devices with QEMU and libvirt. Ceph does not support hypervisors other than those utilizing libvirt at this time (for example, Xen or Hyper-V), but one may integrate other hypervisors using the rbd command line tool provided with Ceph.

FileSystem

The Ceph Filesystem (CephFS) provides a POSIX-compliant filesystem as a service that is layered on top of the object-based Ceph Storage Cluster. CephFS files get mapped to objects that Ceph stores in the Ceph Storage Cluster. Ceph clients mount a CephFS filesystem as a kernel object or as a Filesystem in User Space (FUSE). The Ceph Filesystem is still considered to be in beta at the time of this writing.

RADOS Protocol

Ceph does not utilize a standards-based storage protocol such as iSCSI or Fibre Channel, but accesses storage via an exclusive, though open source, protocol called RADOS, sometimes referred to simply as RBD. The RBD protocol specifies a set of operations which are carried out over TCP between the client and the various daemons of the Ceph cluster. The Ceph Storage Cluster does not perform request routing or dispatching on behalf of Ceph clients. Ceph clients make requests directly to Ceph OSD daemons, which means clients need

access to all nodes in a Ceph cluster. Ceph OSD daemons perform data replication on behalf of Ceph clients, which means replication and other factors impose additional loads on Ceph Storage Cluster networks. Ceph (Red Hat/Inktank) offers several libraries which provide easier access to the RADOS protocol and integrate the protocol into the Linux kernel. RADOS utilizes ports in a defined range, and therefore firewall access must be allowed between the clients and this port range.

Server and Disk Layout

The key to Ceph and the RADOS protocol is the CRUSH (Controlled, Scalable, Decentralized Placement of Replicated Data) algorithm and the CRUSH map, which determine data placement. The CRUSH algorithm determines how to store and retrieve data by computing data storage locations. CRUSH empowers Ceph clients to communicate with OSDs directly rather than through a centralized server or broker. With an algorithmically determined method of storing and retrieving data, Ceph attempts to avoid single points of failure, performance bottlenecks, and physical limits to scalability.

CRUSH requires a map of your cluster, and uses the CRUSH map to pseudo-randomly store and retrieve data in OSDs with a uniform distribution of data across the cluster. CRUSH maps contain a list of OSDs, a list of buckets for aggregating the devices into physical locations, and a list of rules that tell CRUSH how it should replicate data in a Ceph cluster's pools. By reflecting the underlying physical organization of the installation, CRUSH can model, and thereby address, potential sources of correlated device failures. Typical sources include physical proximity, a shared power source, and a shared network. By encoding this information into the cluster map, CRUSH placement policies can separate object replicas across different failure domains while still maintaining the desired distribution. For example, to address the possibility of concurrent failures, it may be desirable to ensure that data replicas are on devices using different shelves, racks, power supplies, controllers, and/or physical locations. The CRUSH map for our reference architecture simply has three buckets, one for each server, and weights the two OSD devices relative to their equal sizes (i.e., the weights are equal because the SSDs are equal-sized).

Weighting Bucket Items

Ceph expresses bucket weights as doubles, which allows for fine-grained weighting. A weight is the relative difference between device capacities. We recommend using 1.00 as the relative weight for a 1TB storage device. In such a scenario, a weight of 0.50 would represent approximately 500GB, and a weight of 3.00 would represent approximately 3TB. Higher-level buckets have a weight that is the sum of the weights of the leaf items aggregated by the bucket.

A bucket item weight is one-dimensional, but you may also calculate your item weights to reflect the performance of the storage drive. For example, if you have many 1TB drives where some have a relatively low data transfer rate and others have a relatively high data transfer rate, you may weight them differently, even though they have the same capacity (e.g., a weight of 0.80 for the first set of drives with lower total throughput, and 1.20 for the second set of drives with higher total throughput). Bucket weighting is one of the main tunables of Ceph; consult the Ceph documentation for details on tuning.
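If weights need to be adjusted after deployment, the weight of an individual OSD can be changed live with the Ceph CLI. The OSD numbers and weight values below are illustrative only; the commands themselves are standard Ceph tooling.

    # lower the weight of a slower 1TB drive and raise a faster one (illustrative values)
    ceph osd crush reweight osd.4 0.80
    ceph osd crush reweight osd.5 1.20
    # confirm the resulting item weights and bucket sums
    ceph osd tree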

OpenStack Block Storage Integration

Ceph is integrated with the OpenStack block storage subsystem through Cinder and purpose-built integrations via the librados library. The Cinder driver for Ceph contains all of the functionality to create, delete, attach, detach, extend, snapshot and perform most other volume manipulations. Since Ceph uses its own protocol in place of the standard OpenStack storage protocols (iSCSI, FC, iSER, NFS, etc.), at present Cinder has no functionality to perform volume data manipulations on Ceph volumes (backup, image to volume, image conversion). This limitation may change in future OpenStack releases, but as of this writing such support is not included.

Name integration between Ceph and Cinder uses the volume and snapshot IDs assigned by Cinder, allocated from the pools configured in Ceph and in the cinder.conf file. This straightforward integration strategy makes it easy for administrators to track volumes and snapshots from OpenStack to their locations in Ceph; a volume and its snapshots can be listed from both Ceph and Cinder and matched by ID.

Ceph has been integrated with libvirt, so VMs access volumes through the RBD protocol directly from libvirt; the volumes are not presented to the hypervisor operating system. Comparing the libvirt disk configuration of two VMs illustrates this: the first VM is configured to use a disk path from the hypervisor, and the second is configured to use Ceph via the RBD protocol directly from libvirt. Since the access is from libvirt and not the hypervisor kernel, there is no visibility at the hypervisor level. A sketch of these two disk definitions follows below.
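The original document illustrates this with the libvirt disk definitions of the two VMs. A hedged sketch of what such definitions typically look like is shown here; the paths, volume ID, monitor address, and secret UUID are placeholders rather than values captured from the reference environment, while the element structure is standard libvirt syntax.

    <!-- VM 1: local file-backed disk served through the hypervisor filesystem -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/nova/instances/<instance-uuid>/disk'/>
      <target dev='vda' bus='virtio'/>
    </disk>

    <!-- VM 2: Cinder volume on Ceph, accessed via librbd directly from libvirt/Qemu -->
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='volumes/volume-<cinder-volume-id>'>
        <host name='<ceph-mon-ip>' port='6789'/>
      </source>
      <auth username='volumes'>
        <secret type='ceph' uuid='<rbd_secret_uuid>'/>
      </auth>
      <target dev='vdb' bus='virtio'/>
    </disk>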

Image Caching

If Glance image storage is configured to use Ceph directly, Ceph will attempt to perform a clone operation for all image-to-volume operations. The use of cloning by the storage subsystem speeds up the operation, prevents repeated copying of images, and allows the storage subsystem to thin-provision and de-duplicate data amongst volumes and images where possible. This is important for Ceph, since it has no ability to deduplicate after the images and volumes are created. Cloned image-to-volume operations are limited to raw images, since Cinder has no ability to mount the Ceph volume, convert the image (from qcow2, for example) to raw format, and apply that raw image to the Ceph volume.

Image storage

OpenStack image storage is actually a misnomer, in that the image service, known as Glance, does not provide any actual storage. Glance provides a catalog of images (VM templates) which contains location information and helper routines to store and retrieve the actual images. Glance may also build an image cache, which may be used on some or all hypervisors to speed up the creation of VMs with ephemeral storage. In this reference architecture we have chosen Ceph as the primary storage location for image data (this is the Fuel default).

Standard Catalog Images

The standard image catalog consists of database entries and storage locations for each image. Glance creates the database entries and uses its configured storage location to maintain the image. In the standard case for our reference architecture, the image is located on the Ceph storage, in the images pool within Ceph. The image information may be examined using the following command:

    rbd ls -l images
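The listing produced by rbd ls -l pairs each Glance image ID with its stored size. The output below is illustrative only; the sizes are approximations, the image IDs are reused from the image catalog shown later in this document, and the protected @snap entries are what the Glance RBD back-end typically creates alongside each image.

    NAME                                       SIZE PARENT FMT PROT LOCK
    7efc0905-9fff-46ab-b7fa-664db088b65f      13312k          2
    7efc0905-9fff-46ab-b7fa-664db088b65f@snap 13312k          2 yes
    b983db69-3d44-4e5a-96bc-a5cda0a7762c        859M          2
    b983db69-3d44-4e5a-96bc-a5cda0a7762c@snap   859M          2 yes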

Note: Within Ceph, the image files are stored under the image ID assigned by Glance.

Snapshot Image

Image snapshots are designed to allow users to quickly snapshot an instance and upload it as a new image within the Glance catalog. To demonstrate this capability, create an instance (VM) with a bootable volume using Project (non-admin) -> Compute -> Instances -> Launch Instance and select the appropriate Instance Boot Source. Once the instance is booted, go to Admin -> System -> Instances -> <Instance> -> Create Snapshot and type in a name. You will be taken to the Project -> Compute -> Images tab, where a new image will have been created with your snapshot name. Examining how this image is constructed via the command line, we see a Glance image with no format or size information. Looking at the image detail, the block_device_mapping property carries a snapshot ID which corresponds to the snapshot taken on the volume.

The snapshot image capability is a great method to create golden images from a running instance. This functionality is limited to the Ceph back-end only. The images created this way may not be used with other back-ends, and OpenStack will fail silently when attempting to use these images with other back-ends.

The snapshot image functionality has only been partially implemented by the Glance project for storage other than Ceph in OpenStack releases up to and including Liberty. While this functionality appears to work for SolidFire (or any other back-end), it does not actually work. If one attempts to use it with iSCSI/FC back-ends, the image will appear in the catalog, but the snapshot on the storage will never appear and OpenStack will report the snapshot as being in error.

The recommended approach for all storage back-ends is to take snapshots through the volume subsystem and then clone those snapshots to bootable volumes through the volume subsystem, using the following sequence. First, create a volume from an image and boot the instance from the volume. Next, create a snapshot of the root volume of the running instance; in this case use the force option to take the snapshot while the instance is running (this will not be a consistent snapshot; if you want a consistent snapshot, shut down the VM first). After creating the snapshot, boot another instance from the snapshot (this will create a new volume from the snapshot). A sketch of this sequence using the CLI follows below.
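A minimal sketch of this workflow with the Kilo-era CLI clients is shown below. The image ID, names, sizes and flavor are placeholders, and while these commands and options existed in the clients of that release, verify them against your installed client versions before relying on them.

    # 1. Create a bootable volume from a Glance image and boot an instance from it
    cinder create --image-id <glance-image-id> --display-name golden-root 16
    nova boot --flavor m1.small --boot-volume <volume-id> golden-instance

    # 2. Snapshot the running instance's root volume (force is required while attached;
    #    shut the instance down first if a consistent snapshot is needed)
    cinder snapshot-create --force True --display-name golden-snap <volume-id>

    # 3. Boot a new instance from the snapshot; Cinder creates a fresh volume from it
    nova boot --flavor m1.small \
      --block-device source=snapshot,id=<snapshot-id>,dest=volume,size=16,shutdown=preserve,bootindex=0 \
      new-instance-01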

If more instances are needed, continue to create instances from the golden snapshot.

NOTE: The method documented here works for all storage back-ends and is therefore the recommended method for creating instances from a golden image.

Object Storage

Object storage (also known as object-based storage [1]) is a storage architecture that manages data as objects, as opposed to other storage architectures such as file systems, which manage data as a file hierarchy, and block storage, which manages data as blocks within sectors and tracks. [2] Each object typically includes the data itself, a variable amount of metadata, and a globally unique identifier. Object storage stores and retrieves arbitrary unstructured data objects via a RESTful, HTTP-based API. It is highly fault tolerant through its data replication and scale-out architecture. It is not implemented like a file server with mountable directories. OpenStack Object Storage is based on the Swift object model and may be implemented by a number of software packages including Swift and, in the case of this reference architecture, Ceph.

In this reference architecture we have allowed Fuel to configure Object Storage through Ceph for completeness, but it is not the focus and is therefore documented in less detail.

6 Best Practices

6.1 Mirantis OpenStack Best Practices

Planning is important for a successful deployment. The Mirantis OpenStack Planning Guide is a recommended resource for the planning phase.

The following set of hardware should be provided:
- One Mirantis OpenStack server is required for all Mirantis OpenStack deployments
- Three controllers are required for HA
- An adequate number of compute nodes for the projected workload
- Redundant power and networking is highly recommended

If possible, nodes should be equipped with:
- Four 10GbE interfaces for LACP and separation of storage and general networking
- One GbE interface for the admin/PXE network

A larger number of cloud nodes is preferable to cutting-edge CPUs and large amounts of memory in a small number of nodes. Providing a larger number of nodes improves both performance and resilience to node failure while usually reducing cost. (Scale-out architecture is preferred to scale-up architecture.)

While Mirantis OpenStack provides network verification, it is highly recommended to double-check IP ranges to ensure a smooth deployment. IP ranges cannot be changed later without redeployment.

Mirantis recommends adhering closely to the Mirantis OpenStack reference architecture for general Mirantis OpenStack configuration and deployment. Deviations from the reference architecture are feasible and in some cases required, but they add complexity and require additional testing as well as operational procedures to support them.

Changes after Fuel deployment should be kept to a minimum. If a major change is necessary, development of a Fuel plugin should be considered to provide repeatability of the deployment as well as reduce the risk of human error.

For further details on these and other best practices, please refer to the Mirantis OpenStack 7.0 documentation.

6.2 Networking Best Practices

- Implement a redundant, fault-tolerant network infrastructure.
- Implement a dedicated 10 gigabit storage network.
- Implement a dedicated or shared network for SolidFire storage node management and intra-cluster communications.
- Implement jumbo frames support end-to-end within the switching infrastructure, on all SolidFire storage nodes, and on all servers connected to the storage network.
- Implement redundant connectivity for all storage nodes.
- For optimal end-to-end availability and enhanced performance, configure redundant NIC connectivity for all server-side connections utilizing NIC switch fault-tolerance and/or NIC bonding, where supported.
- Storage-side switches should have a non-blocking backplane architecture and support full-duplex, line-rate forwarding on all ports. Backplane capacity should be greater than or equal to the following calculation: (total number of switch ports * 10Gb/s) * 2.
- SolidFire suggests a minimum of 512K packet buffers per port on all storage network switches. Some switches, by design, share the buffer allocation either between a group of ports or among all ports on the switch. Best practice dictates using a switch that implements per-port packet buffers for dedicated high-bandwidth storage networks; otherwise performance may not be optimal. If the switch does not support per-port packet buffers, tune the settings, if possible, to allow adequate buffer space for the entire group of ports and minimize the possibility of packet loss.
- Ethernet flow control must be implemented in a consistent manner across the network infrastructure to properly deal with network over-subscription situations that could impact performance. Failure to do so will likely cause packet loss and degraded network throughput.
- SolidFire storage nodes require availability of the following network services via the 1GbE management interfaces and network to ensure proper SAN functionality:
  - DNS - outbound access required for client lookups
  - SNMP - inbound access for SNMP queries
  - Syslog - outbound access for logging to an external log server
  - SSH - inbound access required for node-level management
  - HTTPS - inbound access required for cluster-level management, API communications, and SolidFire Fast Deploy Virtual Appliance communications
- Disable Delayed ACK; it is intended mostly for low-bandwidth environments.
- Disable Large Receive Offload (LRO) on network cards; LRO is associated with TCP Offload Engine (TOE) features and is known to cause similar issues with iSCSI traffic.

6.3 Storage Best Practices

- When Cinder is deployed with SolidFire, Cinder snapshots are created by leveraging the cloning feature on the SolidFire cluster. An attribute is_clone is set to True on the cloned volumes.
- SolidFire does not provide traditional multipath HA from the Linux perspective. The SolidFire cluster uses a virtual IP address with which initiators communicate; in the event of a failure, the virtual IP address is moved within the cluster to provide resiliency. Therefore, SolidFire does not recommend configuring multipath HA in Nova or Cinder.
- If live migration of VMs is a functional requirement of an OpenStack deployment, SolidFire recommends disabling configuration drives in the OpenStack environment, as the existence of config drives prevents live migration.
- VLAN tagging is supported on the SolidFire SVIP interface. However, OpenStack Cinder and the SolidFire driver have no concept of VLANs, so using tagged VLANs for this interface is not recommended. Port-based, Layer-2 VLANs may instead be implemented at the customer switch to create an isolated storage network.

6.4 Database Best Practices

- If you choose to place the OpenStack MySQL database on SolidFire, 512-byte emulation on the SolidFire volume is required.
- Use the InnoDB storage engine of MySQL.
- Restrict database network traffic to the OpenStack management network.
- Configure the database to require SSL for database clients to protect data on the wire.
- SolidFire recommends breaking out the MySQL database components into individual volumes, allowing the appropriate MySQL binary log files to be applied for any necessary recovery after a restore of the data.

- MySQL has a default I/O block size of 16k, which SolidFire translates to 4k blocks in the volume creation and modification interfaces. The MySQL default can be adjusted to 4k or 8k via the innodb_page_size configuration option, but this must be done before any InnoDB tablespaces are created; existing databases will most likely be set to the default 16k page size. To determine the correct QoS settings for an existing application, a good rule of thumb is to take your specific MySQL IOPS requirement (at 16k block size) and multiply that number by 2.7 to get the SolidFire IOPS setting (at 4k); for example, a workload that needs 5,000 IOPS at 16k would be configured for roughly 13,500 SolidFire IOPS. This adjustment factor is derived from a non-linear algorithm that accounts for the conversion between the 16k and 4k block sizes.
- Create application-consistent backups with SolidFire snapshot copies by flushing the MySQL cache to disk first.
- With SolidFire Element 6 and above, Real-Time Replication can be used to create a cold-standby DR copy of a MySQL database in a remote SolidFire cluster. To make use of this feature, set up the target MySQL volume as a replication target and pair the volumes as described in the Element 6 User Guide (Real-Time Replication section).
- Follow the Galera configuration recommendations defined in the MySQL Galera cluster documentation.
- Deploy a minimum of three nodes for the MySQL Galera cluster, because Galera is a quorum-based system that requires at least three nodes to achieve quorum.
- SolidFire highly recommends that database clients access MySQL Galera instances through a load balancer, as set up by Fuel.
- For appropriate MySQL and SolidFire settings, refer to the SolidFire Best Practices for MySQL document available from the SolidFire website.

6.5 Hypervisor Operating System Tuning

To obtain maximum performance from your SolidFire volumes, there are some Linux kernel parameters that can be tuned. The current tuning parameters for a volume can be checked by running a few commands.

Procedure: To check the tuning parameters, run the following commands:

    demo@demo:~# cat /sys/block/sdX/queue/rotational
    demo@demo:~# cat /sys/block/sdX/queue/scheduler
    demo@demo:~# cat /sys/block/sdX/queue/nr_requests
    demo@demo:~# cat /sys/block/sdX/queue/add_random

For OpenStack deployments it is recommended to set the tuning parameters with udev rules.

Procedure:

1. Create the following file as root: /lib/udev/rules.d/62-sfudev.rules
2. Edit the /lib/udev/rules.d/62-sfudev.rules file as root and add the following lines:

    KERNEL=="sd*", \
    SUBSYSTEM=="block", \
    ENV{ID_VENDOR}=="SolidFir", \
    RUN+="/bin/sh -c 'echo 0 > /sys/$devpath/queue/rotational && \
    echo noop > /sys/$devpath/queue/scheduler && \
    echo 1024 > /sys/$devpath/queue/nr_requests && \
    echo 0 > /sys/$devpath/queue/add_random && \
    /sbin/hdparm -Q <QD> /dev/$kernel \
    ' \
    "

   a. See Volume I/O Queue Depth below for an explanation of the <QD> value shown.

3. Trigger the udev rules so the tuning commands are executed:

    demo@demo:~# sudo udevadm trigger

NOTE: This is only done when the rules are created or updated. Going forward, the rules are matched and executed when the OS boots.

4. Verify that the tuning changes have been made by re-checking the tuning parameters (see Checking Tuning Parameters above). The values should be:

    rotational   0
    scheduler    [noop]
    nr_requests  1024
    add_random   0

Volume I/O Queue Depth

To optimize SolidFire QoS, it is necessary to set the volume queue depth to an appropriate value. If the queue depth is too high, frames remain in the active queue too long; if the queue depth is too low, the volume is unable to reach its desired performance levels. The guidance takes the form of a table mapping the volume's Min IOPS setting to a recommended queue depth (the table values did not survive transcription; consult the SolidFire QoS guides referenced below for them).

In addition, certain hypervisors and HBAs may throttle queue depth; refer to the Configuring SolidFire's Quality of Service or Defining SolidFire's Quality of Service guides for additional details. If the hdparm command cannot complete, check and update the settings on these devices accordingly.

NOTE: The queue depth settings listed are suggestions only. They should be used as a starting point for tuning your OS and application performance.
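As an illustration of the <QD> placeholder used in the udev rule above, the queue depth of an attached SolidFire device can be set and checked directly with hdparm. The device name and the value 64 are assumptions for illustration; choose the value that corresponds to your volume's Min IOPS per the SolidFire QoS guidance.

    # set the I/O queue depth on the SolidFire-backed device (64 is an illustrative value)
    /sbin/hdparm -Q 64 /dev/sdb
    # confirm the current queue depth
    /sbin/hdparm -Q /dev/sdb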

7 Implementation

7.1 SolidFire Fuel plugin

This reference architecture is deployed with a Mirantis Unlocked validated Fuel plugin from SolidFire. The plugin provides a simple way to have Fuel configure OpenStack to utilize the SolidFire storage cluster, and it eliminates the need to configure OpenStack manually if SolidFire is the only storage in the OpenStack environment. If other storage (such as Ceph) is to be used in addition to the SolidFire cluster, some small manual steps are required until Fuel is patched to support multiple back-ends; these steps are all documented below. The plugin may be downloaded from the Fuel Plugin Catalog.

7.2 Deployment Workflow

Deployment of the Fuel Master node should follow directly from the Fuel User Guide. When arriving at the section "Boot the node servers", we have chosen to use a script which contacts the Dell Remote Access Controller (DRAC), configures the Dell servers to PXE boot, and then reboots the servers. In the following it is assumed that the DRAC has been configured with proper SSH keys such that passwords are not required. To configure an SSH key, using the root password:

    ssh root@<DRAC IP> 'racadm sshpkauth -i 2 -k 1 "contents of the public key file"'

The server DRACs we have been assigned for this reference architecture are consecutively numbered within their subnet, so a simple sequence of those IPs is used to drive the script. The script is shown below.

    for i in `seq <first> <last>`; do
      echo $i
      ssh root@<subnet prefix>.$i "racadm config -g cfgserverinfo -o cfgserverbootonce 0"
      ssh root@<subnet prefix>.$i "racadm config -g cfgserverinfo -o cfgserverfirstbootdevice PXE"
      ssh root@<subnet prefix>.$i "racadm serveraction powercycle"
    done

In the next section of the user guide, "Install Fuel Plugins", install the SolidFire Fuel plugin. Copy the plugin to the Fuel Master node using:

    scp fuel-plugin-solidfire-cinder-<version>.noarch.rpm root@<Fuel Master IP>:/tmp/

and then install it with the command:

    fuel plugins --install /tmp/fuel-plugin-solidfire-cinder-<version>.noarch.rpm
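Installation can also be confirmed from the Fuel Master shell before checking the GUI. The listing format below is illustrative and may vary by Fuel client and plugin version.

    [root@fuel ~]# fuel plugins --list
    id | name                         | version | package_version
    ---|------------------------------|---------|----------------
    1  | fuel-plugin-solidfire-cinder | <x.y.z> | 2.0.0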

To verify that the plugin has been installed correctly, consult the Plugins page in the Fuel GUI.

At this point, we will create an environment for the reference architecture. The steps are documented below. Select New OpenStack Environment and begin the wizard by providing a name for the environment (we will call ours ReferenceArchitecture1). In Fuel 7.0 there are no other options for the OpenStack Release than the Kilo on Ubuntu release.

Next, select the compute virtualization options. We will be using hardware-based virtualization with KVM for this reference architecture. For our hardware setup, we have been provided the necessary hardware networks, but not enough VLANs to provide each tenant with a VLAN for their private traffic, so we will let Fuel build an OpenStack environment with VXLAN segmentation.

We will be deploying the reference environment with both SolidFire and Ceph based storage, to demonstrate how to make the two platforms work together, so select Ceph in the wizard. Finally, we will install Ceilometer in case we decide to use the Heat AutoScaling features.

Upon completion of the environment wizard you will be presented with additional configuration options for the environment, including the node list. We like to work through these backwards, starting from the Settings tab. For our environment, we change two things on the Settings tab. The first, under Common, is to enable OpenStack debug logging. The second is to enable the SolidFire Cinder plugin and enter the appropriate configuration information for the SolidFire cluster. The SolidFire cluster Management Virtual IP (MVIP) address needs to be entered into the plugin configuration along with the cluster admin login and password you wish to use. The default cluster admin may be used, or a new cluster admin may be created with at least the Reporting, Volumes and Account permissions.

The remainder of the settings in the SolidFire plugin will be left at their defaults for the environment as described, but for convenience they are described here:

- Cluster Endpoint Port - If your architecture requires a proxy between the SolidFire cluster and the OpenStack controller nodes, you may need to change the cluster endpoint port.
- Enable Caching/Template Account - Image caching is enabled by default at the driver/cluster level. If you do not wish to use image caching you can disable it, or if you wish to have cached images contained under a different account on the SolidFire cluster, change the template account.
- SolidFire Emulate 512 block size - SolidFire natively uses a 4KB block size, but will emulate a 512B block device if requested. Qemu and other parts of OpenStack require 512B support, so leave this enabled.
- SF account prefix - If multiple OpenStack environments will be using the same SolidFire cluster, an account prefix may be added to each account created. This should be set if you anticipate multiple OpenStack environments accessing the same SolidFire cluster.

Click Save Settings.

Moving on to the Networks tab, fill in the information for the networks as designed. In our case, we will move the Public network to a VLAN and configure the appropriate VLANs and IP addresses for each network. If you are using an existing network (as we are for our storage network) and need to configure specific IP ranges for those networks, configure the base information here; in the next step we will reduce the range as appropriate. For our storage network, we need to utilize a subset of the IP range on the VLAN25 /24 subnet, so we will follow the procedure described in the "Adding Network Ranges" subsection of the "Using the Fuel CLI" section of the Fuel User Guide.

On the Fuel master node, we run the Fuel CLI commands to adjust the network ranges (a sketch of these commands is shown below, after the node-role assignment). Once the networks are defined correctly, proceed to the Nodes tab and add nodes to the various roles. For this reference architecture we will build three controller nodes, two compute nodes and three Ceph nodes. The Ceph nodes need to have enough disk space and must have similar disk configurations, so we select the three nodes with the appropriate disk configurations; the other node roles are assigned from the remaining nodes.
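The original screenshots of the CLI session are not reproduced here. The following is a minimal sketch of the Fuel 7.0 CLI round-trip used to narrow an IP range, assuming environment ID 1 and a storage range chosen within the VLAN25 subnet; verify the exact file name and keys against your Fuel version before applying.

    # download the current network settings for the environment to a YAML file
    fuel network --env 1 --download --dir /tmp
    # edit the storage network ip_ranges in the downloaded file, e.g.:
    #   ip_ranges:
    #   - ["<first usable storage IP>", "<last usable storage IP>"]
    vi /tmp/network_1.yaml
    # upload the modified settings back to Fuel
    fuel network --env 1 --upload --dir /tmp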

Once all the nodes have been defined, select all of them and click the Configure Interfaces button in the upper right. This screen is where you define the logical-to-physical network layout. For our design we will bond the two 10Gbps Ethernet interfaces together with LACP bonding and place the four VLANs on top of the bond. There are three options available for the bonding interface: Balance-rr, Active-Backup and LACP. Both Balance-rr and LACP require specific switch configurations; Active-Backup is the safest for any switch configuration, but it also limits throughput to the bandwidth of a single interface.

NOTE: We experienced server network problems when we selected Balance-rr by accident with our multi-switch setup. If your network is not properly configured, use Active-Backup until the network is configured properly for Balance-rr or LACP.

NOTE: We have also changed the Maximum Transmission Unit (MTU) of the bond interface to 9000 to support jumbo frames and increase throughput on the networks.

Click Apply, then jump back to the Networks tab, scroll to the bottom, and run the network verification tests. Correct any errors discovered in your network configuration. Next, on the Nodes tab select the three Ceph nodes, click Configure Disks, and verify that their disk configuration meets the design.

In our case, the default configuration selected by Fuel is sufficient. We are now ready to deploy the environment. Select the Dashboard tab, verify that there are no messages that need to be addressed, and click the Deploy Changes button. This process will take about one hour.

Monitor overall progress on the Dashboard tab, or monitor the progress of individual nodes on the Nodes tab.

When the deployment process is complete, you will see a success message (its exact content depends on which plugins you have installed), including a note indicating that the environment has been deployed with the SolidFire plugin.

At this point, you will have a functioning OpenStack environment which may be accessed via the Proceed to Horizon button. However, it will be configured for SolidFire storage only. The next steps take you through the process of enabling SolidFire and Ceph storage together.

Log in to the Fuel master node and run the command:

    fuel node list | grep controller | awk '{print $10}'

to get the IP addresses of the three controller nodes. From the Fuel master, log in to each of the controllers listed by the above command, then back up and edit the file /etc/cinder/cinder.conf. At the bottom of the file, create a new section called ceph with the [ceph] syntax. Then move the following lines into this section from the [DEFAULT] section of the file (the UUID will vary; do not change the lines):

    [ceph]
    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name=DEFAULT
    rbd_user=volumes
    rbd_max_clone_depth=5
    rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot=false
    rbd_pool=volumes
    rbd_secret_uuid=a5d0dd94-57c4-ae55-ffe0-7e3732a24455

Note: There is a similar volume_driver line in the [solidfire] section; leave that one in place and make sure you move the one from the [DEFAULT] section.

Once the [ceph] section has been created, enable it by modifying the enabled_backends directive in the [DEFAULT] section. It should now read:

    enabled_backends=solidfire,ceph

Save the file, log out of the controller node, and repeat on all controller nodes. (Note: There are differences in cinder.conf on each controller node; do not simply copy the file between controller nodes.) Once the file has been changed on each controller and you are back on the Fuel master node for the last time, run the following:

    for i in `fuel node list | grep controller | awk '{print $10}'`; do
      echo "--> " $i
      for j in api backup scheduler volume; do ssh $i service cinder-$j restart; done
    done

You should see the Cinder services restart on each controller in turn.

Now verify that the services have been configured and started properly. Using Horizon with the admin login, select Admin -> System Information -> Block Storage Services. Notice that there are two cinder-volume processes which are shown as down. This is expected, since we have moved these services to a section other than [DEFAULT].

Note: The Host column of the table shows the routing of requests to the various back-ends. The format is <driver:config>@<hostname>#<pool>. Both back-ends use a generic hostname such that any controller in the redundant group can service the request, which is the case when storage is an independent back-end. If the storage is physically located on a single node, only that node can service the request (e.g., the LVM driver) and the hostname should be a physical host.

Note: If you have previously created volumes in Ceph while it was configured under the [DEFAULT] section, those volumes will need to be migrated in the database to their new section name (i.e., ceph). To perform the migration use:

    cinder-manage volume update_host --currenthost CURRENTHOST --newhost NEWHOST@BACKEND
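The same check can be made from the CLI with the cinder client; the hostnames below are placeholders and the exact column layout may differ slightly by client version.

    # with admin credentials sourced (e.g. /root/openrc on a controller)
    cinder service-list
    # expect cinder-volume entries for both back-ends to be enabled and up, e.g.
    #   cinder-volume  <generic-host>@solidfire  enabled  up
    #   cinder-volume  <generic-host>@ceph       enabled  up
    # the stale cinder-volume entries from the old [DEFAULT] host may remain listed as down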

At this point, we need to configure OpenStack volume types for each storage back-end. Log in to Horizon as the admin user, select the Admin -> Volumes menu on the left, then select the Volume Types tab at the top and click + Create Volume Type in the upper right. A Create Volume Type dialog will appear. Type a name and click Create Volume Type. Do this once for SolidFire and once for Ceph. Upon completion, find the two types in the main table and use the pulldown on each one, selecting View Extra Specs.

In the Extra Specs view, click Create, and fill in the Key as volume_backend_name and the Value as solidfire (note: be careful with the spelling and capitalization of both the key and the value). Do this again for Ceph; the Value for Ceph should be DEFAULT. This value comes from the volume_backend_name definition in the cinder.conf file; check the one for SolidFire in the [solidfire] section of cinder.conf. (An equivalent CLI sequence is sketched below.)
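The same volume types can be created from the CLI with the Kilo-era cinder client. The type names mirror those used above, while the back-end name values must match the volume_backend_name entries in your cinder.conf.

    # create the two volume types
    cinder type-create solidfire
    cinder type-create ceph
    # tie each type to its back-end via the volume_backend_name extra spec
    cinder type-key solidfire set volume_backend_name=solidfire
    cinder type-key ceph set volume_backend_name=DEFAULT
    # confirm the extra specs
    cinder extra-specs-list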

8 Verification and Testing

8.1 Health Check

Fuel provides a sixth tab called Health Check which provides a useful test of the functions of the deployed OpenStack environment. The Health Check may be run at any time after the deployment process, and it checks the default storage back-end only. Click the Select All button and then Run Tests. The tests take about 15 minutes to complete, and each test has an expected run time listed. In our reference environment, all tests passed except for the last three, which check whether the OpenStack environment is still using the default login credentials, which we are. Our lab environment is very isolated and transient, so we will leave the default credentials in place.


8.2 Functional testing

8.2.1 Verify Backend Placement

In addition to the Health Check tests, the reference architecture calls for a test of both storage back-ends to verify that volumes are being created on the back-end requested. Start by creating a volume on each back-end, passing the appropriate volume type to cinder create:

    root@node-25:~$ cinder create --volume-type solidfire --display-name "solidfire"

    Property             | Value
    ---------------------+--------------------------------------
    attachments          | []
    availability_zone    | nova
    bootable             | false
    created_at           |
    display_description  | None
    display_name         | solidfire
    encrypted            | False
    id                   | d27b9207-bae7-4c07-8c27-ae713ee0b28a
    metadata             | {}
    multiattach          | false
    size                 | 20
    snapshot_id          | None
    source_volid         | None
    status               | creating
    volume_type          | solidfire

    root@node-25:~$ cinder create --volume-type ceph --display-name "ceph"

    Property             | Value
    ---------------------+--------------------------------------
    attachments          | []
    availability_zone    | nova
    bootable             | false
    created_at           |
    display_description  | None
    display_name         | ceph
    encrypted            | False
    id                   | cca5b5a8-37ce-46c8-ae91-4fec922304bf
    metadata             | {}
    multiattach          | false
    size                 | 22
    snapshot_id          | None
    source_volid         | None
    status               | creating
    volume_type          | ceph

Then check the Ceph volumes pool using the rados ls command to make sure the volume is seen within Ceph. The Ceph volume name is the same as the volume ID created by Cinder:

    rados -p volumes ls
    rbd_directory
    rbd_id.volume-cca5b5a8-37ce-46c8-ae91-4fec922304bf
    rbd_header.69b9ceb4985

To verify the creation on the SolidFire cluster, check the SolidFire UI and use the volume filter to search for the created volume. The SolidFire volume name is created by prepending "UUID-" to the volume ID from Cinder.

8.2.2 Configure and Verify SolidFire Image caching

SolidFire image caching is used to eliminate the copying of Glance images to volumes every time a bootable volume is needed. In order to take advantage of SolidFire image caching, an additional parameter named virtual_size must be placed on the Glance images. This parameter also functions as a way to enable caching on individual images within the Glance catalog. To configure the virtual_size parameter on a Glance image, perform the following as the admin user (with the admin credentials sourced on a controller node):

    root@node-25:~$ glance image-list

    ID                                    | Name               | Disk Format | Container Format | Status
    --------------------------------------+--------------------+-------------+------------------+-------
    e9cae63-cb9a-4650-a596-55e952f56eaa   | CentOS 6           | qcow2       | bare             | active
    b983db69-3d44-4e5a-96bc-a5cda0a7762c  | CentOS 7           | qcow2       | bare             | active
    fe a e1cfabc8                         | Fedora 23          | qcow2       | bare             | active
    c8d5caa2-2a5c-4ecd-aa9b-0f1d19b184ee  | GoldenImageSnap    |             |                  | active
    7efc0905-9fff-46ab-b7fa-664db088b65f  | TestVM             | qcow2       | bare             | active
    11e362ed bad-add1-431ba270291b        | Ubuntu Precise     | qcow2       | bare             | active
    8c63e3b2-8cc b2-73a768297c89          | Ubuntu Trusty      | qcow2       | bare             | active
    67465d23-29f3-46d7-b1e5-2bc8deb6e0d7  | Windows2012R2-Eval | qcow2       | bare             | active

Show the selected image to make sure the virtual_size property is not already set:

    root@node-25:~$ glance image-show b983db69-3d44-4e5a-96bc-a5cda0a7762c

    Property         | Value
    -----------------+--------------------------------------
    checksum         | df10cd9c933ecc06056e52ca73
    container_format | bare
    created_at       |
    deleted          | False
    disk_format      | qcow2
    id               | b983db69-3d44-4e5a-96bc-a5cda0a7762c
    is_public        | True
    min_disk         | 0
    min_ram          | 0
    name             | CentOS 7
    owner            | 231f33cc482f426eadf e0379f
    protected        | False
    status           | active

Then add the virtual_size property to the image using the glance image-update command. Use a virtual_size equal to or larger than the minimum root disk of a running VM created from the image. For example, this CentOS 7 image requires a 16GB minimum root drive (note: the compressed qcow2 image is 859MB):

    root@node-25:~$ glance image-update --property virtual_size=<size> b983db69-3d44-4e5a-96bc-a5cda0a7762c

The command echoes the image properties, which now include the virtual_size property.
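One convenient way to determine a suitable virtual_size value (an approach added here for convenience, not one prescribed by the original text) is to read the virtual size of the source image with qemu-img; the file name below is a placeholder.

    qemu-img info CentOS-7-x86_64.qcow2
    # look for the "virtual size" line, e.g. "virtual size: 16G (17179869184 bytes)",
    # and use a value at least that large for the Glance virtual_size property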

To verify that SolidFire image caching is configured and working correctly, create a bootable volume through OpenStack by selecting Project (non-admin) -> Compute -> Volumes -> Create Volume, then selecting Volume Source = Image and choosing a Glance image from the pull-down list. Note: Make sure you size your volume for the image selected (i.e., if the image is 16GB the volume must be 16GB or larger, depending on how the image is configured). Next, log in to the SolidFire UI and look in the Accounts tab to verify that an account called openstack-vtemplate has been created.

NOTE: The openstack-vtemplate account will be prefixed by any sf_account_prefix configured in cinder.conf.

Click on the number in the Active Volumes column to filter the volume list by the account. You should see at least one volume under the openstack-vtemplate account. Click on the Volume Details tab to see the attributes assigned to the volumes. Within the attributes column you will notice the details of the Glance image contained in that cache entry, including the image ID, image name, image info and creation date. This attribute information allows the admin to track each cache entry back to Glance. If entries are no longer desired, they may be deleted via the SolidFire UI (the OpenStack driver will re-create the cache entry as needed).

As an example of how SolidFire image caching helps during bootable volume creation, the following test was run on the reference architecture using the verification scripts available at the SolidFire GitHub site. Using an image which is 17GB, the first bootable volume created from it took 480 seconds; subsequent creations, once the cache entry existed, took just 60 seconds.

8.3 System testing

In addition to validating functionality, installation and deployment, modification of the environment using the plugin, and uninstallation were thoroughly tested and verified for production use with Mirantis OpenStack.

Note: The SolidFire Cinder drivers and Fuel plugin integration are validated on Mirantis OpenStack based on Juno (Mirantis OpenStack 6.1) and Kilo (Mirantis OpenStack 7.0).

Note 2: The Cinder Volume Manager service must be enabled on all controller nodes.

8.4 Performing functional testing

In addition to the testing performed by Fuel, SolidFire has a set of scripts to create a large number of boot-from-volume instances to confirm the functionality of an OpenStack setup. The scripts may be found in our Agile Infrastructure GitHub site. We will utilize five of the scripts to: 1) build a template and clone 40 volumes; 2) boot 40 instances from those 40 volumes; 3) retype the volumes the instances are built on and observe the performance changes; 4) delete the 40 instances; and 5) delete the 40 volumes.

We have created a VM image which automatically runs a webserver-type workload upon boot. The image has been loaded into the Glance image catalog and is ready for use by OpenStack. The VM runs the fio testing utility on boot with a job file (a comparable sketch follows below) and writes its performance results to a local file for future reference. The workload of this fio job runs against the root disk of the instance, so it is important that the root disk be on the device under test (i.e., the SolidFire volume). We begin by creating a template from our image and then cloning 40 bootable volumes from the template.
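The original fio configuration did not survive transcription. The job file below is an illustrative stand-in for a small-block, mixed read/write webserver-style workload exercising the root disk through the filesystem; the block size, read/write mix, runtime and paths are assumptions, not the values used in the original tests.

    ; illustrative webserver-style fio job (not the original configuration)
    [global]
    ioengine=libaio
    direct=1
    bs=4k
    rw=randrw
    rwmixread=80
    iodepth=16
    size=4g
    time_based=1
    runtime=86400

    [webserver]
    filename=/root/fio-testfile

    ; invoked at boot, e.g.: fio /root/webserver.fio --output /root/fio-results.log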

Next we use the scripts to boot the instances from the volumes. For this example, we will change the volume type from webserver to bronze and watch the change in performance; the webserver and bronze types carry different QoS specifications. We use our scripts to retype all 40 volumes (not the template) and confirm the retype has completed with a cinder list command (a sketch of the per-volume command is shown below). Upon changing the volume type, we can observe in the SolidFire UI a dramatic increase in the performance of the entire cluster.
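Retyping can also be done volume-by-volume with the cinder client, which is what the SolidFire scripts wrap; the volume ID and type names below are placeholders.

    # move a volume from the webserver type to the bronze type; no migration is needed
    # because both types map to the same SolidFire back-end
    cinder retype --migration-policy never <volume-id> bronze
    # confirm the new volume type
    cinder list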

The SolidFire UI also provides per-volume statistics; looking at one of our volumes, we can see a corresponding increase in performance. The per-volume performance graph also provides hard Current, Average and Peak numbers for each metric.

As a final check, we log into one of the running VMs and look at the last 20 lines of output from our load generator. The output shows that performance went up significantly when the volume retype was performed (column 1: read bandwidth, column 2: read IOPS, column 3: write bandwidth, column 4: write IOPS).

Lastly, we clean up our volumes and instances.

9 Support

Support for SolidFire storage systems with Mirantis OpenStack is provided jointly by SolidFire and Mirantis. Please call either SolidFire or Mirantis and your issue will be routed and resolved accordingly. In general, storage-related and driver-related issues will be resolved by SolidFire Support, and Mirantis OpenStack-related issues will be resolved by Mirantis Support.

10 Conclusion

OpenStack storage is becoming increasingly complex. This reference architecture from SolidFire and Mirantis provides a stable, high-performance and affordable solution with many options for customers whose businesses depend on their ability to store, protect and serve data. The reference architecture should be easy to install and maintain. The combination of SolidFire and Mirantis


Charting the Course... H8Q14S HPE Helion OpenStack. Course Summary Course Summary Description This course will take students through an in-depth look at HPE Helion OpenStack V5.0. The course flow is optimized to address the high-level architecture and HPE Helion OpenStack

More information

Availability for the Modern Data Center on FlexPod Introduction NetApp, Inc. All rights reserved. NetApp Proprietary Limited Use Only

Availability for the Modern Data Center on FlexPod Introduction NetApp, Inc. All rights reserved. NetApp Proprietary Limited Use Only Availability for the Modern Data Center on FlexPod Introduction 2014 NetApp, Inc. All rights reserved. NetApp Proprietary Limited Use Only Abstract Veeam Availability Suite v8 leverages NetApp storage

More information

Microsoft Office SharePoint Server 2007

Microsoft Office SharePoint Server 2007 Microsoft Office SharePoint Server 2007 Enabled by EMC Celerra Unified Storage and Microsoft Hyper-V Reference Architecture Copyright 2010 EMC Corporation. All rights reserved. Published May, 2010 EMC

More information

TITLE. the IT Landscape

TITLE. the IT Landscape The Impact of Hyperconverged Infrastructure on the IT Landscape 1 TITLE Drivers for adoption Lower TCO Speed and Agility Scale Easily Operational Simplicity Hyper-converged Integrated storage & compute

More information

70-745: Implementing a Software-Defined Datacenter

70-745: Implementing a Software-Defined Datacenter 70-745: Implementing a Software-Defined Datacenter Target Audience: Candidates for this exam are IT professionals responsible for implementing a software-defined datacenter (SDDC) with Windows Server 2016

More information

Introducing VMware Validated Designs for Software-Defined Data Center

Introducing VMware Validated Designs for Software-Defined Data Center Introducing VMware Validated Designs for Software-Defined Data Center VMware Validated Design 4.0 VMware Validated Design for Software-Defined Data Center 4.0 You can find the most up-to-date technical

More information

Cisco Enterprise Cloud Suite Overview Cisco and/or its affiliates. All rights reserved.

Cisco Enterprise Cloud Suite Overview Cisco and/or its affiliates. All rights reserved. Cisco Enterprise Cloud Suite Overview 2015 Cisco and/or its affiliates. All rights reserved. 1 CECS Components End User Service Catalog SERVICE PORTAL Orchestration and Management UCS Director Application

More information

A Dell Technical White Paper Dell Virtualization Solutions Engineering

A Dell Technical White Paper Dell Virtualization Solutions Engineering Dell vstart 0v and vstart 0v Solution Overview A Dell Technical White Paper Dell Virtualization Solutions Engineering vstart 0v and vstart 0v Solution Overview THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES

More information

Introducing VMware Validated Designs for Software-Defined Data Center

Introducing VMware Validated Designs for Software-Defined Data Center Introducing VMware Validated Designs for Software-Defined Data Center VMware Validated Design for Software-Defined Data Center 4.0 This document supports the version of each product listed and supports

More information

Merging Enterprise Applications with Docker* Container Technology

Merging Enterprise Applications with Docker* Container Technology Solution Brief NetApp Docker Volume Plugin* Intel Xeon Processors Intel Ethernet Converged Network Adapters Merging Enterprise Applications with Docker* Container Technology Enabling Scale-out Solutions

More information

Introducing VMware Validated Designs for Software-Defined Data Center

Introducing VMware Validated Designs for Software-Defined Data Center Introducing VMware Validated Designs for Software-Defined Data Center VMware Validated Design for Software-Defined Data Center 3.0 This document supports the version of each product listed and supports

More information

ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES. Cisco Server and NetApp Storage Implementation

ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES. Cisco Server and NetApp Storage Implementation ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES I. Executive Summary Superior Court of California, County of Orange (Court) is in the process of conducting a large enterprise hardware refresh. This

More information

THE OPEN DATA CENTER FABRIC FOR THE CLOUD

THE OPEN DATA CENTER FABRIC FOR THE CLOUD Product overview THE OPEN DATA CENTER FABRIC FOR THE CLOUD The Open Data Center Fabric for the Cloud The Xsigo Data Center Fabric revolutionizes data center economics by creating an agile, highly efficient

More information

vsphere Networking Update 2 VMware vsphere 5.5 VMware ESXi 5.5 vcenter Server 5.5 EN

vsphere Networking Update 2 VMware vsphere 5.5 VMware ESXi 5.5 vcenter Server 5.5 EN Update 2 VMware vsphere 5.5 VMware ESXi 5.5 vcenter Server 5.5 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition.

More information

Deterministic Storage Performance

Deterministic Storage Performance Deterministic Storage Performance 'The AWS way' for Capacity Based QoS with OpenStack and Ceph Federico Lucifredi - Product Management Director, Ceph, Red Hat Sean Cohen - A. Manager, Product Management,

More information

REFERENCE ARCHITECTURE. Rubrik and Nutanix

REFERENCE ARCHITECTURE. Rubrik and Nutanix REFERENCE ARCHITECTURE Rubrik and Nutanix TABLE OF CONTENTS INTRODUCTION - RUBRIK...3 INTRODUCTION - NUTANIX...3 AUDIENCE... 4 INTEGRATION OVERVIEW... 4 ARCHITECTURE OVERVIEW...5 Nutanix Snapshots...6

More information

Contrail Cloud Platform Architecture

Contrail Cloud Platform Architecture Contrail Cloud Platform Architecture Release 10.0 Modified: 2018-04-04 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA 408-745-2000 www.juniper.net Juniper Networks, the Juniper

More information

The Latest EMC s announcements

The Latest EMC s announcements The Latest EMC s announcements Copyright 2014 EMC Corporation. All rights reserved. 1 TODAY S BUSINESS CHALLENGES Cut Operational Costs & Legacy More Than Ever React Faster To Find New Growth Balance Risk

More information

Modelos de Negócio na Era das Clouds. André Rodrigues, Cloud Systems Engineer

Modelos de Negócio na Era das Clouds. André Rodrigues, Cloud Systems Engineer Modelos de Negócio na Era das Clouds André Rodrigues, Cloud Systems Engineer Agenda Software and Cloud Changed the World Cisco s Cloud Vision&Strategy 5 Phase Cloud Plan Before Now From idea to production:

More information

Installation and Cluster Deployment Guide

Installation and Cluster Deployment Guide ONTAP Select 9 Installation and Cluster Deployment Guide Using ONTAP Select Deploy 2.3 March 2017 215-12086_B0 doccomments@netapp.com Updated for ONTAP Select 9.1 Table of Contents 3 Contents Deciding

More information

ENHANCE APPLICATION SCALABILITY AND AVAILABILITY WITH NGINX PLUS AND THE DIAMANTI BARE-METAL KUBERNETES PLATFORM

ENHANCE APPLICATION SCALABILITY AND AVAILABILITY WITH NGINX PLUS AND THE DIAMANTI BARE-METAL KUBERNETES PLATFORM JOINT SOLUTION BRIEF ENHANCE APPLICATION SCALABILITY AND AVAILABILITY WITH NGINX PLUS AND THE DIAMANTI BARE-METAL KUBERNETES PLATFORM DIAMANTI PLATFORM AT A GLANCE Modern load balancers which deploy as

More information

Distributed Systems. 31. The Cloud: Infrastructure as a Service Paul Krzyzanowski. Rutgers University. Fall 2013

Distributed Systems. 31. The Cloud: Infrastructure as a Service Paul Krzyzanowski. Rutgers University. Fall 2013 Distributed Systems 31. The Cloud: Infrastructure as a Service Paul Krzyzanowski Rutgers University Fall 2013 December 12, 2014 2013 Paul Krzyzanowski 1 Motivation for the Cloud Self-service configuration

More information

NetApp SolidFire and Pure Storage Architectural Comparison A SOLIDFIRE COMPETITIVE COMPARISON

NetApp SolidFire and Pure Storage Architectural Comparison A SOLIDFIRE COMPETITIVE COMPARISON A SOLIDFIRE COMPETITIVE COMPARISON NetApp SolidFire and Pure Storage Architectural Comparison This document includes general information about Pure Storage architecture as it compares to NetApp SolidFire.

More information

MyCloud Computing Business computing in the cloud, ready to go in minutes

MyCloud Computing Business computing in the cloud, ready to go in minutes MyCloud Computing Business computing in the cloud, ready to go in minutes In today s dynamic environment, businesses need to be able to respond quickly to changing demands. Using virtualised computing

More information

Contrail Cloud Platform Architecture

Contrail Cloud Platform Architecture Contrail Cloud Platform Architecture Release 13.0 Modified: 2018-08-23 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA 408-745-2000 www.juniper.net Juniper Networks, the Juniper

More information

The OnApp Cloud Platform

The OnApp Cloud Platform The OnApp Cloud Platform Everything you need to sell cloud, dedicated, CDN, storage & more 286 Cores / 400 Cores 114 Cores 218 10 86 20 The complete cloud platform for service providers OnApp software

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.1 and VMware vsphere for up to 500 Virtual Desktops Enabled by EMC VNXe3200 and EMC Powered Backup EMC VSPEX Abstract This describes

More information

Building Scaleable Cloud Infrastructure using the Red Hat OpenStack Platform

Building Scaleable Cloud Infrastructure using the Red Hat OpenStack Platform Building Scaleable Cloud Infrastructure using the Red Hat OpenStack Platform Will Foster Sr. Systems Engineer, Red Hat Dan Radez Sr. Software Engineer, Red Hat Kambiz Aghaiepour Principal Software Engineer,

More information

VMware Integrated OpenStack Quick Start Guide

VMware Integrated OpenStack Quick Start Guide VMware Integrated OpenStack Quick Start Guide VMware Integrated OpenStack 1.0.1 This document supports the version of each product listed and supports all subsequent versions until the document is replaced

More information

Dell Red Hat OpenStack Cloud Solution Reference Architecture Guide - Version 5.0

Dell Red Hat OpenStack Cloud Solution Reference Architecture Guide - Version 5.0 Dell Red Hat OpenStack Cloud Solution Reference Architecture Guide - Version 5.0 2014-2016 Dell Inc. Contents 2 Contents Trademarks... 4 Notes, Cautions, and Warnings... 5 Glossary... 6 Overview...9 OpenStack

More information

IN2P3-CC cloud computing (IAAS) status FJPPL Feb 9-11th 2016

IN2P3-CC cloud computing (IAAS) status FJPPL Feb 9-11th 2016 Centre de Calcul de l Institut National de Physique Nucléaire et de Physique des Particules IN2P3-CC cloud computing (IAAS) status FJPPL Feb 9-11th 2016 1 Outline Use cases R&D Internal core services Computing

More information

Dell EMC. VxRack System FLEX Architecture Overview

Dell EMC. VxRack System FLEX Architecture Overview Dell EMC VxRack System FLEX Architecture Overview Document revision 1.6 October 2017 Revision history Date Document revision Description of changes October 2017 1.6 Editorial updates Updated Cisco Nexus

More information

1 Copyright 2011, Oracle and/or its affiliates. All rights reserved. reserved. Insert Information Protection Policy Classification from Slide 8

1 Copyright 2011, Oracle and/or its affiliates. All rights reserved. reserved. Insert Information Protection Policy Classification from Slide 8 The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material,

More information

Data Center and Cloud Automation

Data Center and Cloud Automation Data Center and Cloud Automation Tanja Hess Systems Engineer September, 2014 AGENDA Challenges and Opportunities Manual vs. Automated IT Operations What problem are we trying to solve and how do we solve

More information

Deploy Microsoft SQL Server 2014 on a Cisco Application Centric Infrastructure Policy Framework

Deploy Microsoft SQL Server 2014 on a Cisco Application Centric Infrastructure Policy Framework White Paper Deploy Microsoft SQL Server 2014 on a Cisco Application Centric Infrastructure Policy Framework August 2015 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.

More information

Pure Storage FlashArray OpenStack Cinder Volume Driver Setup Guide

Pure Storage FlashArray OpenStack Cinder Volume Driver Setup Guide Pure Storage FlashArray OpenStack Cinder Volume Driver 6.0.0 Setup Guide Thursday, September 14, 2017 19:52 Pure Storage FlashArray OpenStack Cinder Volume Driver 6.0.0 Setup Guide Contents Chapter 1:

More information

VMWARE SOLUTIONS AND THE DATACENTER. Fredric Linder

VMWARE SOLUTIONS AND THE DATACENTER. Fredric Linder VMWARE SOLUTIONS AND THE DATACENTER Fredric Linder MORE THAN VSPHERE vsphere vcenter Core vcenter Operations Suite vcenter Operations Management Vmware Cloud vcloud Director Chargeback VMware IT Business

More information

EMC Integrated Infrastructure for VMware. Business Continuity

EMC Integrated Infrastructure for VMware. Business Continuity EMC Integrated Infrastructure for VMware Business Continuity Enabled by EMC Celerra and VMware vcenter Site Recovery Manager Reference Architecture Copyright 2009 EMC Corporation. All rights reserved.

More information

NET1821BU THE FUTURE OF NETWORKING AND SECURITY WITH NSX-T Bruce Davie CTO, APJ 2

NET1821BU THE FUTURE OF NETWORKING AND SECURITY WITH NSX-T Bruce Davie CTO, APJ 2 NET1821BU The Future of Network Virtualization with NSX-T #VMworld #NET1821BU NET1821BU THE FUTURE OF NETWORKING AND SECURITY WITH NSX-T Bruce Davie CTO, APJ 2 DISCLAIMER This presentation may contain

More information

Quantum, network services for Openstack. Salvatore Orlando Openstack Quantum core developer

Quantum, network services for Openstack. Salvatore Orlando Openstack Quantum core developer Quantum, network services for Openstack Salvatore Orlando sorlando@nicira.com Openstack Quantum core developer Twitter- @taturiello Caveats Quantum is in its teenage years: there are lots of things that

More information

Introducing VMware Validated Design Use Cases

Introducing VMware Validated Design Use Cases Introducing VMware Validated Design Use Cases VMware Validated Designs 4.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced

More information

OPENSTACK Building Block for Cloud. Ng Hwee Ming Principal Technologist (Telco) APAC Office of Technology

OPENSTACK Building Block for Cloud. Ng Hwee Ming Principal Technologist (Telco) APAC Office of Technology OPENSTACK Building Block for Cloud Ng Hwee Ming Principal Technologist (Telco) APAC Office of Technology ABOUT RED HAT FROM COMMUNITY TO PRODUCT STABILIZ E INTEGRAT E PARTICIPATE INTEGRAT E STABILIZ E

More information

vsan Management Cluster First Published On: Last Updated On:

vsan Management Cluster First Published On: Last Updated On: First Published On: 07-07-2016 Last Updated On: 03-05-2018 1 1. vsan Management Cluster 1.1.Solution Overview Table Of Contents 2 1. vsan Management Cluster 3 1.1 Solution Overview HyperConverged Infrastructure

More information

Orchestrating the Cloud Infrastructure using Cisco Intelligent Automation for Cloud

Orchestrating the Cloud Infrastructure using Cisco Intelligent Automation for Cloud Orchestrating the Cloud Infrastructure using Cisco Intelligent Automation for Cloud 2 Orchestrate the Cloud Infrastructure Business Drivers for Cloud Long Provisioning Times for New Services o o o Lack

More information

Why software defined storage matters? Sergey Goncharov Solution Architect, Red Hat

Why software defined storage matters? Sergey Goncharov Solution Architect, Red Hat Why software defined storage matters? Sergey Goncharov Solution Architect, Red Hat sgonchar@redhat.com AGENDA Storage and Datacenter evolution Red Hat Storage portfolio Red Hat Gluster Storage Red Hat

More information

DRAFT Pure Storage FlashArray OpenStack Cinder Volume Driver Setup Guide

DRAFT Pure Storage FlashArray OpenStack Cinder Volume Driver Setup Guide DRAFT Pure Storage FlashArray OpenStack Cinder Volume Driver 5.0.0 Setup Guide Thursday, September 14, 2017 19:59 DRAFT Pure Storage FlashArray OpenStack Cinder Volume Driver 5.0.0 Setup Guide Contents

More information

DEPLOYING A VMWARE VCLOUD DIRECTOR INFRASTRUCTURE-AS-A-SERVICE (IAAS) SOLUTION WITH VMWARE CLOUD FOUNDATION : ARCHITECTURAL GUIDELINES

DEPLOYING A VMWARE VCLOUD DIRECTOR INFRASTRUCTURE-AS-A-SERVICE (IAAS) SOLUTION WITH VMWARE CLOUD FOUNDATION : ARCHITECTURAL GUIDELINES DEPLOYING A VMWARE VCLOUD DIRECTOR INFRASTRUCTURE-AS-A-SERVICE (IAAS) SOLUTION WITH VMWARE CLOUD FOUNDATION : ARCHITECTURAL GUIDELINES WHITE PAPER JULY 2017 Table of Contents 1. Executive Summary 4 2.

More information

POWERED BY OPENSTACK. Powered by OpenStack. Globo.Tech GloboTech Communications

POWERED BY OPENSTACK. Powered by OpenStack. Globo.Tech GloboTech Communications PRIVATE PRIVATE CLOUD CLOUD POWERED BY OPENSTACK Powered by OpenStack Globo.Tech GloboTech Communications sales@globo.tech TABLE OF CONTENT 2 EXECUTIVE SUMMARY...3 OPENSTACK... 4 INFRASTRUCTURE... 8 GLOBOTECH...

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 TRANSFORMING MICROSOFT APPLICATIONS TO THE CLOUD Louaye Rachidi Technology Consultant 2 22x Partner Of Year 19+ Gold And Silver Microsoft Competencies 2,700+ Consultants Worldwide Cooperative Support

More information

VxRail: Level Up with New Capabilities and Powers GLOBAL SPONSORS

VxRail: Level Up with New Capabilities and Powers GLOBAL SPONSORS VxRail: Level Up with New Capabilities and Powers GLOBAL SPONSORS VMware customers trust their infrastructure to vsan #1 Leading SDS Vendor >10,000 >100 83% vsan Customers Countries Deployed Critical Apps

More information