HP Helion CloudSystem 9.0 Administrator Guide


Abstract

This information is for administrators of HP Helion CloudSystem Software 9.0 who are assigned to configure and provision compute resources for deployment and use in virtual data centers.

HP Part Number:
Published: September 2015
Edition: 1

Copyright 2014, 2015 Hewlett-Packard Development Company, L.P.

Microsoft and Windows are trademarks of the Microsoft group of companies. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. VMware vCenter and VMware vSphere are registered trademarks of VMware, Inc. in the United States and/or other jurisdictions.

Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR and , Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

3 Contents I Understanding CloudSystem Quick start...14 Plan...14 Install...14 For Foundation (HP Helion OpenStack)...14 Configure...14 Launch...14 For Enterprise (HP Cloud Service Automation)...14 Configure...14 Launch Concepts and architecture...16 Solution components...17 Management hypervisors and managed resources...18 CloudSystem virtual appliances...18 CloudSystem user interfaces...19 CloudSystem storage...20 CloudSystem features Security in CloudSystem...24 Best practices for maintaining a secure appliance...24 II CloudSystem appliances management Create a root certificate for the management hypervisor Manage users...29 Infrastructure administrators...29 Add, edit, or remove an administrator user...29 Cloud administrators and users...30 Configuring administrator passwords...30 Add, edit, or remove a cloud administrator user or cloud user Backup, restore, and recover CloudSystem appliances...32 Backup and restore as a service...32 Best practices for backing up the CloudSystem appliances...33 Setting the location where the backup file is stored...34 Backing up the CloudSystem appliances...35 Viewing the backup log and the backup job list...36 Using a cron job to automate frequent backups...37 Restoring appliances from a backup file...38 Restoring the CloudSystem appliances...38 Restoring the Management appliances (self restore)...38 Recovering Cloud controller appliance databases and compute node data after a restore...39 Running a recovery report...40 Recovering appliances and compute nodes...40 Modifying the backup file encryption key Backup, restore, and recover the OVSvApp agent...42 Backing up and restoring the OVSvApp agent...42 Limitations after restoring the OVSvApp agent...42 Deleting stale VMs and port groups...42 Identifying stale port groups...42 Deleting stale port groups...44 Contents 3

4 8 Backup, restore, and recover the SDN controller...46 SDN controller backup best practices...46 REST APIs to perform backup of the SDN controller...46 REST APIs to restore the SDN controller...47 Recovering from the unusable OVSDB state...48 Using the SDN controller console user interface...49 Using the SDN controller using RSDoc...50 Changing the controller password Shut down and restart CloudSystem appliances...52 Shut down CloudSystem appliances...52 Starting CloudSystem appliances after a shutdown...54 Restart CloudSystem appliances and services...58 Recover from a power outage or shutdown...59 Health checks Manage CloudSystem software licensing and license keys...64 CloudSystem software license models...64 One licensing model per cloud...64 Licensing of embedded technologies and installed components...64 Licensing of HP products delivered with CloudSystem software...64 License keys...65 Managing license keys...65 Managing license compliance...66 Tracking OSI license agreement compliance...66 Replacing a server managed by Matrix OE Monitor resource use, allocation, and health...68 Dashboard...68 Resource graphs...68 Activity Dashboard...69 Activity statuses...70 Logging Dashboard...70 Viewing logging...70 Monitoring...71 Monitoring components...71 Viewing monitoring information...72 Monitoring ESXi compute clusters...73 Monitoring the OVSvApp service VM...73 Monitoring UI snapshots...74 Creating a support dump file...75 Creating a support dump...76 Viewing the contents of the support dump file...77 Viewing the audit log Manage the Management appliance trio...79 Disabling ESXi DRS anti-affinity rules and disabling DRS on CloudSystem appliances...79 Viewing appliances...80 Updating CloudSystem appliances...80 Downloading, uploading, and installing the update file Manage the Cloud controller trio...83 Configure OpenLDAP or Active Directory for OpenStack user portal authentication (Keystone)...83 Checking the connection to the directory service...83 Adding OpenStack service users and internal users to the directory service...84 Configuring security settings to add an Active Directory or OpenLDAP directory service...86 (Optional) Manage OpenStack compute (Nova) logs Contents

5 14 Manage the Enterprise appliance trio...90 Installing the Enterprise appliance after First-Time Installation...90 Changing the Enterprise appliance password when Enterprise is deployed after First-Time Installation...91 Logging in and changing the default HP CSA and Marketplace Portal password...92 III Resource configuration in CloudSystem Network configuration...95 Tenant networks...95 Add segmentation ID ranges...96 Delete segmentation ID ranges...96 Provider networks...96 Add a Provider network...97 Delete a Provider network...98 Edit a Provider network...98 Manage Provider network subnets...99 External Network Creating the External Network Configuring the External Network Creating the External Network subnet Creating an External Network router Assigning floating IP addresses to instances Access and security for instances Create a security group Create a key pair Integrated tool connectivity and configuration Register VMware vcenter Manage VMware vcenter HP Operations Orchestration Central Image management Image format support Image naming and single datastore support in VMware vcenter Creating and obtaining images Setting custom attributes on Microsoft Windows images Updating image metadata Expanding the Glance disk size Adding images Storage management Block storage (Cinder) Block storage and HA Block storage networks VMware VMFS storage devices Set up a VMFS storage device Register a VMFS storage device Manage a VMFS storage device HP 3PAR StoreServ storage devices Set up 3PAR storage device hardware Register a 3PAR storage device Manage CPGs for 3PAR FC Manage CPGs for 3PAR iscsi Best practices for using HP 3PAR storage systems StoreVirtual VSA storage devices Set up HP StoreVirtual VSA storage device hardware Contents 5

6 Register HP StoreVirtual VSA storage device Manage clusters for VSA storage devices Managing 3PAR and VSA block storage device configurations and connections Creating and attaching volumes in the OpenStack user portal Create volumes in the OpenStack user portal Attach a volume to a VM instance in the OpenStack user portal Delete Volumes Compute node creation Creating ESXi compute clusters Install and configure an ESXi compute cluster Configure networks for an ESXi cluster Enable ESXi networking for instance security groups Open vswitch vapp (OVSvApp) Creating Hyper-V compute nodes Install and configure a Hyper-V compute node Configure networks for a Hyper-V compute node Creating KVM compute nodes Install and configure a KVM compute node Configure networks for a KVM compute node Check RHEL KVM 6.5 and 7.0 dependencies Create an RHEL repo on a KVM compute node Calculating the number of instances that can be provisioned to a compute node Compute node activation and management Adding compute nodes to the cloud Activating a compute node Activate an ESXi compute cluster Expand an activated cluster Activate a Hyper-V compute node Limitations in clustered Hyper-V compute nodes and instances Activate a KVM compute node Creating compute node host aggregates and availability zones Configuring host aggregates and availability zones in the OpenStack user portal Compute node details Deactivate a compute node Delete a compute node Compute node summary Compute node utilization and allocation graphs IV Optional services installation (Swift and Platform Services) Object storage (OpenStack Swift) Installing Object storage Object storage configuration Overview of object storage networks Prepare the object storage deployer Prepare servers for provisioning Install object storage (Swift) Configure external load balancer and keystone Managing object storage Perform scale operations on a cluster Expand a cluster Shrink a cluster Manage rings and storage policies Monitor a cluster Monitoring dashboard Contents

7 Monitoring CLI Backup object storage cluster management data Platform Services, including Helion Development Platform and DNS as a Service Configure the service provider network Install the Platform Services disk Option 1: ESXi management hypervisor and compute node installation Option 2: KVM management hypervisor and compute node installation NFS mount the Platform Services disk Install HP Helion Development Platform Enable HP Helion Development Platform endpoint Install the HP Helion Development Platform Database Service Verify quotas Download the Database Service from the local file system Configure the Database Service Configure the Cloud controller HAProxy for DBaaS Optional: Install HP Helion Application Lifecycle Service (ALS) Optional: Install Microsoft.NET support for Helion Development Platform Install HP Helion DNS as a Service Prerequisites Creating prerequisite credentials Publishing the update package and booting the installer VM Installing and configuring DNSaaS Configuring the Cloud controller HAProxy for DNSaaS Registering the service with Keystone Initial service configuration Post-installation cleanup Uninstalling DNaaS Increasing quotas V Cloud service provisioning and deployment Using Orchestration templates to launch a cloud Launch a stack using the OpenStack user portal Using CloudSystem Enterprise to manage multiple HP Helion CloudSystem providers Requirements for supporting multiple CloudSystem Foundation providers in CloudSystem Enterprise Set up a CloudSystem environment with multiple OpenStack providers Using the OpenStack user portal to launch instances Launching instances in the OpenStack user portal Instance metadata Provisioning and deploying cloud services in HP CSA Configuring additional providers and loading additional content packs Configure additional resource providers Loading additional content packs Upload additional content packs Preparing HP Operations Orchestration for CloudSystem Enterprise Using OO Central Installing OO Studio Uninstalling HP OO Studio Contents 7

8 30 HP Operations Orchestration management Manage purge settings Manage MySQL log files Manage step history logs VI Appendix A Configuring secure OpenLDAP and Active Directory Configure keystone.conf Copy certificate file Run update_deployed_avm Roles in the service and demo project for all LDAP service users Admin role for the enterpriseinternal user in the default domain B Support and other resources Information to collect before contacting HP How to contact HP Registering for software technical support and update service HP authorized resellers Documentation feedback Related information HP Helion CloudSystem documents HP Helion OpenStack documents HP Insight Management documents Third-party documents HP 3PAR StoreServ documents HP VSA StoreVirtual documents HP ProLiant servers documents Index Contents

9 Figures 1 HP Helion CloudSystem overview HP Helion CloudSystem Enterprise overview CloudSystem architecture Certificate fields in the CloudSystem Management Appliance Installer Port Groups- VLAN Port Groups- VXLAN Identifying the stale port group Delete confirmation The hpvcn-neutron-agent and OVSvApp in Monitoring dashboard OVSvApp VM alarms Adding the user name attribute for OpenStack service and internal users Block storage networks Object storage configuration Cluster details Object storage management tasks Sample Server Inventory JSON file Service provider network architecture Integrated LDAP/AD server Sample LDAP server entries for a new organization Sample LDAP entries for user attributes Sample Access Control entries for a new organization New group for administrators Tables 1 CloudSystem user interfaces Admin role on the Management appliance Admin and member roles on the Cloud controller Default CloudSystem virtual appliance names CloudSystem software license keys Dashboard resource graphs Dashboard states Block storage options RHEL KVM common (6.5 and 7.0) dependencies RHEL KVM 7.0 dependencies Resource oversubscription rates for ESXi, Hyper-V, and KVM compute nodes Compute nodes summary Compute nodes state Physical Utilization Virtual Allocation Quotas on the OpenStack user portal (Horizon) Access OO Central Default database purge settings Procedures 1 Add or edit an administrator user Removing an administrator user...30

10 3 Set the location where the backup file is stored Back up the databases on CloudSystem appliances Full backup that runs daily at 2 AM Incremental backup that runs hourly Restore the CloudSystem appliances Restore the Management appliances (self restore) Running a recovery report for a CloudSystem appliance Recovering CloudSystem appliances and compute nodes after a restore Reset the backup file encryption key Shutting down a single CloudSystem virtual appliance Shutting down a trio of CloudSystem virtual appliances Shutting down compute nodes Shutting down an entire cloud Starting the Management appliances Starting the Cloud controllers Starting the Enterprise appliances Starting the Monitoring appliances Starting the Update appliance Restarting a single CloudSystem appliance Restarting a trio of CloudSystem appliances Restarting compute nodes Resyncing appliance databases after a power outage Checking health through the Monitoring dashboard Checking health from the HA Proxy Example: Logging a successful instance launch Example: Logging a failed instance launch Accessing the Monitoring appliances Launch monitoring Create a support dump file from the Operations Console Create a support dump from the command line Optional: Disable DRS anti-affinity rules Optional: Disable DRS management of appliance VMs View details about an appliance Accessing the Update appliance Download the update file to your local computer Upload and install the update file View the status of the update Retry the update from the command line Add Helion OpenStack service users and internal users to the directory service Configure security settings to add an authentication directory server and service Compressing ESXi compute logs on the Cloud controller appliances Installing the Enterprise appliance driver Install the Enterprise appliance after FTI Set the password on the Enterprise appliance when the appliance was installed after First-Time Installation Changing the default HP CSA admin password Changing the default Marketplace Portal consumer password Adding a segmentation ID range for use in Tenant Networks Deleting segmentation ID ranges Adding a Provider Network Deleting a Provider Network Managing subnets Creating the External Network Create the External Network subnet Creating a router to connect Tenant Network instances to the External Network subnet...101

11 57 Assigning floating IP addresses to instances Creating a security group Creating a key pair in the OpenStack user portal Register VMware vcenter Manage VMware vcenter Update Glance image properties (when no instances have been created from the image) Recreate the Glance image (when instances have been created from the image) Expanding the size of a glance disk in an ESXi environment Expanding the size of a glance disk in a KVM environment Adding Images Register a 3PAR block storage device Managing a common provisioning group for 3PAR Managing backends for 3PAR FC Managing CPGs for 3PAR iscsi Managing backends for 3PAR iscsi Register a VSA block storage device Managing a VSA cluster Managing a VSA backend Viewing and downloading configuration Editing block storage device connections Unregister a 3PAR or VSA block storage device Creating volumes in the OpenStack user portal Attaching volumes in the OpenStack user portal Deleting Volumes Create distributed virtual switches and port groups Upload the OVSvApp template Install and configure a Hyper-V compute node Activate an ESXi cluster Expanding an activated cluster Activate a Hyper-V compute node (clustered or standalone) Clustering a Hyper-V compute node after it has been activated Adding a new Hyper-V compute node to an existing Hyper-V cluster after activating other hosts in the cluster Activate a KVM compute node Deactivate an ESXi cluster or Hyper-V or KVM compute node Delete a Hyper-V or KVM compute node Preparing the object storage deployer Preparing bare metal servers for provisioning Creating the cluster Allocating servers for the cluster Formatting disks Activating the cluster Preparing CloudSystem to perform load balancing Configuring the external load balancer and keystone Adding nodes to the cluster Removing a node from a cluster Reprovisioning a node that was removed from a cluster Performing ring administrative tasks Performing policy administrative tasks Create the service provider network Mount the ESXi vmdk disk Mount the KVM qcow2 disk Update prerequisites Check iptables and add iptables rules NFS mount the Platform Services disk...177

12 111 Install and configure the Application Lifecycle Service Enable the Application Lifecycle Service panel Install Microsoft.NET support for Helion Development Platform Launching a stack Installing CloudSystem and integrating LDAP servers Copying the CA root certificate to the certificate store Creating new users in LDAP Configuring the second OpenStack provider in HP CSA Creating a new organization for each environment Creating a new resource environment for each environment Launching an instance in the OpenStack user portal Uploading additional content packs Installing OO Studio Rolling back and uninstalling HP OO Studio Updating the OOPurgeHistory file Modifying the log expiration limit Disabling step history logs...211

Part I Understanding CloudSystem

HP Helion CloudSystem delivers an enterprise private cloud in HP Converged Infrastructure environments. HP Helion CloudSystem Foundation is based on the HP Helion OpenStack distribution of OpenStack cloud software. HP Helion CloudSystem Enterprise expands on CloudSystem Foundation to automate the integration of servers, storage, networking, security, and monitoring capabilities throughout the infrastructure service delivery lifecycle of a virtualized data center. Through the addition of HP CSA, Enterprise offers additional design tools and provider integration, and with the Marketplace Portal, users have secure access to these services.

1 Quick start

This chapter provides links to the information that you need to launch your first virtual machine instance with CloudSystem.

Plan

1. Check requirements and versions
   See the HP Helion CloudSystem 9.0 Support Matrix in the Enterprise Information Library.
2. Plan the network architecture
   See the HP Helion CloudSystem 9.0 Network Planning Guide in the Enterprise Information Library.

Install

1. Install the Management appliance using the Management Appliance Installer (csstart)
2. Install the CloudSystem appliances and network infrastructure using the First-Time Installer
   See the HP Helion CloudSystem 9.0 Installation and Configuration Guide in the Enterprise Information Library.
3. Create a compute node

For Foundation (HP Helion OpenStack)

Configure

In the Operations Console
1. Register VMware vCenter (for ESXi environments)
2. Configure storage
3. Configure tenant network and (optional) provider network
4. Activate a compute node

Launch

In the OpenStack user portal
1. Create image
2. Create external network and subnet
3. Create tenant network
4. Create default security group
5. Create a keypair
6. Launch an instance
7. Associate floating IP addresses

For Enterprise (HP Cloud Service Automation)

Configure

In the Cloud Service Automation Management Console
1. Create a design
2. Configure a design
3. Create an offering

Launch

In the Marketplace Portal
1. Deploy an offering

2 Concepts and architecture

HP Helion CloudSystem is designed for converged infrastructure environments and provides a software-defined approach to managing the cloud. CloudSystem consists of two offerings:

HP Helion CloudSystem Foundation is based on the HP Helion OpenStack distribution of OpenStack cloud software. It integrates hardware and software to deliver core Infrastructure as a Service (IaaS) provisioning and lifecycle management of compute, network, and storage resources. You manage CloudSystem from its Operations Console and its CLIs. You develop, deploy, and scale cloud applications using the OpenStack user portal and the OpenStack APIs and CLIs. You can also deploy HP Helion Development Platform on top of HP Helion CloudSystem Foundation to use its Platform as a Service (PaaS) features in your cloud applications.

Figure 1 HP Helion CloudSystem overview

HP Helion CloudSystem Enterprise adds features in HP Cloud Service Automation (HP CSA) that integrate servers, storage, networking, security, and management to automate the lifecycle for hybrid service delivery. Application architects can use CloudSystem Enterprise to create application and infrastructure templates and publish them as offerings in a service catalog. Users select offerings from a catalog and request provisioning of a new service instance, or subscription. When a service is requested, Enterprise automatically provisions the servers, storage, and networking into the subscription. Enterprise includes an embedded version of HP Operations Orchestration (OO) for automating administrative processes as well as an enhanced set of Operations Orchestration workflows.

Figure 2 HP Helion CloudSystem Enterprise overview

Solution components

Management hypervisors and managed resources (page 18)
CloudSystem virtual appliances (page 18)
CloudSystem user interfaces (page 19)
CloudSystem storage (page 20)

Figure 3 CloudSystem architecture

Management hypervisors and managed resources

Management hypervisors host the CloudSystem virtual appliances that comprise the CloudSystem solution. These hypervisors are arranged as a three-node configuration of ESXi clusters or KVM hosts.

VMware vCenter acts as a central administrator for ESXi clusters that are connected on a network. VMware vCenter allows you to pool and manage the resources of multiple hosts, as well as monitor and manage your physical and virtual infrastructure. You can activate ESXi clusters in the Operations Console after you register a connection with vCenter.

ESXi instance security is provided by HP's open source Virtual Cloud Networking Open vSwitch vApp (OVSvApp) appliance. This appliance is automatically installed on each ESXi compute hypervisor during activation after you load an OVSvApp image in your data store.

An HP 3PAR StoreServ storage system provides a method of carving storage for KVM and Hyper-V compute nodes. HP 3PAR StoreServ block storage drivers are registered in the Operations Console.

HP StoreVirtual VSA provides block storage for KVM and Hyper-V compute nodes. HP StoreVirtual VSA block storage drivers are registered in the Operations Console.

An FC SAN, iSCSI, or Flat SAN network connects the HP 3PAR storage system to compute nodes or ESXi clusters. An iSCSI connection is required for HP StoreVirtual VSA storage.

HP OneView manages your converged infrastructure and supports key scenarios such as deploying bare-metal servers, performing ongoing hardware maintenance, and responding to alerts and outages. It is designed for the physical infrastructure needed to support virtualization, cloud computing, big data, and mixed computing environments.

HP Insight Control server provisioning (ICsp) deploys operating systems on HP ProLiant bare-metal servers, updates drivers, utilities, and firmware, and configures system hardware.

HP CloudSystem Matrix can be configured as an additional provider in HP CSA.

The HP FlexFabric 5930 Switch Series is a family of high-performance and ultra-low-latency 40 GbE top-of-rack (ToR) data center switches.

Swift PAC and Swift Object servers support object storage. These servers are not configured as part of the initial CloudSystem installation. The object storage networks must be configured to support this feature and you must install the OpenStack Swift CLI on the Management appliance to manage scaled-out object storage. An external Load Balancer is also required.

CloudSystem virtual appliances

CloudSystem supports a three-node KVM management host or ESXi management cluster that hosts the following virtual appliances in an HA configuration.

CloudSystem Foundation virtual appliances

A trio of Management appliances
The Management appliance is responsible for standing up and managing CloudSystem virtual appliances. The Operations Console is the administrative interface for this appliance.

A trio of Cloud controllers
The Cloud controller contains the majority of the OpenStack services used in CloudSystem. The OpenStack user portal is the cloud user interface for this appliance. A data volume containing the Glance image repository is part of the Cloud appliance trio. Glance is configured to use a local disk as its image store location, and this is not a shared disk.

A trio of Monitoring appliances
The Monitoring appliance contains the monitoring services that are used to monitor the performance and health of CloudSystem virtual appliances and compute nodes.

One Update appliance
The Update appliance manages patches and upgrades to the CloudSystem environment.

One SDN controller appliance
The SDN controller is only deployed in environments configured to support VxLANs for Tenant and Provider networks. It manages the L2 gateway to bridge the cloud VxLAN network and the legacy data center VLAN network.

CloudSystem Enterprise virtual appliances

A trio of Enterprise appliances
The Enterprise appliance contains the core functionality of the Enterprise offering, including HP Cloud Service Automation (HP CSA), the Marketplace Portal, Topology Designer, and Sequential Designer. HP Operations Orchestration (OO) Central is also embedded in the Enterprise appliance.
  HP CSA Cloud Service Management Console is the administrative portal for the Enterprise appliance. Designs are provisioned as offerings in the HP CSA console.
  The Marketplace Portal displays offerings that can be purchased and applied to a cloud environment as a subscription.
  HP CSA Topology Designer is an easy-to-use solution for infrastructure provisioning designs.
  HP CSA Sequential Designer handles more complex application provisioning designs.
  HP Operations Orchestration (OO) Central provides the ability to run scripted workflows in HP CSA.
  HP OO Studio provides the ability to create and customize new workflows and debug and edit existing workflows. OO Studio is installed separately, using the executable file included in CloudSystem.

CloudSystem user interfaces

CloudSystem includes the following user interfaces for administrators and cloud users.

Table 1 CloudSystem user interfaces

Management Appliance Installer
  How to access: Launch the csstartgui.bat file included in the CloudSystem release package from a staging server.
  Virtual appliance hosting the UI: N/A (this is run from a staging server)
  Used in CloudSystem to: Install the Management appliance, create the Data Center Management Network, and rough in the Cloud Management Network.
  Credentials: N/A

First-Time Installer
  How to access: Launch the Operations Console for the first time and the installer launches automatically.
  Virtual appliance hosting the UI: Management appliance
  Used in CloudSystem to: Install the remaining virtual appliances and create the rest of the network infrastructure.
  Credentials: N/A

Operations Console
  Virtual appliance hosting the UI: Management appliance
  Used in CloudSystem to: Manage the cloud environment.
  Credentials: Set during first-time installation.

OpenStack user portal
  Virtual appliance hosting the UI: Cloud controller
  Used in CloudSystem to: Create, launch, and manage virtual machine instances.
  Credentials: Set during first-time installation.

OpenStack monitoring portal
  Virtual appliance hosting the UI: Management appliance
  Used in CloudSystem to: View and manage monitoring services.
  Credentials: Set during first-time installation.

Cloud Service Automation Management Console (HP CSA)
  Virtual appliance hosting the UI: Enterprise appliance
  Used in CloudSystem to: Create and manage service offerings and service catalogs.
  Credentials: admin/cloud

Marketplace Portal
  Virtual appliance hosting the UI: Enterprise appliance
  Used in CloudSystem to: Select cloud services from a catalog and monitor and manage existing services, with subscription pricing.
  Credentials: consumer/cloud

HP Operations Orchestration
  Virtual appliance hosting the UI: Enterprise appliance
  Used in CloudSystem to: Attach workflows to server lifecycle actions or schedule flows for regular execution.
  Credentials: Set during first-time installation.

CloudSystem storage

Compute block storage is provided through a variety of storage solutions.

Block storage

VMware Virtual Machine File System (VMFS)
CloudSystem works with VMware VMFS to provide boot from volume functionality for ESXi compute hosts.

HP 3PAR StoreServ Fibre Channel
CloudSystem works with HP 3PAR StoreServ Fibre Channel to provide instance data storage for KVM compute hosts.

HP 3PAR StoreServ iSCSI
CloudSystem works with HP 3PAR StoreServ iSCSI to provide boot and instance data storage for KVM and Hyper-V compute hosts.

HP StoreVirtual VSA
CloudSystem works with StoreVirtual VSA to provide virtual storage for instances on KVM and Hyper-V compute hosts.

Object storage (Swift)

Scale-out object storage is provided by a minimum of four dedicated servers: two Swift PAC servers and two Swift Object servers. The Object Proxy Network and Object Storage Network support object storage. You can manage object storage using the Swift CLI on the Management appliance. See Installing Object storage (page 145).

File storage

Ephemeral storage (assigned to a VM instance when the instance is created and released when the instance is deleted) for ESXi, KVM, and Hyper-V compute hosts.
Management hypervisor storage for CloudSystem virtual appliances.

CloudSystem features

Deployment and management

Easily install HP Helion CloudSystem Foundation and Enterprise simultaneously, using two new UI-based installation processes. The Management Appliance Installer brings up the Management appliance and the Data Center Management Network and lays the groundwork for the Cloud Management Network. The First-Time Installer deploys the remaining virtual appliances in an HA configuration and completes the network configuration.
Update and patch CloudSystem appliances using the new Update appliance, which is part of the CloudSystem suite of virtual appliances.
Continuously consume CloudSystem services during planned maintenance or unplanned outages of the management servers running CloudSystem. CloudSystem uses an active/active HA configuration, where virtual appliance clusters provide continuous availability to HP CSA and OpenStack services. The virtual appliance clusters ensure that the customer always has a seamless user experience and never perceives an interruption in service.
Protect CloudSystem data with enhanced backup, restore, and recover functionality.
Manage cloud resources using the new Operations Console.
Import and activate ESXi, KVM, and Hyper-V clusters and hosts by deploying and maintaining OpenStack software agents to configure the hypervisors into OpenStack compute nodes.

HP Helion CloudSystem Enterprise

Use HP Cloud Service Automation in CloudSystem Enterprise to:
  Design simple or complex application and infrastructure services.
  Publish service designs to catalogs for self-service consumption.
  Provision services as subscriptions on the Marketplace Portal.
  Manage, modify, and retire subscriptions.
Use HP Operations Orchestration workflows to automate operational tasks and processes.

Manage multiple simultaneous cloud environments, including HP Helion Public Cloud, Amazon Web Services (AWS), Microsoft Azure, VMware, and OpenStack technology, while fully controlling where workloads reside.
Configure an HP OneView provider to enable design, offering, and provisioning of application services running on physical servers.
Deploy existing AWS workloads onto AWS-compatible private clouds through integration with HP Helion Eucalyptus.

HP Helion CloudSystem Foundation

Build and manage your cloud applications using HP Helion OpenStack services, built from OpenStack Juno functionality.
Use OpenStack APIs and CLIs to automate the provisioning, modification, and deletion of cloud application resources.
Create a cloud application using OpenStack Heat templates that automatically provision resources based on definitions in the Heat template. (A minimal template sketch appears at the end of this chapter.)

Monitoring

Manage the lifecycle of your cloud infrastructure, including its health and performance, using new monitoring features available in the new Operations Console, backed by HP's open source Monasca technology.
Capture logs for all CloudSystem services and virtual appliances for improved troubleshooting capabilities. Access the logs using the Kibana UI, which is available from the Operations Console.

Networking

Deploy virtual machine instances into tenant isolated networks backed by VLAN or VxLAN technology.
Connect select cloud applications to data center services outside of CloudSystem.
Expose select cloud applications using floating IP addresses to applications outside of CloudSystem.
Configure Central Virtual Routing (CVR) in ESXi, Hyper-V, or KVM environments, or Distributed Virtual Routing (DVR) in KVM environments.
Secure virtual machine instances for KVM and Hyper-V using OpenStack Neutron agents, and for ESXi using HP's open source Virtual Cloud Networking Open vSwitch vApp (OVSvApp) technology.

Block storage (Cinder)

Configure block storage for OpenStack Cinder backed by VMware VMFS, HP StoreVirtual VSA, and HP 3PAR to support ESXi, Hyper-V, and KVM compute nodes.
Consume OpenStack Cinder to create block storage volumes for cloud applications and attach and detach these volumes as needed.

Object storage (Swift)

Deploy and maintain an OpenStack Swift cluster for object storage for use by cloud applications.

Platform Services, including HP Helion Development Platform and DNS as a Service

Develop, deploy, and scale cloud applications using the Platform as a Service (PaaS) capabilities of Platform Services, including HP Helion Development Platform (HDP).
HDP includes Database as a Service (DBaaS), a managed database service that is based on OpenStack technologies. This service can be managed and configured by IT, but is easily consumable by developers.
HDP also includes Application Lifecycle Service (ALS), a Cloud Foundry-based, managed runtime environment for applications.
Domain Name System as a Service (DNSaaS) is based on the OpenStack Designate project and is engineered to help you create, publish, and manage your DNS zones and records securely and efficiently to either a public or private DNS server network.
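The Heat workflow mentioned under HP Helion CloudSystem Foundation above can be driven from any machine that has the OpenStack clients installed and credentials for your cloud. The following is only a generic sketch, not a procedure from this guide: the image name, flavor, and network ID are hypothetical placeholders, and the heat client syntax shown is the Juno-era python-heatclient.

# Minimal HOT template that provisions one instance (replace the placeholder values).
cat > single-server.yaml <<'EOF'
heat_template_version: 2013-05-23
description: Launch a single server
resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: cirros                  # an image that exists in Glance
      flavor: m1.small               # a flavor defined in your cloud
      networks:
        - network: TENANT_NET_UUID   # a tenant network UUID
EOF

# Create the stack and watch its status (requires OS_* credentials in the environment).
heat stack-create demo-stack -f single-server.yaml
heat stack-list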

3 Security in CloudSystem

This chapter describes security concepts to consider when working with browsers, certificates, and networks for secure communication and transfer of data among the appliances, networks, and compute nodes in a CloudSystem virtualized environment.

Best practices for maintaining a secure appliance

Most security policies and practices used in a traditional environment apply in a virtualized environment. However, in a virtualized environment, these policies might require modifications and additions. The following table provides a partial list of security best practices that HP recommends in both physical and virtual environments. Differing security policies and implementation practices make it difficult to provide a complete and definitive list.

Topic: Passwords

The admin password for the Operations Console and the OpenStack user portal:
  Is set during First-Time Installation.
  Cannot be changed from the Operations Console after First-Time Installation.
  Can contain eight characters or less.
  Can be a combination of uppercase and lowercase letters and numerals.
  Cannot contain symbols and special characters.
Do not change the admin password on the OpenStack user portal on the Cloud controller or the Monitoring portal on the Management appliance, which is used to access the Monitoring dashboard.

The cloudadmin password for the CloudSystem appliances:
  Is set during First-Time Installation.
  Can be changed in the operating system running on each appliance using the passwd command. If you change the password on one node, you must change the password on the other nodes of an appliance trio. (A short example follows this table.) See Default CloudSystem virtual appliance names (page 52) for default appliance names to use when you SSH to CloudSystem appliances.
  The passwords on different appliance trios do not have to match. For example, if you change the cloudadmin password on the Management appliance trio, you do not have to change the password on the Cloud controller trio or the Enterprise appliance trio.

Do not change the OpenStack service or internal user account passwords. See Adding OpenStack service users and internal users to the directory service (page 84).
For local accounts on the Management appliance, change the passwords periodically according to your password policies.

Topic: Accounts

Limit the number of local accounts on the CloudSystem Operations Console.
Integrate the OpenStack user portal with an enterprise directory solution such as Microsoft Active Directory or OpenLDAP.

Topic: Certificates

Use certificates signed by a trusted certificate authority (CA), if possible.
CloudSystem uses certificates to authenticate and establish trust relationships. One of the most common uses of certificates is when a connection from a web browser to a web server is established. The machine-level authentication is carried out as part of the HTTPS protocol, using SSL. Certificates can also be used to authenticate devices when setting up a communication channel.
CloudSystem supports self-signed certificates and certificates issued by a CA. HP advises customers to examine their security needs (that is, to perform a risk assessment) and to use certificates signed by a trusted certificate authority:
  Ideally, you should use your company's existing CA and import their trusted certificates. The trusted root CA certificate should be deployed to the browsers of users that will contact systems and devices that need to perform certificate validation.
  If your company does not have its own certificate authority, then consider using an external CA. There are numerous third-party companies that provide trusted certificates. You will need to work with the external CA to have certificates generated for specific devices and systems and then import these trusted certificates into the components that use them.

Topic: Updates

Ensure that a process is in place to determine if software and firmware updates are available, and to install updates for all components in your environment on a regular basis.

Topic: Cloud environment

Restrict access to the appliance consoles to authorized users.
If you use an Intrusion Detection System (IDS) solution in your environment, ensure that the solution has visibility into network traffic in the virtual switch.
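The cloudadmin password change described in the Passwords row above must be repeated on each node of the affected trio. A minimal sketch, assuming the default appliance names (ma1, ma2, ma3) from Default CloudSystem virtual appliance names (page 52):

# Change the cloudadmin password on one node of the Management appliance trio,
# then repeat the same steps on ma2 and ma3 so that all three nodes match.
ssh cloudadmin@ma1
passwd        # prompts for the current cloudadmin password, then the new one
exit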

Part II CloudSystem appliances management

This part of the Administrator Guide covers the tasks necessary to configure and maintain the CloudSystem appliances.

4 Create a root certificate for the management hypervisor

A root certificate is the starting point for secure communication in your cloud environment. CloudSystem uses certificates to authenticate and establish trust relationships. One of the most common uses of certificates is when a connection from a web browser to a web server is established. The machine-level authentication is carried out as part of the HTTPS protocol, using SSL. Certificates can also be used to authenticate devices when setting up a communication channel.

HP Helion CloudSystem supports two methods of applying security certificates to your management hypervisors. During installation, you will choose one of the following options.

Use your company's existing certificate authority (CA) and import their trusted certificates. During installation, enter the path to the key and certificate you received from your local CA.

NOTE: It is important to set the system timestamp accurately if you are using a self-signed certificate. You can use the Linux date command to set the system timestamp.

Use CloudSystem to automatically generate a private key and certificate. During installation, leave the certificate fields blank and CloudSystem will automatically generate a private key and certificate.

Figure 4 Certificate fields in the CloudSystem Management Appliance Installer

IMPORTANT: During installation, if you choose to allow CloudSystem to generate the private key and certificate, you cannot decide later to use a local certificate authority.
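If you do not have a company CA and still want to supply your own files during installation, you can generate a root key and self-signed certificate with OpenSSL. This is only a generic sketch for test environments and is not a procedure from this guide; the file names and subject values are hypothetical.

# Generate a 2048-bit private key and a self-signed root certificate (test use only).
openssl genrsa -out cloudsystem-root-ca.key 2048
openssl req -x509 -new -nodes -key cloudsystem-root-ca.key -sha256 -days 1825 \
    -subj "/C=US/O=Example Corp/CN=CloudSystem Test Root CA" \
    -out cloudsystem-root-ca.crt

# Confirm the certificate subject and validity dates, and check the system clock
# (see the NOTE above about setting the timestamp with the date command).
openssl x509 -in cloudsystem-root-ca.crt -noout -subject -dates
date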

5 Manage users

Use the information in this chapter to learn how to configure user authentication in the Operations Console on the Management appliance and in the OpenStack user portal on the Cloud controller.

Infrastructure administrators

Use the Operations Console Users screen to manage local administrator user accounts. Select Users on the main menu to view the current list of users. An administrator can add new administrator users and modify or remove existing administrator user accounts.

NOTE: All users in the Operations Console are administrator users, and can perform all tasks. See Table 2.

The OpenStack Keystone service in the Management appliance contains a local directory. Therefore, local users, and not directory service users, can log in to the Management appliance from the Operations Console and the CloudSystem command line interfaces.

Table 2 Admin role on the Management appliance

Role: Admin
Type of user: Infrastructure administrator
Associated permissions or privileges: View, create, edit, monitor, or remove resources and other admin users managed by the appliance, including management of the appliance itself through the UI or command line. An Infrastructure administrator can create a backup file and recover the appliance from a backup file. An Infrastructure administrator can also manage information provided by the appliance in the form of activities, notifications, and logs.
Notes: An Infrastructure administrator (Admin role) created in the Operations Console can view and manage all resources in the Operations Console.

Add, edit, or remove an administrator user

Use this procedure to add a locally authenticated administrator user with access to all resources in the Operations Console.

Prerequisites
You must have the following information:
  User's unique identifier name
  User's email address
  Initial password

Procedure 1 Add or edit an administrator user
1. From the Operations Console main menu, select Users.
2. To add a new administrator user, click Add New User. To modify an existing administrator user, select the row of the user you want to modify, click the Action menu ( ), and select Edit.
3. Enter the required user information.
4. Select the demo project.

5. Click Update. To exit the action without changes, click Cancel.
   The user with full administrator privileges you added or edited appears in the list of users on the Users overview screen.

Procedure 2 Removing an administrator user
1. From the main menu, select Users.
2. Select the row of the user you want to remove.
3. Click the Action menu ( ) and select Remove.
   NOTE: Do not remove the OpenStack service users and internal users (for example, nova, cinder, enterpriseinternal, opsconsole).
4. Select one or more users, then click Remove User(s). To exit the action without removing users, click Cancel.

Cloud administrators and users

The OpenStack Keystone service in the Cloud controller, which hosts the OpenStack user portal, can be configured for local logins (the default) or directory service authentication using OpenLDAP and Microsoft Active Directory. You can configure directory services for the Cloud controller on the Security pane of the Operations Console System Summary screen. See Configure OpenLDAP or Active Directory for OpenStack user portal authentication (Keystone) (page 83).

Use the OpenStack user portal to manage cloud administrator and cloud user accounts.

Table 3 Admin and member roles on the Cloud controller

Role: Admin
Type of user: Cloud administrator
Associated permissions or privileges: View the Admin tab in the OpenStack user portal. Cloud administrator users can view usage and manage instances, volumes, volume types, flavors, images, projects, users, services, and quotas. See the OpenStack Admin User Guide at OpenStack Cloud Software.
Notes: A cloud administrator created in the OpenStack user portal can view and manage all resources in the OpenStack user portal. The Cloud administrator can log in to the Operations Console only if he or she has a user account in the Operations Console.

Role: Member
Type of user: Cloud user
Associated permissions or privileges: View the Project tab in the OpenStack user portal. Cloud users can view and manage resources in the project to which they are assigned. See the OpenStack End User Guide at OpenStack Cloud Software.
Notes: A cloud user created in the OpenStack user portal can view all services available to him or her in the OpenStack user portal and can create, edit, and delete resources provided by those services. The actions a cloud user can perform on his or her cloud are a subset of the actions an administrator can perform.

Configuring administrator passwords

During First-Time Installation, you configure a password for the Operations Console and the OpenStack user portal admin accounts. (The admin account in the Operations Console is the infrastructure administrator; the admin account in the OpenStack user portal is the cloud administrator.)

You cannot change the name of the admin accounts. You cannot change the password of the admin accounts after they are set during First-Time Installation.

IMPORTANT: Do not change the password for admin accounts in the Operations Console, OpenStack user portal, and monitoring portal. Changing these passwords does not update services on the Management appliance and Cloud controller, and CloudSystem features will not work correctly.

Add, edit, or remove a cloud administrator user or cloud user

Log in to the OpenStack user portal using the admin account and password you set during First-Time Installation.

TIP: A link to the OpenStack user portal is available on the Operations Console Integrated Tools screen.

From the OpenStack user portal, you can add locally or directory service authenticated cloud administrator users or cloud users.

NOTE: When creating a cloud user, do not select the heat_stack_user role.

For information about creating users in the OpenStack user portal, see the OpenStack Admin User Guide at OpenStack Cloud Software. A CLI-based sketch follows this section.
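Cloud users can also be created with the standard OpenStack identity CLI against the Cloud controller instead of the portal. The following is a generic OpenStack sketch, not a procedure from this guide: the user name, password, and email are hypothetical, and the commands assume admin credentials (OS_USERNAME, OS_PASSWORD, OS_AUTH_URL, and related variables) are already exported in your shell, as in the cron job examples later in this guide.

# Create a cloud user in the demo project and assign the Member role
# (per the NOTE above, do not assign the heat_stack_user role).
openstack project list
openstack user create --password 'ChangeMe123' --email jdoe@example.com jdoe
openstack role add --project demo --user jdoe Member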

6 Backup, restore, and recover CloudSystem appliances

CloudSystem provides the ability to save your configuration settings and data to a backup file while the appliances are running, and enables you to use that backup to restore appliance databases in the event of data loss.

Backup process overview
1. Setting the location where the backup file is stored (page 34)
2. Backing up the CloudSystem appliances (page 35)

Restore and recover process overview
1. Restoring appliances from a backup file (page 38)
2. Running a recovery report (page 40)
3. Recovering appliances and compute nodes (page 40)

Backup and restore as a service

CloudSystem implements backup and restore as a service (BRAAS) using the attis service, which runs on the Management appliance. You can use the Operations Console to perform backup, restore, report, and recover actions. You can also execute attis commands from the Management appliance. For information about the attis CLI, see the HP Helion CloudSystem 9.0 Command Line Interface Guide at Enterprise Information Library.

From the Operations Console Backup & Restore screen, you can:
  Back up (full and incremental) the data (databases and files) for the trio of Management appliances, Cloud controllers, Enterprise appliances, and Monitoring appliances
  Restore from the backup file the data for the trio of Cloud controllers, Enterprise appliances, and Monitoring appliances
    Restoring the data for the Management appliance trio is described in Restore the Management appliances (self restore) (page 39).
  Recover the Cloud controller data after the restore

CloudSystem encrypts the backup file using 256-bit encryption. See Modifying the backup file encryption key (page 41).

IMPORTANT: HP recommends backing up your appliance configuration on a regular basis so that you can restore the CloudSystem appliance data in the unlikely event that it is needed. To automate frequent backups, you can use a cron job. See Using a cron job to automate frequent backups (page 37).

What the backup process backs up:
  Management appliance MySQL database
  Cloud controller MySQL database
  Enterprise appliance MySQL database
  Monitoring appliance MySQL database
  Monitoring appliance Vertica database (full backups only)
  System files: non-database data

What the backup process does not back up:
  Non-data files: static files that are installed as part of the execution environment, and are not specific to the appliances or managed environment configuration
  Log files
  First-time setup configuration files
  File-based storage
  Compute node data
  Instance data
  License files

Format of the backup file name
  Full backup: <current-date>/fb_<num>.tar.enc.xx
  Incremental backup: <current-date>/incr_<num>.tar.enc.xx
where xx is used when a backup file is very large; for example, if a backup file is split into two parts, xx is .00 and .01.

Best practices for backing up the CloudSystem appliances

Creating

Always use the Backup & Restore screens in the CloudSystem Operations Console or the attis CLI to back up your CloudSystem appliances.

CAUTION: Do not use any hypervisor-provided capabilities or snapshots to back up CloudSystem appliances. Doing so can cause synchronization errors and result in unpredictable and unwanted behavior.

Back up the following backup sources. Other configurations are provided if you want more granular backup options.
  Cloud_Controller_Appliance_Data_Backup
  Enterprise_Appliance_Data_Backup
  Management_Appliance_Data_Backup
  Monitoring_Appliance_Data_Backup
  Monitoring_Appliance_Vertica_DB_Backup (Full backups only)

Back up the following sources at the same time or one after the other:
  Cloud_Controller_Appliance_Data_Backup and Enterprise_Appliance_Data_Backup
  Monitoring_Appliance_Data_Backup and Monitoring_Appliance_Vertica_DB_Backup

Frequency

Back up your appliance configuration on a regular basis, preferably a full backup daily and an incremental backup hourly, and especially:
  After changing the appliance configuration, for example, after:
    Changing the credentials of the Management appliance virtual machines
    Registering a new VMware vCenter
    Adding new users (administrators or self-service users)
  After adding network resources such as subnets, ports, and routers
  Before and after updating the appliance software
  After activating new compute nodes (ESXi clusters, KVM hosts, or Hyper-V hosts)

You can back up the appliances while they are in use and while normal activity is taking place. You do not need to wait for tasks to stop before creating a backup file. See Using a cron job to automate frequent backups (page 37).

Archiving

HP recommends using an enterprise backup product such as HP Data Protector to archive backup files. For information on HP Data Protector, see the HP website.

Setting the location where the backup file is stored

Before you start your first backup, you must use the attis CLI to specify where CloudSystem will store backup files. You can store backup files using:
  SCP: A location on a remote server where the backup file is secure copied.
  NFS: A location on a network file system.
  FILE: A location on the Management appliance.

HP recommends that you configure CloudSystem to use an SCP or NFS server so that backup files are automatically moved to an off-appliance location in case of a catastrophic failure of the Management appliance trio. You can use the FILE option if you want to manually mount shared storage on all three nodes of the Management appliance.

IMPORTANT: If you do not set a storage location, the backup will not succeed. In the backup job list, you will see that the SSH connection to the storage path failed, and the state is Failed.

Procedure 3 Set the location where the backup file is stored

Perform the following steps on any node of the Management appliance trio. For information about the attis CLI, see the HP Helion CloudSystem 9.0 Command Line Interface Guide at Enterprise Information Library.

1. From the management hypervisor console underlying the CloudSystem Management appliance, SSH to any node of the Management appliance trio (for example, ma1).
2. Log in to the Management appliance using the cloudadmin credentials you set during First-Time Installation.
3. Switch to the root user by running sudo -i.
4. Set the type of backup storage location.
   a. To set the backup storage location to an SCP server, run:
      attis storage --use SCP
   b. To set the backup storage location to an NFS server, run:
      attis storage --use NFS
5. List the storage details.
   root@ma1:~# attis storage
   id                                  type  selected
   cd8fb-c6a2-401d-9caf-61caeede7f81   FILE  False
   d b1-4d e3ffa44a449e                NFS   False
   7abb6f30-3f f8d-a58b157d59b9        SCP   True

6. View the storage parameters that you will need to update by selecting the storage ID of the selected type. Example:
   attis storage --id 7abb6f30-3f f8d-a58b157d59b
   id                                  type  selected
   abb6f30-3f f8d-a58b157d59b9         SCP   True

   id             storage_id      parameter_name        value
   bb9b5e-<...>   7abb6f30-<...>  user
   78aaa60e-<...> 7abb6f30-<...>  storage_path
   c0a5c549-<...> 7abb6f30-<...>  server
   e34ac1b2-<...> 7abb6f30-<...>  private_key_filepath  /home/attis/.ssh/id_rsa
7. Set the attis configuration of the storage type you entered in step 4 by entering:
   attis storage --update '{"parameter":"new value", "parameter":"new value"}' --id <storage_id from the table>
   where parameter in an SCP configuration is:
     user: User to SSH into the remote SCP server.
     storage_path: Path on the remote SCP server where you want to store backup files.
     server: IP address or DNS resolvable hostname of the remote server.
     private_key_filepath: Private key used to SSH to the remote SCP server. By default, /home/attis/.ssh/id_rsa on the Management appliance is prepopulated.

   SCP Example
   To specify the details of an SCP server to use for backup file storage:
   attis storage --update '{"storage_path":"/mybackupdironremotemachine", "private_key_filepath":"/home/attis/.ssh/id_rsa","server":"hostofscplocation","user":"remotescpuser"}'

   NFS Example
   To specify the details of an NFS server to use for backup file storage:
   attis storage --update '{"server":"nfs server ip", "client_mount_path": "name of storage mount on management appliance", "storage_path":"mybackupdironremotemachine"}' --id <copy the storage_id from attis storage>

   IMPORTANT: If you select SCP, you must copy the public key text located at /home/attis/.ssh/id_rsa.pub on each node of the Management appliance trio (ma1, ma2, and ma3) and paste it into the authorized_keys file of the SCP server. If you want to generate a new key for setting up trust between attis and the SCP server, copy the private key to each Management appliance node. Then use the attis storage --update command to update the private_key_filepath. A scripted sketch of this step appears after the introduction to Backing up the CloudSystem appliances, below.
8. Stop and restart the attis-server on all three nodes of the Management appliance (ma1, ma2, ma3).
   stop attis-server
   status attis-server
   Make sure that attis-server has stopped, then enter:
   start attis-server

Backing up the CloudSystem appliances

A backup file saves the configuration settings and management data for your CloudSystem appliances. You can recover from a data loss by restoring your appliance databases from the backup file.

IMPORTANT: Before you back up CloudSystem appliance data for the first time, you must use the attis CLI to specify the location where the backup file will be stored. See Setting the location where the backup file is stored (page 34).
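If you selected SCP storage, the public-key trust described in step 7 of Procedure 3 must be in place before the first backup will succeed. A minimal sketch, run as root on each Management appliance node (ma1, ma2, and ma3); the SCP server host and user names are the same hypothetical placeholders used in the SCP example above:

# Append the attis public key from this appliance node to the SCP server's authorized_keys file.
cat /home/attis/.ssh/id_rsa.pub | \
    ssh remotescpuser@hostofscplocation 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'

# Repeat on ma2 and ma3, then restart attis-server as described in step 8.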

Backing up the CloudSystem appliances

A backup file saves the configuration settings and management data for your CloudSystem appliances. You can recover from a data loss by restoring your appliance databases from the backup file.

IMPORTANT: Before you back up CloudSystem appliance data for the first time, you must use the attis CLI to specify the location where the backup file will be stored. See Setting the location where the backup file is stored (page 34).

36

Prerequisites
- Review the best practices for backing up CloudSystem appliances.
- Run the attis CLI on the Management appliance to specify the location where the backup file will be stored. See Setting the location where the backup file is stored (page 34).

Procedure 4 Back up the databases on CloudSystem appliances
1. From the Operations Console main menu, select Backup & Restore.
2. Optional: Change the configuration of the backup service for each appliance. These values are prepopulated. You can update the details if desired.
3. In the row of the backup source you want to back up, click the Action menu ( ) and select Start backup.
   NOTE: HP recommends backing up the following backup sources. Other configurations are provided if you want more granular backup options.
   - Cloud_Controller_Appliance_Data_Backup
   - Enterprise_Appliance_Data_Backup
   - Management_Appliance_Data_Backup
   - Monitoring_Appliance_Data_Backup
   - Monitoring_Appliance_Vertica_DB_Backup (Full backups only)
4. From the drop-down list, select one of the following:
   - Full Backup: A backup of all files in the selected database
   - Incremental Backup: A backup of all changed and new files since the last backup
   NOTE: You must create at least one Full Backup before you create an Incremental Backup.
5. Click Start Backup. Click Cancel to exit the action without starting a backup.
6. Repeat steps 3 through 5 for each backup source you want to back up.
7. Optional: If you have not configured the backup service to store the backup file on an NFS or SCP server, consider copying the backup file to an external location for safekeeping.

Viewing the backup log and the backup job list

For each appliance backup, you can view:
- A backup log, which shows a successfully completed backup job and contains an identifier, base folder, date created, type, and last incremental reference.
- A job list, which contains any backup, restore, and recovery operations performed, their status (completed or failed), and the date and time of the operation.

By default, the Operations Console shows the last five days of backup logs and job lists. You can use the attis CLI for more date range options.
1. From the Operations Console main menu, select Backup & Restore.
2. For a list of operations performed, click View Job List.
3. For a backup log, select the appliance backup file you want to view, then click the Action menu ( ).
4. Select View Backup Log.

36 Backup, restore, and recover CloudSystem appliances
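The backup log can also be read from the attis CLI on the Management appliance. The following is a minimal sketch: the flow name shown is the one used elsewhere in this chapter for the Management appliance database backup, and names for the other backup sources may differ, so check the attis config listing first.

   # List the configured backup actions and note the one you want to inspect.
   attis config

   # Show the backup log entries for the Management appliance data flow.
   attis backuplog --name managementappliancedb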

37

Using a cron job to automate frequent backups

Cron is a utility that allows tasks to be automatically run in the background at regular intervals. Crontab is a file that contains the schedule of cron entries to be run at specified times. Following are examples of cron jobs that you can configure to automatically run daily full backups and hourly incremental backups.

Procedure 5 Full backup that runs daily at 2 AM
1. From the Management hypervisor console underlying the CloudSystem Management appliance, log in to the first node of the Management appliance trio (ma1).
2. Obtain the config-id of an action you want to automate by running the following command.
   attis config
   For example, note the config-id that is on the same line as the Management_Appliance_Data_Backup action.
3. Create the file /home/attis/full_backupjob.sh and add the following lines, replacing ID with the config ID obtained in step 2 and including the details specific to your environment.
   export OS_USERNAME=admin
   export OS_TENANT_NAME=demo
   export OS_PASSWORD=<password-set-during-first-time installation>
   export OS_AUTH_URL=
   export OS_REGION_NAME=RegionOne
   attis backup --config-id <ID>
4. Create the file /home/attis/crontab_fullbackupjob.txt and add the following line:
   0 2 * * * . /home/attis/full_backupjob.sh > /home/attis/cron_fullbackup_job.log 2>&1
5. Run the following command to add your newly created file to crontab:
   crontab /home/attis/crontab_fullbackupjob.txt

Procedure 6 Incremental backup that runs hourly
1. From the Management hypervisor console underlying the CloudSystem Management appliance, log in to the first node of the Management appliance trio (ma1).
2. Obtain the config-id of an action you want to automate by running the following command.
   attis config
   For example, note the config-id that is on the same line as the Management_Appliance_Data_Backup action.
3. Create the file /home/attis/incremental_backupjob.sh and add the following lines, replacing ID with the config ID obtained in step 2 and including the details specific to your environment.
   export OS_USERNAME=admin
   export OS_TENANT_NAME=demo
   export OS_PASSWORD=<password-set-during-First-Time-Installation>
   export OS_AUTH_URL=
   export OS_REGION_NAME=RegionOne
   attis backup --config-id <ID> --backup-type incremental
4. Create the file /home/attis/crontab_incrementalbackupjob.txt and add the following line:
   0 * * * * . /home/attis/incremental_backupjob.sh > /home/attis/cron_incremental_backup_job.log 2>&1
5. Run the following command to add your newly created file to crontab:
   crontab /home/attis/crontab_incrementalbackupjob.txt

Using a cron job to automate frequent backups 37
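After registering the crontab files, you can confirm that the schedule is in place and that the jobs are writing output. This is a simple verification sketch using standard cron and shell commands; the log paths are the ones defined in the procedures above.

   # Confirm the registered schedule for the current user.
   crontab -l

   # Inspect the most recent output of the scheduled jobs.
   tail -n 20 /home/attis/cron_fullbackup_job.log
   tail -n 20 /home/attis/cron_incremental_backup_job.log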

38

Restoring appliances from a backup file

Restoring appliance data from a backup file replaces all management data and most configuration settings on the appliance. After you restore the backup file of the Cloud controller, run a recovery report to see any discrepancies that a restore operation could not resolve automatically.

Restore and recover process overview
1. Restoring the CloudSystem appliances (page 38)
2. Restoring the Management appliances (self restore) (page 38)

Restoring the CloudSystem appliances

Prerequisites
- Note the current user accounts and passwords you use. The restore operation resets the user names and passwords to those that were in effect when the backup file was created.
- Stop all automatically scheduled backups that you configured with a cron job. Restart the automatically scheduled backups after the appliances are restored.
- Make the backup file accessible to the appliances from which you plan to issue the upload request.
- If you are using an enterprise backup product to archive backup files, follow any steps required by your backup product to prepare for the restore operation.
- Make sure that all users logged in to the appliances log out. Users who are logged in when the restore operation begins are automatically logged out, losing whatever work was in progress. All users are blocked from logging in during the restore operation.

Procedure 7 Restore the CloudSystem appliances
1. From the Operations Console main menu, select Backup & Restore.
2. In the row of the appliance database you want to restore, click the Action menu ( ) and select Start restore.
3. From the drop-down list, select the backup file that you want to restore. The date, time, and backup type are shown. Use the latest backup file to restore the appliances. Changes made after the backup file is created cannot be saved.
   NOTE: You can use a backup file to restore the appliance database for which the backup file was created. You cannot restore the backup file to create a replacement appliance.
   - Incremental Backup: Restore all changed and new files since the last backup
   - Full Backup: Restore all files in the selected database
4. Click Start Restore. Click Cancel to exit the action without restoring the database.
5. Repeat steps 2 through 4 for each database you want to restore.
6. Wait for the restore to complete.

Restoring the Management appliances (self restore)

If you need to restore the Management appliance database from a backup, execute the following commands from the attis CLI. You cannot restore the Management appliance database from the Operations Console.

38 Backup, restore, and recover CloudSystem appliances

39

Prerequisites
- The attis service has permissions to access the folder where the backup file is located.

Procedure 8 Restore the Management appliances (self restore)
1. From the management hypervisor console underlying the CloudSystem Management appliance, SSH to any node of the Management appliance trio (for example, ma1).
2. Log in to the Management appliance using the cloudadmin credentials you set during First-Time Installation.
3. Change to the attis user.
   su attis
4. Because the database is presumed inoperable, load the configuration from a config file. Samples of config files are available at /etc/attis/config_template/managementappliancedbrestore.json.
5. If an SCP server is used for storing the backup file, copy the date stamped directory from the SCP server to a location accessible to the attis user on any management appliance.
6. Open the JSON file /etc/attis/config_template/managementappliancedbrestore.json and add information specific to your environment, then save the file. For example, where backup_base_dirpath is the base backup folder where the Management appliance data backup is present (generally a date stamped folder), and storage_path is the path on the Management appliance on which space for attis has been allocated:
   {
     "config": {
       "flow_name": "managementappliancedb",
       "name": "Percona_Xtradb_Restore_ManagementApplianceDB",
       "parameters": {
         "target": "managementappliances",
         "Percona_XtraDB_Restore_dbuser": "root",
         "Percona_XtraDB_Restore_dbpassword": "unset",
         "Percona_XtraDB_Restore_temp_dirpath": "/tmp",
         "Percona_XtraDB_Restore_target_user": "attis-access",
         "restore_dbpath": "/mnt/state/var/lib/mysql",
         "backup_base_dirpath": "<date stamped folder>",
         "restore_to_incrementnumber": ""
       }
     },
     "storage": {
       "type": "FILE",
       "parameters": {
         "storage_path": "<absolute storage path on management appliance>"
       }
     }
   }
   NOTE: You can find the date stamped base folder for the corresponding backup by using the CLI command attis backuplog --name managementappliancedb.
7. Execute the following command:
   attis restore --config-path /etc/attis/config_template/managementappliancedbrestore.json
   Attis self restore runs os-refresh-config on all Management appliance nodes as part of the restore operation. If os-refresh-config fails, run it manually on all Management appliance nodes.
8. Stop and restart the attis-server on all three nodes of the Management appliance (ma1, ma2, ma3).
   stop attis-server
   status attis-server
   Make sure that attis-server has stopped, then enter:
   start attis-server

Recovering Cloud controller appliance databases and compute node data after a restore 39
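Before editing the JSON in step 6, you can look up the date stamped base folder that backup_base_dirpath expects. The following is a minimal sketch using the attis CLI command referenced in the note above; the output format may differ between releases.

   # As the attis user on a Management appliance node:
   su attis

   # List Management appliance backups; the base folder column gives the
   # date stamped directory to use for backup_base_dirpath.
   attis backuplog --name managementappliancedb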

Recovering Cloud controller appliance databases and compute node data after a restore

Recover process overview
1. Running a recovery report (page 40)
2. Recovering appliances and compute nodes (page 40)

40

During a recover operation, the appliance reconciles the data in the backup file with the current state of the managed environment. A recovery maintains consistency between the appliances and the compute nodes after a restore operation by performing necessary cleanup on the Cloud controller database to ensure that functionality of the cloud is restored to normal.

There are some discrepancies that a restore operation cannot resolve automatically. For example:
- After a restore, the appliance databases may not contain information about new OpenStack objects (virtual machine instances, networks, routers, volumes); however, the compute node hypervisor may contain the respective realized components (KVM virtual machine instance, VIFs, namespaces, links). This situation can occur if an instance or volume is launched after the last backup, or a Cinder volume is created and attached after the last backup and before the restore operation.
- After a restore, the databases may contain stale information. When a database contains stale objects, the Nova Compute service may not start if it searches for and cannot find the filesystem related to a stale instance. This situation can occur if an OpenStack object was removed after the last backup and before the restore operation.

The recover operation generates a report and asks for consent from the cloud administrator before cleaning up artifacts from the Nova database on the Cloud controller.

What the recover process cleans up:
- OpenStack Nova database on the Cloud controller

What the recover process does not clean up:
- Orphaned resources. The administrator must clean up:
  - Virtual machines
  - Volumes
  - Images
  - Port groups

Running a recovery report

A recovery report displays a list of resources that were added, changed, or removed after the backup file was restored. The recovery report shows the suggested action to clean up the Nova database to ensure that functionality of the cloud is restored. These suggested actions are not performed by CloudSystem until you confirm the recover action.

Procedure 9 Running a recovery report for a CloudSystem appliance
1. From the Operations Console main menu, select Backup & Restore.
2. In the row of the appliance for which you want to run a recovery report, click the Action menu ( ) and click Run Recovery Report. Click Cancel to exit the action without running a report.
3. Click Run Recovery Report.
4. Repeat steps 2 and 3 for each appliance that you plan to recover.

Next steps
Recovering appliances and compute nodes (page 40)

Recovering appliances and compute nodes

After you generate a recovery report, run a recover to implement the actions shown in the report. Running a recover includes confirming that you have reviewed the latest recovery report to be used for the recover.

40 Backup, restore, and recover CloudSystem appliances

41

The recover operation cleans up artifacts from the Nova database on the Cloud controller and powers off orphaned virtual machines.

Procedure 10 Recovering CloudSystem appliances and compute nodes after a restore
1. From the Operations Console main menu, select Backup & Restore.
2. In the row of the database you want to recover, click the Action menu ( ) and select Run recover.
3. Click Run recovery. Click Cancel to exit the action without recovering.

Modifying the backup file encryption key

CloudSystem encrypts the backup file using 256-bit encryption. The default encryption key is an alphanumeric string generated during installation. You can change the key as follows.

IMPORTANT: If you restart the Management appliance, the encryption key is reset to the original generated value. After a restart, you must reset the encryption key again.

Procedure 11 Reset the backup file encryption key
1. From the management hypervisor console underlying the CloudSystem Management appliance, SSH to the first node of the Management appliance trio (ma1).
2. Log in to the Management appliance using the cloudadmin credentials you set during First-Time Installation.
3. Edit the file /var/lib/heat-cfntools/cfn-init-data.
4. Enter the new key in encryption_key.
5. Repeat step 4 on the second and third nodes of the Management appliance trio (ma2 and ma3).
6. Run the command os-refresh-config on ma1, then on ma2 and ma3.
7. Stop and restart the attis-server on all three nodes of the Management appliance (ma1, ma2, ma3).
   stop attis-server
   status attis-server
   Make sure that attis-server has stopped, then enter:
   start attis-server

Modifying the backup file encryption key 41
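To confirm that the key change described in Procedure 11 took effect on every node, you can read the value back out of the same file. This is a minimal sketch using standard shell commands against the file path given in step 3; run it on ma1, ma2, and ma3.

   # Show the current backup encryption key entry on this node.
   sudo grep encryption_key /var/lib/heat-cfntools/cfn-init-data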

42

7 Backup, restore, and recover the OVSvApp agent

Backing up and restoring the OVSvApp agent

The OVSvApp agent is an L2 agent that runs within the OVSvApp service VM to support ESXi cloud deployments. The following sections provide information about backing up and restoring the OVSvApp agent.

After backing up and restoring the OVSvApp agent:
- You can add new network resources such as subnets, ports, and routers, and update existing network resources.
- Network resources that are created or updated after the backup are not restored.
- Stale virtual machines and port groups are not deleted automatically. See Deleting stale VMs and port groups (page 42).

Limitations after restoring the OVSvApp agent
- Any new VMs created after a backup and before the restore are reachable after the restore using the vCenter guest VM console.
- VMs that are created after a backup and are able to communicate using a router are not able to communicate after the restore. However, the VMs that were not able to communicate after the backup and before the restore will start to communicate after the restore.
- VMs that are able to communicate due to security group rules (or security groups) applied after the backup and before the restore are not able to communicate after the restore.
- VMs that are able to access the External Network because of a gateway applied to their router after a backup and before a restore are not able to reach the External Network after the restore.
- VMs that are associated with floating IP addresses after a backup and before a restore lose that association after the restore.
- If a VM's port was blocked after a backup and before the restore, the port becomes active after the restore. If the port was active after a backup and before a restore, it becomes blocked after the restore.

Deleting stale VMs and port groups

VMs that are created after a backup and before the restore are known as stale VMs. Port groups created on the trunk distributed switch (DVS/VDS) after a backup and before the restore are stale port groups.

IMPORTANT: You must delete stale port groups if your tenant network type is VxLAN. Stale port groups do no harm if your tenant network type is VLAN.

Identifying stale port groups

Tenant network type VLAN
1. Identify the trunk DVS.
   NOTE: There will be one trunk DVS per datacenter in VLAN.
   a. By default, the trunk DVS name is CS-OVS-Trunk_<datacenter name>.
   b. If the trunk DVS name is not set to the default, then do the following:

42 Backup, restore, and recover the OVSvApp agent

43

      1) Log in to the Management appliance.
      2) Run the following command:
         curl -k -H "X-Auth-Token: <TOKEN>" GET | python -m json.tool > tmp.json
      3) The value of trunk_dvs_name in the network section of the tmp.json file represents the trunk DVS name.
2. To get the list of port groups from vCenter, do the following:
   a. Log in to vCenter using administrator credentials.
   b. Go to Home > Inventory > Networking.
   c. Click the trunk DVS name. Select Networks to get the list of port group names. Figure 5 shows the port groups under TestTrunk.
      Figure 5 Port Groups - VLAN
   d. Ignore the following port groups:
      - The trunk port group (meant for the OVSvApp VM). You can obtain the OVSvApp trunk port group name from the tmp.json file created earlier in step 1.b.2; locate the trunk_port_name field in the network section. The default name is CS-Trunk-PG. For example, TrunkPortgroup in Figure 5.
      - The uplink port groups that are created by vCenter. For example, TestTrunk-DVUplinks-40 in Figure 5.
   e. The remaining port groups are created by the OVSvApp agent.
3. Extract the list of tenant networks from the OpenStack controller:
   a. Log in to the OpenStack controller using administrator credentials.
   b. Execute the command neutron net-list.
4. Extract the list of stale port groups.
   a. Compare the port group names created by the OVSvApp agent with the OpenStack network list (network_uuid). The port group names that are not present in the OpenStack network list are considered stale port groups.
   b. Port groups that contain only stale VMs (the OpenStack network list includes the network, but only stale VMs are associated with it) are also considered stale port groups.

Backing up and restoring the OVSvApp agent 43
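Once tmp.json exists, the trunk DVS and trunk port group names referenced in the steps above can be pulled out without opening the file by hand. This is a minimal sketch; it assumes only that the fields appear in tmp.json under the names used in this guide.

   # Pretty-print tmp.json and show the trunk DVS and trunk port group names.
   python -m json.tool tmp.json | grep -E 'trunk_dvs_name|trunk_port_name'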

Tenant network type VXLAN
1. To identify the trunk DVS, do the following:
   NOTE: There will be one trunk DVS per datacenter in VLAN.
   a. By default, the trunk DVS name is CS-OVS-Trunk_<datacenter name>.
   b. If the trunk DVS name is not set to the default, then do the following:

44

      1) Log in to the Management appliance.
      2) Run the following command:
         curl -k -H "X-Auth-Token: <TOKEN>" GET | python -m json.tool > tmp.json
      3) Fetch the value of trunk_dvs_name in the network section of the tmp.json file. Append the cluster name to trunk_dvs_name; the result is the trunk DVS name.
2. To get the list of port groups from vCenter, do the following:
   a. Log in to vCenter using administrator credentials.
   b. Go to Home > Inventory > Networking.
   c. Click the trunk DVS name. Select Networks to get the list of port group names. Figure 6 shows the port groups under TestTrunk.
      Figure 6 Port Groups - VXLAN
   d. Ignore the following port groups:
      - The trunk port group (meant for the OVSvApp VM). Obtain the OVSvApp trunk port group name from the tmp.json file created earlier in step 1.b.2; locate the trunk_port_name field in the network section. The default name is CS-Trunk-PG_<cluster_name>. For example, TrunkPortgroup_TestCluster in Figure 6.
      - The uplink port groups that are created by vCenter. For example, TestTrunk-DVUplinks-40 in Figure 6.
   e. The remaining port groups are created by the OVSvApp agent.
3. Extract the list of tenant networks from the OpenStack controller:
   a. Log in to the OpenStack controller using administrator credentials.
   b. Execute the command neutron net-list.
4. Extract the list of stale port groups.
   a. Compare the port group names created by the OVSvApp agent with the OpenStack network list (network_uuid). The port group names (<network_uuid>-<cluster_id>) that are not present in the OpenStack network list are considered stale port groups.
      NOTE: You must compare only the network_uuid portion of the port group name.
   b. Port groups that contain only stale VMs (the OpenStack network list includes the network, but only stale VMs are associated with it) are also considered stale port groups.

44 Backup, restore, and recover the OVSvApp agent
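The comparison in step 4 can be scripted. The following is a rough sketch, not part of the product: it assumes you have copied the OVSvApp-created port group names from vCenter into a file named portgroups.txt (one name per line), and that your neutron client accepts the common -f value -c id output options. For VXLAN, the first 36 characters of a port group name are the Neutron network UUID; for VLAN, the whole name is the UUID.

   # Collect the network UUIDs that Neutron knows about.
   neutron net-list -f value -c id > neutron_nets.txt

   # Flag port groups whose network UUID is not known to Neutron.
   while read -r pg; do
     uuid=${pg:0:36}   # a Neutron UUID is 36 characters; the rest is the cluster id
     grep -q "$uuid" neutron_nets.txt || echo "stale: $pg"
   done < portgroups.txt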

Deleting stale port groups

To delete the stale port groups, do the following:

NOTE: You must delete stale VMs before deleting the stale port groups.

45

1. Log in to vCenter using administrator credentials.
2. Go to Home > Inventory > Networking.
3. Select the stale port group that you want to delete, as shown in Figure 7.
   Figure 7 Identifying the stale port group
4. Right-click the port group that you want to delete.
5. Click Delete. A confirmation message box appears, as shown in Figure 8.
   Figure 8 Delete confirmation
6. Click Yes. The stale port group is now deleted.

IMPORTANT: If the tenant network type is VLAN and the stale port groups are not deleted, new VMs spawned on this network after the restore attach their interfaces to an existing stale port group, which does not cause any issues. If the tenant network type is VXLAN, new VMs spawned on this network after the restore also attach their interfaces to an existing stale port group; this may result in network issues, and the newly spawned VMs may not get an IP address. Therefore, HP recommends deleting the stale port groups before spawning new VMs after the restore.

Backing up and restoring the OVSvApp agent 45

46

8 Backup, restore, and recover the SDN controller

The SDN controller is an important component in the L2 gateway solution. The SDN controller node hosts the OVSDB server that stores the configuration information and the learnt information of the hardware L2 gateway. The SDN controller additionally hosts the L2 gateway agent that acts as a proxy between the Neutron server and the OVSDB server. Information about logical gateways and gateway connections is stored in the Neutron database.

NOTE: Back up the SDN controller at the same time as the Neutron database. If you back up the data at different times, this may lead to inconsistency between the data in the Neutron database and in the OVSDB server. This may affect the data path tunnels between the workloads on the virtual networks and the physical networks. You must bring the two sides to a consistent state after the Neutron database is restored. You may want to automate the process of backing up the SDN controller periodically.

SDN controller backup best practices

Back up your SDN controller on a regular basis, especially:
- When a new switch is discovered or activated in the SDN controller
- When a switch is deleted or deactivated from the SDN controller
- When a new L2 gateway is created, updated, or deleted using one of the following commands:
  neutron l2-gateway-create
  neutron l2-gateway-update
  neutron l2-gateway-delete
- When a connection is created or deleted using the following commands:
  neutron l2-gateway-connection-create
  neutron l2-gateway-connection-delete

REST APIs to perform backup of the SDN controller

This procedure backs up the OVS database (OVSDB) on the SDN controller appliance.
1. Log in to the Management appliance (ma1) using the cloudadmin credentials you set during First-Time Installation.
2. SSH to the SDN controller.
3. To acquire the authentication token for the controller backup, run the following command. The default values are "domain":"sdn","user":"sdn","password":"skyline":
   curl --noproxy <sdn_controller_ip> -X POST --fail -kssfl --url " -H "Content-Type: application/json" --data-binary '{"login": {"domain": "<domain>","user": "<user>","password":"<password>"}}'
4. To acquire the controller UID, run the following command:
   curl --noproxy <sdn_controller_ip> --header "X-Auth-Token:<auth_token>" --fail -kssfl --request GET
5. To use the acquired controller UID to set the IP address of the controller, run the following command:
   curl --noproxy <sdn_controller_ip> --header "X-Auth-Token:<auth_token>" --fail -kssfl --request PUT

46 Backup, restore, and recover the SDN controller

47

   " --data-binary '{"system":{"ip":"<sdn_controller_ip>"}}'
6. To perform the backup process, run the following command:
   curl --noproxy <sdn_controller_ip> --header "X-Auth-Token:<auth_token>" --fail -kss --request POST --url "
7. To check on the status of the backup process, run the following command:
   curl --noproxy <sdn_controller_ip> --header "X-Auth-Token:<auth_token>" --fail -kssfl --request GET --url "
8. To download the backup file, run the following command:
   curl --noproxy <sdn_controller_ip> --header "X-Auth-Token:<auth_token>" --fail -kssfl --request GET --url " <path-and-file-name>.zip
   NOTE: The backup file name must begin with sdn_controller_backup.

REST APIs to restore the SDN controller

This procedure restores the OVS database (OVSDB) on the SDN controller appliance.
1. Uninstall the HP Converged Control SDN application and the HP VAN SDN Controller.
2. Log in to the Management appliance (ma1) using the cloudadmin credentials you set during First-Time Installation.
3. SSH to the SDN controller.
4. In the ~/.sdn_install_options file, set CTL_RESTORE_INSTALL_MODE=True. If the file does not exist, create it and add the specified entry.
5. Reinstall the HP VAN SDN Controller in restore mode using the command:
   dpkg -i <controller build>.deb
   The SDN controller Debian package is placed in the /opt/stack/sdn folder on the SDN appliance.
6. Make sure that any changes made after you installed the VAN SDN Controller before the backup are redone before restoring the SDN controller. For example, if you changed the remote keystone IP/token, make that change again.
7. Acquire the authentication token for the system restore. Default values are "domain":"sdn","user":"sdn","password":"skyline":
   curl --noproxy <sdn_controller_ip> -X POST --fail -kssfl --url " -H "Content-Type: application/json" --data-binary '{"login": {"domain": "<domain>","user": "<user>","password":"<password>"}}'
8. Acquire the controller UID and set the IP address:
   curl --noproxy <sdn_controller_ip> --header "X-Auth-Token:<auth_token>" --fail -kssfl --request GET --url
   curl --noproxy <sdn_controller_ip> --header "X-Auth-Token:<auth_token>" --fail -kssfl --request PUT --data-binary '{"system":{"ip":"<sdn_controller_ip>"}}'
9. Upload the backup file:
   curl --noproxy <sdn_controller_ip> -X POST --fail -kssfl --url " -H "X-Auth-Token:<auth_token>"
   where path-and-file-name is the full path to the file and the filename.

REST APIs to restore the SDN controller 47

48

10. Initiate the restore:
    curl --noproxy <sdn_controller_ip> --header "X-Auth-Token:<auth_token>" --fail -kss --request POST --url "

Recovering from the unusable OVSDB state

After the SDN controller is restored, the OVSDB might become unstable in the following two circumstances:
- A switch or bare metal server that was discovered by the SDN controller before it was backed up was deactivated after the backup. For example, at the time of backup, physical switch S1 and bare metal MAC m1 were present in the OVSDB tables. However, after the backup, the switch S1 was deactivated and the bare metal with MAC address m1 was removed, and a new switch S2 with a bare metal MAC address m2 is discovered. In this case, after the restore operation, the physical topology contains the switch S2 and the bare metal server with MAC m2. However, the information in the OVSDB server tables contains switch S1 and the bare metal MAC m1.
- The Neutron database was not backed up at the same time as the SDN controller.

In both situations, complete the following steps to solve the problem.
1. Log in to the SDN Controller Console.
2. Deactivate the old switch.
3. Delete the logical gateways in Neutron that involve the old switch and its interfaces.
4. Delete any connections you might have created on the old switch.
5. Activate the new switch.
6. Create the logical gateways to the switch.
7. Create the connections of the logical gateways with the virtual networks.

48 Backup, restore, and recover the SDN controller
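Steps 3, 4, 6, and 7 use the Neutron L2 gateway commands listed in the best practices earlier in this chapter. The following is a minimal sketch of that cleanup; the gateway and connection identifiers, the --device option syntax, and the ordering are illustrative assumptions and may differ in your networking-l2gw version.

   # Remove objects that reference the old switch (IDs and names are placeholders).
   neutron l2-gateway-connection-delete <old-connection-id>
   neutron l2-gateway-delete <old-gateway-name>

   # After activating the new switch, recreate the gateway and its connection.
   neutron l2-gateway-create <new-gateway-name> --device name=<new-switch>,interface_names=<interface>
   neutron l2-gateway-connection-create <new-gateway-name> <virtual-network>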

49

Using the SDN controller console user interface

1. Use a supported browser, such as Google Chrome, to access the controller's UI at the controller IP address:
   UI Example
2. Enter user name and password credentials, then select Login.
   Example
   Default user name: sdn
   Default password: skyline
3. The main controller screen appears with the Alerts screen displayed.

Using the SDN controller console user interface 49

50

Using the SDN controller using RSDoc

Use a supported browser, such as Google Chrome, to access the controller's UI at the controller IP address:
Example
If the controller is running, the RSdoc tool appears.

Changing the controller password

The HP VAN SDN Controller uses a local Keystone authentication server to authenticate administrators. The installation process creates two user accounts in the Keystone server:
- sdn
- rsdoc

For both accounts, the default password is skyline. After completing the HP VAN SDN Controller installation, change the password on both accounts using the Keystone command-line interface.

NOTE: Where a command in this procedure is shown with multiple lines, the line breaks are inserted at the points where a space occurs in the actual command.

1. To retrieve the user-id, run the following command. For example, with only the default users enabled:
   ~$ keystone --token ADMIN --endpoint user-list
   The following output appears.

50 Backup, restore, and recover the SDN controller

51

2. Change the password for the selected account name by specifying the new password and corresponding user ID. For example, if the new password is MyNewPassword, for user rsdoc, enter the following command:
   ~$ keystone --token ADMIN --endpoint user-password-update --pass MyNewPassword e4ec14ea5b971d9d5180a4d1b

Changing the controller password 51

52

9 Shut down and restart CloudSystem appliances

You can perform maintenance on a virtual machine that is hosting CloudSystem appliances by using the shutdown action to gracefully stop the guest operating system and release the physical resources.

CloudSystem appliances are clustered in a trio configuration and must be shut down and then restarted in a very precise order. Make sure that you can identify the first node in each appliance trio, as that node is typically the management node, and should always be the last node shut down and the first node restarted.

Virtual appliance names

To ensure that you are communicating with the correct virtual appliance, always log on to the first Management appliance, then SSH in to the other appliances using the internal name assigned on the Cloud Management Network. A list of internal names is provided in the table below. The management node for the trio is listed first.

Table 4 Default CloudSystem virtual appliance names
CloudSystem appliance     Internal name assigned on the Cloud Management Network
Management appliance      ma1, ma2, ma3
Cloud controller          cmc, cc1, cc2
Enterprise appliance      ea1, ea2, ea3
Monitoring appliance      mona1, mona2, mona3
Update appliance          ua1
SDN controller            sdn (only deployed in VxLAN configurations)
OVSvApp appliance         ovsvapp (security option for ESXi-provisioned instances)

Shut down CloudSystem appliances
- Shutting down a single CloudSystem virtual appliance (page 52)
- Shutting down a trio of CloudSystem virtual appliances (page 53)
- Shutting down compute nodes (page 53)
- Shutting down an entire cloud (page 54)

Procedure 12 Shutting down a single CloudSystem virtual appliance

Use this procedure to gracefully shut down a single CloudSystem appliance. If the appliance is part of a trio, the remaining appliances in the trio are not affected by this procedure and will continue to run.

IMPORTANT: The first appliance typically manages the trio. You can shut down the second and third appliances at any time, but do not shut down the first appliance while the other two are still running.

When the first appliance in the Cloud controller (cmc) is shut down, you cannot perform block storage (Cinder) operations such as creating and attaching a volume. However, volumes already created and attached to instances are not affected. The instances continue to have connectivity to the volume.

52 Shut down and restart CloudSystem appliances

53

1. Using cloudadmin credentials, SSH in to the first Management appliance in the trio. This is the Management appliance that was installed by the CloudSystem Management Appliance Installer.
2. From the Management appliance, SSH to the appliance you want to shut down and run the shutdown command.
   Example
   ssh cloudadmin@[internal_name] sudo shutdown -h now
   ssh cloudadmin@ua1 sudo shutdown -h now
3. Wait for the appliance shutdown process to complete.

Procedure 13 Shutting down a trio of CloudSystem virtual appliances

Use this procedure to gracefully shut down all three appliances in the trio.
1. Using cloudadmin credentials, SSH in to the first Management appliance in the trio. This is the Management appliance that was installed by the CloudSystem Management Appliance Installer.
2. From the Management appliance, SSH in to the third appliance in the trio and run the shutdown command.
   Example
   ssh cloudadmin@[internal_name] sudo shutdown -h now
   ssh cloudadmin@cc2 sudo shutdown -h now
3. Wait for the appliance shutdown process to complete.
4. Repeat steps one and two for the second appliance in the trio.
   Example
   ssh cloudadmin@[internal_name] sudo shutdown -h now
   ssh cloudadmin@cc1 sudo shutdown -h now
5. Repeat steps one and two for the first appliance in the trio.
   Example
   ssh cloudadmin@[internal_name] sudo shutdown -h now
   ssh cloudadmin@cmc sudo shutdown -h now

Procedure 14 Shutting down compute nodes
1. From the OpenStack user portal, shut down all virtual machine instances running on the compute node.
2. For ESXi compute clusters:
   a. Using administrator credentials, log in to vCenter.
   b. Shut down the OVSvApp running on the compute node.
   c. Shut down the compute node.
3. For KVM compute nodes:
   a. Using cloudadmin credentials, SSH in to the compute node.
   b. Run the shutdown command:
      sudo shutdown -h now
   c. Wait for the compute node shutdown process to complete.
4. For Hyper-V compute nodes:
   a. Use a remote desktop connection to access the compute node.
   b. From the Settings tab on the bottom-right taskbar, select Power > Shut down.
   c. Wait for the compute node shutdown process to complete.

Shut down CloudSystem appliances 53

54

Procedure 15 Shutting down an entire cloud

Use this procedure to shut down all CloudSystem appliances in the cloud.
1. Using cloudadmin credentials, SSH in to the first Management appliance in the trio. This is the Management appliance that was installed by the CloudSystem Management Appliance Installer.
2. From the Management appliance, SSH in to the Update appliance and run the shutdown command.
   Example
   ssh cloudadmin@ua1 sudo shutdown -h now
3. Wait for the Update appliance shutdown process to complete.
4. Repeat steps one and two for the remaining CloudSystem appliances in the following order (a scripted example follows this procedure):
   - SDN controller (if present in your environment): sdn
   - OVSvApp (if present in your environment): ovsvapp
   - Compute nodes
     NOTE: Follow Shutting down compute nodes (page 53).
   - Monitoring appliance: mona3, mona2, mona1
   - Enterprise appliance: ea3, ea2, ea1
   - Cloud controller: cc2, cc1, cmc
   - Management appliance: ma3, ma2, ma1

54 Shut down and restart CloudSystem appliances
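The loop below is a convenience sketch of step 4, not part of the product: it issues the same ssh shutdown command used throughout this chapter, in the documented order, and pauses for confirmation between appliances. It assumes all listed appliances exist in your environment and are reachable from ma1 by their internal names; compute nodes are handled separately using the procedure on page 53, and ma1 itself is shut down last.

   # Run from ma1 after the Update appliance has been shut down.
   for host in sdn ovsvapp; do
     ssh cloudadmin@"$host" sudo shutdown -h now
     read -p "Confirm $host has powered off, then press Enter to continue... "
   done

   # Shut down the compute nodes here, using the procedure on page 53.

   for host in mona3 mona2 mona1 ea3 ea2 ea1 cc2 cc1 cmc ma3 ma2; do
     ssh cloudadmin@"$host" sudo shutdown -h now
     read -p "Confirm $host has powered off, then press Enter to continue... "
   done

   # Finally, shut down ma1 itself:
   sudo shutdown -h now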

Starting CloudSystem appliances after a shutdown

Use the following procedures to bring CloudSystem appliances back up after a shutdown. Perform the procedures in the order in which they are listed.

55

NOTE: You should follow these instructions when starting a single appliance or a trio of appliances.

Procedure 16 Starting the Management appliances
1. From the management hypervisor, power on the ma1 virtual machine.
2. Wait for the virtual machine to start and the configuration to refresh. This may take up to five minutes.
3. Using cloudadmin credentials, SSH in to ma1. This is the Management appliance that was installed by the CloudSystem Management Appliance Installer.
4. Run the command:
   service mysql bootstrap-pxc
5. Wait for the mysql service to start. You can verify this by running:
   service mysql status
6. From the management hypervisor, power on the ma2 virtual machine.
7. Wait for the virtual machine to start and the configuration to refresh. This may take up to five minutes.
8. From the management hypervisor, power on the ma3 virtual machine.
9. Wait for the virtual machine to start and the configuration to refresh. This may take up to five minutes.
10. From the Management appliance, SSH in to ma2:
    ssh cloudadmin@ma2
11. Wait for the mysql service to start. You can verify this by running:
    service mysql status
12. From the Management appliance, SSH in to ma3:
    ssh cloudadmin@ma3
13. Wait for the mysql service to start. You can verify this by running:
    service mysql status
14. Refresh the configuration on all three appliances, starting with ma1, then ma2, then ma3:
    os-refresh-config
15. Perform a health check of the nodes in the trio. See Health checks (page 63).

Procedure 17 Starting the Cloud controllers
1. From the management hypervisor, power on the cmc virtual machine.
2. Wait for the configuration to refresh. This may take up to five minutes.
3. Using cloudadmin credentials, SSH in to ma1. This is the Management appliance that was installed by the CloudSystem Management Appliance Installer.
4. From the Management appliance, SSH in to cmc and restart the mysql service:
   ssh cloudadmin@cmc
   service mysql bootstrap-pxc
5. Wait for the mysql service to start. You can verify this by running:
   service mysql status
6. Switch to the root user:
   sudo su
7. From the management hypervisor, power on the cc1 virtual machine.
8. Wait for the virtual machine to start and the configuration to refresh. This may take up to five minutes.
9. From the management hypervisor, power on the cc2 virtual machine.

Starting CloudSystem appliances after a shutdown 55

56

10. Wait for the virtual machine to start and the configuration to refresh. This may take up to five minutes.
11. From the Management appliance, SSH in to cc1:
    ssh cloudadmin@cc1
12. Wait for the mysql service to start. You can verify this by running:
    service mysql status
13. Switch to the root user:
    sudo su
14. From the Management appliance, SSH in to cc2:
    ssh cloudadmin@cc2
15. Wait for the mysql service to start. You can verify this by running:
    service mysql status
16. Switch to the root user:
    sudo su
17. Refresh the configuration on all three appliances, starting with cmc, then cc1, then cc2:
    os-refresh-config
18. Perform a health check of the nodes in the trio. See Health checks (page 63).

Procedure 18 Starting the Enterprise appliances
1. From the management hypervisor, power on the ea1 virtual machine.
2. Wait for the configuration to refresh. This may take up to five minutes.
3. Using cloudadmin credentials, SSH in to ma1. This is the Management appliance that was installed by the CloudSystem Management Appliance Installer.
4. From the Management appliance, SSH in to ea1 and restart the mysql service:
   ssh cloudadmin@ea1
   service mysql bootstrap-pxc
5. Wait for the mysql service to start. You can verify this by running:
   service mysql status
6. Restart HP CSA:
   sudo -u csauser /usr/local/hp/csa/scripts/elasticsearch start
   sudo -u csauser /usr/local/hp/csa/scripts/msvc start
   service csa restart
7. Restart Marketplace Portal:
   service mpp restart
8. Restart OO Central:
   service HPOOCentral restart
9. From the management hypervisor, power on the ea2 virtual machine.
10. Wait for the virtual machine to start and the configuration to refresh. This may take up to five minutes.
11. From the management hypervisor, power on the ea3 virtual machine.
12. Wait for the virtual machine to start and the configuration to refresh. This may take up to five minutes.
13. From the Management appliance, SSH in to ea2:
    ssh cloudadmin@ea2
14. Wait for the mysql service to start. You can verify this by running:
    service mysql status
15. Restart HP CSA:

56 Shut down and restart CloudSystem appliances

57

    sudo -u csauser /usr/local/hp/csa/scripts/elasticsearch start
    sudo -u csauser /usr/local/hp/csa/scripts/msvc start
    service csa restart
16. Restart Marketplace Portal:
    service mpp restart
17. Restart OO Central:
    service HPOOCentral restart
18. From the Management appliance, SSH in to ea3:
    ssh cloudadmin@ea3
19. Wait for the mysql service to start. You can verify this by running:
    service mysql status
20. Restart HP CSA:
    sudo -u csauser /usr/local/hp/csa/scripts/elasticsearch start
    sudo -u csauser /usr/local/hp/csa/scripts/msvc start
    service csa restart
21. Restart Marketplace Portal:
    service mpp restart
22. Restart OO Central:
    service HPOOCentral restart
23. Perform a health check of the nodes in the trio. See Health checks (page 63).

Procedure 19 Starting the Monitoring appliances
1. From the management hypervisor, power on the mona1 virtual machine.
2. Wait for the configuration to refresh. This may take up to five minutes.
3. Using cloudadmin credentials, SSH in to ma1. This is the Management appliance that was installed by the CloudSystem Management Appliance Installer.
4. From the Management appliance, SSH in to mona1 and restart the mysql service:
   ssh cloudadmin@mona1
   service mysql bootstrap-pxc
5. Wait for the mysql service to start. You can verify this by running:
   service mysql status
6. From the management hypervisor, power on the mona2 virtual machine.
7. Wait for the virtual machine to start and the configuration to refresh. This may take up to five minutes.
8. From the management hypervisor, power on the mona3 virtual machine.
9. Wait for the virtual machine to start and the configuration to refresh. This may take up to five minutes.
10. From the Management appliance, SSH in to mona2:
    ssh cloudadmin@mona2
11. Wait for the mysql service to start. You can verify this by running:
    service mysql status
12. From the Management appliance, SSH in to mona3:
    ssh cloudadmin@mona3
13. Wait for the mysql service to start. You can verify this by running:
    service mysql status
14. Perform a health check of the nodes in the trio. See Health checks (page 63).

Procedure 20 Starting the Update appliance
1. From the management hypervisor, power on the ua1 virtual machine.

Starting CloudSystem appliances after a shutdown 57

58

2. Wait for the configuration to refresh. This may take up to five minutes.
3. Using cloudadmin credentials, SSH in to ma1. This is the Management appliance that was installed by the CloudSystem Management Appliance Installer.
4. From the Management appliance, SSH in to ua1 and refresh the configuration:
   ssh cloudadmin@ua1
   os-refresh-config

Restart CloudSystem appliances and services
- Restarting a single CloudSystem appliance (page 58)
- Restarting a trio of CloudSystem appliances
- Restarting compute nodes (page 59)

Use the restart action to bring services and CloudSystem appliances back up after performing planned maintenance on a virtual appliance.

NOTE: When you restart the first Management appliance in the trio (the one created by the CloudSystem Management Appliance installer), you will experience some interruptions in the services running on that node. Administrators may also temporarily lose access to the Operations Console until the restart action completes.

When you restart the OVSvApp appliance, network access for the virtual machines on the compute node is temporarily interrupted. Once the OVSvApp appliance completes the restart action, network access is restored.

Procedure 21 Restarting a single CloudSystem appliance
1. Using cloudadmin credentials, SSH in to the first Management appliance in the trio. This is the Management appliance that was installed by the CloudSystem Management Appliance Installer.
2. From the Management appliance, SSH in to the appliance you want to restart and run the restart command.
   Example
   ssh cloudadmin@[internal_name] sudo shutdown -r now
   ssh cloudadmin@cc2 sudo shutdown -r now
3. Wait for the virtual machine to start up. It can take up to five minutes for the configuration to refresh.

Procedure 22 Restarting a trio of CloudSystem appliances

The cluster does not need to shut down completely during a restart. This allows you to access cloud services that are replicated across virtual machines in the cluster. You can restart CloudSystem appliances in any order.

NOTE: Do not restart all nodes in the cluster at the same time, as that will result in a shutdown action. Always keep at least one virtual machine in the cluster up while waiting for the others to complete the restart action.

1. Using cloudadmin credentials, SSH in to the first Management appliance in the trio. This is the Management appliance that was installed by the CloudSystem Management Appliance Installer.
2. From the Management appliance, SSH in to the first appliance in the cluster and run the restart command.

58 Shut down and restart CloudSystem appliances

59

   Example
   ssh cloudadmin@[internal_name] sudo shutdown -r now
   ssh cloudadmin@cmc sudo shutdown -r now
3. Wait for the virtual machine to start up. It can take up to five minutes for the os-refresh-config script to complete.
4. Repeat steps one and two for the second node in the trio.
   Example
   ssh cloudadmin@cc1 sudo shutdown -r now
5. Repeat steps one and two for the third node in the trio.
   Example
   ssh cloudadmin@cc2 sudo shutdown -r now
6. Perform a health check of the nodes in the trio. See Health checks (page 63).

Procedure 23 Restarting compute nodes
1. Migrate workloads off of the compute node that you want to reboot.
2. Perform the reboot action:
   a. For ESXi compute nodes:
      i. Using administrator credentials, log in to vCenter.
      ii. Right-click the compute node and select Power > Restart Guest.
      iii. Wait for the compute node restart process to complete.
   b. For KVM compute nodes:
      i. Using cloudadmin credentials, SSH in to the compute node.
      ii. Run the restart command:
          sudo shutdown -r now
      iii. Wait for the compute node restart process to complete.
   c. For Hyper-V compute nodes:
      i. Use a remote desktop connection to access the compute node.
      ii. From the Settings tab on the bottom-right taskbar, select Power > Reset.
      iii. Wait for the compute node restart process to complete.
3. Migrate workloads back to the compute node.

Recover from a power outage or shutdown

When CloudSystem appliances experience an interruption in power, all services and operations are stopped immediately. For trio appliance configurations, you need to determine which appliance had the most up-to-date database at the time of the power outage or shutdown. Use the procedures in this section to identify and replicate the most current database on the other two appliances in the trio.

IMPORTANT: If a loss of power was only experienced by a few virtual appliances and not the entire cloud, then perform the restart procedure on the virtual appliances that lost power. Do NOT perform this procedure on the entire cloud when only a few appliances experienced a loss of power. See Restart CloudSystem appliances and services (page 58).

Recover from a power outage or shutdown 59
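The resync procedure that follows repeats the same seqno comparison for each trio. The sketch below is a convenience helper, not part of the product: it reads the grastate.dat path used in the procedure from every node of a trio over SSH so that the seqno values can be compared in one place. The host names shown are the Management appliance trio; substitute cmc/cc1/cc2, ea1/ea2/ea3, or mona1/mona2/mona3 for the other trios.

   # Print the seqno recorded on each node of a trio (Management trio shown).
   for host in ma1 ma2 ma3; do
     echo -n "$host: "
     ssh cloudadmin@"$host" "sudo grep seqno /mnt/state/var/lib/mysql/grastate.dat"
   done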

60

Procedure 24 Resyncing appliance databases after a power outage
1. Resync and restart the Management appliances.
   a. From the management hypervisor, power on all Management appliance virtual machines in the appliance trio.
   b. Determine which virtual machine has the most up-to-date database.
      TIP: If MySQL started correctly in the node, this indicates that it joined the cluster correctly and you don't need to validate the seqno. You can skip steps i-iv below.
      i. On each Management appliance, run the command:
         cat /mnt/state/var/lib/mysql/grastate.dat
      ii. Find the value for seqno. The highest value represents the most current database. If the seqno value is the same in all nodes of the trio, then bootstrap the first node.
      iii. On the Management appliance with the highest seqno value (or on the first node if all values are the same), run:
         service mysql bootstrap-pxc
      iv. Verify that the MySQL service started:
         service mysql status
   c. Restart the MySQL service on the other two virtual machines in the appliance trio:
      service mysql start
   d. Verify that the MySQL service started.
      service mysql status
   e. Restart the nodes in the trio. See Restarting a trio of CloudSystem appliances (page 58).
2. Resync and restart the Cloud controllers.
   a. From the management hypervisor, power on all Cloud controller virtual machines in the trio.
   b. Determine which virtual machine has the most up-to-date database.
      TIP: If MySQL started correctly in the node, this indicates that it joined the cluster correctly and you don't need to validate the seqno. You can skip steps i-iv below.
      i. On each Cloud controller, run the command:
         cat /mnt/state/var/lib/mysql/grastate.dat
      ii. Find the value for seqno. The highest value represents the most current database. If the seqno value is the same in all nodes of the trio, then bootstrap the first node.
      iii. On the Cloud controller with the highest seqno value (or on the first node if all values are the same), run:
         service mysql bootstrap-pxc
      iv. Verify that the MySQL service started.
         service mysql status
   c. Restart the MySQL service on the other two virtual machines in the appliance trio:
      service mysql start
   d. Verify that the MySQL service started.
      service mysql status
   e. Restart the nodes in the trio. See Restarting a trio of CloudSystem appliances (page 58).
3. Resync and restart the Enterprise appliances.

60 Shut down and restart CloudSystem appliances

61

   a. From the management hypervisor, power on all Enterprise virtual machines in the appliance trio.
   b. Determine which virtual machine has the most up-to-date database.
      TIP: If MySQL started correctly in the node, this indicates that it joined the cluster correctly and you don't need to validate the seqno. You can skip steps i-iv below.
      i. On each Enterprise appliance, run the command:
         cat /mnt/state/var/lib/mysql/grastate.dat
      ii. Find the value for seqno. The highest value represents the most current database. If the seqno value is the same in all nodes of the trio, then bootstrap the first node.
      iii. On the Enterprise appliance with the highest seqno value (or on the first node if all values are the same), run the following commands:
         service mysql bootstrap-pxc
         sudo -u csauser /usr/local/hp/csa/scripts/elasticsearch start
         sudo -u csauser /usr/local/hp/csa/scripts/msvc start
         service csa restart
         service mpp restart
         service HPOOCentral restart
      iv. Verify that the MySQL service started.
         service mysql status
   c. Restart the MySQL service on the other two virtual machines in the appliance trio:
      service mysql start
   d. Verify that the MySQL service started.
      service mysql status
   e. Restart the nodes in the trio. See Restarting a trio of CloudSystem appliances (page 58).
4. Restart the Update appliance.
   a. Power on the Update appliance and wait for the operation to complete.
   b. Restart the Update appliance. See Restarting a single CloudSystem appliance (page 58).
5. Resync and restart the Monitoring appliances, including the MySQL and Vertica databases.
   a. From the management hypervisor, power on all Monitoring appliance virtual machines in the trio.
   b. MySQL: Determine which virtual machine has the most up-to-date MySQL database.
      TIP: If MySQL started correctly in the node, this indicates that it joined the cluster correctly and you do not need to validate the seqno. You can skip steps i-iv below.
      i. On each Monitoring appliance, run the command:
         cat /mnt/state/var/lib/mysql/grastate.dat
      ii. Find the value for seqno. The highest value represents the most current database. If the seqno value is the same in all nodes of the trio, then bootstrap the first node.
      iii. On the Monitoring appliance with the highest seqno value (or on the first node if all values are the same), run:
         service mysql bootstrap-pxc
      iv. Verify that the MySQL service started.
         service mysql status
   c. Restart the MySQL service on the other two virtual machines in the appliance trio.

Recover from a power outage or shutdown 61

62

      service mysql start
   d. Verify that the MySQL service started.
      service mysql status
   e. Restart the nodes in the trio. See Restarting a trio of CloudSystem appliances (page 58).
   f. Vertica: Determine if the Vertica database is up on all three nodes and restart it if necessary.
      If two or three nodes of the Monitoring appliance trio are shut down (forced or regular shutdown), you must restart the Vertica database on those nodes. If only one node was shut down, that node comes back up and rejoins the cluster on its own.
      Note that the /opt/vertica/bin/admintools commands must be run as sudo, and always after the export PYTHONPATH command. The export PYTHONPATH command must be run each time you log in to mona1 to use admintools.
      i. SSH in to the first node of the Monitoring appliance (mona1).
         ssh cloudadmin@mona1
         sudo -s
      ii. Execute the command:
         export PYTHONPATH=/opt/vertica/oss/python/lib/python2.7/site-packages
      iii. Check the status of the Vertica database.
         su dbadmin -c 'python /opt/vertica/bin/admintools -t view_cluster -d mon';
      iv. If the result of the command is UP for ALL, then Vertica came back up on its own and no recovery is needed.
      v. If part of the cluster is initializing, then Vertica is still attempting to start up. Wait to see if the initialization succeeds; run the view_cluster command to check again after several minutes.
      vi. If two of the three nodes are UP and one is DOWN, then ssh into the one node that is down and run:
         sudo service verticad start
      vii. If the result shows DOWN for ALL, restart Vertica from the last known good state.
         A. Open the hosts file.
            vi /home/cloudadmin/hosts
         B. Copy the <vertica_admin_password>.
         C. On mona1 as root user, restart Vertica:
            su dbadmin -c 'python /opt/vertica/bin/admintools -t restart_db -d mon -e last -p <vertica_admin_password>';
      viii. Check the status of the database by rerunning the view_cluster command.
      ix. If the database is still not up:
         A. Open the hosts file.
            vi /home/cloudadmin/hosts
         B. Copy the <vertica_cluster>.
         C. On mona1 as root user, force restart each node in the Vertica cluster from the last known good state.
            su dbadmin -c 'python /opt/vertica/bin/admintools -t restart_node -s <vertica_cluster> -d mon -p <vertica_admin_password> -F'
6. Restart the SDN controller.
   a. Power on the SDN controller and wait for the operation to complete.
   b. Restart the SDN controller. See Restarting a single CloudSystem appliance (page 58).

62 Shut down and restart CloudSystem appliances

63

7. Restart the OVSvApp appliance.
   a. Power on the OVSvApp appliance and wait for the operation to complete.
   b. Restart the OVSvApp appliance. See Restarting a single CloudSystem appliance (page 58).

Health checks

After performing shutdown, restart, or reboot procedures on a trio of nodes, best practice is to verify the health of each node in the trio. You can use the Monitoring Dashboard or the HA Proxy to check the status of the nodes in the trio.

Procedure 25 Checking health through the Monitoring dashboard
1. In the Operations Console main menu, select General > Monitoring.
2. Click the link to Launch Monitoring Dashboard. This action launches the dashboard.
3. Log in to the OpenStack user portal.
4. All services and servers displayed on the screen should be in a green status.

Procedure 26 Checking health from the HA Proxy
1. Open the HA Proxy for each appliance.
2. Verify that the node status is either green (ok) or blue (ok standby).

Health checks 63

64

10 Manage CloudSystem software licensing and license keys

The information in this chapter will help you understand the licensing models available for CloudSystem. You will also find information on how to manage license keys in the HP CSA Cloud Service Management Console. You do not need to add or manage license keys in the Operations Console.

CloudSystem software licensing is based on one of the following license model options:
- Per-OSI license: A license is purchased for each managed operating system instance (per OSI). OSI licenses allow a limited number of virtual machine instances to be deployed and managed across an unlimited number of servers in a private, hybrid, or public cloud infrastructure.
- Per-server license: A license is purchased for each managed server. Server licenses allow an unlimited number of virtual machine instances to be deployed and managed on a limited set of licensed servers.

Refer to your license entitlement to verify the license model selected for your CloudSystem software and to confirm the number of instances or servers that you are licensed to manage. For license support, see To read the license documents, see software-licensing.html.

CloudSystem software license models

One licensing model per cloud

CloudSystem software must generally be licensed under one license model for any cloud. This means that you may not generally use CloudSystem software under a per-server license model to manage one part of a cloud, and also use CloudSystem software under a per-OSI model to manage a different part of the same cloud. You can mix licensing models across clouds, for example, in a hybrid cloud environment where your on-premise cloud is licensed per-server and your public cloud resources are managed per-OSI instance.

Licensing of embedded technologies and installed components

CloudSystem software includes multiple embedded technologies and separately installed components, some of which are also offered by HP as independent products. These technologies are provided to you collectively, as a single CloudSystem software product, and your CloudSystem software license agreement covers your use of all included technology components. Your rights to use CloudSystem software component technologies may differ from rights provided when these components are purchased as separate, standalone products. You can review license terms that are specific to CloudSystem software in the CloudSystem software End User License Agreement.

Licensing of HP products delivered with CloudSystem software

The rights to use HP OneView are not granted by the CloudSystem or Enterprise software licenses. While both CloudSystem software and HP OneView are often sold and delivered together, HP OneView and CloudSystem software are distinct software products, and are licensed independently under their respective license agreements.

Rights to use HP CloudSystem Enterprise Performance, Applications, and Analytics Add-on Software Suite are not granted by your CloudSystem Enterprise software license. While CloudSystem

64 Manage CloudSystem software licensing and license keys

65 License keys
Enterprise software and the CloudSystem Enterprise Performance, Applications, and Analytics Add-on Software Suite are often sold and delivered together, they are distinct software products, and are licensed independently under their respective license agreements.
License keys are provided for CloudSystem software components, as needed, to facilitate your use of the software. The license keys you receive may or may not reflect the type or extent of your purchased rights to use CloudSystem software. For example, depending on which CloudSystem software product you purchased, you may be provided with multiple license keys, with both OSI-based and server-based keys, and/or with keys that suggest usage limits that may fall short of, or that may exceed, the capacity that you have purchased. Regardless of the number and types of keys provided to you, your rights to use CloudSystem software are defined in your license and purchase agreements.
IMPORTANT: The license keys that you receive with your CloudSystem Software:
are provided to facilitate use of the CloudSystem software
do not necessarily reflect the type or extent of your license agreement and right to use CloudSystem software
are not to be relied upon for tracking compliance with or enforcement of your license agreement
License keys are provided as follows for CloudSystem software technology components:
Table 5 CloudSystem software license keys
Purchased software                 | Components that require a license key     | Type of license key provided
HP CloudSystem Foundation software | None                                      | N/A
HP CloudSystem Enterprise software | HP Cloud Service Automation software      | OSI
                                   | HP Matrix Operating Environment software  | Server
                                   | HP Insight Control software               | Server
                                   | HP Integrated Lights-Out (iLO) software   | Server
HP Cloud Service Automation software always requires OSI-based license keys. You will receive OSI license keys even if you have purchased CloudSystem software under a per-server license agreement. If you purchased CloudSystem under a per-server license agreement, you will receive a large number of OSI keys for Cloud Service Automation software, to allow you use of the technology to manage your licensed number of servers. Even though you receive OSI-based keys, your right to use these technologies is limited by your per-server license agreement.
Managing license keys
License keys are required to enable the Enterprise components of the purchased CloudSystem software product. To add license keys:
License keys 65

66 1. Activate your license(s) on the HP licensing portal to obtain license keys.
2. For CloudSystem Enterprise licenses, add license keys to enable the use of each of the following CloudSystem Enterprise software components:
CloudSystem Enterprise appliance
Matrix Operating Environment
HP Insight Control
Using Infrastructure Administrator privileges, add each license key to the corresponding management console that you plan to use. For example, add the Enterprise license key to the Cloud Service Management Console in the Enterprise appliance, and add the Matrix OE license to the Central Management Server (CMS).
To add HP CSA license keys after you install CloudSystem Enterprise during First-Time Installation, click the link for HP CSA on the Integrated Tools screen to launch the management console. From the Options menu, select Licensing. In the free trial period (the first 90 days), if you have not yet added a license key, HP CSA limits the number of new instances you can create.
3. Purchase and add additional license keys at any time to increase your OSI or server capacity.
Managing license compliance
You are accountable for sizing your license requirements and purchasing the number of licenses necessary to meet your needs. You must track your compliance and purchase additional licenses if you exceed your license limits. License compliance is subject to HP audit at any time.
Tracking OSI license agreement compliance
Under a per-OSI license agreement, you may use any or all provided CloudSystem software to manage up to the number of instances allowed under your purchase agreement. The number of instances managed by CloudSystem software does not necessarily correlate to the number of instances managed by any one component of CloudSystem software. The number of instances being managed at any one time is not necessarily represented as the sum of all instances being managed by all components at that time. For example, from a license compliance perspective, any instance managed by CloudSystem software counts as one instance, regardless of whether that instance is being managed by the CloudSystem appliance alone, by Cloud Service Automation software alone, or by both components. Since license keys track instances managed by individual CloudSystem software components only, monitoring license key consumption may not be an accurate indicator of compliance.
Some suggested ways to track OSI license agreement compliance are provided below; a worked calculation follows this list.
Track the average number of instances-per-provisioned-service in your cloud, and then compute instances being managed as average instances-per-provisioned-service, multiplied by the number of provisioned services. Example: If you have an average of 15 OS instances in each of the services you deploy in your cloud environment, and if there are 10 such services currently provisioned, then you must be licensed to manage a total of 150 OS instances in order to maintain compliance.
Track the average virtual machine density across your cloud infrastructure and multiply the average density by the number of servers in your cloud to measure the concurrent instances. Example: If you deploy an average of 10 virtual machines per server in your cloud environment and your environment contains 20 servers, then you must be licensed to manage a total of 200 OS instances in order to maintain compliance.
66 Manage CloudSystem software licensing and license keys
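The following shell sketch reproduces the two example calculations above. The counts are the hypothetical values used in the examples; substitute the figures from your own environment.
# Method 1: average OS instances per provisioned service x number of provisioned services
AVG_INSTANCES_PER_SERVICE=15
PROVISIONED_SERVICES=10
echo "OSI licenses required (method 1): $((AVG_INSTANCES_PER_SERVICE * PROVISIONED_SERVICES))"
# Method 2: average virtual machine density per server x number of servers
AVG_VM_DENSITY=10
SERVER_COUNT=20
echo "OSI licenses required (method 2): $((AVG_VM_DENSITY * SERVER_COUNT))"
The first calculation prints 150 and the second prints 200, matching the compliance totals in the examples.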

67 Replacing a server managed by Matrix OE When you purchase CloudSystem software under a per-osi license agreement, your rights to use the software do not expire when your cloud infrastructure resources are retired. This means that your CloudSystem Enterprise per-osi license agreement entitles you to continue to use Matrix OE to manage a replacement server if a server you have been managing with Matrix OE is retired. In order to manage a replacement server with Matrix OE you will need to add Matrix OE, Insight Control, and ilo license keys to the replacement server. To key a replacement server so that it can be managed by Matrix OE: 1. Add the existing Matrix OE license key to the new server. 2. Add the replacement Insight Control Server Replacement license key to the new server. (The replacement license key is provided along with the original license key when you purchase CloudSystem Enterprise software.) 3. Add the existing ilo key to the new server. NOTE: When you purchase CloudSystem Enterprise software under a per-server license agreement, your rights to use CloudSystem software, including Matrix OE, are attached to specific servers; under a per-server license agreement, license keys cannot be used to manage replacement servers. Replacing a server managed by Matrix OE 67

68 11 Monitor resource use, allocation, and health CloudSystem monitors your cloud by watching for problems and watching usage trends. The information that CloudSystem collects is provided in several ways. Dashboard and Compute Summary Visual representations of the general health and status of the CloudSystem appliances, and the health and status of compute nodes with resource usage and allocation details. Activity Alerts and other notifications about appliance activity and events occurring in your cloud. Logging Dashboard Logs from compute nodes and all of the services running on all CloudSystem appliances. Monitoring Detailed data obtained from high-speed metrics processing and querying about the health of appliances and compute nodes. Includes a streaming alarm engine and notification engine. Support dump Vital diagnostic information from the system, and from OpenStack and CloudSystem services and components on each appliance. Audit log Security-related actions that are occurring in your cloud. Select Dashboard on the Operations Console main menu to view a summary of information for the Operations Console. The charts on the Dashboard provide a visual representation of the general health and status of the CloudSystem virtual appliances and resources in your cloud. From the Dashboard, you can immediately see resources that need your attention. Color-coded graphs provide a quick visual update on the health and status of appliances, compute nodes, and OpenStack services. You can also see the status of the monitoring service to make sure that the data it monitors is current. Clicking the center of the Compute Nodes Summary graph takes you to the Compute Nodes screen, where you can see the status of the various compute nodes along with their state and allocation and usage of physical and virtual resources. Clicking the center of the Appliance Summary graph takes you to the Appliances screen, where you can see details about the virtual appliances in your cloud. Resource graphs The charts on the Dashboard provide a visual representation of the general health and status of the CloudSystem compute nodes, virtual appliances, and OpenStack services in your cloud. From the Dashboard, you can immediately see resources that need your attention. Graph colors provide a quick way to visually interpret the status of compute nodes, appliances, and OpenStack services. 68 Monitor resource use, allocation, and health

69 Table 6 Dashboard resource graphs
Graph | Description
Compute Nodes Summary | Total number of compute nodes, and the number of those compute nodes that are up, down, and unknown. NOTE: Hyper-V compute nodes are not monitored, and therefore always show a status of Unknown.
Appliance Summary | Total number of appliances, and the number of those appliances that are up, down, and unknown. NOTE: The SDN controller is not monitored, and therefore always shows a status of Unknown.
Services Summary | Total number of all services running in the environment, and the number of those services that are OK, critical, or unknown.
Volumes History | Number of storage volumes created or deleted over a one week period.
Ports History | Number of Provider Network IP addresses created or deleted over a one week period. A port is a specific IP address, created from the subnet allocation pool on the Provider Network screen.
Provider & Tenant Network History | Number of provider and tenant networks created or deleted over a one week period.
Table 7 Dashboard states
State | Color | Description
Down | Red | A critical alert message was received. Investigate Down states immediately.
Up | Green | Normal behavior or information from a resource.
Unknown | Blue or Gray | The status of the compute node is unknown (Blue). The status of the OpenStack service is unknown (Gray).
Activity Dashboard
From the Activity Dashboard screen in the Operations Console, click Launch Activity Dashboard, which opens a new browser window. You can view alerts and other notifications about activities occurring in your cloud environment.
NOTE: You must log in to Kibana the first time you launch Activity Dashboard, Logging, or Audit Dashboard on the System Summary screen. Enter the Operations Console credentials you set during First-Time Installation.
The Activity screen displays:
Alerts: Messages received from various services across CloudSystem and maintained independently.
Tasks: Messages from services grouped into one event based on a specific operation.
CloudSystem displays activity information using the open source project Kibana. Kibana is a browser-based analytics and search dashboard. Kibana allows you to search activity data by entering input in the Query box at the top of the page. Related activities are grouped for easier viewing, and progress details of long running tasks such as compute node activation can be tracked.
More information
Kibana 3.0 Docs from Elastic
Kibana 3.0 Queries and Filters
Activity Dashboard 69
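The following query strings are illustrative examples of what can be typed into the Kibana Query box. The exact field names depend on the data indexed in your environment, so treat these as a starting point rather than a definitive reference.
compute node activation             (free-text search for activation tasks)
status:Critical                     (only activities with a Critical status, assuming a status field is indexed)
9f2c1a34* AND status:Critical       (a hypothetical instance ID prefix combined with a status filter)
See the Kibana 3.0 queries documentation referenced above for the full query syntax.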

70 Activity statuses
Status | Description
Critical | A critical alert message was received, or a task failed or was interrupted. Investigate Critical status activities immediately.
Major | An event occurred that might require your attention. A warning can mean that something is not correct within an appliance and it needs your attention. Investigate Major status activities immediately.
OK | For an alert, OK indicates normal behavior or information from a resource. For a task, OK indicates that it completed successfully.
Unknown | The status of the alert is unknown.
Logging Dashboard
From the Logging Dashboard screen in the Operations Console, click Launch Logging Dashboard, which opens a new browser window.
NOTE: You must log in to Kibana the first time you launch Activity Dashboard, Logging Dashboard, or Audit Dashboard on the System Summary screen. Enter the Operations Console credentials you set during First-Time Installation.
CloudSystem collects logs from compute nodes and all of the services running on all CloudSystem appliances. The logs are displayed in a single user interface, which is launched from the Operations Console. You can view log information in charts, graphs, tables, histograms, and other forms. Centralized logging helps you triage and troubleshoot the distributed cloud deployment from a single location. You are not required to access the appliances and compute nodes to view the individual log files.
CloudSystem logging uses the open source projects Kibana for data visualization and the Elasticsearch database for searching and indexing.
From the logging interface, you can:
View all CloudSystem logs in one location
Filter and sort log results
Use logs to debug failures that involve several components
See a graphical representation of the quantity of logs per component
More information
Kibana Dashboard
Elasticsearch
Viewing logging
Following are two examples of using logging to gain a better understanding of the environment when launching an instance.
Procedure 27 Example: Logging a successful instance launch
1. In the OpenStack user portal, launch an instance, then obtain the instance ID on the Instance Overview tab.
2. In the Operations Console, open the logging interface by selecting Logging Dashboard from the main menu.
3. Enter the instance ID in the Query box.
4. In the Filter list, check type, host, logger, and message.
70 Monitor resource use, allocation, and health

71 Monitoring 5. The filtered results show all of the logging related to that instance from all services. Procedure 28 Example: Logging a failed instance launch 1. In the OpenStack user portal, launch an instance, then obtain the instance ID on the Instance Overview tab. 2. In the Operations Console, open the logging interface by selecting Logging Dashboard from the main menu. 3. Enter the instance ID in the Query box. 4. In the Filter list, check type, host, logger, and message. Look for an error in the message results. 5. View details of the error. CloudSystem Monitoring-as-a-Service includes: A highly performant, scalable, reliable and fault-tolerant monitoring solution. Operational (internal) and Monitoring as a Service (customer facing) capabilities. CloudSystem monitoring consolidates and unifies both types of capabilities, which simplifies the number of systems that are required for monitoring. Multi-tenant and authenticated. Metrics are submitted and authenticated using an access token and stored associated with an ID. CloudSystem provides Monitoring-as-a-Service using the open source project Monasca. Monasca is a comprehensive cloud monitoring solution for OpenStack based clouds. Monasca uses node-based agents to report metrics to a centralized collection point, where alarms are triggered. Monasca enables users to understand the operational effectiveness of the services and underlying infrastructure that make up their cloud and provide actionable details when there is a problem. System status and supporting metrics are constantly monitored, readily available, and trackable, making system management tasks more timely and predictable. From the Monitoring Dashboard screen in the Operations Console, click Launch Monitoring Dashboard, which opens a specialized OpenStack Horizon portal running on the Management appliance. (This is different than the OpenStack user portal running on the Cloud controller.) See Viewing monitoring information (page 72) for details. From the Alarms panel, you can: Click any OpenStack Service name to see alarms for a service. Click any Server name to see alarms for a CloudSystem appliance. Data retention Monitoring information is contained in two databases. Both databases are backed up when you back up the Monitoring appliance. See Backup, restore, and recover CloudSystem appliances (page 32). Monitoring metrics are stored in Vertica for 7 days. Configuration settings are stored in MySQL. If services on the Monitoring appliance stop, under heavy load (for example, 15 appliances and 200 compute nodes), the message queue starts to clear in approximately 6 hours. Monitoring components The CloudSystem monitoring solution contains: Monitoring 71

72 Monitoring UI in OpenStack Horizon on Management appliance Overview and top level view of monitored services, compute nodes and appliances Create, read, update, and delete alarm definitions using an expression builder Read and delete alarms. View alarm history. Grafana Dashboard Visualization of metrics Monitoring appliance CloudSystem identifies the Monitoring appliances in the trio by the internal name assigned on the Cloud Management Network. mona1: the first Monitoring appliance in the trio. mona2: the second Monitoring appliance in the trio. mona3: the third Monitoring appliance in the trio. You should always access a Monitoring appliance through the first Management appliance (ma1). Procedure 29 Accessing the Monitoring appliances 1. Using cloudadmin credentials, SSH in to ma1. 2. From ma1, SSH to the Monitoring appliance you want to access. Characteristics of the Monitoring appliances include: Deployed during First-Time Installation as a trio of appliances for high availability Contains all Monasca server components (Vertica database, Kafka, Zookeeper) Exposes a REST API that is accessible only on Cloud Management Network for posting metrics by agents and querying measurements and alarms by Monasca clients and UI Monitoring agents Deployed on all CloudSystem appliances to monitor services and processes Deployed on all RHEL KVM compute nodes Deployed on Swift proxy and Swift object nodes for Scale out Swift monitoring Deployed on each OVSvApp appliance ESXi clusters are monitored using a Monitoring agent plug-in deployed on Cloud controllers Not monitored in this release: Hyper-V compute nodes Software Defined Networking (SDN) appliance, which is created in a KVM VxLAN configuration with a Centralized Virtualized Router (CVR ) More information Monasca wiki at OpenStack Cloud Software Viewing monitoring information Procedure 30 Launch monitoring 1. In the Operations Console, open the monitoring UI by selecting Monitoring Dashboard from the main menu. 72 Monitor resource use, allocation, and health

73 2. Click Launch Monitoring Dashboard. The Monitoring dashboard in OpenStack Horizon on the Management appliance opens. 3. Log in using the user name and password you set for the Operations Console during First-Time Installation. 4. View alarms. You can filter results on any screen. a. Click Alarms in the left navigation to see alarms for all services and appliances. From the Actions menu to the right of each row, you can click Graph metrics to see an Alarm drilldown, and you can show the history of the alarm and the alarm definition. You can also see the metric name at the top of the graph for that alarm. b. Click any OpenStack Service name to see alarms for a service. c. Click any Server name to see alarms for a CloudSystem appliance. 5. Click Alarm Definitions in the left navigation to view and edit the types of alarms that are enabled. IMPORTANT: Do not change or delete any of the default alarm definitions. However, you can add new alarm definitions. You can change the name, expression, and other details about the alarm. You may want to raise or lower alarm thresholds if you are receiving too many or not enough alarms. For information about writing an alarm expression, see Github Monasca API. 6. Optional: Click Dashboard. The OpenStack Dashboard (Grafana) opens. From this dashboard you can see a graphical representation of the health of OpenStack services, and the CPU and database usage of each CloudSystem appliance. a. Click the graph title (for example, CPU) and click Edit. b. Change the function to see other types of information in the graph. 7. Optional: Click Monasca Health. The Monasca Service Dashboard opens. From this dashboard you see a graphical representation of the health of the Monasca service. For more information, see Monitoring Of Monasca wiki from OpenStack Cloud Software. Monitoring ESXi compute clusters CloudSystem monitors ESXi compute clusters when performance data about the activated cluster is available from the cluster. The following VMware knowledge base articles and VMware community discussion may help you make sure that ESXi cluster performance data is available to CloudSystem. VMware KB VMware KB VMware KB VMware Communities Monitoring the OVSvApp service VM A service called monasca-agent is added to the OVSvApp service VM. Monitoring 73

74 The monasca-agent monitors the following four processes on the OVSvApp VM: beaver, which is a logging service. hpvcn-neutron-agent, which is the Neutron L2 agent for ESXi environments. openvswitch-switch and openvswitch-ovsdb processes, which provide the virtual switch. The monasca-agent periodically reports the status of those four monitored processes and metrics data ('load' - cpu.load_avg_1min, 'process' - process.pid_count, 'memory' - mem.usable_perc, 'disk' - disk.space_used_perc, 'cpu' - cpu.idle_perc for instance) to the Monasca system. The following features are not supported by monasca-agent: 1. The agent monitoring feature in OVSvApp is not replaced by monasca-agent. As long as the enable_agent_monitor flag is set to true in /etc/neutron/neutron.conf file on the controller, the agent monitoring is done in parallel with monasca monitoring activity. 2. The monasca-agent and OVSvApp agent operate independently. The installation or uninstallation of one service does not affect the functionality of other service. 3. The agent side mitigations, for example, restarting an interrupted service, are not performed by monasca-agent. Monitoring UI snapshots Figure 9 shows the top level view of monitored service hpvcn-neutron-agent, and ESXi compute node OVSvApp VM. Figure 9 The hpvcn-neutron-agent and OVSvApp in Monitoring dashboard Figure 10 shows alarms for an ESXi compute node OVSvApp VM. 74 Monitor resource use, allocation, and health

75 Figure 10 OVSvApp VM alarms Creating a support dump file From the System Summary screen, select Create Support Dump to create a support dump file. The Operations Console allows you to create a support dump that you can view or send to an authorized support representative for analysis. Creating a support dump triggers an asynchronous process that collects information and creates the support dump file in the background. The browser in which you are running the Operations Console downloads the support dump file to your default folder or prompts you for a location, depending on your browser settings. The CloudSystem support dump collects vital diagnostic information from the system, and from OpenStack and CloudSystem services and components on each appliance, including: Configuration files, including First-Time Installation settings Logs, including audit logs Alerts Hyper-V and KVM compute node logs ESXi compute cluster logs are not included in the support dump. Object storage (Scale out Swift) logs Specific information collected by a service or component's commands or scripts; for example, for networking: DNS local caching Name space information Time stamp in UTC time Creating a support dump file 75

76 IMPORTANT: The support dump file is not encrypted. It contains sensitive information, including configuration details and logs of your environment. Credentials from CloudSystem service configuration files are removed, but the support dump may contain credentials in non-cloudsystem service configuration and log files. Treat support dump files as confidential data and handle them appropriately. After downloading the support dump, encrypt the file if you plan to share or store the file on a non-secured or shared location. Use Secure FTP or another encrypted protocol when sending the support dump file to HP. (Support dump files sent to HP are deleted after use, as the HP data retention policy requires.) Creating a support dump Procedure 31 Create a support dump file from the Operations Console 1. From the Operations Console main menu, select System Summary. 2. In the CloudSystem Administration pane, select Create Support Dump from the drop down list. 3. Select options: Exclude compute nodes Check this box if you do not need to see the compute node log files, or if you plan to obtain those log files separately. The open source project sosreport must be installed on KVM compute hosts for inclusion of compute node log files in the support dump. Include all logs Check this box to include all log files, including verbose files. NOTE: HP recommends that you check this box if you plan to send the file to HP. If this box is unchecked, only logs from CloudSystem, OpenStack and a limited set of other services are collected. Uncheck this box only if the support dump file becomes too big to be usable. Add a Ticket Number If you plan to send this file to HP and it relates to an open support call, you can enter a ticket number that will be included in the file name. 4. Click Create Support Dump. When the tar file is created, a dialog appears. Click Download to save the file. If your browser settings specify a default download folder, the support dump file is placed in that folder. Otherwise, you are prompted to indicate where to download the file. 5. Contact your authorized support representative for instructions on how to transfer the support dump file to HP. Procedure 32 Create a support dump from the command line 1. Log in to the Management appliance (ma1) as cloudadmin and enter the password you set during First-Time Installation. (Optional) SSH to another appliance. You can run this command from any appliance connected to the Data Center Management network. You can also ssh to any appliance or compute node on the Cloud Management Network after you have logged in to ma1. 2. To create a support dump for the appliance that you have opened a SSH connection to, enter: sudo /var/lib/cloudsystem/supportdump/supportdump.sh -at <ticket_number> 76 Monitor resource use, allocation, and health

77 where <-a> collects all the logs on that system (recommended), and <-t> requires an argument value for the support or issue ticket number.
3. To create a support dump for a Swift node from the command line: Log in to the individual Swift node and enter:
ssh -i /home/sirius/.ssh/sirius_id_rsa sirius-access@<swift-node IP>
sudo /var/lib/cloudsystem/supportdump/supportdump.sh -a
Viewing the contents of the support dump file
The support dump is a compressed file with a name in the following format: support_dump_yy-mm-dd_utc-time-nnnnnnnnnn.tar.gz. The .tar.gz file contains .tar.xz files, which can be uncompressed using open source tools such as those available from tukaani.org/xz/. Extract the files to a local drive that is indexed so that you can perform a text search of the entire directory. On Windows, the contents that are indexed depend on your indexer configuration. The default is to index only the title for some file extensions, for example, .log.
Use the following guidelines for viewing the support dump file.
Folder | Contains
./var/log (or wherever the file is uncompressed) | Alerts and events that are seen in the Operations Console activity feed. Check alert_dump and event_dump (or event_data_dump) for more details. Check these files for errors logged around the time of an alert or event and for other issues.
./var/log/upstart/ and ./var/log/ | Logs for CloudSystem services and components
./etc/ | Configuration files
./sos_commands | Output of commands that were run to collect service specific information
./var/log/nova and ./var/log/neutron | KVM compute node log files
./users/administrator/appdata/local/Hewlett-Packard/CloudSystem/Log/, ./Program Files (x86)/hp/cloudsystem/log, ./windows/system32/winevt/logs, and ./users/administrator/appdata/local/temp | Hyper-V compute node log files. NOTE: ESXi compute logs are not collected separately because they are included in the Nova Compute service logs. Check VMware vCenter log files for more information.
Viewing the audit log
From the System Summary screen, Miscellaneous Settings panel, select Launch Audit Dashboard to view audit information in the Kibana dashboard.
NOTE: You must log in to Kibana the first time you launch Activity Dashboard, Logging, or Audit Dashboard on the System Summary screen. Enter the Operations Console credentials you set during First-Time Installation.
The audit log contains a record of actions performed on the CloudSystem appliances. You can use this information for individual accountability and for root cause analysis.
Viewing the contents of the support dump file 77

78 Click on a row to see additional details. By default, the most recent 500 actions are displayed (100 per page). You can change this value by clicking the configure icon above the first row. Select the Paging tab, and change the number of actions per page and the page limit, or both. More information Kibana 3.0 Docs from Elastic 78 Monitor resource use, allocation, and health
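To work with a downloaded support dump on a Linux workstation (see Viewing the contents of the support dump file, above), the following sketch unpacks the outer archive and its nested .tar.xz archives, then searches the result. It assumes GNU tar built with xz support; the file name shown is illustrative.
# Unpack the outer archive into the current directory
tar -xzf support_dump_15-09-30_utc-time-1234567890.tar.gz
# Unpack each nested .tar.xz archive in place
find . -name '*.tar.xz' -execdir tar -xJf {} \;
# Search the extracted tree, for example for errors in service logs
grep -ri "error" ./var/log | less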

79 12 Manage the Management appliance trio CloudSystem identifies the Management appliances in the trio by the internal name assigned on the Cloud Management Network. ma1: the first Management appliance in the trio. This is the Management appliance that was installed by the CloudSystem Management Appliance Installer. ma2: the second Management appliance in the trio. ma3: the third Management appliance in the trio. You should always access a Management appliance through the first Management appliance in the trio: Accessing the Management appliances 1. Using cloudadmin credentials, SSH in to ma1. 2. From ma1, SSH to the other Management appliance you want to access. IMPORTANT: When performing maintenance on a virtual machine hosting the Management appliances, make sure that ma1 is always the last node shut down and the first node restarted. For more information, see Shut down and restart CloudSystem appliances (page 52). Managing the primary DNS server on the Management appliance The ma1 appliance hosts the primary DNS server for the Cloud Management Network. You cannot update the DNS server from any other Management appliance. Always update the DNS server from the ma1 appliance. Viewing Management appliance details through the Operations Console The Appliances screen displays information about the CloudSystem appliances. Appliance name Role of the appliance within CloudSystem Status of the appliance Appliance resources and their status, including number of CPUs, amount of memory, and network interfaces From the Appliances screen, you can perform the following actions: Update CloudSystem appliances downloads the latest software versions for the CloudSystem appliances. See Updating CloudSystem appliances (page 80). Install Enterprise installs the Enterprise appliance, if you did not choose to install it during First-Time Installation. See Installing the Enterprise appliance after First-Time Installation (page 90). Disabling ESXi DRS anti-affinity rules and disabling DRS on CloudSystem appliances By default, CloudSystem virtual appliances in an ESXi management host are configured with vsphere Distributed Resource Scheduler (DRS) anti-affinity rules. An anti-affinity rule specifies that the members of a selected virtual machine DRS group cannot run on the members of a specific host DRS group. This means that each node in a CloudSystem appliance trio is hosted on a different host in the management cluster. Disabling ESXi DRS anti-affinity rules and disabling DRS on CloudSystem appliances 79

80 If your cluster has three nodes and one host in the cluster goes down, the anti-affinity rules prevent the VMware HA feature from powering on the appliances on another host. In that case, disable anti-affinity rules and disable DRS as shown below, then follow the steps in Restart CloudSystem appliances and services (page 58). Procedure 33 Optional: Disable DRS anti-affinity rules If your cluster has three nodes or fewer and you want ESXi to power on your appliances on other nodes in the cluster in case of failure, disable the anti-affinity rules. 1. In the VMware vsphere web client, right click on the management cluster and select Cluster properties. 2. Select vsphere DRS, then Rules. 3. Uncheck EnterpriseControllerRule, CloudControllerRule, ManagementControllerRule, and MonascaControllerRule. Procedure 34 Optional: Disable DRS management of appliance VMs If you want to manually manage CloudSystem appliance VMs, which means that they are not load balanced across hosts, disable DRS. 1. In the VMware vsphere web client, right click on the management cluster and select Cluster properties. 2. Select vsphere DRS, then Virtual Machine Options. 3. For each appliance VM, set the Automation Level to Disabled. Viewing appliances You can view details about an appliance from the Appliances screen. Procedure 35 View details about an appliance 1. Select the row of the appliance you want to view and click View Details from the Actions menu. 2. Alternatively, you can click Expand to view the appliance details on the main Appliances screen. The View Appliance screen displays detailed information about the CloudSystem appliances. Host name Role of the appliance within CloudSystem Status of the appliance Appliance resources and their status, including number of CPUs, amount of memory, and network interfaces Updating CloudSystem appliances Use this procedure to install updated software for the CloudSystem appliances and compute nodes. A patch is a set of files and scripts that enhance functionality or fix issues found in a previous release. The update process uses a package-based update technique, where the packages that give a particular appliance or compute node its features are replaced with newer versions or updated files. This includes the operating system and the services that make a node type unique. During the process of updating, the services or features may be unavailable. When an appliance is successfully updated, the appliance will be using the newer versions of packages and files with all the new services, functionality, and fixes included. 80 Manage the Management appliance trio

81 One large update image file (*.csu, which can be renamed to *.zip) updates one or more of these appliances. The scope of the patch determines the content packaged in the.csu file. Only the packages and files required for the update are included in the.csu file. Management appliance trio Cloud controller trio Enterprise appliance trio Monitoring appliance trio Update appliance SDN controller Scale out Swift nodes Compute nodes The Update appliance An Update appliance is deployed during first-time installation. The Update appliance controls the distribution of files to the other appliances and compute nodes. CloudSystem identifies the Update appliance by the internal name assigned on the Cloud Management Network. For the Update appliance, this is ua1. If you need to log on to the Update appliance for maintenance or management tasks, always access it through the first Management appliance (ma1). Procedure 36 Accessing the Update appliance 1. Using cloudadmin credentials, SSH in to ma1. 2. From ma1, SSH to ua1. To see update activity in text format, open a browser on your local computer (or staging server) and enter the IP address of the Update appliance on the Data Center Management Network. You can find this IP address on the Operations Console Appliances screen before the update begins. Downloading, uploading, and installing the update file IMPORTANT: When the update begins, some or all services on CloudSystem appliances may be stopped and started. Although CloudSystem services stop and restart, the physical systems hosting the compute nodes are not affected. The appliances running OpenStack and management services may be unavailable during an update, except for the update service itself and its UI. A smaller patch may have minimal impact on the appliances and therefore most appliances will be available during an update. This depends on the nature of the patch content. Process overview 1. Download the update file to your local computer (page 82) 2. Upload and install the update file (page 82) Prerequisites You have performed a backup of the CloudSystem appliances. See Backing up the CloudSystem appliances (page 35). You have added a route to the Data Center Management Network from your local computer and the local computer has access to the Internet. You have noted the IP address of the Update appliance. From the Appliances screen, select the row of the Update appliance, click the Action menu ( ) and select View Details. Downloading, uploading, and installing the update file 81
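As a hedged illustration of the routing prerequisite above, the following adds a temporary route to the Data Center Management Network on a Linux workstation. The subnet and gateway shown are placeholders only; use the addresses for your own environment, and note that a route added this way does not persist across reboots.
# Replace 192.0.2.0/24 with your Data Center Management Network subnet and
# 198.51.100.1 with the gateway reachable from this workstation
sudo ip route add 192.0.2.0/24 via 198.51.100.1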

82 Procedure 37 Download the update file to your local computer The time required for the download depends on the content delivered in the update file and the speed of your network connection. 1. From the local computer that is connected to the Internet and to the Data Center Management Network, open a browser and navigate to the HP Helion Download Network at helion.hpwsportal.com/catalog.html. 2. Download the CloudSystem CSU file and signature file to your local computer. 3. From your local computer, open a browser and connect to the Operations Console. Enter the IP address of the Management appliance. For example, 4. Log in to the Operations Console using the user name and password you set during First-Time Installation. Procedure 38 Upload and install the update file 1. In the Operations Console, navigate to the Appliances screen. 2. Select Update CloudSystem from the drop down list in the Appliances screen. 3. Move the CSU and signature file that you downloaded to your local computer in step 2 above to the management appliance in one of the following ways: Drag the CSU and signature files from a folder on your local computer and drop them in the box on the Update CloudSystem screen. Click Browse, browse to the CSU and signature files, and upload them. 4. Click Install. Click Cancel to continue without installing the update. If this is the first time the update file is being uploaded, the Management appliance validates the file. If the file is invalid, or if there is insufficient disk space, the appliance deletes the file and displays errors on the Appliances screen. Depending on the components in the update, CloudSystem appliances might automatically restart when the update is complete. 5. Navigate to the Appliances screen from the main menu to check appliance statuses after the update. Procedure 39 View the status of the update If the Operations Console is unavailable while the Management appliance is updated, you can view status of the update from the Update appliance. 1. To see update activity in text format, open a browser on your local computer and enter the IP address of the Update appliance on the Data Center Management Network. Obtain the IP address from the details of the Update appliance on the Operations Console Appliances screen before you begin the update. Example: 2. Click the help icon in the top right corner of the screen to see the updated version number of CloudSystem. 3. When the update is complete, close the browser session. Procedure 40 Retry the update from the command line If the patch update fails and the Operations Console is unavailable, you can retry the update by executing the following command from the Update appliance. 1. SSH to the Update appliance and log in as root. 2. Go to the directory /opt/stack/cloudsystem-update/playbooks. 3. Execute the command: ansible-playbook cloud_update.yml vvvv 82 Manage the Management appliance trio
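If you want to keep a record of the retry in Procedure 40, the playbook output can be captured to a file. This sketch assumes the same working directory named in the procedure; the log path is illustrative, and the verbosity option is conventionally written with a leading hyphen (-vvvv).
cd /opt/stack/cloudsystem-update/playbooks
ansible-playbook cloud_update.yml -vvvv 2>&1 | tee /tmp/cloud_update_retry.log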

83 13 Manage the Cloud controller trio CloudSystem identifies the Cloud controllers in the trio by the internal name assigned on the Cloud Management Network. cmc: the first Cloud controller in the trio. cc1: the second Cloud controller in the trio. cc2: the third Cloud controller in the trio. You should always access a Cloud controller through the first Management appliance (ma1). Accessing the Cloud controllers 1. Using cloudadmin credentials, SSH in to ma1. 2. From ma1, SSH to the Cloud controller you want to access. IMPORTANT: When performing maintenance on a virtual machine hosting the Cloud controllers, make sure that cmc is always the last node shut down and the first node restarted. For more information, see Shut down and restart CloudSystem appliances (page 52). Configure OpenLDAP or Active Directory for OpenStack user portal authentication (Keystone) From the Security panel, you can configure an external authentication directory service to authenticate users logging in to the OpenStack user portal instead of maintaining individual local login accounts. An example of an authentication directory service is a corporate directory that uses LDAP (Lightweight Directory Access Protocol). CloudSystem supports the Microsoft Active Directory and OpenLDAP directory services. NOTE: If you want to use LDAP over SSL, do not use the Operations Console to configure LDAP. Instead, follow the steps in Configuring secure OpenLDAP and Active Directory (page 213). Process overview 1. Check the connection to the directory service 2. Add Helion OpenStack service users to the directory service 3. Configure security settings to add an Active Directory or OpenLDAP directory service NOTE: The CloudSystem Operations Console contains a local directory. You must log in to the Operations Console as a local administrator user. The OpenStack user portal can be configured for local logins (the default) or directory service logins, but not both. Use the Security Settings screen in the Operations Console to configure a directory server and service for the OpenStack user portal. After you configure the OpenStack user portal for directory service logins, local logins are disabled. If the AD or OpenLDAP server is unreachable, users cannot log in to the OpenStack user portal. The HP CSA Cloud Service Management Console and Marketplace Portal can be configured for directory service logins using the same OpenLDAP or Active Directory service. Specify the details by selecting LDAP on the Cloud Service Management Console organization's navigation frame. Checking the connection to the directory service An Active Directory or OpenLDAP server must be accessible from the Management appliance (ma1) and Cloud controllers. The Management appliance validates LDAP settings before applying the settings to the system. LDAP settings are applied only to the Cloud controllers. Configure OpenLDAP or Active Directory for OpenStack user portal authentication (Keystone) 83

84 1. Log in to the Management appliance (ma1) as cloudadmin and enter the password you set during First-Time Installation.
2. Test the connection to the directory server. In the following example, replace <directory-server> with the IP address or host name of your LDAP or AD server.
curl ldap://<directory-server>
If there is a response from the server, you will see output similar to:
DN:
objectclass: top
objectclass: OpenLDAProotDSE
3. If the host name does not resolve, then add the host name to the /etc/hosts file and repeat step 2.
4. SSH to the Cloud management controller (cmc) and repeat steps 2 and 3.
Next step
Adding OpenStack service users and internal users to the directory service (page 84)
Adding OpenStack service users and internal users to the directory service
When you configure the OpenStack user portal for directory service authentication, local Keystone users are no longer available. For OpenStack services to work properly, OpenStack service users, such as nova and cinder, and CloudSystem internal users must be added to the directory service before you configure the directory service in the Operations Console.
A password utility called password-config is provided on the Management appliance. This utility retrieves the password of all OpenStack service users. You must assign the passwords you retrieved to the corresponding users in the directory service.
IMPORTANT: If you are using Active Directory, you must change the password policy for the OpenStack service users and internal users to disable password complexity requirements and to disable password expirations. The OpenStack service user account passwords do not contain special characters and must not expire. See the Active Directory Services Fine-Grained Password and Account Lockout Policy at Microsoft TechNet Library. Do not change the user name or password of the OpenStack service users or internal users. Doing so will cause features in CloudSystem to stop working.
84 Manage the Cloud controller trio

85 Procedure 41 Add Helion OpenStack service users and internal users to the directory service
1. Obtain the passwords for OpenStack service users and internal users by entering the following commands on the Management appliance.
a. Configure access to Keystone from ma1. (Change the values to those of your environment.)
export OS_AUTH_URL="<keystone-auth-URL>"
export OS_PASSWORD="<password-set-during-first-time installation>"
export OS_REGION_NAME="RegionOne"
export OS_TENANT_NAME="demo"
export OS_USERNAME="admin"
b. Get a token: keystone token-get
c. Paste the Keystone token ID into the password utility:
/usr/local/bin/password-config -t <token_id> -g -f output.json
The passwords returned in the output file will be similar to the following:
"opsconsole": "062880fe03ea30e19de fab1ed464d14",
"admin": "adminpwd",
"eve": "581cdd0078f1dc94ee6b6053c2bea9425aebc58e",
"nova": "4d1e22dcff46eab0dc89b7ec01beffa3676d3cd8",
"heat": " cdb21f3bfd5b83eb573065ee6f15",
"cinder": "84ff91faba0fd5342cf9b c8d348387",
"glance": "99bd3325f61d6def9cd41ce8dd e64c",
"enterpriseinternal": "010e2f571bcfc081f498ea967f41c500",
"ccinternaladmin": "74fbd3a0ed1a62e19660ae6cbdf02817f15ead78",
"neutron": "0a22e3c1a949e3d00d41734f7f4f4414b0556e93"
NOTE: The password-config script may display the opsconsole user as ops-console. The correct user name is opsconsole. Be sure to enter the user name into the directory service without a hyphen.
d. Save the password for the opsconsole user, which you may want to enter in Configure security settings to add an authentication directory server and service (page 87).
2. Add the following OpenStack service users and internal users with the passwords obtained in step 1 to your OpenLDAP or Active Directory service: opsconsole, admin, eve, nova, heat, cinder, glance, enterpriseinternal, ccinternaladmin, neutron.
3. In OpenLDAP or Active Directory, enter a value for the desired user name attribute (samaccountname, mail, cn, or uid) for each service user. The Value you enter for the user name attribute must be the service or internal name exactly as shown in step 2. For example, if you want to allow OpenLDAP or AD to authenticate users using their email address, add the service user name as a value for the mail attribute for each OpenStack service user. For service and internal users, Value is the user name (for example, ccinternaladmin), not an email address. For regular users, Value is an email address. The following figure shows the mail attribute for ccinternaladmin.
Configure OpenLDAP or Active Directory for OpenStack user portal authentication (Keystone) 85

86 Figure 11 Adding the user name attribute for OpenStack service and internal users 4. Important: Verify that you can log in to the OpenStack user portal with each user name and password you set in step 2. This step confirms that the Keystone and directory service passwords match. You can log in using these passwords before or after directory service authentication is enabled. Next step Configuring security settings to add an Active Directory or OpenLDAP directory service (page 86) Configuring security settings to add an Active Directory or OpenLDAP directory service Use the Operations Console System Summary screen to configure the OpenStack Keystone service on the Cloud controller to use an external authentication directory service (also called an enterprise directory or authentication login domain). The directory service authenticates users logging in to OpenStack user portal instead of maintaining individual local login accounts. (You must log in to the CloudSystem Operations Console as a local administrator user.) A directory service contains a set of entries that can represent users, groups, and other types of objects. Each entry has a unique identifier: its Distinguished Name (DN). When you specify an authentication directory service, you provide a search criterion so that CloudSystem can find the user by its DN. To authenticate a user, CloudSystem sends the authentication request to the configured OpenLDAP or Active Directory service. 86 Manage the Cloud controller trio

87 IMPORTANT: After you configure a directory server and service and click Save Changes, you cannot remove or unconfigure the server or service, which means you cannot change the authentication method back to local logins. Be sure you do not want to authenticate OpenStack user portal users using a local database before you save your changes. You can change the details of your directory server or service later if, for example, the service is moved or changes in the organizational structure require modification of the search criteria. Use care when changing the criteria, because it may result in a loss of access to the OpenStack user portal. Procedure 42 Configure security settings to add an authentication directory server and service Prerequisites An OpenLDAP or Active Directory service is configured and is accessible from the Management appliance and Cloud controllers. See Checking the connection to the directory service (page 83). OpenStack service users are added to the directory service. See Adding OpenStack service users and internal users to the directory service (page 84). 1. From the Operations Console main menu, select System Summary. 2. In the Security pane, select Update Security Settings. 3. Do not enter a value in the Certificate box. If you want to use LDAP over SSL, do not use the Operations Console to configure LDAP. Instead, follow the steps in Configuring secure OpenLDAP and Active Directory (page 213). 4. Enter information in the Authentication Credentials section. a. For the User Name, enter the full distinguished name of a user whose password will not change. Example: In the User Name box, enter cn=opsconsole,cn=users,dc=example,dc=com. NOTE: HP recommends using the opsconsole account for authentication credentials instead of an administrator account. The password for this account must not change, otherwise the Cloud controller s Keystone cannot search LDAP and authentication of OpenStack user portal users will not succeed. b. Enter the Password for the account specified in step 4a. If you specified the opsconsole account, the password was obtained in Step 1 of Add Helion OpenStack service users and internal users to the directory service (page 85). Configure OpenLDAP or Active Directory for OpenStack user portal authentication (Keystone) 87

88 5. Enter information in the Directory Configuration section. a. In Directory, enter the domain of the directory service. For example, dc=example,dc=com b. From the drop down menu, select a directory type: Active Directory or OpenLDAP. c. Enter the search context in two parts: User Name Attribute and User Tree DN. User Name Attribute Enter an attribute that uniquely identifies the user in the LDAP server.. Supported user name attributes are cn, mail, uid, and samaccountname. User Tree DN Enter the organizational unit that specifies where the Cloud controller should search for users. Browsing the LDAP server using an open source client such as JXplorer can help you determine the search context. Example Active Directory search context User Name Attribute samaccountname User Tree DN cn=users,dc=example,dc=com 6. Click Test Authentication and Directory Config to test the connection to the directory service. If the directory service is configured correctly, you will see three green check marks. Correct any errors if necessary, then test the configuration again. The following are verified before the LDAP configuration continues. Authentication succeeds to the directory service, using the user name and password credentials. All the required OpenStack service users are found in the user tree. Keystone has the required tenants (demo and service). Keystone has the required roles (member and admin). 7. Click Save Changes. This operation cannot be undone. Be sure that you want to use directory service authentication before saving your changes. When the operation completes, you will be returned to the login screen of the Operations Console. 8. Go to the System: Integrated Tools screen, and click OpenStack user portal. 9. Log in to the OpenStack user portal as admin using the password set during First-Time Installation. The admin user (and other service and internal users) is automatically assigned admin and member roles in the service and demo projects when LDAP is configured. 10. For each user that will log in to the OpenStack user portal, assign the user a role in a project. Do one of the following: Create a project and assign users to that new project. Add users to the demo project. 88 Manage the Cloud controller trio
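Step 10 can also be performed from the command line on the Cloud controller, using the role and project names referenced above. This is a sketch only; it assumes the keystone CLI and admin credentials are available in the shell environment, and that the member role and demo project exist as described earlier in this chapter.
# Assign an existing directory-service user the member role in the demo project
keystone user-role-add --user <user-name> --role member --tenant demo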

89 Users can now log in to the OpenStack user portal. On the login screen, the user:
Enters their user identifier, for example, Common Name or email address, depending on the user attribute you specify on the Security Settings screen.
Enters their password.
(Optional) Manage OpenStack compute (Nova) logs
Some ESXi compute logs grow over time and consume space on the Cloud controller appliances, which can cause compute services to perform more slowly. Log file names are of the pattern nova-compute-<vcenter name + cluster name>.log. The log process does not automatically rotate log files with dynamic names. You can manage the log size by compressing the logs on each Cloud controller appliance. If a large number of instances are created in your cloud environment, check log file sizes every two weeks and compress any log that has grown to several GB in size.
Procedure 43 Compressing ESXi compute logs on the Cloud controller appliances
1. Using cloudadmin credentials, log in to ma1.
2. From ma1, SSH to cmc.
3. Switch to the root user: sudo su
4. Change directory to /var/log/nova.
5. Look for large log files with the naming convention nova-compute-<vcenter-name-cluster-name>.
6. Compress the log file: gzip nova-compute-<vcenter-name-cluster-name>
7. Perform steps one through six on cc1.
8. Perform steps one through six on cc2.
(Optional) Manage OpenStack compute (Nova) logs 89
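Steps 4 through 6 can be combined into a single pass with find, as sketched below. The 1 GB threshold is an example only; adjust it to your own policy, and run the same command on cmc, cc1, and cc2.
# Find and compress dynamically named Nova compute logs larger than 1 GB
sudo find /var/log/nova -name 'nova-compute-*' ! -name '*.gz' -size +1G -exec gzip {} \;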

90 14 Manage the Enterprise appliance trio CloudSystem identifies the Enterprise appliances in the trio by the internal name assigned on the Cloud Management Network. ea1: the first Enterprise appliance in the trio. ea2: the second Enterprise appliance in the trio. ea3: the third Enterprise appliance in the trio. You should always access an Enterprise appliance through the first Management appliance in the trio: Accessing the Enterprise appliances 1. Using cloudadmin credentials, SSH in to ma1. 2. From ma1, SSH to the Enterprise appliance you want to access. IMPORTANT: When performing maintenance on a virtual machine hosting the Enterprise appliances, make sure that ea1 is always the last node shut down and the first node restarted. For more information, see Shut down and restart CloudSystem appliances (page 52). CloudSystem Enterprise is a separate virtual appliance that runs HP CSA. However, all appliance management tasks are performed through the CloudSystem Operations Console and all OpenStack functionality included in CloudSystem is available to Enterprise. HP CSA maps user roles through membership in LDAP groups configured through the LDAP service for the organization. HP CSA does not directly manage the creation or maintenance of individual users. As the HP CSA administrator creates organizations within HP CSA, the corresponding LDAP group membership must exist or be created. When users log in, LDAP authenticates login credentials and verifies the appropriate role through group membership. LDAP directories must be pre-configured for the access process to function correctly in HP CSA. See also Logging in and changing the default HP CSA and Marketplace Portal password (page 92). Installing the Enterprise appliance after First-Time Installation During First-Time Installation, CloudSystem Enterprise is installed by default. If you do not plan to use HP CSA or HP Operations Orchestration, you can disable installation of the Enterprise appliance in the First-Time Installer by selecting Off for Deploy Enterprise Appliance Trio. If you decide at a later time to install Enterprise, use the following procedures. Procedure 44 Installing the Enterprise appliance driver Before installing the Enterprise virtual appliances, the Enterprise appliance driver must be installed. The driver is available from the MySQL-ConnectorJ.zip in the HP Software Depot at go/cloudsystem/download. After downloading the driver, install it on the Management appliance (ma1) in the boot directory for each Enterprise appliance. This action must be performed by the customer, even if a technical services agent is engaged to assist with the installation. 1. Download the MySQL Connector/J (JDBC driver) package from the HP Software Depot. 2. Using cloudadmin credentials, SSH in to the Management appliance (ma1). 3. Switch to the root user: sudo -i 4. Change the directory to /boot/cloudsystem/cs-enterprise/. 5. Copy the MySQL JDBC driver to the directory. 90 Manage the Enterprise appliance trio

91 6. Update the MySQL JDBC driver file permissions: chown cloudsystem:cloudsystem libmysql-java_ _all.deb chmod 644 libmysql-java_ _all.deb 7. Change the directory to /boot/cloudsystem/cs-enterprise1/. 8. Copy the MySQL JDBC driver to the directory. 9. Update the MySQL JDBC driver file permissions: chown cloudsystem:cloudsystem libmysql-java_ _all.deb chmod 644 libmysql-java_ _all.deb 10. Change the directory to /boot/cloudsystem/cs-enterprise2/. 11. Copy the MySQL JDBC driver to the directory. 12. Update the MySQL JDBC driver file permissions: chown cloudsystem:cloudsystem libmysql-java_ _all.deb chmod 644 libmysql-java_ _all.deb Procedure 45 Install the Enterprise appliance after FTI 1. From the Operations Console Appliances screen, click Install Enterprise. This option is shown only if you did not install Enterprise during First-Time Installation. The Install Enterprise appliance screen is displayed. 2. Enter the data requested on the screen. DCM VIP: IP address registered in the DNS server with the FQDN of the Enterprise appliance on the Data Center Management Network. This is not the native IP address. CAN VIP: IP address registered in the DNS server with the FQDN of the Enterprise appliance on the Consumer Access Network. This is not the native IP address. Password: Password of the Operations Orchestration Administrator account you set during First-Time Installation. Image Name: Enterprise appliance image name that matches the OVA template name you created in vcenter. 3. Click Complete Installation. To exit the action without installing, click Cancel. 4. Verify that the Enterprise appliance was installed by viewing the Appliances overview screen. The Enterprise Appliance trio will be listed. Changing the Enterprise appliance password when Enterprise is deployed after First-Time Installation If you deploy the Enterprise appliance from the Appliances screen instead of during First-Time Installation, the password for the cloudadmin user on the Enterprise appliance is set to a default value. It is not set to the password you specified for cloudadmin in the First-Time Installer user interface. After the Enterprise appliance trio is installed, execute the following curl commands to set a new password for the cloudadmin account. Procedure 46 Set the password on the Enterprise appliance when the appliance was installed after First-Time Installation 1. Log in to the Management appliance (ma1) using the cloudadmin credentials you set during First-Time Installation. 2. Configure access to Keystone and obtain a token. export OS_USERNAME=admin export OS_TENANT_NAME=demo export OS_PASSWORD=<password-set-during-first-time installation> Changing the Enterprise appliance password when Enterprise is deployed after First-Time Installation 91

92 export OS_AUTH_URL= export OS_REGION_NAME=RegionOne keystone token-get 3. Set the password on the first Enterprise appliance (ea1). In steps 3 through 5, <token-id> is the token obtained in step 1, and <new-password> is the new password you want to set for cloudadmin on the Enterprise appliance trio. curl -X PUT -H "X-Auth-Token: <token ID>" -H "Content-Type: application/json" -d '{"sys_creds": {"username": "cloudadmin", "password": "cloudadmin", "newpassword": "<new-password>"}}' 4. Set the password on the second Enterprise appliance (ea2). (This command is the same as step 3 except that ea1 is changed to ea2.) curl -X PUT -H "X-Auth-Token: <token ID>" -H "Content-Type: application/json" -d '{"sys_creds": {"username": "cloudadmin", "password": "cloudadmin", "newpassword": "<new-password>"}}' 5. Set the password on the third Enterprise appliance (ea3). (This command is the same as step 3 except that ea1 is changed to ea3.) curl -X PUT -H "X-Auth-Token: <token ID>" -H "Content-Type: application/json" -d '{"sys_creds": {"username": "cloudadmin", "password": "cloudadmin", "newpassword": "<new-password>"}}' Logging in and changing the default HP CSA and Marketplace Portal password Log in to Enterprise using the default credentials in the following table. Cloud Service Management Console User name: admin Password: cloud Marketplace Portal User name: consumer Password: cloud Use the following procedures to change the password of the default user names used to log in to the Cloud Service Management Console and the Marketplace Portal. Prerequisites Enterprise is installed. See Installing the Enterprise appliance after First-Time Installation (page 90). You have access to the Enterprise appliance console using the hypervisor console. Procedure 47 Changing the default HP CSA admin password 1. Log in to Management appliance (ma1), then SSH to the Enterprise appliance (ea1). 2. Go to the folder /usr/local/hp/csa/tools/passwordutil. Use sudo su if you need elevated privileges access. 3. Run the following command and specify a new password in (encrypted_new_password). NOTE: Encrypted passwords must be enclosed in parentheses. /usr/local/hp/csa/openjre/bin/java -jar passwordutil.jar encrypt (encrypted_new_password) 4. Perform the following steps on each node of the Enterprise appliance trio (ea1, ea2, ea3). a. Edit the file /usr/local/hp/csa/jboss-as/standalone/deployments/csa.war /WEB-INF/classes/csa.properties. b. Search for the following line. You only need to change the securityadminpassword line. securityadminpassword = ENC(3oKr9eADA7bE53Zk2t9wIA==) c. Replace the password to the right of the equal sign after = ENC with the (encrypted_new_password) from step Manage the Enterprise appliance trio

93 d. Save the csa.properties file. 5. Go to the folder /usr/local/hp/csa/tools/passwordutil. 6. Run the following command and specify a new password in (encrypted_new_password). /usr/local/hp/csa/openjre/bin/java -jar passwordutil.jar encrypt encrypted_new_pasword,role_rest,consumer_service_administrator,service_busines_manager,service_designer,csa_admin,resource_suply_manager,service_operations_manager,enabled 7. Perform the following steps on each node of the Enterprise appliance trio (ea1, ea2, ea3). a. Edit the file /usr/local/hp/csa/jboss-as/standalone/deployments/ idm-service.war/web-inf/classes/csa-provider-users.properties. b. Search for the following line. You only need to change the admin line. admin=enc(r3a7qfeby8rckhciaefjaphsvysfzjwh/ptg43qn8sw=) c. Replace the password to the right of the equal sign after =ENC with the (encrypted_new_password) from step 6. d. Save the csa-provider-users.properties file. e. Restart the HP CSA service by entering: service csa restart Procedure 48 Changing the default Marketplace Portal consumer password 1. Log in to Management appliance (ma1), then SSH to the Enterprise appliance (ea1). 2. Go to the folder /usr/local/hp/csa/scripts. 3. Run the following command and specify a new password in (encrypted_new_password). NOTE: Encrypted passwords must be enclosed in parentheses. /usr/local/hp/csa/openjre/ bin/java -jar passwordutil.jar encrypt (encrypted_new_password),service_consumer,role_rest,enabled 4. Perform the following steps on each node of the Enterprise appliance trio (ea1, ea2, ea3). a. Edit the file /usr/local/hp/csa/jboss-as final/standalone/deployments/idm-service.war /WEB-INF/classes/csa-consumer-users.properties. b. Search for a line similar to: consumer=enc(uutpxlummjhjofhyvm47sl3jsbubs8/8lp6lw6bht80+pfp6sv1u0q==) c. Replace the password to the right of the equal sign after =ENC with the (encrypted_new_password) from step 3. d. Save the csa-consumer-users.properties file. e. Restart the HP CSA services by entering: service mpp restart Logging in and changing the default HP CSA and Marketplace Portal password 93
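Returning to Procedure 44 (page 90), steps 4 through 12 repeat the same copy, chown, and chmod sequence for three boot directories. The loop below is an illustrative condensation of those steps, run as root on ma1; it is a sketch only. DRIVER is a placeholder for the libmysql-java .deb file name you actually downloaded (the exact version is not listed in this guide), and /path/to is a placeholder for wherever you saved the download.

    # Placeholder: substitute the real file name of the downloaded MySQL Connector/J package.
    DRIVER="libmysql-java_<version>_all.deb"
    for dir in /boot/cloudsystem/cs-enterprise /boot/cloudsystem/cs-enterprise1 /boot/cloudsystem/cs-enterprise2; do
        cp "/path/to/${DRIVER}" "${dir}/"
        chown cloudsystem:cloudsystem "${dir}/${DRIVER}"
        chmod 644 "${dir}/${DRIVER}"
    done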

Part III Resource configuration in CloudSystem
Use this part of the Administrator Guide to learn when and how to use the CloudSystem Operations Console to configure, monitor, and manage virtual compute resources. The chapters are organized primarily by compute resource category. For the maximum number of configured resources supported in CloudSystem Foundation (based on HP Helion OpenStack) and CloudSystem Enterprise, see the HP Helion CloudSystem 9.0 Support Matrix at Enterprise Information Library.

15 Network configuration
CloudSystem is built on Helion OpenStack Networking technology. The network administrator creates the underlying network infrastructure before you install the CloudSystem virtual appliances. See the HP Helion CloudSystem 9.0 Network Planning Guide at Enterprise Information Library for detailed information.
After CloudSystem is installed, you can create the following types of networks that are used to manage instances:
Tenant (private) networks are restricted and can be accessed only by virtual machine instances assigned to the network. Subnets must be defined in the OpenStack user portal before using this network. See Tenant networks (page 95).
Provider Networks (optional) are shared networks in the data center on which users can provision any number of virtual machine instances. See Provider networks (page 96).
The External Network allows you to route virtual machine instances on Tenant networks out from the CloudSystem private cloud to the data center, the corporate intranet, or the Internet. The External Network must be created and subnets must be defined in the OpenStack user portal before using this network.
The following table lists CloudSystem network tasks according to user roles and the interfaces used to perform them.
Task | User role | User interface | Additional information
Create pools of VLAN IDs and segmentation ranges that can be assigned to Tenant Networks | Infrastructure administrator | Operations Console | Tenant networks (page 95)
Create Provider Networks | Infrastructure administrator | Operations Console, OpenStack user portal | Provider networks (page 96)
Create External Network; create External Network subnet | Infrastructure administrator | OpenStack user portal | External Network (page 100)
Attach Tenant networks to instances | Cloud user | OpenStack user portal | OpenStack End User Guide
Create routers to connect networks | Cloud user | OpenStack user portal | OpenStack End User Guide and Creating an External Network router (page 101)
Manage IP addresses using either dedicated static IPs or DHCP | Cloud administrator | OpenStack user portal | OpenStack End User Guide
Access instances that are on Tenant networks from outside of the cloud using floating IP addresses | Cloud user | OpenStack user portal | OpenStack End User Guide and Assigning floating IP addresses to instances (page 102)
Tenant networks
Using the CloudSystem Operations Console, you can select which VLANs are available for provisioning to Tenant Networks. After you add a Tenant Network VLAN, you can also use the

96 Operations Console to delete VLANs, removing them from the pool of VLANs available for Tenant Network assignment. The Operations Console Dashboard allows you to track the number of Tenant Network IP addresses that are assigned to instances. The Tenant Network Allocation box shows the number of VLAN IDs that are available for allocation to Tenant Networks. The Tenant Network Utilization box shows the percent of VLAN IDs that are already assigned to Tenant Networks. End users use the OpenStack user portal to create new Tenant Networks mapped to available VLANs, and to manage their Tenant Network topologies. When a user configures a Tenant Network in the OpenStack user portal, the OpenStack Networking service assigns a VLAN ID from the VLANs configured for that project. The user does not explicitly specify the VLAN ID for a Tenant Network. Add segmentation ID ranges (page 96) Delete segmentation ID ranges (page 96) Add segmentation ID ranges Use this procedure to add segmentation ID ranges to the pool of VLANs available for Tenant Network assignments. End users can then use the OpenStack user portal to create Tenant Networks from these assignable VLAN IDs. Prerequisites A pool of VLANs is available and the VLANs are not yet allocated. Procedure 49 Adding a segmentation ID range for use in Tenant Networks 1. From the Operations Console main menu, select Tenant Networks. 2. Click Add Segmentation ID Range. 3. Enter the segmentation ID, which is a unique VLAN ID assigned to the Cloud Data Trunk. Do not assign a VLAN ID that is already assigned to a Provider Network. To enter a single VLAN ID, add it in the format n-n. 4. Click Add. Example: To exit without adding segmentation ranges, click Cancel. Delete segmentation ID ranges Use this procedure to delete segmentation ID ranges from the pool of VLANs available for Tenant Network assignments. Deleting a VLAN does not impact instances already using a Tenant Network assigned to that VLAN. The delete action removes the VLAN from future use. Procedure 50 Deleting segmentation ID ranges 1. From the Operations Console main menu, select Tenant Networks. 2. Select the row of the range you want to delete. Select or clear checked resources in the table by clicking the down arrow in the selection icon. 3. Select Delete Segmentation Range. 4. Click Confirm Deletion. To exit the action without deleting the range, click Cancel. 5. Verify that the segmentation range was removed from the Tenant Networks overview screen. Provider networks Provider networks are created by the administrator for cloud tenants. They can be mapped or routed to an existing physical network in the data center, therefore cloud instances can communicate 96 Network configuration

97 with legacy datacenter resources. Provider networks can be shared among tenants or assigned to a specific tenant. If your cloud network type is VLAN, the Provider network could be part of the Cloud Data Trunk, or routed with the Cloud Data Trunk. If your cloud network type is VxLAN, the Provider network requires an SDN controller and at least one physical HP 5930 switch to bridge communicate with legacy data center networks. See the HP Helion CloudSystem 9.0 Network Planning Guide at Enterprise Information Library for information about managing provider networks in a VxLAN environment. Add a Provider network (page 97) Delete a Provider network (page 98) Manage Provider network subnets (page 99) After you add a Provider Network in the Operations Console, you can use the Operations Console to manage the network. You can also use the OpenStack user portal (Horizon) or the OpenStack Neutron API or CLI to manage the network. NOTE: The OpenStack Networking service assigns a unique identifier (ID) to each Provider Network. The service uses the ID to differentiate each network. Because you can create more than one network with the same name, but with different IDs, you might want to specify a unique name for each Provider Network so that you can easily differentiate between networks. Add a Provider network Adding a Provider Network enables you to communicate between an existing data center network and the cloud. For information about creating a service provider network to use with Helion Development Platform, see Configure the service provider network (page 166). Procedure 51 Adding a Provider Network 1. From the CloudSystem Operations Console main menu, select Provider Networks. 2. Click Add Provider Network. 3. In the Name field, enter a name for the network. 4. In the Project field, enter the name of the project the network will be assigned to in the OpenStack user portal. 5. In the Physical Network field, enter provider. 6. In the Segmentation ID field, enter the VLAN ID of the Cloud Data Trunk. IMPORTANT: The VLAN ID assigned here cannot be the same VLAN ID assigned to a Tenant Network Segmentation ID. This must be a unique VLAN ID. 7. If you want other users to have access to this network, click the Shared box. 8. If you want the network to be available for use immediately after it is created, click the Admin State Up box. 9. Click the Add Subnet box. a. In the Subnet Name field, enter a name for the subnet range. b. In the Network Address field, enter the network address in CIDR format. Your network administrator should have all subnet details, including the network address and gateway IP. The network address cannot overlap any other address range in your network infrastructure. This address must be unique so that the router can find it in the network routing tables. c. From the IP Version drop down box, select the version. Provider networks 97

98 d. In the Gateway IP field, enter the Gateway IP address provided by your network administrator. If you want to disable the gateway, click the Disable Gateway box. e. In the Allocation Pools field, enter the range of IP addresses available for the subnet. Example: , f. In the DNS Name Servers field, enter the IP address of the DNS server that resolves the IP addresses in your subnet. g. In the Host Routes fields, enter a route for each remote network. The Destination is a CIDR range for any external network you need to reach. The Next hop is a gateway on this network that can route to the remote network. 10. Click ADD. 11. Verify that the new provider network is displayed on the Provider Networks screen. Delete a Provider network Use this procedure to delete a Provider Network and its associated subnets. Upon deletion, the network and its associated subnets are no longer available in the cloud. Prerequisites A VM instance or router is not assigned an IP address on the network to be deleted. Procedure 52 Deleting a Provider Network 1. From the CloudSystem Operations Console main menu, select Provider Networks. 2. Click Delete Provider Network. 3. Select the row of the network to be deleted. 4. Click Delete. 5. Click Confirm Deletion. 6. Verify the network deletion by reviewing the fields on the Provider Networks screen. Edit a Provider network Use this procedure to edit a Provider network and add, edit, or remove its subnet(s). 1. From the main menu, select Provider Networks. 2. Select the row of the network to be edited. 3. Select the action dots to the right of the selected row, then click Edit from the Actions menu. 4. Option 1: Update the subnet associated with the Provider network. a. Click Manage Subnets. b. Select the row of the subnet to be edited. c. ClickEdit Subnet. d. Enter the data requested on the screen. Click the e. Click Update Subnet. icon for information about the text to enter. 5. Option 2: Add a new subnet to a provider network. a. Select Edit from the Actions menu. b. Click Manage Subnets. c. ClickAdd Subnet. Click the d. Click Create Subnet. icon for information about the text to enter. 6. Option 3: Remove a subnet from a provider network. 98 Network configuration

99 a. Select Edit from the Actions menu. b. Select the row of the subnet to be removed. c. Click Remove Subnet. d. Click Confirm Deletion. 7. Verify that the network update was successful by reviewing the fields on the Provider Networks screen. Manage Provider network subnets Use this procedure to add, edit, or remove the subnets associated with a Provider network. Procedure 53 Managing subnets 1. On the Provider Networks overview screen, select the row of the network whose subnets you want to manage. 2. Select Actions Manage Subnets. 3. To add a subnet to a Provider network: a. Click Add Subnet b. Enter the data requested on the screen. c. Click Create Subnet. 4. To edit a subnet associated with a provider network: a. ClickEdit Subnet. b. Enter the updated data in the subnet fields. c. Click Update Subnet. 5. To remove a subnet from a Provider network: a. Click Remove Subnet. b. Click Confirm Deletion. 6. From the main menu, select Provider Networks. 7. Select the row of the network to be edited. 8. Select Edit from the Actions menu. 9. Enter the data requested on the screen. 10. Option 1: Update the subnet associated with the Provider network. a. Click Manage Subnets. b. Select the row of the subnet to be edited. c. ClickEdit Subnet. d. Enter the data requested on the screen. Click the e. Click Update Subnet. icon for information about the text to enter. 11. Option 2: Add a new subnet to a Provider network. a. Select Edit from the Actions menu. b. Click Manage Subnets. c. ClickAdd Subnet. Click the d. Click Create Subnet. icon for information about the text to enter. 12. Option 3: Remove a subnet from a Provider network. a. Select Edit from the Actions menu. b. Select the row of the subnet to be removed. c. Click Remove Subnet. d. Click Confirm Deletion. Provider networks 99
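The Operations Console procedures above are the documented way to add and manage Provider Networks. For administrators who already work with the OpenStack Networking CLI, the following sketch shows a roughly equivalent sequence; it is illustrative only. The network name, VLAN ID 210, and all addresses are invented example values.

    # Create a shared Provider Network on VLAN 210 of the Cloud Data Trunk.
    neutron net-create finance-net \
      --provider:network_type vlan \
      --provider:physical_network provider \
      --provider:segmentation_id 210 \
      --shared
    # Add a subnet with the details your network administrator supplies.
    neutron subnet-create finance-net 192.168.210.0/24 \
      --name finance-subnet \
      --gateway 192.168.210.1 \
      --allocation-pool start=192.168.210.50,end=192.168.210.200 \
      --dns-nameserver 192.168.1.10
    # To assign the network to a specific project, add --tenant-id <project-id> to net-create.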

100 External Network The External Network allows you to route virtual machine instances on Tenant networks out from the CloudSystem private cloud to the data center, the corporate intranet, and the internet. NOTE: You must create the External network and subnet(s) using the OpenStack user portal or the Neutron net-create command. One External Network is supported in CloudSystem. Virtual machines are not directly attached to the External Network. Internal Provider and Tenant networks connect directly to virtual machine instances. After installation, you can use the OpenStack user portal to enable use of the External Network for accessing VM instances on cloud networks. You create one or more subnets for the External Network. Cloud users can then create routers to connect the External Network to Tenant networks for their projects. Traffic from the External Network is routed to selected virtual machines inside the cloud using floating IP addresses. Creating the External Network Procedure 54 Creating the External Network 1. Log on to the OpenStack user portal. 2. From the Admin tab, in the System Panel section, select Networks. 3. Click + Create Network. The Create Network screen opens. 4. Complete the Create Network settings. Name Enter a unique name for the network. A maximum of 255 alphanumeric characters is allowed. Project Enter a project name. Provider Network Type Select Flat. Physical Network Enter external. Admin State Select UP. Shared Leave this check box cleared. External Check this check box. Configuring the External Network To configure the External Network for use in routing traffic to selected virtual machines inside the cloud, complete the following procedures: 1. Create the External Network subnet (page 101) 2. Creating a router to connect Tenant Network instances to the External Network subnet (page 101) 3. Assigning floating IP addresses to instances (page 102) Creating the External Network subnet Creating an External Network subnet allows cloud users to access virtual machine instances on Tenant networks. Use this procedure to create a subnet. IMPORTANT: Cloud users should never select the External Network when creating virtual machine instances. Do not delete the External Network. 100 Network configuration

101 Prerequisites External network was assigned a VLAN ID when you ran the First-Time Installer External network was created. See Creating the External Network (page 100) Procedure 55 Create the External Network subnet 1. Log on to the OpenStack user portal. 2. From the Admin tab, in the System Panel section, select Networks. The Network screen opens and displays a list of configured networks. 3. Click the External Network link. External Network details appear on the Network Overview screen. 4. On the right side of the Subnets section, click + Create Subnet. The Create Subnets screen opens with the Subnet tab selected. 5. Complete the Subnet tab settings. Subnet Name Enter a unique name for the subnet. A maximum of 255 alphanumeric characters is allowed. Network Address Enter an IPv4 address in CIDR format specifying the IP address range to use for the subnet. IP Version Leave the default setting at IPv4. Gateway IP Enter the IPv4 address of the router providing access to this subnet. Disable Gateway Leave this check box cleared to allow the router to access networks inside the cloud. 6. Select the Subnet Detail tab and complete these settings: Enable DHCP Click the check box to clear this option, allowing the use of floating IPs for routing traffic. Allocation Pools Enter the IP address ranges to make available for floating IP address assignment on the subnet. The IP address range is comma separated. DNS Name Servers Leave blank Host Routes Leave blank 7. Click Create. Details about the External Network subnet are displayed on the Network Overview screen. Cloud users should now be able to create routers to connect the External Network subnet to Tenant networks for their projects. You can verify that a router can be connected. See Creating a router to connect Tenant Network instances to the External Network subnet (page 101). Creating an External Network router Cloud users can create routers to connect Tenant networks for their projects to the External Network subnet. Use this procedure to verify that a router can be connected. Prerequisites Minimum required privileges: Cloud user An External Network subnet is created. See Create the External Network subnet (page 101). The Tenant Network that you want to connect to the External Network subnet is configured and available for use. Procedure 56 Creating a router to connect Tenant Network instances to the External Network subnet 1. If you are not already logged on to the OpenStack user portal, log on. External Network 101

102 2. From the Project menu, in the Network section, select Routers. The Routers overview screen opens and displays a list of configured routers. 3. Select + Create Router. The Create router screen opens. 4. Enter a name for the router, and then click Create router. Details about the new router are listed on the Routers overview screen. 5. Click Set Gateway next to the new router listing. 6. On the Set Gateway screen, select External Network, and then click Set Gateway. The Routers overview screen reopens. 7. Click the link for the new router to display its details screen. 8. Click + Add Interface. 9. On the Add Interface screen, click the Subnet arrow and select the tenant network you want to connect to the External Network. Leave the IP address blank. 10. Click Add interface. The router details screen reopens and displays details about the new interface. You can now use floating IP addresses to route traffic over the External Network subnet to specific virtual machine instances associated with a CloudSystem project. See Assigning floating IP addresses to instances (page 102). Assigning floating IP addresses to instances You can use floating IP addresses to route traffic over the External Network subnet to specific virtual machine instances associated with a CloudSystem project. Use this procedure to allocate and assign floating IP addresses. Prerequisites Minimum required privileges: Cloud user An External Network subnet is created. See Create the External Network subnet (page 101). A router is connected to the External Network subnet. See Creating a router to connect Tenant Network instances to the External Network subnet (page 101). The Tenant Network that you want to connect to the External Network subnet is configured and available for use. Procedure 57 Assigning floating IP addresses to instances 1. If you are not already logged on to the OpenStack user portal, log on. 2. Allocate IP addresses to a CloudSystem project. a. From the Project menu, select Compute Access & Security. The Security Groups screen opens and displays configured security groups. b. Select the Floating IPs tab. c. Click Allocate IP To Project. The Allocate Floating IP screen opens and displays floating IP information for the project. d. From the Pool list, select External Network, and then click Allocate IP. The Allocate Floating IPs screen reopens and displays the newly allocated floating IP addresses. 3. Associate a floating IP with an instance. a. From the Project menu, in the Network section, select Instances. 102 Network configuration

103 b. Next to the instance to which you want to assign a floating IP, click More, and then select Associate Floating IP. The Manage Floating IP Associations screen opens and displays floating IP information for the project. c. Click the + button under the IP Address field. The Allocate Floating IP screen opens. d. From the Pool list, select External Network, and then click Allocate IP. The Manage Floating IP Associations screen reappears with External Network listed in the IP Address field. e. Click Associate. The Instances screen reopens and displays the External Network floating IP address information associated with the instance. 4. Configure security group rules to enable SSH, ICMP, and other IP protocols on instances accessed using the External Network. a. From the Project menu, select Compute Access & Security. The Security Groups screen opens and displays security groups configured for instances. b. Next to the security group associated with the instance, click + Edit Rules. The Security Group Rules screen opens and displays all rules configured for the instance. c. Click + Add Rule. The Add Rule screen opens. d. Select rules to define which traffic is allowed over the External Network to instances in the security group. e. Click Add. The Security Group Rules screen reappears and displays information about the added rule. Users should now be able to access the instance using the associated floating IP from the External Network. To verify, use SSH on the External Network to reach the instance. External Network 103
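As a companion to Procedures 54 through 57, the sketch below strings the same External Network workflow together with the Neutron and Nova CLIs. It is illustrative only: the network, router, and instance names, the CIDR, and the floating IP are invented examples, and the OpenStack user portal steps above remain the documented path. Exact CLI flags can vary between client versions, so verify them against your installed clients.

    # Create the External Network (flat, physical network "external") and its subnet.
    neutron net-create ext-net --router:external \
      --provider:network_type flat --provider:physical_network external
    neutron subnet-create ext-net 10.10.60.0/24 --name ext-subnet \
      --disable-dhcp --gateway 10.10.60.1 \
      --allocation-pool start=10.10.60.100,end=10.10.60.200
    # Connect a Tenant Network subnet to the External Network through a router.
    neutron router-create ext-router
    neutron router-gateway-set ext-router ext-net
    neutron router-interface-add ext-router tenant-subnet
    # Allocate a floating IP from ext-net and associate it with an instance.
    neutron floatingip-create ext-net
    nova add-floating-ip my-instance 10.10.60.101
    # Allow SSH and ping through the instance's security group.
    nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
    nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0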

104 16 Access and security for instances Security groups are virtual firewalls that control the traffic for instances. When you launch an instance, you associate one or more security groups with the instance. Key pairs allow you to use public-key cryptography to encrypt and decrypt login information. To log in to your instance, you must create a key pair, specify the name of the key pair when you launch the instance, and provide the private key when you connect to the instance. Create a security group Administrators create security groups to define a set of IP filter rules that determine how network traffic flows to an instance. Cloud users can add additional rules to an existing security group to further define the access options for an instance. To create additional rules, go to Compute Access & Security, then find the security group and click Manage Rules. Security groups are project-specific and cannot be shared across projects. Once a security group is associated to an instance, the pathway to communicate with the instance is open, but you still need to configure key pairs. The key pair allows you to SSH into the instance. If a security group is not associated to an instance before it is launched, then you will have very limited access to the instance after it is deployed. You will only be able to access the instance from a VNC console. Prerequisites Minimum required privileges: Administrator Procedure 58 Creating a security group 1. Log in to the OpenStack user portal at 2. Click the Project tab. 3. Select your project from the drop down list at the top of the screen. 4. Click Compute Access & Security from the left menu. 5. Click the Security Groups tab. 6. Click Create Security Group. 7. Enter a name for the security group. Example: SGSSHandPing 8. Enter a description for the security group. 9. Click Create Security Group. 10. Verify that the new security group appears on the Security Group screen. 11. Click the Manage Rules action button to the right of the security group you just created. 12. Click + Add Rule. 13. Fill in the fields that define the rules you want to apply. For example, to create a rule that allows ping traffic, under Rule, select the All ICMP rule. 14. Click Add. The rule is added to the security group you created. Create a key pair Key Pairs allow you to log in to an instance after it is launched. Key pairs are only supported in instances that are based on images containing the cloud-init package. For more information on cloud-init, see Cloud-Init Documentation. You can generate a key pair from the CloudSytem Portal using the following procedure, or you can generate a key pair manually from a Linux or Windows system. 104 Access and security for instances

105 From a Linux system, generate the key pair with the ssh-keygen command ssh-keygen -t rsa -f cloud.key. This command generates a pair of keys: a private key (cloud.key) and a public key (cloud.key.pub). From a Windows system, you can use PuTTYgen to create private/public keys. Use the PuTTY Key Generator to create and save the keys, then copy the public key in the red highlighted box to your.ssh/authorized_keys file. After the instance is launched, log in using the private key. Prerequisites Minimum required privileges: Cloud User Procedure 59 Creating a key pair in the OpenStack user portal 1. Click the Project tab. 2. Click Access & Security from the left menu. 3. Click the tab for Key Pairs. 4. Click + Create Key Pair. 5. Enter a key pair name. 6. Click Create Key Pair. Your browser will give you the opportunity to open and save the key pair. When you create an instance, you can associate the key pair to the instance using the Access & Security tab. After the instance is launched, log in to the instance using the private key you created in the procedure above. More information Using the OpenStack user portal to launch instances (page 201) Create a key pair 105
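For cloud users working from a Linux shell, the following sketch shows an alternative to Procedure 59: generate the key pair locally, register the public key with Nova, and reference it at boot. The key pair, instance, and flavor names are invented examples, and the image and network IDs are placeholders you must supply.

    # Generate the key pair locally (creates cloud.key and cloud.key.pub).
    ssh-keygen -t rsa -f cloud.key
    # Register the public key so instances can be booted with it.
    nova keypair-add --pub-key cloud.key.pub cloud-key
    # Launch an instance that injects the key (requires a cloud-init enabled image).
    nova boot --image <image-name> --flavor m1.small \
      --nic net-id=<tenant-network-id> --key-name cloud-key my-instance
    # After the instance is reachable, log in with the private key.
    ssh -i cloud.key <user>@<instance-ip>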

106 17 Integrated tool connectivity and configuration The Integrated Tools screen allows you to connect the Operations Console to other data center management software. See CloudSystem user interfaces (page 19) for information about how to launch other CloudSystem user interfaces. Register VMware vcenter VMware vcenter is an appliance that is used to manage multiple ESXi hosts through a single console application. VMware ESXi is a virtualization platform on which you create and run virtual machines. VMware vcenter acts as a central administrator for ESXi hosts that are connected on a network. You can pool and manage the resources of multiple ESXi hosts while monitoring and managing your physical and virtual infrastructure. In CloudSystem, register VMware vcenter as an integrated tool to establish a connection between VMware vcenter and the CloudSystem management appliance. Once VMware vcenter is registered, ESXi clusters can be activated and used as compute nodes. From the CloudSystem Integrated Tools screen, you can also view and edit registered servers and remove the connection the between CloudSystem and the VMware vcenter. For more information, see VMware vsphere Documentation at VMware. Prerequisites A VMware vcenter is installed and configured and connected to the network. Procedure 60 Register VMware vcenter Use this procedure to register a connection to VMware vcenter in the Operations Console. After the connection is made, you can activate ESXi clusters to be used as compute nodes. 1. From the Operations Console main menu, select Integrated Tools, then click Register VMware vcenter in the VMware vcenter pane. 2. Enter the data requested on the screen. NOTE: When specifying the VMware vcenter Name: Enter only English alphanumeric characters and hyphens. Do not specify a FQDN, because it contains periods that are not allowed. If you register more than one vcenter, make sure that each vcenter has a unique name on this screen. You cannot change the VMware vcenter Name after the VMware vcenter is registered. The name is used for automatically configuring the VMware VMFS storage device. 3. Click Register VMware vcenter. To exit the action without registering VMware vcenter, click Cancel. 4. Verify that the updated number of registered VMware vcenters is displayed on the Integrated Tools screen. Manage VMware vcenter Use this procedure to edit or remove a connection to VMware vcenter in the Operations Console. Procedure 61 Manage VMware vcenter 1. From the Operations Console main menu, select Integrated Tools, then click Manage VMware vcenter in the VMware vcenter pane. 106 Integrated tool connectivity and configuration

107 2. To edit the connection to VMware vcenter: a. Click Edit Server to change the details of the VMware vcenter. b. Edit the data on the screen. You cannot edit the name of the vcenter after it is registered. c. Click Update Server. To exit the action without registering VMware vcenter, click Cancel. 3. To remove the connection to a VMware vcenter: a. Click Remove Server. You cannot remove a VMware vcenter if there are activated compute clusters. Deactivate any activated compute clusters on the Compute Nodes screen before continuing. The Remove Server action does the following: Deletes a VMDK device with the vcenter name. Deletes a VMDK driver in the Cinder service with the vcenter name. Deletes a volume type with the vcenter name. b. Click Confirm Removal. To exit the action without removing a VMware vcenter, click Cancel. HP Operations Orchestration Central OO Central is included as part of CloudSystem Enterprise. It contains a set of default workflows that allow you to manage administrative tasks associated with the private cloud. OO Central is automatically installed as part of the CloudSystem Enterprise appliance. CloudSystem supports full OO functionality, but only the workflows in the pre-defined bundle are available for use. You can optionally install OO Studio to edit existing workflows or create new workflows. After workflows are edited in OO Studio, you can load them back to OO Central and use them to perform administrative tasks such as: Monitor provisioned virtual machines and send notifications in the event of a failure. Check the status of memory, storage, and CPU usage. Run a health check on virtual machines. Apply patches to specific virtual machines. Schedule snapshot creation for specific virtual machines. See also HP Operations Orchestration management (page 210) For more information about HP Operations Orchestration, see HP Operations Orchestration Central 107

108 18 Image management Use the information in this chapter to learn how to bring existing images into CloudSystem for use in provisioning virtual machines. Use the OpenStack user portal to upload an image. Cloud users then use the OpenStack user portal to choose from available images, or create their own from existing servers. Users can also create images using OpenStack API or CLI. This chapter does not cover creating an image from scratch. To learn how, see documentation available on the Enterprise Information Library or at OpenStack Software. An image contains the operating system for a virtual machine. It defines the file system layout, the OS version, and other related information about the operating system to provision. An image can be provisioned to one or more virtual machines in the cloud. Images that you add (upload) are used to boot virtual machine instances in the cloud. Before virtual machine instances can be provisioned in the cloud, you must create at least one provider or tenant network, and upload at least one image. Using the OpenStack user portal, you upload images by doing one of the following: Entering a file server URL Selecting a local file Creating an image from a snapshot of a currently running instance. Image format support ESXi Flat and Sparse Virtual Machine Disk format (VMDK) image files with SCSI and IDE adapters are supported for VM guest provisioning on VMware ESXi hypervisors. Other formats, including compressed VMDK images, are not supported. If your image uses the Sparse VMDK format or an IDE adapter, you must set the required properties on the image in the OpenStack user portal. If your image requires an IDE adapter, Cinder volumes cannot be attached when the instance is powered on. See Setting custom attributes on Microsoft Windows images (page 109). See the OpenStack Configuration Reference at OpenStack Cloud Software for information about configuring VMware-based images for launching as virtual machines. Hyper-V VHD and VHDX formatted image files are supported for virtual machine provisioning on Hyper-V hypervisors. KVM Quick EMUlator (QEMU) copy-on-write format (QCOW2) formatted image files are supported for virtual machine provisioning on KVM hypervisors. 108 Image management
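If you are unsure of an image's on-disk format before uploading it, you can inspect it with qemu-img, which ships with the KVM/QEMU tooling on most Linux distributions. This is an illustrative check, not a step from this guide, and the file names are placeholders. The reported file format should match one of the supported formats above (for example vmdk or qcow2); remember that compressed VMDK images are not supported for ESXi provisioning.

    # Inspect candidate images before uploading them to Glance.
    qemu-img info <windows-image>.vmdk
    qemu-img info <linux-image>.qcow2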

109 Image naming and single datastore support in VMware vcenter Each set of CloudSystem images must be in the same datastore in the vcenter Server. Folders cannot be used to separate an additional set of CloudSystem images that are uploaded to the vcenter Server. For example, if the Enterprise appliance image is added after the Foundation image, the Enterprise image must be uploaded to the same datastore as the running Foundation appliance, and it must have a unique name from other Enterprise appliances running in the same vcenter Server. Creating and obtaining images For information about creating and obtaining images that you can add to the CloudSystem Operations Console, refer to the following documents: Creating Windows Virtual Machine Images for Use with ESXi Compute Clusters in HP CloudSystem white paper at Enterprise Information Library OpenStack Virtual Machine Image Guide at OpenStack Cloud Software Setting custom attributes on Microsoft Windows images When you upload a Windows image (.VMDK file) from the OpenStack user portal, CloudSystem sets the following attributes on the image: vmware_ostype=windows8server64guest vmware_adaptertype=lsilogicsas NOTE: To set a different operating system type for the image, append -os-type-<windows OS type> to the name of the.vmdk file you upload. For example, if you have a 64-bit Windows 7 Server image, name your file WinImage-os-type-windows7Server64Guest.vmdk before you upload it. Updating image metadata You can change the attributes on an image using the OpenStack user portal or the OpenStack Glance CLI or API after you upload it, if no instances have been created from the image. For example: glance image-update <Windows-image.vmdk> --property vmware_ostype=windows7server64guest --os-cacert <mycert.cer> To set the VMware network adapter type, a best practice is to enter the following: glance --insecure image-update <Windows-image.vmdk> --property hw_vif_model=virtualvmxnet3 Procedure 62 Update Glance image properties (when no instances have been created from the image) 1. From the Admin tab, select the Images panel. 2. Select the image, the click Edit, and select Update Metadata. 3. Change the value of the vmware_adaptertype property. 4. (Optional) Use the Glance image-update command to update the image instead of the OpenStack user portal. If instances have been deployed using the image as a boot source, you must delete the image and upload a new image with the correct properties, because the image is cached. Procedure 63 Recreate the Glance image (when instances have been created from the image) 1. Log in to the Management appliance (ma1) using the cloudadmin credentials set during First-Time Installation. 2. SSH to the Cloud controller. Image naming and single datastore support in VMware vcenter 109

110 3. Delete the image from the <CC CLM IP>_base/<image-id>/<image-id>.* folder in the data store. 4. Create the image again using the Glance CLI or the OpenStack user portal. The OpenStack Glance image-create command requires that you specify VMware-specific properties, including vmware_disktype, hypervisor_type, and vmware_adaptertype. For more information, see OpenStack Glance commands at OpenStack Cloud Software. Expanding the Glance disk size If you need to expand the size of your glance disk after deploying CloudSystem, you can create a new disk in the management hypervisor and then attach it to the Cloud controller (cmc). Procedure 64 Expanding the size of a glance disk in an ESXi environment 1. Log in to the management hypervisor in vcenter. 2. Select the Cloud controller (cmc) and edit the virtual machine settings. 3. Add a Hard Disk. When adding, specify the size and select Thin Provisioning. 4. As the root user, log on to the Cloud controller (cmc) and run: extend_glance_store.py 5. On the Cloud controller (cmc), verify the change to the glance disk size: df h Procedure 65 Expanding the size of a glance disk in a KVM environment 1. Log in to the management hypervisor and find the name of your Cloud controller (cmc). virsh list --all 2. Create the new qcow2 disk: qemu-img create -f qcow2 <path_of_disk> <size_of_disk> Example: qemu-img create -f qcow2 /CloudSystem/cs-test.qcow 10G 3. Attach the disk to the Cloud controller (cmc): virsh attach-disk --domain <vm-name> --source <path_of_disk> --target <target_device> --driver qemu --subdriver qcow2 persistent Example: virsh attach-disk --domain cs-cloud1 --source /CloudSystem/cs-test.qcow --target vdd --driver qemu --subdriver qcow2 persistent 4. Using root credentials, log in to the Cloud Controller (cmc) and extend the glance disk. extend_glance_store.py 5. On the Cloud controller (cmc), verify the change to the glance disk size: df h Adding images In the OpenStack user portal, use this procedure to add an image to the Glance repository. The image can be used to create an instance. Procedure 66 Adding Images 1. From the Admin tab, select Images. 2. Click + Create image. 110 Image management

111 3. Select an Image Source: Image Location. Enter the URL (beginning with of the image to upload from a file server. For example, Image File: Path to an OS image that can be resolved by the browser. Select a single file that contains the image. 4. Enter data for this image, depending on the type of compute node on which you will launch the instance. ESXi: OS Type: Other Format: VMDK Hyper-V: OS Type: Windows Format: VHD KVM: OS Type: Linux Format: QCOW2 QEMU Emulator Architecture: (leave blank) Minimum Disk: 0 Minimum RAM: 0 Public: Yes Protected: No (Image can be deleted from Glance) or Yes (Image cannot be deleted from Glance) 5. To finish adding the image, click Add. To exit without uploading an image, click Cancel. 6. (Optional) Set custom attributes on Windows images using the OpenStack Glance CLI. See Setting custom attributes on Microsoft Windows images (page 109). Adding images 111
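The portal steps above are the documented way to add images. As noted earlier in this chapter, the Glance image-create command needs VMware-specific properties for ESXi images; the sketch below shows what such uploads might look like from the CLI. It is illustrative only: the image names, file names, and property values are examples, so confirm the property values required for your images against the OpenStack configuration reference.

    # KVM: upload a QCOW2 image.
    glance image-create --name cirros-kvm --disk-format qcow2 \
      --container-format bare --is-public True --file cirros.qcow2
    # ESXi: upload a flat VMDK image with the VMware-specific properties.
    glance image-create --name win2012-esx --disk-format vmdk \
      --container-format bare --is-public True --file win2012-flat.vmdk \
      --property vmware_disktype=preallocated \
      --property vmware_adaptertype=lsiLogicsas \
      --property hypervisor_type=vmware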

19 Storage management
CloudSystem 9.0 adds support for HP StoreVirtual VSA and expands iSCSI support, in addition to the existing HP 3PAR StoreServ and VMware VMFS options. Refer to the HP Helion CloudSystem 9.0 Support Matrix in the Enterprise Information Library for details on compatibility.
For information about installing object storage (Scale out Swift), see Installing Object storage (page 145).
Block storage (Cinder)
VMware VMFS storage devices (page 113)
HP 3PAR StoreServ storage devices (page 114)
StoreVirtual VSA storage devices (page 117)
Table 8 Block storage options
Hypervisor | Image type | Block storage device type | Device is created...
ESXi | VMDK | VMware VMFS | Automatically when a vcenter is registered on the Integrated Tools screen
KVM | QCOW2 | HP 3PAR StoreServ Fibre Channel (1), HP 3PAR StoreServ iSCSI, HP StoreVirtual VSA | On the Operations Console Block Storage Devices screen
Hyper-V | VHD, VHDX | HP 3PAR StoreServ iSCSI, HP StoreVirtual VSA (2) | On the Operations Console Block Storage Devices screen
(1) Boot from volume is not supported in KVM-provisioned instances using HP 3PAR FC storage.
(2) Boot from volume is not supported in Hyper-V provisioned instances when the Hyper-V compute node is part of a cluster.
Block storage and HA
Each block storage volume service is a singleton service, meaning it runs on one member of the Cloud controller trio at a time. Use the monitoring service available from the Operations Console to monitor the state of the block storage volume service in CloudSystem.
Block storage networks
Figure 12 Block storage networks

113 The Block Storage Network is an iscsi network that is configured when you run the CloudSystem First-Time Installer. It connects your storage devices (VSA or 3PAR) to the management cluster and compute cluster. If you plan to use 3PAR Fibre Channel, then you must also configure an FC SAN network in your environment to connect 3PAR to your management cluster and compute cluster. VMware VMFS storage devices VMFS storage devices allow you to boot VMware instances from a volume. You must register a VMware vcenter in the Integrated Tools screen of the Operations Console, then the VMFS storage device is created automatically. IMPORTANT: If you are using multiple vcenters, the instance to which you are attaching an instance must be in the vcenter that is hosting the VMFS storage device. Process Overview Set up a VMFS storage device (page 113) Register a VMFS storage device (page 113) Manage a VMFS storage device (page 113) Set up a VMFS storage device Keep in mind the following considerations when integrating a VMFS storage device in the CloudSystem environment. Register the vcenter hosting the storage device on the Operations Console Integrated Tools screen. One VMFS storage device is automatically registered on the Block Storage Devices screen for each vcenter you register. For more information on vcenter, see ESXi and VMware documentation. Register a VMFS storage device After registering the vcenter, the VMFS storage device is automatically created. You will have: one registered VMware VMFS storage device one backend You can find the backend name by logging in to the OpenStack user portal and navigating to Admin Volumes Volume Types, then select the volume type that was created for the VMFS storage device and click View Extra Specs. There is an entry in that table for the backend. one VMFS volume type The volume type name matches the name given to the VMFS storage device when it was registered. Manage a VMFS storage device The Operations Console allows you to see the details of your VMFS storage device, but you cannot mange the device from the Operations Console Block Storage Devices screen. To unregister the VMFS storage device and remove the backend, deactivate all compute clusters in the vcenter, then unregister the vcenter on the Integrated Tools screen. See Manage VMware vcenter (page 106). To add or delete VMFS volume types, log in to the OpenStack user portal using Administrator credentials, select your project, then navigate to Admin Volumes Volume Types. Block storage (Cinder) 113
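If you prefer the CLI to the portal for the checks described above, the sketch below lists the registered volume types, shows the backend each one maps to, and creates a small test volume. It is illustrative only; the volume type name vcenter1 and the test volume name are invented examples, and the exact cinder client options can differ between client versions.

    # List volume types and the backend (volume_backend_name) each one maps to.
    cinder type-list
    cinder extra-specs-list
    # Create a 1 GB test volume against the VMFS volume type, then check its status.
    cinder create --volume-type vcenter1 --display-name vmfs-test 1
    cinder list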

114 When you remove a vcenter: all compute clusters associated with the vcenter must first be deactivated the VMFS storage device that was automatically created when you registered the vcenter is removed after the vcenter is unregistered the backend associated with the VMFS storage device is removed after the vcenter is unregistered the volume type is removed and is no longer visible in the OpenStack user portal Volume Types screen. HP 3PAR StoreServ storage devices You can add HP 3PAR StoreServ Fibre Channel or iscsi storage devices to your cloud environment. If you are using storage domains, when you add one FC device type and one iscsi device type to the same storage system, both devices must reside in the same domain. When you add an iscsi device you must have connectivity from the targeted compute node to the 3PAR storage system iscsi port. If you do not configure the connection, block storage volumes will not attach to virtual machine instances. iscsi devices are created with Challenge Handshake Authentication Protocol (CHAP) disabled by default. CHAP is not supported in this release of CloudSystem. Special characters in an HP 3PAR iscsi host name cause the Cinder driver host creation to fail when attaching a volume. For iscsi host names, use the characters [a-z][a-z][. -] and [0-9] only. Process Overview Set up 3PAR storage device hardware (page 114) Register a 3PAR storage device (page 114) Manage CPGs for 3PAR FC (page 115) Manage CPGs for 3PAR iscsi (page 116) Unregister a 3PAR or VSA block storage device (page 120) Best practices for using HP 3PAR storage systems (page 117) Set up 3PAR storage device hardware Keep in mind the following considerations when integrating an HP 3PAR StoreServ storage device in the CloudSystem environment. Configure the HP 3PAR StoreServ FC or iscsi storage system to support storage requirements for compute nodes and virtual machine instances. The FC and iscsi devices require connectivity to the management console of a supported HP 3PAR storage system. For configuration information,see HP 3PAR StoreServ documentation. If you are using Hyper-V compute nodes, use 3PAR iscsi. Make sure that the 3PAR StoreServ storage system server certificate contains a Fully Qualified Domain Name (FQDN) in the CN attribute Subject field. For block storage volumes, enable REST API web services on the 3PAR StoreServ. Register a 3PAR storage device You can register a 3PAR storage device from the Block Storage Devices screen in the Operations Console. 114 Storage management

115 Prerequisites Storage systems and networking are configured. For an iscsi device, you must have connectivity from the target compute node to the 3PAR storage system iscsi port. Procedure 67 Register a 3PAR block storage device 1. From the Operations Console main menu, select Block Storage Devices. 2. Click Register Storage Device. 3. From the drop down list, select the type of device you want to register. 4. Enter the networking details for the storage device. General Name for the HP 3PAR storage system that will appear in CloudSystem Management IP address of the storage system User name and password for accessing the Management Console of the storage system. Enter the credentials for the OpenStack 3PAR edit role with domain set to "all". SAN Configuration SAN IP address of the SAN controller for SSH access to the storage system SAN user name and password for the SAN controller for SSH access to the storage system 5. Click Register. To exit the action without registering a device and to close the screen, click Cancel before registering. 6. Verify that the new storage device is displayed on the Block Storage Devices overview screen. Manage CPGs for 3PAR FC A common provisioning group (CPG) creates a virtual pool of logical disks that allows virtual volumes to share the CPG's resources and allocates space on demand. Prerequisites HP 3PAR FC storage device is registered Procedure 68 Managing a common provisioning group for 3PAR 1. From the Operations Console main menu, select Block Storage Devices. 2. Select the row of the 3PAR device where you want to associate a CPG. 3. From the Actions menu, select Manage CPGs. 4. Click Add CPG. A list of available CPGs is displayed. 5. Select one or more CPGs to associate with the device and click Register CPGs. To exit without making changes and to close the screen, click Cancel before registering CPGs. NOTE: While you can associate more than one CPG to a storage device, best practice is to limit the relationship to one CPG per storage device. 6. To remove the association of a CPG from the device, select the row of the CPG and click Remove CPG. Click Confirm Removal. To exit without making changes and to close the screen, click Cancel before you confirm removal. 7. Verify that the data is correct on the Block Storage Devices overview screen. Block storage (Cinder) 115

116 Procedure 69 Managing backends for 3PAR FC 1. From the Operations Console main menu, select Block Storage Devices. 2. From the Register Storage Device button, click the down arrow and select Manage Volume Backends. 3. From the Manage Volume Backends window, click Add Backend. Enter a unique name for the backend. 4. Under Backend Configuration, select StoreServe 3PAR FC. 5. Select the CPGs you want to associate with the backend. 6. Identify the volume type for the backend. To use an existing volume type, select Existing Volume Type, then choose the volume type from the list provided. NOTE: Volume types created in the OpenStack user portal display unfiltered in the Existing Volume Type drop down list. Do not associate a VMDK volume type with a 3PAR backend. To create a new volume type, select New Volume Type, then type the name of the volume type in the field provided. 7. Click Create backend. 8. Verify that the data is correct on the Block Storage Devices overview screen. Manage CPGs for 3PAR iscsi Prerequisites HP 3PAR iscsi storage device is registered. Procedure 70 Managing CPGs for 3PAR iscsi 1. From the Operations Console main menu, select Block Storage Devices. 2. Select the row of the 3PAR device where you want to associate a target. 3. From the Actions menu, select Manage CPGs. 4. Click Add CPG. A list of available targets is displayed. 5. Select one or more targets to associate with the device and click Register CPGs. To exit without making changes and to close the screen, click Cancel before registering CPGs. 6. To remove the association of a target from the device, select the row of the target and click Remove CPG. Click Confirm Removal. To exit without making changes and to close the screen, click Cancel before you confirm removal. 7. Verify that the data is correct on the Block Storage Devices overview screen. Procedure 71 Managing backends for 3PAR iscsi 1. From the Operations Console main menu, select Block Storage Devices. 2. From the Register Storage Device button, click the down arrow and select Manage Volume Backends. 3. From the Manage Volume Backends window, click Add Backend. 4. Under Backend Configuration, select StoreServe iscsi. 5. Select the CPGs you want to associate with the backend. 116 Storage management

117 6. Identify the volume type for the backend. To use an existing volume type, select Existing Volume Type, then choose the volume type from the list provided. NOTE: Volume types created in the OpenStack user portal display unfiltered in the Existing Volume Type drop down list. Do not associate a VMDK volume type with a 3PAR backend. To create a new volume type, select New Volume Type, then type the name of the volume type in the field provided. 7. Click Create backend. 8. Verify that the data is correct on the Block Storage Devices overview screen. Best practices for using HP 3PAR storage systems The following information may help you tune your ESXi or KVM environment with HP 3PAR storage. HP 3PAR StoreServ Storage and VMware vsphere 5 best practices HP 3PAR StoreServ Storage and VMware vsphere 6 best practices OpenStack HP 3PAR StoreServ Block Storage Driver Configuration Best Practices A guide to advantages with deployment of Fibre Channel Zone Manager StoreVirtual VSA storage devices VSA clusters can be used with both ESXi and Hyper-V hosts. KVM hosts are not supported with this standard version of VSA. VSA can be part of the management cluster, but not part of the compute cluster. You can register multiple VSA block storage devices and activate multiple clusters for each registered VSA device. Block storage volumes can be attached as iscsi volumes to Hyper-V and KVM instances. Compute instances can be stored on hypervisor-specific file systems carved on VSA LUNs. Process Overview Set up HP StoreVirtual VSA storage device hardware (page 117) Register HP StoreVirtual VSA storage device (page 117) Manage clusters for VSA storage devices (page 118) Unregister a 3PAR or VSA block storage device (page 120) Set up HP StoreVirtual VSA storage device hardware Keep in mind the following considerations when integrating HP StoreVirtual VSA into a CloudSystem environment. Configure the HP StoreVirtual VSA storage system to support storage requirements for compute nodes and virtual machine instances. For configuration information, see HP StoreVirtual 4000 Storage. This option is supported for KVM and Hyper-V compute nodes. Register HP StoreVirtual VSA storage device Prerequisites Storage systems and networking are configured. For an iscsi device, you must have connectivity from the target compute node to the VSA storage system iscsi port. Block storage (Cinder) 117
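A quick way to confirm the iSCSI connectivity prerequisite is to run a target discovery from the compute node before registering the device. This is a sketch for an RHEL KVM compute node; the VSA cluster virtual IP is a placeholder and 3260 is the standard iSCSI port:
# On the target compute node
ping -c 3 <VSA cluster virtual IP>
iscsiadm -m discovery -t sendtargets -p <VSA cluster virtual IP>:3260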

118 Procedure 72 Register a VSA block storage device 1. From the main menu, select Block Storage Devices. 2. Click Register Storage Device. 3. From the drop down list, select the type of device you want to register. 4. Enter the required information. The default port number is Click Register. To exit the action without registering a device and to close the screen, click Cancel before registering. 6. Verify that the new device is displayed on the Block Storage Devices overview screen. Manage clusters for VSA storage devices The maximum size for a VSA node is 50 TB. You must have a minimum of three nodes per cluster, and a maximum of 32 nodes per cluster. You can associate multiple clusters to a VSA device to share resources and allocate space on demand. Prerequisites HP StoreVirtual VSA storage device is registered. Procedure 73 Managing a VSA cluster 1. From the Operations Console main menu, select Block Storage Devices. 2. Select the row of the VSA device where you want to associate a cluster. 3. From the Actions menu, select Manage clusters. A list of registered clusters is displayed. 4. Click Add clusters. 5. Select one or more clusters to associate with the device and click Register clusters. To exit without making changes and to close the screen, click Cancel before you register clusters. 6. To remove the association of a cluster from the device, select the row of the cluster and click Remove cluster. Click Confirm Removal. To exit without making changes and to close the screen, click Cancel before you confirm removal. 7. Verify that the changes you made were captured on the Block Storage Devices overview screen. Procedure 74 Managing a VSA backend 1. From the Operations Console main menu, select Block Storage Devices. 2. From the Register Storage Device button, click the down arrow and select Manage Volume Backends. 3. From the Manage Volume Backends window, click Add Backend. 4. Under Backend Configuration, select StoreVirtual (VSA Cluster). 5. Select the clusters you want to associate with the backend. 6. Identify the volume type for the backend. To use an existing volume type, select Existing Volume Type, then choose the volume type from the list provided. 118 Storage management

119 NOTE: Volume types created in the OpenStack user portal display unfiltered in the Existing Volume Type drop down list. Do not associate a VMDK volume type with a VSA backend. To create a new volume type, select New Volume Type, then type the name of the volume type in the field provided. 7. Click Create backend. 8. Verify that the data is correct on the Block Storage Devices overview screen. Managing 3PAR and VSA block storage device configurations and connections Viewing and downloading configuration (page 119) Editing block storage device connections (page 119) Unregister a 3PAR or VSA block storage device (page 120) Viewing and downloading 3PAR and VSA storage device configuration Procedure 75 Viewing and downloading configuration 1. From the Operations Console main menu, select Block Storage Devices. 2. Select the row of the device where you want to view or download the configuration file. 3. From the Actions menu, click View Config. A text file containing configuration details of the block storage device is displayed in the center of the screen. 4. Optional: Click Download Config to save the file. 5. Click Cancel to return to the Block Storage Devices screen. Procedure 76 Editing block storage device connections 1. From the Operations Console main menu, select Block Storage Devices. 2. Select the row of the device you want to edit. 3. From the Actions menu, select Edit. 4. Update the device information. 5. To apply the changes to the device, click Update. To exit without making changes and to close the screen, click Cancel before updating. 6. Verify that the data is correct on the Block Storage Devices overview screen. Unregistering a 3PAR or VSA block storage device Prerequisites The block storage device is registered. See Register a 3PAR storage device (page 114) or Register HP StoreVirtual VSA storage device (page 117) The volume type associated with the device is deleted. You can delete the volume type in the OpenStack user portal from the Admin Volume Volume Types screen. Block storage (Cinder) 119
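If you prefer the command line for the volume type cleanup described in the prerequisite above, the Cinder client offers an equivalent. Run it from a host with the OpenStack clients and admin credentials; the type name is a placeholder:
cinder type-list
cinder type-delete <volume type name>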

120 Procedure 77 Unregister a 3PAR or VSA block storage device IMPORTANT: Before you unregister a backend, make sure there are no volumes attached. Use the OpenStack user portal to view Admin System Volumes. Find the Type for each volume Click the Volume Types tab and view the Extra Specs for each Volume Type to find associated backends If the backend you want to delete is associated to a volume, move any existing instances off the volume and delete the volume before unregistering the backend. 1. From the Operations Console main menu, select Block Storage Devices. 2. Select the row of the device you want to remove. 3. Select Unregister Backends. 4. Select or clear checked resources in the table. 5. Click Unregister. To exit the action without removing the device and to close the screen, click Cancel before unregistering. 6. Verify that the device was removed from the Block Storage Devices overview screen. Creating and attaching volumes in the OpenStack user portal Create volumes in the OpenStack user portal (page 121) Attach a volume to a VM instance in the OpenStack user portal (page 121) Delete Volumes (page 122) Use the OpenStack user portal to manage the volumes that define specific storage characteristics for instances deployed in the cloud. Volumes provide persistent block storage for virtual machine instances. OpenStack technology provides two classes of block storage: ephemeral storage and persistent volumes. Ephemeral storage is assigned to a VM instance when the instance is created and then released when the instance is deleted. All instances have some ephemeral storage. When you create a VM instance, you select a predefined flavor. The definition of a flavor includes the number of virtual CPUs, the amount of random access memory (RAM), and the amount of disk space allocated for storage. Storage defined as part of the flavor definition is ephemeral. Persistent storage, or block storage (OpenStack Cinder) volumes, persist as independent entities. A block storage volume can exist outside the scope of a VM instance. Once created, a block storage volume can be attached to one VM instance and later can be detached. The detached block storage volume can then be attached to a different VM instance. IMPORTANT: Before you create a volume in the OpenStack user portal, you must register a block storage device and, for ESXi environments, register your vcenter in the Operations Console. If you create a volume first, the volume will be created as a Logical Volume Manager (LVM) volume. This volume cannot be managed in CloudSystem. 120 Storage management

121 Create volumes in the OpenStack user portal Prerequisites Minimum required privileges: Cloud user In the Operations Console, the administrator: activated a compute node; registered a storage device; added a CPG or VSA cluster to any 3PAR or VSA storage devices; created a volume backend for any 3PAR or VSA storage devices. You are logged in to the OpenStack user portal. NOTE: Access the OpenStack user portal using the link on the Operations Console Integrated Tools screen. Or, connect through a supported browser using the URL https://<VIP address>. Procedure 78 Creating volumes in the OpenStack user portal 1. From the Project tab, select Compute Volumes. The Volumes screen is displayed. 2. Click the +Create Volume button. The Create Volume screen is displayed. 3. Enter a unique name for the volume, complete the required fields, then click the Create Volume button to complete the action. Clicking Cancel returns to the Volumes screen without completing the action. 4. Verify that the volume you created is displayed on the Volumes screen. Attach a volume to a VM instance in the OpenStack user portal Volume attachments are managed in the OpenStack user portal. Prerequisites Minimum required privileges: Cloud user You have at least one volume. You are logged in to the OpenStack user portal. NOTE: Access the OpenStack user portal using the link on the Operations Console Integrated Tools screen. Or, connect through a supported browser using the URL https://<VIP address>. Procedure 79 Attaching volumes in the OpenStack user portal 1. From the Project tab, select Compute Volumes. The Volumes screen is displayed. 2. Click the check box next to the name of the volume you want to attach. 3. In the Action column, click Edit Attachments. The Manage Volume Attachments screen is displayed. 4. In the Attach To Instance drop-down, select the VM instance that you want to attach to the volume. 5. Edit the Device Name if necessary. Block storage (Cinder) 121

122 6. Click Attach Volume to complete the action. Clicking Cancel returns to the Volumes screen without completing the action. 7. Verify that the volume you attached is displayed in the Attached To columns on the Volumes screen. NOTE: If the volume cannot be attached to the device you specified (for example /dev/vdc is specified), the device is ignored and the guest operating system automatically attaches the volume to the next available device (for example /dev/sdc is where the volume attached). Delete Volumes Prerequisites Minimum required privileges: Infrastructure administrator Volumes must be detached from their associated VMs Procedure 80 Deleting Volumes 1. From the Project tab, select Compute Volumes. The Volumes screen is displayed. 2. Check the box in the row of the volume to delete. 3. Click Delete Volume. 4. To confirm and delete the volume, click Delete Volume. To exit without deleting the volume and to close the screen, click Cancel before deleting. 5. Verify that the volume was removed from the Volumes screen. 6. With the filters set to All statuses, verify that the volume does not appear on the Volumes overview screen. 122 Storage management
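The same volume lifecycle can also be driven from the OpenStack command line clients. The following is an illustrative sketch only; the volume type, size, instance ID, and volume ID are placeholders:
cinder create --volume-type <3PAR or VSA volume type> --display-name data-vol 10
nova volume-attach <instance ID> <volume ID> auto
nova volume-detach <instance ID> <volume ID>
cinder delete <volume ID>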

123 20 Compute node creation Compute nodes manage the resources required to host instances in the cloud. You must first create compute nodes by applying CloudSystem requirements to a cluster or host. Then you activate the compute node, which makes it available to host virtual machine instances. CloudSystem can simultaneously manage the following types of compute nodes: ESXi clusters Hyper-V compute nodes (clustered or standalone) KVM compute nodes When ESXi, Hyper-V, and KVM compute nodes exist in CloudSystem, topology designs, offerings, and provisioned subscriptions in CloudSystem Enterprise can include ESXi, Hyper-V, KVM, or all three types of compute nodes. The Compute Nodes screen in the Operations Console displays all available compute clusters and compute nodes, along with their resources. You can activate, deactivate, and delete compute nodes from this screen. NOTE: CloudSystem supports ESXi and KVM on compute nodes and as the management hypervisor (the hypervisor software running on the physical server that hosts the CloudSystem appliances). Hyper-V is supported on compute nodes. Creating ESXi compute clusters ESXi compute hosts are created inside clusters in vcenter. For information about creating and configuring compute hosts in vcenter, see VMware vsphere Documentation at VMware. Process overview 1. Install and configure an ESXi compute cluster (page 123) 2. Configure networks for an ESXi cluster (page 124) 3. Enable ESXi networking for instance security groups (page 125) Install and configure an ESXi compute cluster The following requirements help you create a correctly configured ESXi cluster for use with CloudSystem. (Some VMware vsphere features required by CloudSystem may require an additional VMware license. See VMware vsphere Documentation.) 1. Install a supported version of ESXi on the compute cluster. Use the same version of ESXi for each host in the cluster. See HP Helion CloudSystem 9.0 Support Matrix at Enterprise Information Library. 2. Enable VMware vsphere Distributed Resource Scheduler (DRS) in the cluster. See VMware vsphere Documentation. 3. Configure time synchronization in the compute cluster. 4. Ensure that the compute cluster is not inside a folder in vcenter. 5. Ensure that a shared datastore is accessible by all hosts in the cluster. 6. Update the security profile for each host in the cluster. a. Select a host in the cluster. b. Select the Configuration tab. c. Under Software, select Security Profile. d. At the top of the Firewall section, select Properties. e. Scroll through the list of options and click VM serial port connected over network. f. Click OK. Creating ESXi compute clusters 123

124 g. Repeat these steps for each compute host in the cluster. 7. Optional: For console access in the OpenStack user portal, open the port range 5900 to 6105 for each compute host. Configure networks for an ESXi cluster NOTE: If deploying in a VxLAN network configuration, the Cloud Data Trunk interfaces for the compute hosts should be configured to access ports tagging the Tenant Underlay Network VLAN, not the Cloud Data Trunk port. Procedure 81 Create distributed virtual switches and port groups 1. Configure vsphere vmotion on the Data Center Management Network. vsphere vmotion of an instance within hosts of the cluster is supported. This vmotion can be automatic with DRS, or manually performed. See vsphere vmotion Documentation for more information. 2. Configure the ESXi compute cluster for a Split Data & Management Trunk. 3. Create a VMware vsphere Distributed Switch for management that is connected with vmnic0. NOTE: If you have a mixture of different versions of ESXi hosts in your environment and you have an existing distributed vswitch (DVS) that you want to reuse, then make sure that you have created the DVS with the oldest version of ESXi on the hosts. For example, if you have three ESXi 5.5 hosts and one ESXi 5.1 host and you want to use your existing DVS, then the DVS must be of version ESXi 5.1. Example: Before activation In vsphere, Home Inventory Networking shows CS-MGMT-06 as the new management Distributed Switch with a new port group. In vsphere, Home Inventory Hosts and Clusters shows the network configuration for the compute host for the Distributed Switch CS-MGMT-06. Before you activate a compute cluster, you can specify an existing Distributed Switch on the Cloud Data Trunk, or CloudSystem will automatically create one for you. If you allow CloudSystem to create a Distributed Switch, you must provide a single free NIC or multiple NICs. Multiple NICs will form a team. The NIC teaming mode is Active/Standby. 124 Compute node creation
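Before activation, it can help to confirm which vmnics are actually free on each host. A sketch using the ESXi shell (the same information is visible in the vSphere client):
# Run on each ESXi host in the compute cluster
esxcli network nic list      # physical NICs and their link state
esxcfg-vswitch -l            # NICs already claimed as uplinks by existing switches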

125 Example: After activation The following Distributed Switches and port groups are created when you activate a compute cluster: Distributed Switch and port group for Cloud Data Trunk Distributed Switch and port group for Uplink Port groups for both the Data Center Management Network and the Cloud Management Network In vsphere, Home Inventory Networking shows that the Distributed Switches Cloud-Data-Trunk-06 and CS-OVS-Trunk-06 are created with generated port groups. In vsphere, Home Inventory Hosts and Clusters shows the network configuration for the ESXi compute host for the Distributed Switches CS-MGMT-06, CS-OVS-Trunk-06 and Cloud-Data-Trunk-06. Enable ESXi networking for instance security groups Perform the following steps to enable security groups for instances. Creating ESXi compute clusters 125

126 Open vswitch vapp (OVSvApp) The HP Virtual Cloud Networking Open vswitch vapp appliance enables security groups and networking between tenant virtual machines. You must upload the OVSvApp image to your VMware Datacenter. The OVSvApp image is included in the CloudSystem tools release package. When you activate the compute cluster, the OVSvApp appliance is installed on each ESXi compute hypervisor. OVSvApp networking is configured automatically so that OVSvApp VMs receive packets from and transmit packets to tenant VMs after applying the security group flows. Only one OVSvApp VM exists in a particular ESXi host. OVSvApp virtual machines have four NICs: NIC that connects to the Data Center Management Network Allows single VLAN traffic NIC that connects to the Cloud Management Network Allows single VLAN traffic NIC that connects to the data network Allows all VLAN traffic, and has promiscuous mode and forged transmit enabled NIC that connects to the trunk port group Allows all VLAN traffic, and has promiscuous mode and forged transmit enabled The trunk port group and tenant VM port groups are connected to the same Distributed Virtual Switch. The trunk Distributed Virtual Switch does not have a NIC configured so that the OVSvApp VM cannot communicate with other OVSvApp VMs on a different ESXi host over the trunk port group. Procedure 82 Upload the OVSvApp template 1. Extract the OVSvApp template file named cs-ovsvapp.ova from the CloudSystem release package. 2. In the vsphere client, upload the image to your ESXi compute cluster using File Deploy OVA Template. a. Browse to the location of the.ova file. b. Specify the name of the image after it is uploaded. NOTE: The image name must be unique within the entire vcenter Server. You may want to prepend your initials. c. Select the Datacenter, Cluster, and ESXi host where the image is to be uploaded. The OVSvApp template can reside anywhere in the Datacenter that the compute cluster resides. d. Select the shared datastore in which to store the image. e. Select Thin Provisioning for the Disk Format. 3. Convert the uploaded image into a template. When the image is uploaded, it is displayed in the Hosts and Clusters view as a non-running VM. Converting the image into a template avoids confusing non-running VM uploaded images with running VM appliances in the ESXi host. After the image is converted to a template, it will be displayed only in the VMs and Templates view in vsphere. If you are uploading an OVSvApp image that exists as a template in a different Datacenter, clone the template instead of deploying it, and rename it. 126 Compute node creation
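If you prefer to script the deployment instead of using the vSphere client, VMware's ovftool can push the same OVA; you still convert the resulting VM to a template as described in step 3. This is an illustrative sketch; the credentials, datacenter, cluster, host, and datastore names are placeholders:
ovftool --diskMode=thin --datastore=<shared datastore> --name=<unique template name> cs-ovsvapp.ova 'vi://<vCenter user>@<vCenter IP>/<datacenter>/host/<cluster>/<ESXi host>'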

127 Creating Hyper-V compute nodes Hyper-V compute nodes can be standalone or clustered hosts. Refer to Microsoft Windows Server Hyper-V Overview for instructions on creating and configuring Hyper-V compute nodes. Process overview 1. Install and configure a Hyper-V compute node (page 127) 2. Configure networks for a Hyper-V compute node (page 128) Install and configure a Hyper-V compute node The following requirements help you create a correctly configured Hyper-V compute node for use with CloudSystem. TIP: To automate steps 4 through 9, run a script on each Hyper-V host. 1. Download the script CSHyperVPreReq.ps1 from 2. Run the script in Powershell: PS C:\Users\Administrator.CLOUD\Desktop> powershell ExecutionPolicy Bypass.\CSHyperVPreReq.ps1 <Hyper-V-CLMIPAddress> Procedure 83 Install and configure a Hyper-V compute node 1. Install a supported version of Hyper-V on the compute host or cluster. See HP Helion CloudSystem 9.0 Support Matrix at Enterprise Information Library. Ensure that the Hyper-V host name contains 15 or fewer characters. This Microsoft limitation is described in Windows TechNet Library and Microsoft Support. Do not use the number sign or hashtag (#) character in the password for the Administrator user in Hyper-V compute hosts. The FreeRDP component that you download below treats the # character as a comment and prevents authentication to the Hyper-V host. 2. On the Hyper-V physical host, install the latest firmware updates. a. From a supported browser, navigate to HP Service Pack for ProLiant. b. Click Obtain software. You must have an active warranty or support agreement. c. Select the Installation Instructions tab. d. Follow the installation instructions to install the special pack on the Hyper-V physical host. 3. Enable the Hyper-V role. 4. For clustered hosts: a. Enable the Failover cluster feature on the Hyper-V host. b. Create cluster shared volumes for instance deployment in the cluster. 5. Enable Windows Remote Management (WinRM) on the Hyper-V host. Open the Powershell editor and execute the following commands. winrm set winrm set winrm set Set-Item wsman:\localhost\client\trustedhosts * Restart-Service WinRm See Microsoft Windows Remote Management. 6. Enable the Windows firewall for Inbound traffic in the Hyper-V host. This allows the Management appliance and the Cloud Controller appliances to communicate with the Hyper-V host. In Hyper-V Server Manager, select Windows Firewall with Advanced Security and right click Properties. Select Public Profile, then Inbound Connections, then change to Allow. Creating Hyper-V compute nodes 127

128 7. Start the iscsi Initiator Service and set the startup type as Automatic on the Hyper-V host. NOTE: Do not change the default Initiator name. If the Hyper-V host is standalone, the initiator name is iqn com.microsoft:<hyperv_hostname> If the Hyper-V host is clustered, the initiator name is iqn com.microsoft:<hyperv_hostname.domain_name> 8. Generate a self-signed certificate. 9. Restart WinRM. 10. Configure time synchronization. If the Hyper-V host is standalone, manually synchronize the time on the Hyper-V host with the Cloud controller. If the Hyper-V host is part of cluster, the host time is synchronized with Active Directory. The Cloud Controller Active Directory and the Hyper-V hosts in the cluster should be synchronized to a common NTP server. 11. Download FreeRDP-Webconnect from CloudBase Solutions and copy it to the Management appliance folder /var/csm/www/msi. FreeRDP is used by CloudSystem to enable RDP access to instances on Hyper-V hosts using the Launch Console feature. The infrastructure administrator is responsible for security and other updates to FreeRDP. a. Set privileges on the.msi file: chmod 644 FreeRDPWebConnect.msi b. Change the owner of the.msi file: chown root:root FreeRDPWebConnect.msi 12. Bring up the Cloud Management Network on the compute node. A new IP address is acquired for the compute node from the Management appliance DHCP server. ipconfig /release <Hyper-V server network adapter connected to the CLM> ipconfig /renew <Hyper-V server network adapter connected to the CLM> Configure networks for a Hyper-V compute node 1. Connect network interfaces on the Hyper-V host to the Cloud Management Network, Data Center Management Network, and Cloud Data Trunk. 2. Ensure that the Hyper-V host has a Cloud Data Trunk interface where all VLANs are trunked. 3. If a proxy is enabled, add Cloud Management Network to the exceptions. Creating KVM compute nodes KVM compute nodes are created on non-clustered hypervisor hosts. Process overview 1. Install and configure a KVM compute node (page 129) 2. Configure networks for a KVM compute node (page 129) 3. Check RHEL KVM 6.5 and 7.0 dependencies (page 131) 4. Create an RHEL repo on a KVM compute node (page 132) 128 Compute node creation

129 Install and configure a KVM compute node 1. Install a supported version of RHEL as a virtualized server. See HP Helion CloudSystem 9.0 Support Matrix at Enterprise Information Library. For information about installing RHEL, see Red Hat Enterprise Linux 6 documents or Red Hat Enterprise Linux 7 documents. 2. Allocate adequate disk space for a /var/lib/nova/instances directory that can support all anticipated provisioned instances. 3. Configure dhclient.conf to send the correct KVM compute identifier: vim /etc/dhcp/dhclient.conf send dhcp-client-identifier "<your short hostname>"; option is_kvm_node code 214 = string; send is_kvm_node "yes"; 4. If your network is configured for VxLAN, copy the dhclient.conf file to a file specific to your Cloud Management Network (CLM) interface, then delete the original dhclient.conf. In this example, the CLM interface is eno50. cp /etc/dhcp/dhclient.conf /etc/dhcp/dhclient-eno50.conf rm /etc/dhcp/dhclient.conf 5. Ensure that the host name for each compute host has a matching host name in any connected HP 3PAR storage system. The host name must be specified as an FQDN and not an IP address. 6. If you plan to connect to the 3PAR storage system using iSCSI, make sure that every compute node has an interface connected to the Block Storage Network. The interface connected to the Block Storage Network should have DHCP or static IP assignment. 7. Disable the firewalld service and replace it with iptables. Execute the following commands on RHEL KVM 7.0 compute nodes: systemctl stop firewalld systemctl disable firewalld yum install iptables-services -y # Create the following file to ensure iptables starts touch /etc/sysconfig/iptables systemctl enable iptables && systemctl start iptables See the recommendation from RHEL at Red Hat Customer Portal (subscription required) for more information. Configure networks for a KVM compute node For VLAN configurations, the interfaces for the Data Center Management Network and the Cloud Management Network can be in a VLAN tagged trunk. Example: EthM and EthN are in a bond ifcfg-bond0. Both interfaces carry the Data Center Management Network and the Cloud Management Network as tagged VLANs 100 and 101. The interfaces are ifcfg-bond0.100 and ifcfg-bond0.101 (a sample sub-interface file follows below). NOTE: Do not bond interfaces on the Cloud Data Trunk before activation. During activation, you can supply multiple interfaces on the Cloud Data Trunk and they will be bonded as part of the process. For the other CloudSystem networks, you can bond interfaces before activation. For VxLAN configurations, the Cloud Data Trunk interfaces for the compute hosts should be configured to access ports tagging the Tenant Underlay Network VLAN, not the Cloud Data Trunk port. Creating KVM compute nodes 129
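For the bonded VLAN example above, the sub-interface files on the compute node would look roughly like this sketch; the bond name, VLAN ID, and addressing are placeholders for your environment:
# /etc/sysconfig/network-scripts/ifcfg-bond0.100 (Data Center Management Network)
DEVICE=bond0.100
VLAN=yes
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=<Data Center Management Network IP>
NETMASK=<Data Center Management Network netmask>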

130 1. Configure the network device where the Cloud Management Network is plumbed. In the example below, ethm represents the network device. Modify /etc/sysconfig/network-scripts/ifcfg-ethm: DEVICE="ethM" BOOTPROTO="dhcp" NM_CONTROLLED="no" ONBOOT="yes" TYPE="Ethernet" PEERDNS="yes" PERSISTENT_DHCLIENT=1 DHCP_HOSTNAME=<NON-FQDN> The DHCP_HOSTNAME should be a value other than the FQDN, for example the short host name of the compute node. The DHCP server uses this value to determine the host name to prepend to the hpiscmgmt.local domain to get the fully-qualified domain name for the Management appliance. 2. Edit the file /etc/sysconfig/network on the compute node with the following entries: NETWORKING=yes HOSTNAME=<FQDN> DHCP_HOSTNAME=<NON-FQDN> 3. Bring up the Cloud Management Network: ifdown ethm ifup ethm 4. Configure the network device where the Data Center Management Network is plumbed. In the example below, ethn represents the network device. DEVICE=ethN IPADDR=<your management network IP address> NETMASK=<your management network NETMASK> NM_CONTROLLED= no PEERDNS= no PERSISTENT_DHCLIENT=1 BOOTPROTO= static ONBOOT= yes 5. Bring up the Data Center Management Network: ifdown ethn ifup ethn 6. Configure the network device where the Cloud Data Trunk is plumbed. In the example below, ethp and ethq represent the network device. DEVICE=[ethP] or [ethq] NM_CONTROLLED= no PEERDNS= no ONBOOT= yes BOOTPROTO= none 7. Bring up the Cloud Data Trunk: ifdown ethp ifup ethp ifdown ethq ifup ethq 130 Compute node creation

131 8. Install or upgrade NIC firmware (kernel parameters): On KVM 7.0, install the Emulex NIC (be2net) device driver from the HP Support Center. On KVM 6.5, install the Emulex NIC (be2net) device driver or higher. See Citrix support. TIP: Emulex driver and firmware versions should match, especially the second set of digits. For example, if you have a 10.2.x driver but have 10.1.x firmware, upgrade the firmware. To verify the version of your be2net NIC driver, run ethtool -i for every interface (ethM through ethQ): # ethtool -i eth<M to Q> If you are using a be2net driver, set the rx_frag_size parameter to 8192. To verify, run cat /sys/module/be2net/parameters/rx_frag_size on the compute node. To change, add the line options be2net rx_frag_size=8192 to the file /etc/modprobe.d/be2net.conf. Create this file if it does not exist. If you are using an environment configured for Direct Virtual Routers (DVR), configure the network device where the External Network is plumbed. Check RHEL KVM 6.5 and 7.0 dependencies The following packages on the RHEL DVD are installed on the KVM compute node during installation or activation. Table 9 RHEL KVM common (6.5 and 7.0) dependencies curl iscsi-initiator-utils ipset iptables-ipv6 ipmitool vconfig python-libguestfs libvirt-python libvirt or higher nfs-kernel-server openssh-clients lvm2 rsync genisoimage bridge-utils openvswitch gtk-vnc gtk-vnc-python Table 10 RHEL KVM 7.0 dependencies libvirt-daemon-kvm conntrack-tools iscsi-initiator-utils selinux-policy-devel policycoreutils selinux-policy policycoreutils-devel selinux-policy-targeted gtk-vnc2 policycoreutils-python Creating KVM compute nodes 131
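To check in advance which of these dependencies are already present on a node, a simple RPM query is enough. This sketch spot-checks a subset of the packages listed above:
for pkg in curl iscsi-initiator-utils ipset ipmitool libvirt lvm2 rsync genisoimage bridge-utils openvswitch; do rpm -q $pkg; done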

132 Create an RHEL repo on a KVM compute node 1. Create a yum repo by modifying /etc/yum.repos.d/rhel-dvd.repo. 2. Add the name of the RHEL DVD, and if you have an external repository that is accessible by the compute node, set that repository as baseurl. Otherwise, set baseurl as shown: [RHELDVD] name=<locally Mounted RHEL DVD> baseurl=file:///rhel-dvd enabled=1 3. Create a local repo with the DVD image that will load the RHEL ISO in the ilo: mount /dev/cdrom /mnt/ mkdir /RHEL-DVD/ cp -r /mnt/* /RHEL-DVD/ rpm --import /etc/pki/rpm-gpg/rpm-gpg-key-redhat-release 4. Disable the Red Hat Subscription Manager. Modify /etc/yum/pluginconf.d/subscription-manager.conf: [main] enabled=0 5. Prevent RHEL from updating to a higher version, and prevent the update of sosreport. Edit /etc/yum.conf to add the following lines: exclude=kernel* sos* redhat-release* 6. Verify the repo: yum clean all yum update 7. Install packages to enable attaching volumes to the compute node: yum install -y sysfsutils sg3_utils 8. Reboot the compute node. IMPORTANT: If at any point you run service network restart, you must reboot the compute node to re-establish security group settings for the virtual machines running on the compute node. Calculating the number of instances that can be provisioned to a compute node The maximum number of virtual machines that can be provisioned to a compute resource is based on the following: Amount of installed memory, available disk capacity, and number of CPU cores on the compute resource Flavor settings of the virtual machines to be provisioned Resource oversubscription, which is individually applied to the memory, disk, and CPU calculation NOTE: On the Compute Summary screen, the virtual storage allocation does not include: The reserved storage for the image cache, which is 5% of the total storage size available for the compute node The space occupied by the operating system, if it is residing on the same volume Pre-existing instances that were not provisioned by CloudSystem The following table shows resource oversubscription rates so you can properly dimension the capacity of your compute resources based on virtual machine size requirements. HP recommends that you do not change these values. 132 Compute node creation

133 Table 11 Resource oversubscription rates for ESXi, Hyper-V, and KVM compute nodes
Physical resource | Virtual resource | Physical to virtual oversubscription rate
1 CPU core | 16 CPU cores | 1:16
1 GB RAM | 1.5 GB RAM | 1:1.5
1 TB disk | 1 TB disk | 1:1
Calculating the number of instances that can be provisioned to a compute node 133
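As an illustration of how these rates combine with flavor settings, consider a hypothetical host with 16 physical CPU cores, 128 GB of RAM, and 2 TB of compute storage. Applying the oversubscription rates above gives 16 x 16 = 256 virtual CPUs, 128 x 1.5 = 192 GB of virtual memory, and 2 TB of virtual disk. For a flavor defined as 2 vCPUs, 4 GB RAM, and 40 GB disk, the per-resource limits are 256 / 2 = 128 instances, 192 / 4 = 48 instances, and 2048 / 40 = 51 instances, so memory is the constraining resource and roughly 48 instances of that flavor fit on the host, before accounting for the image cache reserve and any pre-existing workloads noted above.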

134 21 Compute node activation and management From the Compute Nodes screen in the Operations Console main menu, you can add, activate, deactivate, and delete compute nodes from your cloud. Adding compute nodes to the cloud (page 134) Activate an ESXi compute cluster (page 135) Activate a Hyper-V compute node (page 136) Activate a KVM compute node (page 138) Creating compute node host aggregates and availability zones (page 139) Compute node details (page 140) Deactivate a compute node (page 141) Delete a compute node (page 141) Within a cloud environment, compute nodes form a core of resources. A compute node provides the ephemeral storage, networking, memory, and processing resources that can be consumed by virtual machine instances. When an instance is created, it is matched to a compute node with available resources. A compute node can host multiple instances until all of its resources are consumed. CloudSystem can simultaneously manage the following types of compute nodes: ESXi clusters Hyper-V compute nodes (clustered or standalone) KVM compute nodes Compute resources are always placed in a common resource pool for provisioning. Instances are deployed to a hypervisor based on the image type. The Compute Nodes screen in the Operations Console displays the compute clusters and compute nodes in the cloud. You can activate, deactivate, and delete compute nodes from this screen. The Compute Summary and Dashboard in the Operations Console provide additional views of compute node resources. Adding compute nodes to the cloud Compute nodes are added to the cloud in different ways, depending on the type of compute node. ESXi clusters appear on the Compute Nodes screen after you register the management VMware vcenter. This action provides CloudSystem with the location and credentials of the VMware vcenter managing the ESXi cluster, and retrieves information about the ESXi cluster. Hyper-V and KVM compute nodes appear on the Compute Nodes screen after they are connected to and issued a DHCP lease from the Cloud Management Network. NOTE: If a network error prevented a KVM or Hyper-V compute node from being displayed on the Compute Nodes screen, you can manually add it to the cloud by clicking Import. After compute nodes are added to the cloud, their status is Imported. After compute nodes are added, you can activate them. Activated ESXi clusters, Hyper-V compute nodes, and KVM compute nodes are ready to host instances. For more information, see Compute node creation (page 123). 134 Compute node activation and management

135 Activating a compute node Activating a compute node or cluster performs the configuration required to bring the system into the cloud. NOTE: If you plan to activate a cluster or compute node that is already hosting instances managed by other tools, then you must manually remove the instances first. These instances are not managed in CloudSystem and will consume resources that could eventually cause an oversubscription problem. Remove the instances using the same tools that you are currently using to manage the instance. This action cannot be accomplished in CloudSystem. Activate an ESXi compute cluster You can activate ESXi clusters in the Operations Console Compute Nodes screen after you register a connection with VMware vcenter on the Integrated Tools screen. VMware vcenter acts as a central administrator for ESXi clusters that are connected on a network. VMware vcenter allows you to pool and manage the resources of multiple hosts, as well as monitor and manage your physical and virtual infrastructure. Prerequisites CloudSystem requirements have been configured in the ESXi cluster. For more information, see Compute node creation (page 123). The VMware vcenter is registered on the Integrated Tools screen. See Register VMware vcenter (page 106). The OVSvAapp image is converted to a template in vcenter and resides in the same vcenter as the compute clusters. The OVA image is included in the HP Helion CloudSystem Tools release package on the HP Software Depot at Procedure 84 Activate an ESXi cluster 1. From the CloudSystem Operations Console main menu, select Compute Nodes. Do not activate an ESXi cluster in more than one cloud. 2. Click the Activate button or select the row of the cluster or compute node you want to activate, and click Activate from the drop-down list or from the Action menu. You can also activate multiple compute nodes at the same time. Check the row of each compute node you want to activate, then click Activate. 3. Specify one of the following: The name of an existing VMware vsphere Distributed Switch (vds) on the Cloud Data Trunk One or more free interfaces (NICs) used to connect to the Cloud Data Network. (A Distributed Switch will be automatically created.) Example vmnic1, vmnic2 If your tenant underlay type is VxLAN, additional free NIC information is required. 4. Click Activate. To exit without activating the cluster or compute node, click Cancel. 5. Verify that the cluster state is ACTIVATED on the Compute Nodes overview screen. In a short time, the cluster status will change to green. Activating a compute node 135
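Once the cluster shows as activated, it should also be registered with the compute scheduler. An illustrative check from a host with the OpenStack clients and admin credentials (the activated cluster appears as a hypervisor entry):
nova hypervisor-list
nova hypervisor-stats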

136 Expand an activated cluster You can add additional ESXi hosts to an activated cluster. Any instances running on the activated compute hosts already in the cluster will continue to run. If you have virtual machines configured on the new host, you must move them before you add the host to the cluster. The compute scheduler will not recognize virtual machines created outside of the CloudSystem environment. Procedure 85 Expanding an activated cluster 1. Using administrator credentials, log in to vcenter. 2. Add the new ESXi compute host in maintenance mode to the activated cluster. Adding the host in maintenance mode ensures that VMware DRS will not schedule new VMs to the newly added host. OVSvApp is not yet deployed on the host, so any VMs added will go into error mode. 3. Delete the template cs-ovsvapp_<cluster_name>. 4. Go to Inventory Networking and add the new host to the Data Center Management Network and the Cloud Data Trunk uplink distributed switch. 5. Verify that all hosts in the cluster (including the new host) share the same datastore. 6. Using cloudadmin credentials, log in to ma1. 7. Switch to the root user: sudo -i 8. Source the OpenStack environment variables: export OS_USERNAME=admin export OS_TENANT_NAME=demo export OS_PASSWORD=<password-set-during-first-time installation> export OS_AUTH_URL= export OS_REGION_NAME=RegionOne 9. Activate the cluster: eon resource-activate --cloud-trunk-interface<name-of-vmnics-separated-by-commas> --option update <cluster-resource-id> Activate a Hyper-V compute node You can activate Hyper-V hosts (clustered or standalone) in the Operations Console Compute Nodes screen after they are connected to and issued a DHCP lease from the Cloud Management Network. Activate a Hyper-V compute node (clustered or standalone) (page 137) Clustering a Hyper-V compute node after it has been activated (page 137) Adding a new Hyper-V compute node to an existing Hyper-V cluster after activating other hosts in the cluster (page 138) Prerequisites CloudSystem requirements have been configured in the Hyper-V host. For more information, see Compute node creation (page 123). You are not planning to install Platform Services (Helion Development Platform and DNS as a Service). Hyper-V compute nodes are not supported by Platform Services. 136 Compute node activation and management
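Step 9 needs the cluster's <cluster-resource-id>. This guide does not show how to obtain it; assuming the eon client also provides a listing subcommand (shown here as eon resource-list, an assumption rather than a documented command), the flow from the same shell session would be:
eon resource-list      # assumed subcommand: note the ID of the cluster you expanded
eon resource-activate --cloud-trunk-interface <name-of-vmnics-separated-by-commas> --option update <cluster-resource-id>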

137 The Cloud Management Network is up on the compute node. This establishes the connection between the Management appliance and the compute node. A new IP address is acquired from the Management appliance DHCP server for the compute node. All Cloud controller nodes are up. All three Cloud controller nodes must be reachable for the instance RDP console to work correctly. IMPORTANT: For clustered hosts Before booting instances, ensure that all of the Hyper-V hosts in a Microsoft failover cluster are activated in CloudSystem. If you have a mix of activated standalone and clustered compute nodes and all of your deployed instances need to be highly available, create a host aggregate with Hyper-V compute nodes in the cluster and use an availability zone to launch instances. See Creating compute node host aggregates and availability zones (page 139). Procedure 86 Activate a Hyper-V compute node (clustered or standalone) 1. From the CloudSystem Operations Console main menu, select Compute Nodes. If the compute node does not appear on the overview screen, see Adding compute nodes to the cloud (page 134). 2. Click the Activate button or select the row of the compute node you want to activate, and click Activate from the drop-down list or from the Action menu. HP recommends that you activate all Hyper-V hosts in a cluster. You can activate multiple compute nodes at the same time. Check the row of each compute node you want to activate, then click Activate. 3. Enter the user name and password for the local administrator account for the operating system running on the compute node. These credentials are defined when the compute node hypervisor is provisioned. NOTE: Hyper-V compute nodes can be activated using local administrator account credentials only. Domain credentials are not supported. 4. To create or connect to a Hyper-V virtual switch, do one of the following. Enter one or more free interfaces (NICs) used to connect to the Cloud Data Network to create a Hyper-V virtual switch. For example, if you have two NICs, enter them as follows. Be sure to keep the space before the number. Example Ethernet 0, Ethernet 1 Enter a vswitch Name if a virtual switch already exists on the Cloud Data Trunk interface on the Hyper-V host. 5. Click Activate. To exit without activating the compute node, click Cancel. 6. Verify that the compute node appears in a green active state on the Compute Nodes overview screen. Procedure 87 Clustering a Hyper-V compute node after it has been activated If you activate a compute node as a standalone Hyper-V compute node and then add it to a cluster, you must perform the following steps to enable the HA feature for newly launched instances. 1. Deactivate the standalone compute node. Activating a compute node 137

138 2. Add the compute node to the cluster, then activate it to deploy it as a cluster compute node. Existing instances on the compute node will continue to operate if you do not change the compute node host name. However, existing instances on the compute node will not be highly available. 3. (Optional) If you have a mix of activated standalone and clustered compute nodes and all of your deployed instances need to be highly available, create a host aggregate with Hyper-V compute nodes in the cluster and use an availability zone to launch instances. See Creating compute node host aggregates and availability zones (page 139). Procedure 88 Adding a new Hyper-V compute node to an existing Hyper-V cluster after activating other hosts in the cluster 1. Activate the new Hyper-V compute node in CloudSystem. NOTE: Existing instances with volumes created on other hosts in the cluster cannot migrate to the newly added Hyper-V host in the cluster. 2. (Optional) If you created a host aggregate with Hyper-V compute nodes in the cluster, add the newly added Hyper-V cluster compute node to the host aggregate. Limitations in clustered Hyper-V compute nodes and instances Hyper-V compute nodes in a cluster do not support the boot from volume operation. (Standalone Hyper-V compute nodes do support boot from volume.) Hyper-V provisioned instances created with block-storage volumes do not support: Adding a new host to an existing cluster after attaching or detaching volumes to instances. Powering down a host in a cluster while attaching or detaching volumes to instances. Attaching and detaching volumes is not supported if the instances are in the Running state in the Hyper-V compute cluster. Instances must be powered off before you can attach or detach a volume. This is a limitation in the Failover Clustering feature in Hyper-V. Activate a KVM compute node You can activate KVM compute nodes in the Operations Console Compute Nodes screen after they are connected to and issued a DHCP lease from the Cloud Management Network. Prerequisites CloudSystem requirements have been configured in the KVM host. For more information, see Compute node creation (page 123). The Cloud Management Network is up on the compute node. This establishes the connection between the Management appliance and the compute node. A new IP address is acquired from the Management appliance DHCP server for the compute node. For a cloud management interface where no VLAN tag is required, run ifdown eth1; ifup eth1. For an 802.1Q network trunk where multiple VLANs are coming into the network card, run ifdown eth1.1503; ifup eth1.1503. Procedure 89 Activate a KVM compute node 1. From the CloudSystem Operations Console main menu, select Compute Nodes. If the compute node does not appear on the overview screen, see Adding compute nodes to the cloud (page 134). 138 Compute node activation and management

139 2. Click the Activate button or select the row of the compute node you want to activate, and click Activate from the drop-down list or from the Action menu. You can activate multiple compute nodes at the same time. Check the row of each compute node you want to activate, then click Activate. 3. Enter the user name and password for the operating system running on the compute node. These credentials are defined when the compute node hypervisor is provisioned. 4. Enter one or more free interfaces (NICs) used to connect to the Cloud Data network. If there are multiple interfaces, separate them with commas. Example For RHEL KVM 6.5 compute nodes, use the form eth1, eth2 Example For RHEL KVM 7.0 compute nodes, use the form ens1, ens2 NOTE: If you are using an environment configured for Direct Virtual Routers (DVR), configure the network device where the External Network is plumbed. 5. Click Activate. To exit without activating the cluster or compute node, click Cancel. 6. Verify that the compute node appears in a green active state on the Compute Nodes overview screen. Creating compute node host aggregates and availability zones CloudSystem deploys instances to compute nodes based on the optional availability zone you specify when you launch an instance in the OpenStack user portal. Host aggregates Host aggregates enable the cloud administrator to partition compute deployments into logical groups for load balancing and instance distribution. A host aggregate is a group of hosts with associated metadata. A host can be part of more than one host aggregate. A common use of host aggregates is to provide information for use with the Nova-scheduler, which deploys instances on specific hosts. For example, you might use a host aggregate to group a set of hosts that share specific flavors or images. You can also use host aggregates to separate different classes of hardware or servers on a separate power source. Availability zones A host aggregate is exposed to users in the form of an availability zone. When you create a host aggregate, you have the option of providing an availability zone name. If you specify a name, the host aggregate you created is available as an availability zone that can be requested by users when deploying instances. When users provision resources, they can specify the availability zone from which they want their instance to be deployed. This allows cloud consumers to ensure that their application resources are spread across hosts to achieve high availability in the event of hardware failure. Configuring host aggregates and availability zones in the OpenStack user portal Use the OpenStack user portal to create host aggregates and availability zones, and add hosts or remove hosts from the aggregate. You can also use the OpenStack Nova API or CLI to configure host aggregate and availability zones. For detailed information, see OpenStack Documentation for Juno releases. Creating compute node host aggregates and availability zones 139
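The equivalent Nova CLI calls mentioned above look like the following sketch; the aggregate, availability zone, and host names are placeholders:
nova aggregate-create <aggregate name> <availability zone name>
nova aggregate-add-host <aggregate name> <host name>
nova aggregate-details <aggregate name>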

140 Prerequisites Compute nodes are created and configured CloudSystem appliances are deployed using First-Time Installation in the Operations Console 1. Access the OpenStack user portal using the link on the Operations Console Integrated Tools screen, or enter the URL of the OpenStack user portal in a browser. 2. Log in to the OpenStack user portal using the admin account password you set during First-Time Installation. 3. From the Project tab, select Host Aggregates. 4. Click +Create Host Aggregate. 5. Enter a name for the host aggregate and availability zone. 6. Click the Manage Hosts with Aggregates tab. 7. Click the + button next to hosts you want to add to the aggregate. Hosts can be included in more than one aggregate. IMPORTANT: You cannot use the OpenStack user portal to add ESXi clusters to an aggregate if the cluster name contains an underscore. Example: cluster_1. You must use the OpenStack Nova CLI to add that cluster. See the OpenStack commands: Cinder, Glance (Linux only), Heat, Keystone, Nova, Neutron, Swift chapter of the HP Helion CloudSystem 9.0 Command Line Interface Guide in the Enterprise Information Library. Compute node details From the Operations Console Compute Nodes overview screen, you can show all available data for a compute node. Select the row of a compute node, click the Action menu ( ) and select Expand. The OpenStack user portal contains a Hypervisor tab, which allows administrators to see all activated compute clusters and compute nodes, along with the number of instances attached to each. NOTE: Hyper-V compute nodes are not monitored in this release. The status of Hyper-V compute nodes will be displayed as a question mark. Two groups of data are displayed when the compute data is expanded. The top row shows the allocation usage of the instances. The bottom row shows the physical usage of the instances. Virtual allocation These graphs show the virtual size, which is the physical size multiplied by the oversubscription rate. For clusters, this is the aggregate value of all hypervisors in the cluster. CPU allocation: Number of CPU cores designated for instances. Memory allocation: Amount of memory designated for instances. Storage allocation: Amount of compute storage designated for instances. Physical utilization These graphs show the physical size information. For clusters, this is the aggregate value of all hypervisors in the cluster. CPU usage: Number of actual CPU cores consumed by instances. Memory allocation: Amount of actual memory consumed by instances. Storage usage: Amount of actual compute storage consumed by instances. You can select the Collapse action to collapse the detailed view. 140 Compute node activation and management

141 The Compute Summary and Dashboard provide additional views of compute node resources. See Compute node summary (page 142) and Dashboard (page 68). Deactivate a compute node Use this procedure to deactivate an ESXi cluster or Hyper-V or KVM compute node. A deactivated compute node can no longer host virtual machine instances, but the compute node remains in the cloud and can be reactivated at a later time. Deactivating a KVM compute node deletes it from the Compute Nodes screen. To display a deactivated KVM compute node, log in to the compute node and take down (ifdown ethn) the NIC to the Cloud Management Network, then bring the NIC up (ifup ethn). NOTE: Deactivating ESXi compute clusters and Hyper-V and KVM compute nodes does not remove the disk files from the image cache (_base directory). You must remove these files manually to free disk space. Prerequisites The compute node is activated. No virtual machine instances are deployed on the cluster or compute node. If instances are deployed, then you must remove the instances and redeploy them on a different compute node before you can deactivate the cluster or compute node. Procedure 90 Deactivate an ESXi cluster or Hyper-V or KVM compute node 1. From the CloudSystem Operations Console main menu, select Compute Nodes. 2. Select the row of the cluster or compute node you want to deactivate. You can deactivate multiple compute nodes at the same time. Check the row of each compute node you want to deactivate, then click the down arrow next to the Activate button and select Deactivate. 3. From the drop down list, select Deactivate. 4. Click Confirm Deactivation. To exit the action without deactivating the cluster or compute node, click Cancel. 5. Verify that the cluster or compute node appears in a gray Unknown state on the Compute Nodes overview screen. Delete a compute node After you deactivate a compute node, you can delete the Hyper-V or KVM compute node if you do not plan to reactivate it at a later time. Deleting a compute node removes the node from the CloudSystem Operations Console and removes the ability for CloudSystem to manage the node. The compute node is not actually deleted, and still exists in the hypervisor. If you delete a compute node and want to bring it back into the cloud, click Import on the Compute Nodes screen and supply the requested data. Use this procedure to remove Hyper-V or KVM compute nodes from the Compute Nodes overview screen. The delete action expires the DHCP lease for the compute node and removes it from the Compute Nodes screen. You cannot delete ESXi clusters using the Delete action. ESXi clusters are automatically removed when the vcenter managing the clusters is removed from the Integrated Tools screen. Deactivate a compute node 141

142 Prerequisites The compute node is deactivated. See Deactivate a compute node (page 141). Procedure 91 Delete a Hyper-V or KVM compute node 1. From the CloudSystem Operations Console main menu, select Compute Nodes. 2. Select the row of the compute node you want to delete. You can delete multiple compute nodes at the same time. Check the row of each compute node you want to delete, then click the down arrow next to the Activate button and select Delete. 3. From the drop down list, select Delete. 4. Click Confirm Deletion. To exit the action without deleting the compute node, click Cancel. 5. Verify that the compute node was removed from the Compute Nodes overview screen. Compute node summary The Compute Nodes Summary graph on the Operations Console Compute Summary screen shows the total number of compute nodes in the center of the graph, and the total number of compute nodes in each status below the graph. The Compute Nodes State graph shows the total number of compute nodes in the center of the graph, and the total number of compute nodes in each state below the graph. Click the center of the Compute Nodes Summary graph and Compute Nodes State graph to open the Compute Nodes screen. NOTE: Hyper-V compute nodes are not monitored, so the summary and state graphs return Unknown data. Table 12 Compute nodes summary
Status | Color | Description
Down | Red | A critical alert message was received. Investigate Down compute nodes immediately.
Up | Green | Normal behavior or information from a compute node.
Unknown | Gray | The status of the compute node is unknown, or the operating system on the compute node is Hyper-V.
Possible compute node states are shown in the following table. If there are no compute nodes in a particular state, the state is not shown. Table 13 Compute nodes state
State | Description
Activated | Ready to host instances.
Activating | In process of being activated.
Imported | Recognized by CloudSystem but not ready to host instances.
Deactivating | In process of being deactivated.
Unknown | The state of the compute node is unknown, or the operating system on the compute node is Hyper-V.
142 Compute node activation and management

143 Compute node utilization and allocation graphs

The Physical Utilization and Virtual Allocation graphs provide a visual representation of usage or allocation of physical or virtual CPU, memory, and compute node storage at the current point in time.

The utilization graphs show the physical resources available for use in the virtual environment. For clusters, these are the aggregate values of all hypervisors in the cluster.

NOTE: For Hyper-V compute nodes, virtual allocation information is shown. Health, status, and physical utilization information is not shown.

Table 14 Physical Utilization
CPUs
  Utilized: Number of actual CPU cores consumed by instances in the last polling cycle.
  Percentage Utilized: CPU utilization expressed as a percentage. A high percentage indicates that processes running on the device are consuming a considerable amount of CPU resources. If the percentage appears frozen at or near 100%, a process might not be responding.
Memory (GB)
  Utilized: Amount of actual memory consumed by instances in the last polling cycle.
  Percentage Utilized: Memory utilization expressed as a percentage.
Storage for compute nodes (GB)
  Utilized: Amount of actual compute storage consumed by instances in the last polling cycle.
  Percentage Utilized: Compute storage utilization expressed as a percentage.

The allocation graphs show the virtual limit, which is the physical total multiplied by the oversubscription rate. For clusters, these are the aggregate values of all hypervisors in the cluster.

Table 15 Virtual Allocation
CPUs
  Allocated: Number of CPU cores designated for instances.
  Percentage Allocated: CPU allocation expressed as a percentage.
Memory (GB)
  Allocated: Amount of memory designated for instances.
  Percentage Allocated: Memory allocation expressed as a percentage.
Storage for compute nodes (GB)
  Allocated: Amount of compute storage designated for instances.
  Percentage Allocated: Compute storage allocation expressed as a percentage.

Instances history graph

The Instances History graph shows the number of instances created and deleted per day in the past seven days. The numbers below the graph are the average number of instances created and deleted in the past seven days.
Compute node utilization and allocation graphs 143
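As a concrete illustration of the virtual limit used by the allocation graphs above (the limit is the physical total multiplied by the oversubscription rate): the ratios shown here are the stock OpenStack Nova defaults and are an assumption; your cloud may be tuned differently.

   # Hypothetical host: 16 physical cores and 128 GB of RAM.
   # Default Nova ratios: cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5.
   echo $(( 16 * 16 ))       # 256 vCPUs reported as the CPU virtual limit
   echo "128 * 1.5" | bc     # 192 GB reported as the memory virtual limit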

144 Part IV Optional services installation (Swift and Platform Services)

CloudSystem includes two optional services that you can install and configure after you have installed the CloudSystem appliances:
- Object storage (OpenStack Swift) (page 145)
- Platform Services, including Helion Development Platform and DNS as a Service (page 165)

145 22 Object storage (OpenStack Swift)

Installing Object storage

HP Helion CloudSystem's object storage is based on OpenStack Swift technology. CloudSystem allows you to set up a physical object storage solution and connect it to your cloud environment, where a cloud user can store large amounts of unstructured data and retrieve objects on publicly accessible physical machines. Object storage has the capacity to scale from a few terabytes (TB) to multiple petabytes (PB) of storage and is designed to scale horizontally, handling large numbers of simultaneous connections.

Object storage model

The recommended object storage model includes a minimum of four nodes:

2 Proxy Account Container (PAC) nodes
- The Proxy Server ties together the Swift architecture. For each request, it looks up the location of the account, container, or object in the ring and routes the request accordingly. The public API is exposed through the Proxy Server.
- The Account Server manages container listings. The listings are stored as SQLite database files and replicated across the cluster similar to how containers are managed. These servers require more I/O speed and less storage space than object servers.
- The Container Server manages object listings. It keeps track of the objects in a specific container. The listings are stored as SQLite database files and replicated across the cluster similar to how objects are managed.

2 Object nodes
- The Object Server is a simple blob storage server that can store, retrieve, and delete objects stored on local devices. Objects are stored as binary files on the filesystem with metadata stored in the file's extended attributes. Each object is stored using a path derived from the object name's hash and the operation's timestamp. The last write always wins, which ensures that the latest object version is served.

The two PAC nodes and two Object nodes are divided evenly into two zones. This means that each zone has one PAC node and one Object node, to start. A zone represents redundancy across data centers and can be used to group devices based on physical location.

NOTE: The best practice for managing server failures is to use a RAID 10 configuration on the OS drives and a RAID 0 (or no RAID) configuration on non-OS drives.
Installing Object storage 145

146 Figure 13 Object storage configuration For specific hardware requirements, see the HP Helion CloudSystem 9.0 Support Matrix in the Enterprise Information Library. Relationship to CloudSystem virtual appliances The Management appliance is the management point for all object storage cluster operations. Operators can access a console on this appliance to download updates and access logs. The Cloud controller provides authentication services for object storage users. Load balancing is initially provided via the HAProxy, but should be replaced with an external load balancer before going into production. The Monitoring appliance collects data from monitoring agents running on the object proxy nodes. The Update appliance supports upgrades to object storage functionality. Benefits Cluster capacity can be expanded on-demand Data traffic can be separated over multiple object storage networks Data is protected using a high-availability configuration Multi-tenancy support is available Object storage configuration Process Overview 1. Overview of object storage networks (page 147) 2. Prepare the object storage deployer (page 147) 3. Prepare servers for provisioning (page 148) 4. Install object storage (Swift) (page 149) 146 Object storage (OpenStack Swift)

147 Overview of object storage networks

Four networks support object storage in CloudSystem:
- The Cloud Management Network (CLM) handles cluster management operations. The VLAN ID for this network is assigned during CloudSystem deployment.
- The PXE Network (PXE) is an untagged VLAN used to provision bare metal nodes through PXE. This network is automatically created during CloudSystem deployment, but requires some additional configuration through the CLI. The PXE network must be a private network with IPAM that is managed by the csprovisioner CLI tool. PXE is connected to the object storage nodes exclusively at eth0 (or the first NIC of the machine). No other networks can be connected to this NIC. See Prepare servers for provisioning (page 148).
- The Object Proxy Network (OPN) is the load balancing network that connects the PAC nodes and the Cloud controllers. This network is configured when you run the CloudSystem First-Time Installer.
- The Object Storage Network (OBS) is outside of the CloudSystem defined networks and must be created manually when configuring object storage. All account, container, and object services run on this network.

Two NICs are required for object storage, but you can optionally use a three- or four-NIC port configuration. A four-NIC configuration is the method described below. NICs are mapped to the appliances as follows:

Swift proxy server
- OPN -> eth3 / trunk port
- OBS -> eth2 / trunk port
- CLM -> eth1 / trunk port
- PXE -> eth0 / access port

Swift object server
- OBS -> eth2 / trunk port
- CLM -> eth1 / trunk port
- PXE -> eth0 / access port

Cloud Controller (CMC/CC1/CC2)
- CAN -> eth1 / trunk port
- CLM -> eth1 / trunk port
- DCM -> eth1 / trunk port
- PXE -> eth0 / trunk port
- OPN -> eth3 / trunk port

Prepare the object storage deployer

Prerequisites
- CloudSystem is installed in your environment.
- The hlinux_hotrod.iso file is unpacked from the CloudSystem Tools release kit and added to your staging environment.
- You have a CIDR range for the PXE Network that does not conflict with other networks in your environment.
- Optional: You have a public key and remote user configured.

Procedure 92 Preparing the object storage deployer
1. Using cloudadmin credentials, log in to the Management appliance (ma1).
2. Switch to the root user:
   sudo su
Installing Object storage 147

148 3. Initialize the PXE server and set the IP address for the PXE interface:
   csprovisioner -c <PXE_CIDR>
   where <PXE_CIDR> is the CIDR range reserved for the PXE network, for example a /24 network.
4. Transfer the hlinux_hotrod.iso from your staging environment to the Management appliance (ma1). You can use a secure copy command or SFTP client to transfer the file.
5. Using the root account on the Management appliance (ma1), mount the hlinux_hotrod.iso to the MA file system:
   mkdir /mnt/hotrod
   mount -t iso9660 /mnt/images/hlinux_hotrod.iso /mnt/hotrod
6. Change directory to the ISO mount directory and run the utility script:
   cd /mnt/hotrod
   ./import_hotrod_iso.sh
   NOTE: This step may generate warning messages about kernel option lengths exceeding supported values. You can ignore these warnings.
7. Optional: Unmount and delete the ISO.

Prepare servers for provisioning

The csprovisioner utility is available on the Management appliance (ma1) and is used to provision bare metal servers for object storage deployment. Use the csprovisioner utility with a JSON file that contains an entry for each server you plan to provision. When provisioning, set the server to legacy boot, since UEFI boot is not supported by the PXE network.
148 Object storage (OpenStack Swift)

149 Procedure 93 Preparing bare metal servers for provisioning
1. Create a bare metal JSON file that includes an entry for each server you plan to provision.

Sample bare metal JSON file
NOTE: The pm-ip-addr in the example below is the iLO IP address.
{
  "servers": [
    {
      "name": "swf-1",
      "pxe-mac-addr": "00:50:56:00:00:01",
      "pxe-ip-addr": "<pxe-ip-address>",
      "pm-username": "administrator",
      "pm-password": "**********",
      "pm-ip-addr": "<ilo-ip-address>"
    },
    {
      "name": "swf-2",
      "pxe-mac-addr": "00:50:56:00:00:02",
      "pxe-ip-addr": "<pxe-ip-address>",
      "pm-username": "administrator",
      "pm-password": "**********",
      "pm-ip-addr": "<ilo-ip-address>"
    },
    {
      "name": "swf-3",
      "pxe-mac-addr": "00:50:56:3F:00:01",
      "pxe-ip-addr": "<pxe-ip-address>",
      "pm-username": "administrator",
      "pm-password": "**********",
      "pm-ip-addr": "<ilo-ip-address>"
    },
    {
      "name": "swf-4",
      "pxe-mac-addr": "00:50:56:3F:00:02",
      "pxe-ip-addr": "<pxe-ip-address>",
      "pm-username": "administrator",
      "pm-password": "**********",
      "pm-ip-addr": "<ilo-ip-address>"
    }
  ]
}

2. Set bare metal servers to One-Time Network Boot and run the command:
   csprovisioner -a <BAREMETAL-JSON>
   NOTE: You can monitor the progress of the PXE boot from the provisioner. Once PXE boot completes, wait until you can ping the servers. This may take some time.
3. Generate a list of the provisioned servers:
   csprovisioner -l

Install object storage (Swift)

The swift-deployer is used to facilitate object storage cluster deployment in CloudSystem. The driving component of swift-deployer is Ansible, which is key in installing, configuring, and managing object storage components across the cluster. Since swift-deployer works with a flat file database consisting of several JSON and Ansible artifacts, it is only operated through the first Management appliance (ma1).
Installing Object storage 149
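Because both csprovisioner and swift-deployer consume hand-edited JSON files, it can save time to confirm that a file parses cleanly before passing it to the tools, since a stray comma or quote will cause the import to fail. A minimal sketch, assuming Python is available on the appliance; the file name is a placeholder:

   # Pretty-prints the file on success, or reports the parse error and line number on failure.
   python -m json.tool baremetal-servers.json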

150 TIP: To see a complete list of swift-deployer commands, run swift-deployer -h. You can find an explanation of the commands in the HP Helion CloudSystem 9.0 Command Line Interface Guide in the Enterprise Information Library.

Prerequisites
- You have a model plan for the object storage configuration. HP recommends starting with two zones, with each zone containing one PAC node and one Object node.
- Object storage networks are configured. See Overview of object storage networks (page 147).

Procedure 94 Creating the cluster

NOTE: Do not create more than one cluster. Only one cluster is supported.

1. Using cloudadmin credentials, log in to the Management appliance (ma1).
2. Switch to the root user:
   sudo su
3. Generate the Network Configuration JSON file:
   swift-deployer generate-config -t four-nic-network
4. Configure the Network Configuration JSON file according to the format shown in the sample below. It should include:
   - four network entries:
     PXE is the traffic corresponding to the PXE Network
     CLM is the traffic corresponding to the Cloud Management Network
     OBS is the traffic corresponding to the Object Storage Network
     OPN is the traffic corresponding to the Object Proxy Network
   - a supported NIC configuration:
     two-nic network: Two NICs, with one for PXE and another for the OPN, OBS, and CLM network traffic
     three-nic network: Three NICs, with one for PXE and the remaining two for a subset of OPN, OBS, and CLM network traffic
     four-nic network: A separate NIC for each network

Sample Network Configuration (four-nic) JSON file
{
  "interfaces": [
    {
      "port": "eth0",
      "networks": [
        {
          "segment-id": "",
          "network-address": { "cidr": " /24", "start-address": " " },
          "type": "vlan",
          "name": "PXE_Network",
          "network-traffic": [ "PXE" ]
        }
      ]
    },
    {
      "port": "eth1",
      "networks": [
        {
          "segment-id": "",
          "network-address": { "cidr": " /21", "start-address": " " },
          "type": "vlan",
          "name": "Cloud_MGMT",
          "network-traffic": [ "CLM" ]
        }
      ]
    },
    {
      "port": "eth2",
      "networks": [
        {
          "segment-id": "",
          "network-address": { "cidr": " /24", "start-address": " " },
          "type": "vlan",
          "name": "Cluster_MGMT",
          "network-traffic": [ "OBS" ]
        }
      ]
    },
    {
      "port": "eth3",
      "networks": [
        {
          "segment-id": "",
          "network-address": { "cidr": " /24", "start-address": " " },
          "type": "vlan",
          "name": "Object_Proxy_Network",
          "network-traffic": [ "OPN" ]
        }
      ]
    }
  ]
}

5. Create a cluster:
   swift-deployer create-cluster -n <CLUSTER_NAME> -m <CLUSTER_MODEL> -f <NETWORK_CONFIG_FILE>
   NOTE: The cluster name can be any string, but the recommended cluster model is pac_o.
6. To view the cluster ID:
Installing Object storage 151

152 swift-deployer list-clusters
7. To view the cluster details during the various stages of deployment:
   swift-deployer show-cluster -c <CLUSTER_ID>

Procedure 95 Allocating servers for the cluster

IMPORTANT: If you are planning to allocate a server that has been repurposed, make sure that the server is clean and that all stored SSH keys (known_hosts) have been removed before allocating it to your cluster.

1. Using cloudadmin credentials, log in to the Management appliance (ma1).
2. Switch to the root user:
   sudo su
3. Generate the Server Inventory JSON file:
   swift-deployer generate-config -t server-inventory
4. Configure the Server Inventory JSON file according to the format shown in the sample below. It should include an entry for each available server.
   NOTE: Server roles represent the object storage service that is deployed on the node. Make sure that the roles are identified correctly and correspond to the model you plan to use. The default is two PAC nodes and two Object nodes.

Sample Server Inventory JSON file
{
  "servers": [
    {
      "pxe-mac-addr": "00:50:56:00:00:01",
      "vendor": "HP",
      "zone": "1",
      "failure-zone": "swift",
      "model": "DL680",
      "pxe-ip-addr": "<pxe-ip-address>",
      "region": "1",
      "server-roles": [ "proxy", "account", "container" ]
    },
    {
      "pxe-mac-addr": "00:50:56:00:00:02",
      "vendor": "HP",
      "zone": "2",
      "failure-zone": "swift",
      "model": "DL680",
      "pxe-ip-addr": "<pxe-ip-address>",
      "region": "1",
      "server-roles": [ "proxy", "account", "container" ]
    },
    {
      "pxe-mac-addr": "00:50:56:3F:00:01",
      "vendor": "HP",
      "zone": "1",
      "failure-zone": "swift",
      "model": "DL680",
      "pxe-ip-addr": "<pxe-ip-address>",
      "region": "1",
      "server-roles": [ "object" ]
    },
    {
      "pxe-mac-addr": "00:50:56:3F:00:02",
      "vendor": "HP",
      "zone": "2",
      "failure-zone": "swift",
      "model": "DL680",
      "pxe-ip-addr": "<pxe-ip-address>",
      "region": "1",
      "server-roles": [ "object" ]
    }
  ]
}

5. To view a list of provisioned servers:
   csprovisioner -l
6. Allocate servers to the cluster:
   swift-deployer allocate-nodes -c <CLUSTER_ID> -i <SERVER_INVENTORY_FILE>
7. To verify the servers allocated to the cluster:
   a. Generate a list of the nodes allocated to the cluster:
      swift-deployer list-nodes -c <CLUSTER_ID>
   b. Confirm that you can log in to each node from the Management appliance (ma1) and store the SSH key:
      ssh -i /home/sirius/.ssh/sirius_id_rsa sirius-access@<cloud MGMT IP_for_the_cluster> -o StrictHostKeyChecking=no exit
      Example:
      ssh -i /home/sirius/.ssh/sirius_id_rsa sirius-access@<node-1 CLM IP> -o StrictHostKeyChecking=no exit
      ssh -i /home/sirius/.ssh/sirius_id_rsa sirius-access@<node-2 CLM IP> -o StrictHostKeyChecking=no exit
      ssh -i /home/sirius/.ssh/sirius_id_rsa sirius-access@<node-3 CLM IP> -o StrictHostKeyChecking=no exit
      ssh -i /home/sirius/.ssh/sirius_id_rsa sirius-access@<node-4 CLM IP> -o StrictHostKeyChecking=no exit

Figure 14 Cluster details
Installing Object storage 153
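If the cluster contains more than a few nodes, the key-caching check in step 7b can be scripted instead of being run once per node. A minimal sketch, assuming you create a small text file listing the node Cloud Management Network IP addresses, one per line:

   # Log in to each allocated node once so that its SSH host key is stored.
   while read node_ip; do
       ssh -i /home/sirius/.ssh/sirius_id_rsa -o StrictHostKeyChecking=no \
           sirius-access@"$node_ip" exit
   done < node-clm-ips.txt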

154 Procedure 96 Formatting disks 1. Using cloudadmin credentials, log in to the Management appliance (ma1). 2. Switch to the root user: sudo su 3. To generate a list of available disks: swift-deployer list disks c <CLUSTER_ID> n <NODE_CMN_IP> 4. Format the disk using one of the following methods: format a single disk swift-deployer format disks c <CLUSTER_ID> n <NODE_CMN_IP> d <DISK> l <LABEL> format multiple disks Use the generate-config command with a configuration type -disk-conf to generate a sample JSON file. Make sure that the IP address you provide is for the Cloud Management Network. format the disks swift-deployer format disks config c <CLUSTER_ID> f <DISK_CONFIG_FILE> TIP: Do not reformat nodes with existing formatted disks. You can SSH in to the PAC or Object nodes and delete the partition from the disk if you want to re-format the disks using swift-deployer command line. ssh -i /home/sirius/.ssh/sirius_id_rsa sirius-access@ fdisk /dev/sdc d W When deleting a partition, the partition of the current production disks in the cluster do not need to have their partitions removed. Sample Disk Configuration JSON file { " ": [ { "disk": "/dev/sdb", "label": "disk1" }, { "disk": "/dev/sdc", "label": "disk2" } ], " ": [ { "disk": "/dev/sdb", "label": "disk1" }, { "disk": "/dev/sdc", "label": "disk2" } ], " ": [ { 154 Object storage (OpenStack Swift)

155 ] } }, { }, { "disk": "/dev/sdb", "label": "disk1" "disk": "/dev/sdc", "label": "disk2" "disk": "/dev/sdd", "label": "disk3" } ], " ": [ { "disk": "/dev/sdb", "label": "disk1" }, { "disk": "/dev/sdc", "label": "disk2" }, { "disk": "/dev/sdd", "label": "disk3" } Procedure 97 Activating the cluster NOTE: If CloudSystem is configured with Active Directory or openldap for authentication, then make sure to create the user in Active Directory or openldap. Then, update the same password in the Cluster Specification JSON file before activating the cluster. 1. Using cloudadmin credentials, log in to the Management appliance (ma1). 2. Switch to the root user: sudo su 3. Generate the Cluster Specification JSON file: swift-deployer generate-config -t cluster-spec 4. Configure the Cluster Specification JSON file according to the format shown in the sample below. It should include: details of the basic account, container and object rings disks to be added to each ring Make sure that the IP address used for each ring is an Object Storage Network IP address. Sample Cluster Specification JSON file { "container": { "min_part": 1, "replica": 3, "servers": [ { "disks": [ "disk1", "disk2" ], "ip_address": " " }, { "disks": [ Installing Object storage 155

156 156 Object storage (OpenStack Swift) "disk1", "disk2" ], "ip_address": " " } ], "part_power": 10 }, "account": { "min_part": 1, "replica": 3, "servers": [ { "disks": [ "disk1", "disk2" ], "ip_address": " " }, { "disks": [ "disk1", "disk2" ], "ip_address": " " } ], "part_power": 10 }, "object": { "min_part": 1, "replica": 3, "servers": [ { "disks": [ "disk1", "disk2", "disk3" ], "ip_address": " " }, { "disks": [ "disk1", "disk2", "disk3" ], "ip_address": " " } ], "part_power": 10 }, "rings": [ "account", "container", "object" ], "authentication": { "auth_protocol": "http", "auth_uri": " "admin_user": "swift", "admin_password": "swift", "auth_host": " ", "admin_tenant_name": "service", "operator_roles": "admin,swiftoperator",

157
      "auth_port": 
   }
}

5. Configure access to keystone from the Management appliance (ma1). (Change the values to those of your environment.)
   export OS_AUTH_URL="<keystone-auth-URL>"
   export OS_PASSWORD="<password-set-during-first-time-installation>"
   export OS_REGION_NAME="RegionOne"
   export OS_TENANT_NAME="demo"
   export OS_USERNAME="admin"
6. Activate the cluster:
   swift-deployer activate-cluster -c <Cluster ID> -s <Cluster Config json>

Procedure 98 Preparing CloudSystem to perform load balancing
1. From the Management appliance (ma1), configure CloudSystem as the HAProxy:
   swift-deployer configure-haproxy -c <cluster ID> -b <Cloud controller CMN VIP>:8080 <Cloud controller DCM VIP>:8080 -t <CMC CMN IP> <CC1 CMN IP> <CC2 CMN IP>
2. Configure keystone:
   swift-deployer configure-keystone -c <cluster ID> -pu <Cloud controller CAN VIP> -pr <Cloud controller CMN VIP> -in <Cloud controller DCM VIP> -p 8080 -t unset -r RegionOne

IMPORTANT: At this point, you can use object storage for testing purposes. If you plan to use it in a production environment, you must configure an external load balancer and configure keystone.

Configure external load balancer and keystone

Prerequisites
- A bare metal server with a supported Linux operating system is connected to the Consumer Access Network, Cloud Management Network, and Object Proxy Network. This server is the external load balancer.
- You created the sirius-access user on the external load balancer. This user has sudo privileges.
- You manually created an ssh-key handshake between the first Management appliance (ma1) and the external load balancer.

TIP: To create the handshake, run the following command:
ssh-copy-id -i /home/sirius/.ssh/sirius_id_rsa.pub sirius-access@<ext-lb-cmn-ip>
To verify the handshake, run the following command:
ssh -i /home/sirius/.ssh/sirius_id_rsa sirius-access@<ext-lb-cmn-ip>
Installing Object storage 157

158 Procedure 99 Configuring the external load balancer and keystone
1. To configure the external HAProxy, run the following command:
   swift-deployer configure-haproxy -c <CLUSTER_ID> -b <EXT-load-balancer-CAN-IP>:8080 -t <EXT-load-balancer-CMN-IP>
2. To configure keystone, run the following command:
   swift-deployer configure-keystone -c <CLUSTER_ID> -pu <SERVICE_PUB_IP> -pr <SERVICE_PVT_IP> -in <SERVICE_INT_IP> -p <SERVICE_PORT> -t <KEYSTONE_ADMIN_TOKEN> -r <REGION_NAME>
   If using the default CloudSystem load balancer, the service Public IP, Private IP, and Internal IP are the Cloud controller CAN-VIP, Cloud controller CMN-VIP, and Cloud controller DCM-VIP respectively. For external load balancers, use the CAN-IP for the public endpoint and the CMN-IP for the other endpoints.
   NOTE: You can find the ADMIN_TOKEN in the /etc/keystone/keystone.conf file on the Cloud controller.
   Example: swift-deployer configure-keystone -c SWF-1 -pu <SERVICE_PUB_IP> -pr <SERVICE_PVT_IP> -in <SERVICE_INT_IP> -p <SERVICE_PORT> -t <KEYSTONE_ADMIN_TOKEN> -r RegionOne -role admin

Managing object storage
- Perform scale operations on a cluster
- Manage rings and storage policies (page 161)
- Monitor a cluster (page 163)
- Backup object storage cluster management data (page 163)
158 Object storage (OpenStack Swift)
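Before moving on to the management tasks listed above, the object store can be exercised end to end from any host that can reach keystone and the proxy endpoint. A minimal sketch using the python-swiftclient command line tool, which must be installed separately; the auth URL, password, and container name are placeholder assumptions for illustration:

   # Show account statistics (confirms authentication and the proxy path).
   swift --os-auth-url http://<Cloud-controller-CAN-VIP>:5000/v2.0 \
         --os-username admin --os-password <admin-password> \
         --os-tenant-name demo stat

   # Create a container, upload a small test object, and list it back.
   swift --os-auth-url http://<Cloud-controller-CAN-VIP>:5000/v2.0 \
         --os-username admin --os-password <admin-password> \
         --os-tenant-name demo upload test-container /etc/hostname

   swift --os-auth-url http://<Cloud-controller-CAN-VIP>:5000/v2.0 \
         --os-username admin --os-password <admin-password> \
         --os-tenant-name demo list test-container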

159 Figure 15 Object storage management tasks Perform scale operations on a cluster IMPORTANT: Run the cs-backup utility to back up object storage deployment files before performing scale operations. Do not interrupt scale operations while they are in progress. Expand a cluster You can expand a cluster by adding new servers (nodes). To prevent networking issues, always add newly provisioned bare metal servers to the cluster that are only configured with the PXE NIC. See Reprovisioning a node that was removed from a cluster (page 160). Procedure 100 Adding nodes to the cluster 1. Using cloudadmin credentials, log in to the Management appliance (ma1). 2. Switch to the root user: sudo su 3. Generate the Server Inventory JSON file: swift-deployer generate-config -t server-inventory Managing object storage 159

160 4. Configure the Server Inventory JSON file according to the format shown in the figure below. It should include an entry for each new server. You can find the leased IP addresses and MAC address details in the /var/lib/misc/dnsmasq.leases file on the Management appliance.

Figure 16 Sample Server Inventory JSON file

5. Activate the cluster with the new nodes:
   swift-deployer expand-cluster -c <CLUSTER_ID> -i <NEW_SERVER_INVENTORY_FILE>
6. If you added a PAC node, configure HAProxy for the node:
   swift-deployer configure-haproxy -c <CLUSTER_ID> -b <CMN_VIP>:8080 <DCM_VIP>:8080 -t <CMC_CMN_IP> <CC1_CMN_IP> <CC2_CMN_IP>
7. To verify the nodes allocated to the cluster:
   swift-deployer list-nodes -c <CLUSTER_ID>

Shrink a cluster

Before removing a node from a cluster, make sure that it is not included in a ring. Remove the node from the ring first, then remove it from the cluster.

IMPORTANT: Do not cancel the shrink-cluster action while it is in progress. Canceling the shrink action may cause the cluster database to become irrecoverable. If you decide to reverse the shrink action, allow the action to complete, then re-expand the cluster using the expand-cluster command.

Procedure 101 Removing a node from a cluster
1. Using cloudadmin credentials, log in to the Management appliance (ma1).
2. Switch to the root user:
   sudo su
3. If needed, remove the node from the ring:
   swift-deployer remove-hosts -c <CLUSTER_ID> -ring <RING_NAME> -ip <IP_ADDRESS>
   NOTE: If you are removing a PAC node, make sure that you also remove it from the Account and Container rings.
4. Remove the node from the cluster:
   swift-deployer shrink-cluster -c <CLUSTER_ID> -n [<NODE_PXE_IPS>]
   Example: swift-deployer shrink-cluster -c SWF-1 -n <NODE_PXE_IP>
If you want to reuse the node later, run the csprovisioner command to remove the node from the provisioned database and then re-provision the node.

Procedure 102 Reprovisioning a node that was removed from a cluster

After you remove a node from a cluster, you can wipe node information from the provisioner database and then re-provision it for reuse at a later time.
160 Object storage (OpenStack Swift)

161 1. Using cloudadmin credentials, log in to ma1.
2. Switch to the root user:
   sudo su
3. Find the node you removed previously:
   csprovisioner -l
4. Remove the node from the provisioner database:
   csprovisioner -r <node-name>
5. Set the node back to PXE boot from the BIOS. The node is ready for reuse.

Manage rings and storage policies

The swift-deployer tool provides a wrapper around the swift-ring-builder tool that is used to manage storage policies and rings.

About policies
Storage policies are tied to rings. To create a new storage policy, you must first create a new ring to map to the policy. By default, Policy-0, which corresponds to the default object ring, is enabled. Only replication policies are supported, which means only new object rings can be created.

About rings
The default rings, namely account, container, and object, are created during cluster activation. When creating a new object ring, the ring name must be object-n, where n represents the policy index. For example, the object-1 ring corresponds to the policy that is assigned index 1. The index is automatically assigned in numeric order, starting at 0, each time a policy is created.

Procedure 103 Performing ring administrative tasks
1. Using cloudadmin credentials, log in to the Management appliance (ma1).
2. Switch to the root user:
   sudo su
Managing object storage 161

162 3. Perform ring administrative tasks. Create a new ring: swift-deployer create-ring -c <CLUSTER_ID> -ring <RING_NAME> -m <MIN_PART_HR> -p <PART_POWER> -r <REPLICA> Example: swift-deployer create-ring -c SWF-1 -ring object-1 -m 18 -p 1 -r 3 Add new disks to ring: swift-deployer add-disks -c <CLUSTER_ID> -ring <RING_NAME> -ip <OBS_IP_ADDRESS> -d <DISKS> [<DISKS>...] Example: swift-deployer add-disks -c SWF-1 -ring object-1 -ip d disk1 disk2 disk3 NOTE: Always use the Object Storage IP address in ring operations. Rebalance the rings after adding new disks: swift-deployer rebalance-rings -c <CLUSTER_ID> -rings <RING_NAMES> [<RING_NAMES>...] Example: swift-deployer rebalance-rings -c SWF-1 -rings account object-1 Distribute rings to the object storage nodes after rebalancing them: swift-deployer distribute-rings -c <CLUSTER_ID> -rings <RING_NAMES> [<RING_NAMES>...] Example: swift-deployer distribute-rings -c SWF-1 -rings account object-1 Remove disks from a ring: To remove disks from a ring: swift-deployer remove-disks -c <CLUSTER_ID> -ring <RING_NAME> -ip <IP_ADDRESS> -d <DISKS> [<DISKS>...] Example: swift-deployer remove-disks -c SWF-1 -ring object-1 -ip d sdd1 sde1 To remove all disks and the host from a ring: swift-deployer remove-hosts -c <CLUSTER_ID> -ring <RING_NAME> -ip <IP_ADDRESS> Example: swift-deployer remove-hosts -c SWF-1 -ring object-1 -ip Do you want to remove the host? y/n y NOTE: After removing disks, you must rebalance and redistribute the rings for the changes to take effect. Procedure 104 Performing policy administrative tasks NOTE: Before creating a new storage policy, first create a ring that can be mapped to the policy. 162 Object storage (OpenStack Swift)

163 1. Using cloudadmin credentials, log in to the Management appliance (ma1).
2. Switch to the root user:
   sudo su
3. Perform policy administrative tasks.
   View a list of existing policies:
   swift-deployer list-storage-policies -c <CLUSTER_ID>
   Create a new storage policy:
   swift-deployer add-storage-policy -c <CLUSTER_ID> -n <POLICY_NAME> -s <DEFAULT>
   Example: swift-deployer add-storage-policy -c SWF-1 -n Gold --default no
   Change a policy to default:
   swift-deployer set-default-policy -c <CLUSTER_ID> -n <POLICY_NAME>
   Example: swift-deployer set-default-policy -c SWF-1 -n Silver
   Deprecate a storage policy:
   swift-deployer deprecate-storage-policy -c <CLUSTER_ID> -n <POLICY_NAME>
   Example: swift-deployer deprecate-storage-policy -c SWF-1 -n Silver

Monitor a cluster

Object storage services and system-related parameters are captured in the Monitoring dashboard. CloudSystem uses the Monasca monitoring service to capture object storage alarms and then display them on the Monitoring dashboard. For a full explanation of CloudSystem monitoring, see Monitoring (page 71).

Monitoring dashboard
Access the Monitoring dashboard from the Operations Console main menu. After you launch the dashboard, log in using the username and password you set for the Operations Console during First-Time Installation. See Viewing monitoring information (page 72).

Monitoring CLI
There are several monitoring tasks that you can perform using the command line interface to the monasca-client API. For a comprehensive list of monitoring commands, see the Monitoring chapter of the HP Helion CloudSystem 9.0 Command Line Interface Guide in the Enterprise Information Library.

Backup object storage cluster management data

HP recommends that you perform a backup each time you update object storage clusters by adding a disk, a node, or applying patches. CloudSystem implements backup and restore as a service (BRAAS) using the attis service, which runs on the Management appliance. You can execute attis commands from the Management appliance.
Managing object storage 163

164 To learn more about backup and restore operations: Find a comprehensive list of supported backup commands in the Backup, restore and recovery chapter of the HP Helion CloudSystem 9.0 Command Line Interface Guide in the Enterprise Information Library. Find an explanation of backup and restore best practices in the Backup, restore and recover CloudSystem appliances chapter of the HP Helion CloudSystem 9.0 Administrator Guide in the Enterprise Information Library. 164 Object storage (OpenStack Swift)

165 23 Platform Services, including Helion Development Platform and DNS as a Service

Helion Development Platform is a Platform as a Service (PaaS) that enables developers to rapidly develop, deploy, and scale applications across a mix of public and private clouds. It provides support for applications developed with Java, .NET, Python, Ruby, Go, Node.js, Scala, Clojure, and Perl, as well as popular database and messaging technologies such as MySQL, Microsoft SQL Server, PostgreSQL, Redis, Memcached, and RabbitMQ.

Platform Services included in CloudSystem are:
- Database as a Service (DBaaS) is based on OpenStack technologies. This service can be managed and configured by IT, but is easily consumable by developers.
- Application Lifecycle Service (ALS) is a Cloud Foundry-based, managed runtime environment for applications.
- Domain Name System as a Service (DNSaaS) is based on the OpenStack Designate project. This service is engineered to help you create, publish, and manage your DNS zones and records securely and efficiently on either a public or private DNS server network.

Obtaining the Platform Services kits

Platform Services zip files are available from the HP Software Depot. Your license entitlement certificate contains the URL of the download page. The zip files you can download are:
- HP Helion Development Platform + DNS as a Service for ESX CloudSystem software - Sept 2015 (and signature file)
- HP Helion Development Platform + DNS as a Service for KVM CloudSystem software - Sept 2015 (and signature file)

The Platform Services zip files contain two vmdk files (for ESXi management hypervisor environments) or one qcow2 file (for KVM management hypervisor environments). You upload the vmdk or qcow2 files to the Cloud controller, where they combine into a single virtual disk. You then mount the disk on the Cloud controller and install individual components.

Helion Development Platform and DNSaaS are deployed as a set of virtual machines. These VMs access management services (centralized logging, ephemeral CA, NTP) on the Data Center Management network via a unique Service Provider Network.

The following sections explain how to install and configure a new deployment of Platform Services on CloudSystem 9.0.

Prerequisites
- CloudSystem 9.0 is installed.
- Swift (object storage) is installed. Swift is required to enable backup/restore and replication for production deployments of the HP Helion Development Platform Database Service (DBaaS). Swift is not installed on CloudSystem 9.0 by default. To install Swift, see Object storage (OpenStack Swift) (page 145).

Process overview
1. Configure the service provider network (page 166)
2. Install the Platform Services disk (page 173)
3. Install HP Helion Development Platform (page 177)
4. Install HP Helion DNS as a Service (page 184)
165
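If you are not sure whether the Swift prerequisite is already satisfied, you can check for an existing object storage cluster from the first Management appliance using the same swift-deployer tool described in the Object storage chapter. A minimal sketch:

   # Run as root on ma1; an empty list means Swift has not been deployed yet.
   sudo swift-deployer list-clusters

If the command is not available on the appliance, object storage has not been set up in your environment.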

166 More information For additional information on HP Helion Development Platform, see: Database as a Service: Using the Database Service DNS as a Service: HP Helion Public Cloud DNS API Specification Designate DNSaaS services for OpenStack Application Lifecycle Service: HP Helion Development Platform: Application Lifecycle Service ALS clusters: Configuring and Deploying an Application Lifecycle Cluster Windows DEAs and Windows SQL Server: Building and Deploying Windows DEA and SQL Server Express Images Configure the service provider network A service provider network creates a network connection for Helion Development Platform (HDP) service virtual machines (for example, DNSaas and ALS) to access management services (for example, NTP, centralized logging, and ephemeral CA). Prerequisites An external router must be present that can route between the DCM (Data Center Management Network) and SVC (Neutron service provider network) and allow them to communicate with each other. NOTE: Individual security requirements determine whether the SVC network can be routed outside of the local environment. In general, this external router should not allow access to the external network (for example, the CAN (Customer Access Network) or EXT (external network)) for either the DCM or SVC networks. 166 Platform Services, including Helion Development Platform and DNS as a Service

167 Figure 17 Service provider network architecture Procedure 105 Create the service provider network 1. Identify a VLAN that will be used as the service provider (SVC) network. (The network administrator may be responsible for this task.) The SVC VLAN ID: must be unique across cloud tenant and provider networks should be attached to the same NICs as the Data Trunk must not be one of the Data Trunk configured VLANs that are provisioned during FTI or in the Operations Console Configure the service provider network 167

168 2. Assign a subnet for the service provider network and make it layer 3 routable to and from the Data Center Management Network (DCM) and the service provider network (SVC). (The network administrator may be responsible for this task.) The subnet VLAN ID: is used to create the Neutron service provider network in Horizon should be part of the Data Trunk connected to the CloudSystem management host and compute nodes if the cloud network type is VLAN (not VxLAN) a. Calculate the size of the SVC subnet by adding 3 to the number of HDP service provider VMs. This will accommodate the VMs plus the network value, gateway and broadcast addresses. The service network start and end values are used to define the DHCP range on the SVC network. The recommended minimum is a /26 network. b. You will need the DCM network address, for example /24 and the SVC network address in the same form. Following is an example configuration. Replace the example values with values for your environment. DCM Subnet <DCM_Subnet> = /24 <DCM_Gateway> = <DCM_VLAN> => not needed SVC Subnet <SVC_Subnet> = /24 <SVC_Gateway> = <SVC_VLAN_ID> = 363 <SVC_Start> = <SVC_End> = From the Admin tab or the Neutron CLI, create the service provider network under the same tenant (for example demo ) dedicated for HDP services. From the Networks screen, click +Create Network. For Name, enter SVC. The Platform Services installers expect the Neutron service provider network to be named SVC. If more than one network named SVC is discovered, the installers require you to provide the correct UUID for the network. For Physical Network, enter provider. For Segmentation ID, enter the VLAN ID of the service provider network (for example, 363). 168 Platform Services, including Helion Development Platform and DNS as a Service

169 4. From the Admin tab or the Neutron CLI, create the SVC subnet for the service provider network. The subnet was assigned by the network administrator in step 2. From the Networks screen, select the SVC network, then click +Create Subnet. On the Subnet screen: a. Add the SVC_SUBNET address, for example, /24. b. Leave the Gateway IP blank. c. Check Disable Gateway. On the Subnet Detail screen: Configure the service provider network 169

170 d. Check Enable DHCP.
e. In Allocation Pools, add the range <SVC_Start> to <SVC_End> assigned in step 2.
f. (Optional) Enter the IP address of the DNS name server.
g. Configure a route from the tenant VM to the Data Center Management Network (DCM) via the SVC network. This route allows for communication with the rabbit endpoint for logging. To add this route, in Host Routes, enter the CIDR of the <DCM_Subnet> and the IP address of the <SVC_Gateway> on the SVC network.
5. Verify the SVC network by adding a DHCP agent. If you do not add an agent in this step, one is created automatically.
On the Networks screen, select the SVC network, then click Add DHCP Agent to create an agent for the Cloud controller node (cc1). (Optional) Create additional DHCP agents for cmc and cc2.
6. Verify the addition of the DHCP agent. The status will initially be Down but will quickly change to Active on all agents. Refresh your browser to verify.
170 Platform Services, including Helion Development Platform and DNS as a Service
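Steps 3 through 5 can also be performed from the Neutron CLI instead of the Horizon screens. A minimal sketch, assuming the OpenStack admin credentials are sourced on the Cloud controller; the VLAN ID matches the example in step 2, and the bracketed values are the placeholders defined there:

   # Create the SVC provider network on the example VLAN (segmentation ID 363).
   neutron net-create SVC --provider:network_type vlan \
       --provider:physical_network provider --provider:segmentation_id 363

   # Create the subnet with no gateway, DHCP enabled (the default), a bounded
   # allocation pool, and a host route to the DCM network for logging traffic.
   neutron subnet-create SVC <SVC_Subnet> --name SVC-subnet --no-gateway \
       --allocation-pool start=<SVC_Start>,end=<SVC_End> \
       --host-route destination=<DCM_Subnet>,nexthop=<SVC_Gateway>

   # Optionally pin a DHCP agent to the network, then confirm that it is active.
   neutron agent-list | grep "DHCP agent"
   neutron dhcp-agent-network-add <dhcp-agent-id> SVC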


172 7. Create a test instance. a. Create a VM on the SVC network from Horizon in Project Instances Launch Instance. Create the test VM using the distro of your choice, for example, CentOS. You will need the default access credentials, for example, root/password. b. On the Networking tab, select SVC from Available networks, then click Launch. c. From Horizon Project Instances, view the instance and note the IP address of the VM instance just created. A successful launch will result in an Active Status and a Running Power State. It may take several minutes for this process to complete. (An Error Status is often caused by an incorrect network configuration.) 172 Platform Services, including Helion Development Platform and DNS as a Service

173 8. Create reverse routes back to the service network from the cloud controllers. In the following commands, <SVC_network> is the SVC subnet (a /24 network in the earlier example) and <DCM_Gateway> is the gateway to the DCM network.
a. Log in to the MA1 node using the cloudadmin credentials set during First-Time Installation. Add the route by running:
   ip route add <SVC_network> via <DCM_Gateway> dev eth0
b. SSH to cmc and enter the password when prompted:
   ssh cmc
c. Add the route on cmc:
   sudo ip route add <SVC_network> via <DCM_Gateway> dev eth0
d. Add the route in the /etc/network/interfaces.d/eth0 file on cmc:
   up route add -net <SVC_network> gw <DCM_Gateway> dev eth0
   Be sure to add this line as the first up route add command in the DC Management Network section. For example:
   # DC Management Network
   auto eth0
   iface eth0 inet static
   address <cmc DCM address>
   netmask <DCM netmask>
   gateway <DCM gateway>
   up route add -net <SVC_network> gw <DCM_Gateway> dev eth0
   up route add -net 0.0.0.0/0 gw <DCM gateway> dev eth0
e. SSH to cc1 and repeat steps 8c and 8d.
f. SSH to cc2 and repeat steps 8c and 8d.
9. Verify that the SVC network was configured correctly.
a. From cc1, cc2, or cmc (or once from each node), SSH to the test instance.
b. From the test instance, ping the Management appliance IP address.

Install the Platform Services disk

After you download the Platform Services disks, you must place the disks on their respective management cluster file systems.
Install the Platform Services disk 173

174 Platform Services are supported on two homogeneous CloudSystem configurations:
- ESXi (CloudSystem management plane) on ESXi (compute plane)
- KVM (CloudSystem management plane) on KVM (compute plane); this is RHEL/KVM

Hypervisor and compute node   Disk type
ESXi                          vmdk
KVM                           qcow2

Process overview
1. Mount the ESXi or KVM disk onto the Cloud controller (cmc) at /export/sherpa/import/platformservices
2. NFS mount that location to the other two controllers (cc1 and cc2).

Option 1: ESXi management hypervisor and compute node installation

For this option, your environment must contain an ESXi management hypervisor and ESXi compute nodes. You cannot have any KVM or Hyper-V compute nodes in your cloud.

Procedure 106 Mount the ESXi vmdk disk
1. Copy the vmdk disks using the vSphere Web Client into the storage associated with the Cloud controller.
   a. From Inventory > VMs and Templates, select a VM (for example, cs.mgmt-controller).
   b. Right-click the storage name (for example, sandisk-esx-0).
   c. Browse the datastore, and click the VM name (for example, cs.mgmt-controller).
   d. Upload the disks (platform-services-vmdk_*.vmdk and platform-services-vmdk_*-flat.vmdk) to the datastore where the Cloud controller (cmc) can access them.
2. Add the existing vmdk hard disk to the cmc virtual machine in the vSphere Web Client. This step is documented at the VMware vSphere Documentation Center.
3. Rescan the SCSI bus to which the storage devices are connected to make the new hardware visible:
   ls /sys/class/scsi_host/ | while read host ; do echo "- - -" > /sys/class/scsi_host/$host/scan ; done
   /sbin/fdisk -l
   See the Red Hat Customer Portal.
4. Create the /export/sherpa/import/platformservices directory on the Cloud controller and mount the disk:
   sudo mkdir -p /export/sherpa/import/platformservices
   mount -t ext4 /dev/sdc1 /export/sherpa/import/platformservices
   df -k
   You have now mounted the platform_services* disk at /export/sherpa/import/platformservices. This is the location where the tools expect the disk to be mounted.

Next step
NFS mount /export/sherpa/import/platformservices to the other two Cloud controllers. See NFS mount the Platform Services disk (page 175).

Option 2: KVM management hypervisor and compute node installation

For this option, your environment must contain a KVM management hypervisor and KVM compute nodes. You cannot have any ESXi or Hyper-V compute nodes in your cloud.
174 Platform Services, including Helion Development Platform and DNS as a Service

175 Procedure 107 Mount the KVM qcow2 disk
1. Copy the platform-services* qcow2 disk to the Cloud controller (cs-mgmt-controller).
2. Log in to the Management appliance (ma1) (if you are not already logged in), and SSH to the Cloud controller (cmc).
3. Verify the name of the platform-services-qcow2*.qcow2 disk and change it in the command below if necessary, then enter the following commands from cmc:
   modprobe nbd max_part=63
   qemu-nbd -c /dev/nbd0 platform-services-qcow2_cs9.0.qcow2
4. Mount the disk on the Cloud controller:
   sudo mkdir -p /export/sherpa/import/platformservices/
   sudo mount /dev/nbd0p1 /export/sherpa/import/platformservices/
5. (Optional) If you need to unmount the disk at a later time, enter the following:
   umount /export/sherpa/import/platformservices
   qemu-nbd -d /dev/nbd0

Next step
NFS mount /export/sherpa/import/platformservices to the other two Cloud controllers. See NFS mount the Platform Services disk (page 175).

NFS mount the Platform Services disk

When the ESXi or KVM platform-services* disk is mounted at /export/sherpa/import/platformservices on the Cloud controller (cmc), that location also needs to be NFS mounted to the other two Cloud controller nodes (cc1 and cc2).

Procedure 108 Update prerequisites
1. Log in to the Management appliance (ma1) and SSH to the Cloud controller (cmc).
2. Enter sudo vi /etc/exports and add the following:
   /export/sherpa/import/platformservices *(rw,sync,fsid=0,crossmnt,no_root_squash,no_subtree_check)
3. Enter vi /etc/default/nfs-common and change:
   STATDOPTS="--port 4000"
4. Enter vi /etc/modprobe.d/options.conf (a new file) and add:
   options lockd nlm_udpport=4001 nlm_tcpport=4001
5. Enter vi /etc/default/nfs-kernel-server and add:
   RPCMOUNTDOPTS="--manage-gids -p 4002"
6. Restart the services:
   sudo service rpcbind restart
   sudo service nfs-kernel-server restart

Procedure 109 Check iptables and add iptables rules
1. Log in to the second and third controller nodes (cc1 and cc2) to check the iptables settings:
   cloudadmin@cc1:~$ sudo showmount -e <cmc eth1 IP>
   where <cmc eth1 IP> is the Cloud controller (cmc) IP address on eth1. If showmount hangs, perform the following steps on cmc to change the iptables settings to allow cc1 and cc2 to access the exported location.
2. SSH to the Cloud controller (cmc).
3. Back up iptables and copy the backup to a new file:
   iptables-save > iptables-backup.txt
   cp iptables-backup.txt backup-new.txt
Install the Platform Services disk 175

176 4. Find the location marked ADD NFS IPTABLES LINES HERE in the following example of backup-new.txt. 5. Add all of the following rules to backup-new.txt in the location shown in step 3. -A INPUT -p tcp -m tcp --dport 111 -j ACCEPT -A INPUT -p udp -m udp --dport 111 -j ACCEPT -A INPUT -p tcp -m tcp --dport j ACCEPT -A INPUT -p udp -m udp --dport j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport j ACCEPT -A INPUT -p udp -m state --state NEW -m udp --dport j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport j ACCEPT -A INPUT -p udp -m state --state NEW -m udp --dport j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport j ACCEPT -A INPUT -p udp -m state --state NEW -m udp --dport j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 892 -j ACCEPT -A INPUT -p udp -m state --state NEW -m udp --dport 892 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 875 -j ACCEPT -A INPUT -p udp -m state --state NEW -m udp --dport 875 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 662 -j ACCEPT -A INPUT -p udp -m state --state NEW -m udp --dport 662 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport j ACCEPT -A INPUT -p udp -m state --state NEW -m udp --dport j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport j ACCEPT -A INPUT -p udp -m state --state NEW -m udp --dport j ACCEPT 6. Restore the iptables. iptables-restore < backup-new.txt // new file 7. List the current filter rules and check for sunrpc and nfs in the output of the command. iptables -L 176 Platform Services, including Helion Development Platform and DNS as a Service

177 Procedure 110 NFS mount the Platform Services disk 1. Mount the exported location to /export/sherpa/import/platformservices on the Cloud management controller (cmc). sudo showmount -e << cmc ips on eth1 >> Example: sudo showmount -e Export list for : /export/sherpa/import/platformservices * 2. Mount the exported location to /export/sherpa/import/platformservices on the remaining two Cloud controllers. Enter the following commands on cc1, then cc2. sudo mkdir -p /export/sherpa/import/platformservices sudo mount <CMC IP on eth1>:/export/sherpa/import/platformservices /export/sherpa/import/platformservices ls -la /export/sherpa/import/platformservices You should see the contents of the Platform Services disk. Example on cc1: cloudadmin@cc1:~$ sudo showmount -e Export list for : /export/sherpa/import/platformservices * cloudadmin@cc1:~$ sudo mkdir -p /export/sherpa/import/platformservices sudo mount :/export/sherpa/import/platformservices /export/sherpa/import/platformservices ls -la /export/sherpa/import/platformservices Install HP Helion Development Platform Process overview 1. Enable HP Helion Development Platform endpoint (page 177) 2. Optional: Install HP Helion Application Lifecycle Service (ALS) (page 181) 3. Install the HP Helion Development Platform Database Service (page 178) 4. Optional: Install Microsoft.NET support for Helion Development Platform (page 182) Prerequisites Cinder (block storage) service is configured with VSA or HP 3PAR. Swift (object storage) is required to enable backup/restore and replication for production deployments of HP Helion Development Platform Database Service. Swift is not installed on CloudSystem 9.0 by default. To install Swift, see Installing Object storage (page 145). Enable HP Helion Development Platform endpoint The HP Helion Development Platform UI is enabled by creating a Keystone endpoint in the Keystone catalog. Execute the following commands on the Cloud Controller (cmc) as a Keystone administrator, replacing the details specific to your environment. 1. Source the credentials. sudo -i source cmc_stackrc 2. Create the service: Install HP Helion Development Platform 177

178 keystone service-create --type development-platform --name development-platform --description 'Development Platform for HP Helion' Expected output is similar to following: Property Value description Development Platform for HP Helion enabled True id 664ae8280e994f85896de3262bc0af10 name development-platform type development-platform Create the endpoint, replacing the details specific to your environment: keystone endpoint-create --region RegionOne --service development-platform --publicurl --adminurl --internalurl Expected output is similar to following: Property Value adminurl id e1e2873e aa784825c8bf3254 internalurl publicurl region RegionOne service_id 664ae8280e994f85896de3262bc0af Install the HP Helion Development Platform Database Service Verify quotas The following sections describe how to install and configure the HP Helion Development Platform Database Service using the Horizon (OpenStack user portal) interface. Process overview 1. Verify quotas (page 178) 2. Download the Database Service from the local file system (page 179) 3. Configure the Database Service (page 179) 4. Configure the Cloud controller HAProxy for DBaaS (page 180) The Database Service will be installed into the demo tenant of the OpenStack user portal, unless you specify a different, previously created tenant. The tenant into which you install the Database Service must have admin privileges, sufficient quota available and unused resources for the service to use. To check existing quota availability, log in to the OpenStack user portal as the admin user and open the Overview panel under the Compute tab. HP recommends that you install ALS and DBaaS virtual machine instances in separate projects, to allow maximum headroom for DBaaS. You must have the following minimum quota available: Table 16 Quotas on the OpenStack user portal (Horizon) Resource Usage Floating IPs Instances Usage Platform Services, including Helion Development Platform and DNS as a Service

179 Table 16 Quotas on the OpenStack user portal (Horizon) (continued) Resource Usage Networks RAM (GB) Routers Security Groups Volumes Volume Storage (GB) Usage In addition to the quota in the preceding table, for every database instance that is created by a user, the necessary resources to create that instance will be deducted from the tenant quota. The user s database service quota will also be affected. Download the Database Service from the local file system 1. Open Horizon and log in as the admin user. You must run the entire installation logged in as admin. 2. Click on the Admin panel and select the Development Platform panel. 3. Click on the Configure Services panel. 4. In the Configure Services panel, locate the Database Service item in the Configure Services table and select Download Service. Download Service will change its status to Stage when the download is completed. 5. Click Stage. Stage will change its status to Configure Service when staging is completed. Configure the Database Service 1. When download and staging are complete, click Configure Service. 2. In the configuration dialog, specify the following configuration options. IP addresses are located on the Management appliance (ma1) in /etc/haproxy/haproxy.cfg. Passwords are located on the Cloud controller (cmc) in /boot/cloudsystem/json-template/cs-mgmt-controller-config.json. Service User Password (Required) The password for the admin user that is currently logged in. This password must match the password used to log in to the OpenStack user portal. Key Pair (Required) Key pair to install on all instances created as part of the database service. The public key can be used by an admin for SSH access to all instances. External Network (Required) Network name for the network that has external network access. For example, ext-net. Provider Network (Required) Network Name for the network that has network access to cloud infrastructure services. For example,svc. NTP Server IP IP Address to an NTP server to use if instances will not have outbound access to the internet. Install HP Helion Development Platform 179

180 Logstash RabbitMQ IP Address (Required) - The IP address of the RabbitMQ server publishing to the central Logstash server.
Logstash RabbitMQ Password (Required) - The password for the RabbitMQ server publishing to the central Logstash server.
Ephemeral CA Password (Required) - The password for the Ephemeral CA server.
Ephemeral CA IP Address (Required) - The IP address of the Ephemeral CA server.
Volume Type (Required) - The volume type to use when creating database instances.
Enable HA - Specify whether the database service is to be set up for high availability (HA). If selected, each component of the service will have three instances created and active at all times.
3. After all configuration options have been provided, click the Configure button to complete the configuration step. Wait for the configuration step to complete and the status to change to Configured.

Configure the Cloud controller HAProxy for DBaaS

The following steps configure HAProxy to receive and forward HTTP requests to the VM that hosts the REST API endpoint for the Database Service. Log in to the Management appliance (ma1), then SSH to the Cloud controller (cmc) and run the following commands.
1. Identify the API server IP addresses on the SVC network:
   nova list | awk '/trove[0-9]*_api/{ print $4,"\t", substr($12,5) }'
2. Identify the virtual IP used by the controller nodes to load balance the Helion OpenStack services:
   keystone endpoint-list | awk '/8779/{ print $6 }' | egrep -o "[0-9]+.[0-9]+.[0-9]+.[0-9]+"
3. Update the configuration on each of the Cloud controller nodes by connecting to the controller and doing the following:
   a. Edit the /etc/haproxy/manual/paas.cfg file and add the following lines. The last line should be repeated once for each API server identified in step 1.
      listen trove_api
          bind <Virtual IP from step 2>:8779
          option httpchk GET /
          server trove-trove<n>_api-<uniqueid> <API server n's IP address> check inter 2000 rise 2 fall 5 check-ssl ca-file /etc/ssl/certs/ca-certificates.crt
   b. Edit the /etc/iptables/rules.v4 file and add to it:
      -I INPUT -p tcp --dport 8779 -j ACCEPT
   c. Run the following command as root:
      iptables -I INPUT -p tcp --dport 8779 -j ACCEPT
   d. Reload the haproxy service configuration:
      sudo service haproxy reload
4. The installation is complete. Return to the OpenStack user portal (you may log out and log in with a non-admin account if desired), and click the Database panel under the current Project to begin using the Database Service.
180 Platform Services, including Helion Development Platform and DNS as a Service
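As a quick sanity check after the HAProxy reload, you can confirm that the Database Service API answers through the virtual IP. The address below is a placeholder, and whether the endpoint responds over HTTP or HTTPS depends on how it was configured in your environment:

   # Expect an HTTP response (typically a version document or an authentication error)
   # rather than a connection timeout.
   curl -k https://<Virtual IP from step 2>:8779/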

Optional: Install HP Helion Application Lifecycle Service (ALS)
The Application Lifecycle Service (ALS) is a Cloud Foundry-based, managed runtime environment for applications. For more information, see HP Helion Development Platform: Application Lifecycle Service.

Process overview
1. Install and configure the Application Lifecycle Service (page 181)
2. Enable the Application Lifecycle Service panel (page 181)

Procedure 111 Install and configure the Application Lifecycle Service
1. Open the OpenStack user portal in a browser and log in as the admin user. You must run the entire installation logged in as admin or as a user with admin privileges.
2. Select a project with admin privileges. This includes the demo project.
3. Connect to the Download Service.
   a. Click the Admin panel and select Development Platform.
   b. Click Configure Services.
4. Download the Application Lifecycle Service.
   a. In the Configure Services panel, locate the Application Lifecycle Service item in the Configure Services table.
   b. Select Download Service and wait for the download to complete. Do not download multiple services at the same time. Doing so will leave the server in an error state.
5. Verify the installation. When the download is complete, verify that two images with the prefix HP Helion Development Platform - Application Lifecycle Service appear in Glance.

Procedure 112 Enable the Application Lifecycle Service panel
Log in to the Management appliance (ma1), then SSH to the Cloud controller (cmc) and run the following commands.
1. Load OpenStack environment variables:
   source stackrc
2. Create the ALS service:
   keystone service-create --name als --type als \
     --description "Application Lifecycle Service"

3. Create the endpoint:
   keystone endpoint-create --service als \
     --publicurl <ALS public URL> \
     --adminurl <ALS admin URL> \
     --internalurl <ALS internal URL> \
     --region RegionOne
4. Log out from the OpenStack user portal, then log in again.
5. Verify that the Project panel includes the Application Lifecycle Service with a Clusters sub-tab.

Optional: Install Microsoft .NET support for Helion Development Platform
Helion Development Platform in CloudSystem includes a full end-to-end solution for creating and deploying .NET Framework applications. HDP includes Windows support by providing the ability to host Windows applications on a Windows DEA (Droplet Execution Agent) using your own Windows license. For more information, see HP Helion 1.2 Development Platform: Windows Overview.

Process overview
1. Download the Glazier tool
2. Use the Glazier tool to create and upload your Windows images

Prerequisites
• You installed an external NTP server, and you synchronized the management host and the Management appliance time server before deploying CloudSystem. See the Synchronize NTP servers appendix in the HP Helion CloudSystem 9.0 Installation and Configuration Guide at Enterprise Information Library.
• Application Lifecycle Service is installed and configured.
• You have a license and ISO images for Windows Server 2012 R2 provided by Microsoft.
• You have downloaded the Glazier tool (a collection of scripts) for CloudSystem from the link in the readme file included with the HDP image. See HP Helion Development Platform: Glazier Reference Guide.

Notes on hypervisor types

ESXi
Specify --hypervisor esxi to the create-glazier command for ESXi management hypervisor environments. Creating a glazier for ESXi installs the drivers found in VMware-tools-windows-VERSION.iso as well as VMware guest tools. The generated Windows image format is vmdk.

KVM
KVM is the default hypervisor for glazier. You do not need to specify the --hypervisor option to the create-glazier command. Creating a glazier for KVM installs the drivers found in virtio-win-version.iso. The generated Windows image format is qcow2.

KVM for ESXi
Advanced users can generate images compatible with both ESXi and KVM. Creating a glazier for KVM for ESXi installs VirtIO drivers and VMware drivers but does not install VMware guest tools. Specify an ISO for --virtio-iso that is a combination of the contents of virtio-win-version.iso and VMware-tools-windows-VERSION.iso. To combine the contents, download both ISOs, mount them, and copy their contents to an empty folder. Then make an ISO from the contents of that folder. (Each operating system has its own tools to accomplish this step; see the sketch after the following procedure.) The generated Windows image format is qcow2.

Procedure 113 Install Microsoft .NET support for Helion Development Platform
Use the following procedure to build Windows images for deployment to a Helion OpenStack environment. Deploying Windows DEAs and SQL Server services follows the same procedure as in Helion OpenStack 1.2. See HP Helion 1.2 Development Platform: Windows Overview.
1. Download the required VirtIO ISOs.
   a. For ESXi management hypervisor environments, download the VMware ESXi Guest Tools ISO.
   b. For KVM management hypervisor environments, download the KVM VirtIO ISO.
2. Create a glazier to install HDP .NET on CloudSystem. For example:
   ./create-glazier \
     --windows-iso <path to Windows Server 2012 R2 ISO> \
     --virtio-iso <path to VirtIO ISO downloaded in step 1> \
     --product-key <Windows Product Key> \
     --os-network-id <found on the Network Detail page under Project -> Network -> Networks -> (select network name)> \
     --os-key-name <OS key pair name> \
     --os-security-group <OS security group; for example, Default> \
     --os-flavor <OS flavor name; for example, m1.small> \
     --hypervisor esxi <Remove this argument for KVM>
3. (Optional) Deploy the Microsoft Contoso sample app by following HP Helion 1.2 Development Platform: Deploying your first .NET Application.
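For the KVM for ESXi case described in the notes above, the combined driver ISO can be produced on a Linux workstation as sketched below. The file names, mount points, and the use of genisoimage are assumptions for illustration; any ISO-authoring tool that preserves the folder contents works equally well.

   # Mount both driver ISOs (names are placeholders for the versions you downloaded)
   mkdir -p /tmp/virtio /tmp/vmtools /tmp/combined
   sudo mount -o loop virtio-win-<version>.iso /tmp/virtio
   sudo mount -o loop VMware-tools-windows-<VERSION>.iso /tmp/vmtools
   # Copy the contents of both ISOs into one empty folder
   cp -r /tmp/virtio/. /tmp/combined/
   cp -r /tmp/vmtools/. /tmp/combined/
   # Author a new ISO from the combined folder and pass it to --virtio-iso
   genisoimage -o combined-drivers.iso -J -r /tmp/combined
   sudo umount /tmp/virtio /tmp/vmtools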

Install HP Helion DNS as a Service

Prerequisites
The HP Helion OpenStack managed DNS service, based on the OpenStack Designate project, helps you create, publish, and manage your DNS zones and records securely and efficiently on either a public or private DNS server network.

The following installation instructions assume that the associated project currently contains no instances (VMs). If this is not the case, see the instructions for Increasing quotas (page 191) to ensure you have enough space before you begin.

To install the HP Helion OpenStack managed DNS service, ensure that you have met the following prerequisites:
• HP Helion CloudSystem is installed.
• The SVC Network is configured.
• The DNSaaS installer image is downloaded and resides on your system.
• Target credentials are obtained. These are the credentials of the user and tenant where the service is to be deployed. Ensure that you have the following items for the Target credentials:
  The user must have the admin and member roles.
  User name
  Password
  Tenant/Project name
• Service credentials are obtained. These are credentials for both the user and the tenant used to validate end user tokens. Ensure that you have the following items for the Service credentials:
  The user must be in the service tenant, have the admin and _member_ roles, and be named designate.
  User name
  Password
  Tenant/Project name
• A generated SSH key for accessing the Service VMs.
• A chosen back-end driver and its prerequisites:
  PowerDNS (self hosted). You will need a domain name for the nameservers ("Nameserver FQDNs"). For example, if your nameservers are named ns1.mycompany.com, you will need the mycompany.com domain name.
  DynECT (3rd party). Ensure that you have considered the following DynECT elements:
    An active service contract with DynECT.
    Knowledge of the FQDNs for all DynECT nameservers allocated to your account ("Nameserver FQDNs"), for example:
      ns1.p13.dynect.net
      ns2.p13.dynect.net
      ns3.p13.dynect.net
      ns4.p13.dynect.net

    API credentials for DynECT, such as:
      Customer Name
      User name
      Password
  Akamai (3rd party). Ensure that you have considered the following Akamai elements:
    An active service contract with Akamai.
    Knowledge of the FQDNs for all Akamai nameservers allocated to your account ("Nameserver FQDNs").
    API credentials for Akamai, such as:
      User name
      Password

IMPORTANT: The DNS installation will not succeed if the associated project has existing instances (VMs). If the project has existing VMs, increase the quota levels to the following for that project before attempting to install DNS.
  Instances: 16
  RAM: 44 GB
  Volumes: 6
  Storage: 240 GB

Creating prerequisite credentials
You must create both Target and Service credentials.

Target credentials
Target credentials are the credentials of the user and tenant where the service is to be deployed. Target credentials include a tenant and a username. Service credentials can only be created after the Target credentials have been successfully created.

To create Target credentials:
1. Create a tenant using the following command line:
   keystone tenant-create --name dnsaas --description "DNSaaS Service"
2. Create a user using the following command line:
   keystone user-create --name dnsaas --tenant dnsaas --email dnsaas@example.com --pass password
3. Add a role.
   NOTE: The admin role is added for the user.
   keystone user-role-add --user dnsaas --tenant dnsaas --role admin
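The role assignment can be checked before moving on to the Service credentials. This is a quick verification step, not part of the documented procedure; it assumes the same keystone v2 CLI used in the commands above:

   keystone user-role-list --user dnsaas --tenant dnsaas

The output should list the admin role for the dnsaas user in the dnsaas tenant.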

Service credentials
Service credentials are user and tenant credentials created to validate end user tokens. Service credentials can only be created after the Target credentials have been successfully created. This user must be in the service tenant, have the admin and _member_ roles, and be named designate.
1. Create the service credentials using the following command line:
   keystone user-create --name designate --tenant service --pass password
2. Add the admin role to the service user:
   keystone user-role-add --user designate --tenant service --role admin

Publishing the update package and booting the installer VM
Before proceeding with the DNSaaS installation, ensure that you have met all the prerequisites, which includes gathering the required information, creating the necessary users and projects, and ensuring that the users and projects have the appropriate roles.

Publishing CSU contents
1. Mount the Platform Services Disk to the CMC and NFS share it to the CC1/CC2 nodes. See Install the Platform Services disk (page 173).
2. Log in to the OpenStack user portal using the Target credentials you created.
3. Click the Admin tab in the left panel.
4. Click Updates and Extensions and then select Updates and Extensions.
5. Select the appropriate file (for example, dns.csu) from the list and click Download.
6. Wait for the file to download. When the download is complete, the Download button changes to Publish.
7. Click Publish to install the package.

Boot the installer VM
1. Log in to the OpenStack user portal using the Target credentials you created.
2. Click Project. The tab displays an option in the left panel.
3. Click Compute and then select Images to open the Images page.
4. Select the image file from the list and click Launch. For example, select dnsaasinstaller_ to launch this image. A Launch Instance dialog box displays with five tabs: Details, Access & Security, Networking, Post-Creation, and Advanced Options. By default, Details is the active tab.
5. On the Details tab, do the following:
   a. Enter the name of the instance in the Instance Name (Virtual Machine (VM)) box. For example: dnsaas-installer.
   b. Select the flavor from the Flavor drop-down list. For example: m1.small. m1.small is the minimum size supported by the installer VM.
6. On the Access and Security tab, in the Keypair drop-down list, select an appropriate SSH keypair.
7. On the Networking tab, select the default-net network, if it is not populated automatically.

8. Click Launch to launch the instance. The Instances page is displayed with a progress bar showing completion progress.
9. Select the launched instance in the Instances table.
10. Perform the following steps to generate a floating IP address to use:
    a. Open a shell.
    b. Make sure you have the nova command line client installed.
    c. Download the Target credentials in an RC file using the OpenStack user portal.
    d. Source the credentials. A sample command line follows:
       source <filename>-openrc.sh
    e. Run the nova floating-ip-create command to return a floating IP address.
    f. Run the nova floating-ip-associate dnsaas-installer <floating-ip> command using the IP address you obtained in the previous step.
11. Perform the following steps in the Manage Floating IP Associations area:
    a. Select the floating IP address from the IP Address drop-down list. Ensure that you remember the selected IP address.
    b. Select the port from the Port to be associated drop-down list.
    c. Click Associate.
12. Click Access and Security. The Access and Security page is displayed.
13. Select the appropriate security group from the list and click Manage Rules. For example, select default as the security group. The Manage Security Group Rules: <name of security group> page is displayed.
14. Click Add Rule. The Add Rule dialog box is displayed.
15. Enter the port value 22 in the Port box.
16. (Optional) Restrict the CIDR from which SSH connections should be allowed.
17. Click Add. The rule is added for the instance.

Installing and configuring DNSaaS
IMPORTANT: During DNSaaS installation, non-deterministic issues within the infrastructure layer may cause the install to time out. If this occurs, install DNSaaS up to two more times.
1. SSH to the installer VM as follows:
   ssh -i samplekey.pem debian@<floating IP address associated with the DNS installer VM>
   NOTE: Before you begin the installation, you must create a configuration file. You can do this by modifying the sample configuration file included with the DNSaaS installer files. See step 5 for configuration file information.
2. Create the SSH public key which is used by the Service VMs.
   TIP: HP recommends that you use the same SSH key which was used to boot the installer VM. If you choose to use a different SSH key, ensure you retain both SSH private keys for future use.
3. Copy the SSH public key as follows:
   cp .ssh/authorized_keys id_rsa.pub
4. Copy the sample configuration file to your home directory:
   cp /etc/dnsaas-installer/dnsaas-installer.conf.sample-cloudsystem ~/dnsaas-installer.conf

5. Edit your copy of the configuration file with the required changes using the following command:
   nano dnsaas-installer.conf
   a. Change the following DEFAULT section parameters of the configuration file to reflect your configuration:
      target_project_name Project name where the service is installed.
      target_username Username used to deploy and run the service.
      target_region_name Region name to deploy the service in.
      undercloud_vip The CAN VIP used by all other OpenStack services.
      overcloud_vip The same as the undercloud_vip.
      ssl_enabled Set to True if you require the Designate API to be wrapped in SSL; otherwise, set it to False.
      control_plane_ssl Set to True if your CloudSystem deployment uses SSL for the service APIs; otherwise, set it to False.
   b. Change the following designate section parameters of the configuration file to reflect your configuration:
      ntp_servers List of NTP servers to use in the DNSaaS VMs.
      ssh_public_key The SSH public key to be installed on the instances for management access.
      ca_certificate The CA certificate used by the Cloud controller API endpoints. This is available on the Cloud controller control nodes. This should point to a CA cert file on disk.
      database_root_password Password for the database root user. This should be over 16 characters.
      database_designate_password Password for the database designate user. This should be over 16 characters.
      database_powerdns_password Password for the database powerdns user. This should be over 16 characters.
      messaging_root_password Password for the messaging root user. This should be over 16 characters.
      messaging_designate_password Password for the messaging designate user. This should be over 16 characters.
      service_project Project name for a user with permission to validate Keystone tokens.
      service_user Username for a user with permission to validate Keystone tokens.
      service_password Password for a user with permission to validate Keystone tokens.
      ephemeralca_password Ephemeral CA password. This must match the eca password value from the Cloud controller passwords file. This should be output by the Helion EE installer.
      ephemeralca_host Ephemeral CA host. This is the MA1 appliance IP address on the management network.
      enable_service_net Enable Service Net. Only enable this if the SVC network is set up; if the SVC network is set up, this should be set to True.
      backend_driver Back-end driver to use (powerdns, dynect, akamai).

      enable_beaver Enable central logging support.
      beaver_rabbit_password Beaver RabbitMQ connection password. This must match the RabbitMQ password value from the undercloud passwords file. This should be output by the Helion EE installer.
      nameserver_allow_axfr_ips CIDRs that are allowed to do zone transfers from the PowerDNS servers (required for use with Akamai, DynECT, and Microsoft); a list of IPs and/or CIDRs (comma separated). This should be set to allow connections from the Microsoft DNS servers.
   c. If you select Akamai, you must set the following options in the designate section:
      akamai_username The username that was set up as part of your Akamai signup.
      akamai_password The password that was set up as part of your Akamai signup.
      akamai_also_notify List of IP addresses for name servers to notify when a zone is changed (comma separated). These should be the IP addresses provided by Akamai during signup.
      nameserver_allow_axfr_ips CIDRs that are allowed to do zone transfers from the PowerDNS servers (required for use with Akamai, DynECT, and Microsoft); a list of IPs and/or CIDRs (comma separated). This should be set to allow connections from the Akamai Zone Transfer Agents (provided during Akamai signup).
   d. If you select DynECT, you must set the following options in the designate section:
      dynect_customer_name Customer name provided by DynECT signup.
      dynect_username Username provided by DynECT signup.
      dynect_password Password provided by DynECT signup.
      dynect_also_notify List of hostnames for name servers to notify when a zone is changed (comma separated). These should be the addresses provided by DynECT during signup.
      nameserver_allow_axfr_ips CIDRs that are allowed to do zone transfers from the PowerDNS servers (required for use with Akamai, DynECT, and Microsoft); a list of IPs and/or CIDRs (comma separated). This should be set to allow connections from the Akamai Zone Transfer Agents (provided during Akamai signup).
6. Run the installer validation command to verify the configuration file:
   dnsaas-installer --target-password <Target User Password> validate
7. After you validate the configuration file, run the DNSaaS installer:
   dnsaas-installer --target-password <Target User Password> install

Configuring the Cloud controller HAProxy for DNSaaS
Use the following command to configure HAProxy:
   dnsaas-installer --target-password <Target User Password> haproxy
The HAProxy configuration is displayed, similar to the following example:
   <timestamp> INFO HAProxy configuration
   ### START HAPROXY CONFIG
   listen designate
     bind <VIP>:9001
     mode tcp
     balance source
     option tcpka
     option tcplog
     option httpchk GET /

     server <name> <IP address> check inter 2000 rise 2 fall 5
     server <name> <IP address> check inter 2000 rise 2 fall 5
     server <name> <IP address> check inter 2000 rise 2 fall 5
   ### END HAPROXY CONFIG
After the configuration of HAProxy, SSH to all three Cloud controllers. Perform the following steps on each controller node:
1. SSH to the Cloud controller and sudo using the following commands:
   ssh cloudadmin@<IP address of Cloud controller>
   sudo -i
2. Edit the paas.cfg configuration file with the following command:
   nano /etc/haproxy/manual/paas.cfg
3. Paste the HAProxy configuration (generated by the haproxy command above) at the end of the paas.cfg file.
4. Press CTRL+X and confirm to save the paas.cfg file.
5. Reload HAProxy with the following command:
   service haproxy reload
6. Open the Designate API port in the firewall by executing the following commands:
   iptables -I INPUT 1 -p tcp -m tcp --dport 9001 -j ACCEPT
   iptables-save > /etc/iptables/v4.rules

Registering the service with Keystone
You can register the DNS service and endpoint as a user or an admin. You do not have to immediately register the DNS service in Keystone; however, if you choose to register the DNS service and endpoint, execute the following command:
   dnsaas-installer --target-password <Target User Password> keystone-registration

Initial service configuration
You must perform an initial configuration step to communicate the names of the servers that serve DNS to Designate. Ensure you have a valid set of admin credentials in the standard OS_* environment variables before proceeding. For the "Nameserver FQDNs" gathered during the prerequisites step, issue a server-create command for each name to add the server, as in the following example:
   designate server-create --name ns1.p13.dynect.net.

Post-installation cleanup
The installer VM is no longer required. Archive the configuration file and the SSH public and private keys used, and optionally delete the dnsaas-installer instance.

Uninstalling DNSaaS
To uninstall DNSaaS:
1. Identify the DNSaaS stack ID, for example with heat stack-list.
2. Delete the stack:
   heat stack-delete <stack ID>
NOTE: If not deleted already, delete the installer VM. The Keystone service and endpoints are not deleted; if you want to remove these services, see the Keystone documentation.
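If the Keystone registration does need to be removed, the keystone v2 CLI used elsewhere in this chapter can do it. The sketch below is an assumption for illustration; it presumes the installer registered a Designate service with an endpoint on port 9001, and the IDs reported by the list commands are what you pass to the delete commands:

   keystone endpoint-list | grep 9001      # note the endpoint ID for the DNS API
   keystone endpoint-delete <endpoint ID>
   keystone service-list                   # note the ID of the DNS (designate) service
   keystone service-delete <service ID>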

Increasing quotas
If the associated project already has existing instances (VMs), the DNSaaS installation will not have sufficient room to complete. An admin must increase the quota levels to allow sufficient room before installation.
1. Log in to the OpenStack user portal.
2. Click Identity and then click Projects in the Project panel.
3. Find the project in the list and click Modify Users.
4. Click Edit Project and then click the Quota tab.
5. Increase the following quotas to create sufficient room:
   Instances: Add 16
   RAM: Add 44 GB
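Quotas can also be raised with the nova command line client instead of the portal. The following is a sketch, assuming admin credentials are sourced; note that quota-update sets absolute values (RAM in MB), so add the increments above to the current values reported by quota-show, and the tenant ID shown is a placeholder:

   nova quota-show --tenant <tenant ID>
   nova quota-update --instances <current instances + 16> --ram <current RAM in MB + 45056> <tenant ID>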

Part V Cloud service provisioning and deployment
CloudSystem interfaces with the ESXi cluster, KVM compute node, or Hyper-V compute node to launch virtual machine instances and connect the networks. After you configure the necessary cloud resources in the CloudSystem Operations Console, you can log in to the OpenStack user portal and deploy virtual machine instances to the cloud.
Service provisioning and deployment in CloudSystem Enterprise is accomplished through the Cloud Service Management Console, and users have secure access to these services in the Marketplace Portal.
This part of the Administrator Guide gives you an introductory set of processes to get started using the management console. Consult HP CSA documentation at Enterprise Information Library for details.

24 Using Orchestration templates to launch a cloud
CloudSystem contains OpenStack Heat functionality. Heat is a service that allows you to launch a cloud application using a template. The template defines the resources (instances, networks, security groups, etc.) in a cloud. There are two supported template formats: Heat Orchestration Template (HOT) and AWS CloudFormation (CFN). The set of resources defined by the template is referred to as a stack. There are three ways to implement stacks: the OpenStack user portal, a CLI, or REST APIs. This chapter explains how to implement a stack using the OpenStack user portal.

More information
OpenStack Orchestration API
OpenStack Orchestration command-line client

Launch a stack using the OpenStack user portal
In the OpenStack user portal, you can use a template to create a cloud from the resources defined in a stack.

Prerequisites
• You have security permission to perform Orchestration actions in your assigned project.
• You have a template that defines the resources that you want to assign to a cloud. See the Template Guide.

Procedure 114 Launching a stack
1. From the OpenStack user portal, go to Project→Orchestration→Stacks.
2. Click + Launch Stack.
3. From the Template Source field, select the method for referencing the template. The remaining fields are updated based on the type of source you select.
4. Enter the remaining template details and launch the stack.
After the stack is launched, it appears in a table on the Stacks screen. Click the stack name to see the stack details. There are four detail views: Topology, Overview, Resources, and Events. To edit the stack, use the More button in the Action column to the right of the stack.

More information
OpenStack documentation: Launch and manage stacks
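For reference, a minimal HOT template that defines a single instance is sketched below. It is not taken from this guide: the image name, flavor, and parameter are placeholders for values that exist in your cloud, and it only illustrates the stack concept described above.

   heat_template_version: 2013-05-23
   description: Minimal example stack that launches one instance
   parameters:
     key_name:
       type: string
       description: Name of an existing key pair to inject into the instance
   resources:
     example_server:
       type: OS::Nova::Server
       properties:
         image: <image name or ID>
         flavor: m1.small
         key_name: { get_param: key_name }
   outputs:
     server_ip:
       description: First IP address assigned to the example server
       value: { get_attr: [example_server, first_address] }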

25 Using CloudSystem Enterprise to manage multiple HP Helion CloudSystem providers
When you install HP Helion CloudSystem and include the Enterprise appliance as part of the installation, an HP Helion CloudSystem provider is automatically configured in HP CSA to support integration between CloudSystem Enterprise and CloudSystem Foundation. The HP Helion CloudSystem provider is associated with the CloudSystem Foundation environment and allows all resources in that environment to be managed from CloudSystem Enterprise via the HP CSA Cloud Service Management Console.
Cloud administrators can configure CloudSystem Enterprise to support multiple OpenStack providers. This allows resources in multiple CloudSystem Foundation environments to be managed from a single HP CSA Cloud Service Management Console.

Multitenancy
You can use one instance of HP Helion CloudSystem to support multiple customers. This is accomplished by configuring multitenancy in CloudSystem: map the organizations in CloudSystem Enterprise to the projects in CloudSystem Foundation. This type of configuration requires the following process:
1. Integrate LDAP (OpenLDAP or Microsoft Active Directory) in your environment.
2. Add all users to LDAP.
3. Create projects in the OpenStack user portal and add the required users to each project.
4. In the HP CSA Marketplace Portal, create a new organization with exactly the same name as the project you created in the OpenStack user portal.
5. Follow the instructions provided in the Configuring Multitenancy Using OpenLDAP or Microsoft Active Directory with HP CloudSystem white paper in the Enterprise Information Library.

Requirements for supporting multiple CloudSystem Foundation providers in CloudSystem Enterprise
If you plan to use multiple CloudSystem Foundation environments and manage them from CloudSystem Enterprise, then the following requirements must be in place:
• The OpenStack Keystone identity service on all three Cloud controllers must be integrated with the global Active Directory or OpenLDAP. Both CloudSystem Foundation and Enterprise require the same LDAP integration. See Manage the Cloud controller trio (page 83).
• The new HP Helion CloudSystem provider must be used. Do not use an older legacy provider.
• A route must exist between CloudSystem Enterprise and the Consumer Access Network to allow secondary OpenStack providers to be registered in CloudSystem Enterprise. Most communication between CloudSystem Enterprise and CloudSystem Foundation services is conducted through an adminurl endpoint registered in the OpenStack Keystone identity service.
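Because provider communication flows through the adminurl endpoints registered in Keystone, it can help to review what is registered before adding a secondary provider. A quick check from a Cloud controller, reusing the environment loading shown earlier in this guide, is sketched below:

   source stackrc
   keystone endpoint-list    # review the adminurl column for each registered service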

Figure 18 Integrated LDAP/AD server

Set up a CloudSystem environment with multiple OpenStack providers

Process overview
• Installing CloudSystem and integrating LDAP servers (page 196)
• Copying the CA root certificate to the certificate store (page 197)
• Creating new users in LDAP (page 198)
• Creating a new organization for each environment (page 198)
• Creating a new resource environment for each environment (page 200)


Active System Manager Version 8.0 User s Guide

Active System Manager Version 8.0 User s Guide Active System Manager Version 8.0 User s Guide Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your product. CAUTION: A CAUTION indicates either

More information

Remove complexity in protecting your virtual infrastructure with. IBM Spectrum Protect Plus. Data availability made easy. Overview

Remove complexity in protecting your virtual infrastructure with. IBM Spectrum Protect Plus. Data availability made easy. Overview Overview Challenge In your organization, backup management is too complex and consumes too much time and too many IT resources. Solution IBM Spectrum Protect Plus dramatically simplifies data protection

More information

Installing and Configuring vcenter Support Assistant

Installing and Configuring vcenter Support Assistant Installing and Configuring vcenter Support Assistant vcenter Support Assistant 6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced

More information

VMware Integrated OpenStack with Kubernetes Getting Started Guide. VMware Integrated OpenStack 4.0

VMware Integrated OpenStack with Kubernetes Getting Started Guide. VMware Integrated OpenStack 4.0 VMware Integrated OpenStack with Kubernetes Getting Started Guide VMware Integrated OpenStack 4.0 VMware Integrated OpenStack with Kubernetes Getting Started Guide You can find the most up-to-date technical

More information

VMware vfabric Data Director Installation Guide

VMware vfabric Data Director Installation Guide VMware vfabric Data Director Installation Guide vfabric Data Director 2.5 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by

More information

HPE OneView Global Dashboard 1.40 User Guide

HPE OneView Global Dashboard 1.40 User Guide HPE OneView Global Dashboard 1.40 User Guide Abstract This user guide is intended for administrators who are using the HPE OneView Global Dashboard graphical user interface to monitor IT hardware in a

More information